Thursday, February 12, 2015

eConnect will import data with leading spaces: Why should you care?

By Steve Endow 

One fine day, you are joyfully working on a simple Dynamics GP customer query.  You just want to look up the record for a customer.


Couldn't get much simpler.

But when you run the query, you get zero results.

Hmm, that's odd.  You open GP to verify the customer exists.

Yup, there it is in the correct company database.  You double check your query and make sure you are querying the correct database in SQL Server Management Studio--and everything looks okay.

So, what is going on?  How can the customer exist in GP, but not show up in a simple SQL query?

Look a little closer at the customer record in GP.

Well, there's the problem right there.  Do you see it?  

When entering data directly into Dynamics GP, if you try to type a space as the first character in any field, nothing happens.  You just can't enter a field value that starts with a space in the GP user interface.

But I have discovered that eConnect doesn't have such a restriction, and it will dutifully import data values that begin with a space.  It doesn't seem like a big deal, until you try to query that data.

In SQL, these two queries are different:



The leading space on the customer ID makes all the difference.  With the first query, I get no results.  With the second query I find the record that has a leading space on the customer ID.
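This comparison behavior is easy to reproduce outside of GP and SQL Server. Here is a minimal sketch in Python using an in-memory SQLite database; the table name and customer ID are made up for illustration, but the equality behavior is the same:

```python
import sqlite3

# In-memory database standing in for the GP company database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (CustomerID TEXT)")

# Simulate a record imported with a leading space
conn.execute("INSERT INTO Customer VALUES (' ACME001')")

# Querying for the "obvious" value finds nothing
no_match = conn.execute(
    "SELECT * FROM Customer WHERE CustomerID = 'ACME001'").fetchall()

# Including the leading space finds the record
match = conn.execute(
    "SELECT * FROM Customer WHERE CustomerID = ' ACME001'").fetchall()

print(len(no_match))  # 0
print(len(match))     # 1
```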

Honestly, I don't know that I actually realized this distinction previously--it just isn't something I have had to try, use, or deal with.  Trimming white space is such second nature for a developer that I can't remember ever thinking about it.  It seems obvious in retrospect, but I think it's just one of those incredibly elementary assumptions that becomes invisible to you after so many years of development. 

When you open the Customer lookup window, you can see that a leading space also affects sorting.

I made up this specific example to demonstrate the issue with eConnect and GP.  In all of my integrations and code, I habitually use a Trim() command on all strings to trim leading and trailing white space, so I was actually quite surprised that eConnect doesn't trim leading spaces.

But this topic came up because a similar leading space issue showed up in a customer integration this week.

I was surprised because having done software development for about 20 years now, I can't recall encountering this issue before.  While I may have encountered data that had a leading space, my code always trimmed leading and trailing spaces, so a leading space never resulted in a problem.

But in the case of my customer, they had populated a custom table with a few rows that had a leading space.  That leading space prevented a SQL WHERE clause from matching, so records were never retrieved and imported.

My integration retrieved invoice " ABC123" (with leading space) from the invoice header table in SQL.  It then trimmed the invoice number and queried the invoice line table for all lines related to invoice "ABC123".  As you can guess, the line query retrieved 0 records.

The client and I spent several minutes reviewing the integration and the data trying to figure out why one invoice wouldn't import.  I eventually noticed that the data looked slightly shifted in the SQL query results grid.

It's subtle, but if you have other records above and below, it is much easier to spot.

Once we discovered the leading space in the invoice data, the client removed the spaces and the integration worked fine.

Paradoxically, the issue arose because my integration code was trimming the invoice number.  Instead of using the actual " ABC123" value from the invoice header table, my code queried with the trimmed version, "ABC123", and that mismatch caused the problem.  But it turned out to be fortunate, since I now know eConnect would have imported the value with a space, which would certainly have caused headaches in GP later on.

So, the lessons are:

1. Leading spaces in data can be problematic in GP (or just about any database or application)
2. The GP application doesn't allow leading spaces, but eConnect will import them
3. Always trim leading and trailing white space for all of your imported data
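Lesson 3 is easy to bake into an integration as a habit. Here is a sketch of the idea in Python (my actual integrations use Trim() in .NET; the field names here are hypothetical):

```python
def clean_record(record):
    """Trim leading and trailing whitespace from every string field
    in an imported record before sending it to the import layer."""
    return {key: value.strip() if isinstance(value, str) else value
            for key, value in record.items()}

# A raw record as it might arrive from a source table
raw = {"CustomerID": " ACME001", "Name": "Acme Corp  ", "Balance": 150.0}
clean = clean_record(raw)
print(clean["CustomerID"])  # ACME001
```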

Keep learning!

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter

Who deleted my Dynamics GP database table and stored procedure?

By Steve Endow

An unusual situation has come up at a client.  While reviewing some integration logs, I saw that a custom database table had been deleted from the production Dynamics GP company database.  The integration log recorded an error from a stored procedure--the proc was trying to insert a record into the table, but the table no longer existed.

An unexpected error occurred in SaveUpdateCCHistory: Invalid object name 'cstb_Update_CC_History'.

Since the procedure was still present, but the table wasn't, we were pretty confident the table had been deleted, rather than never created in the first place.

Very strange.  We recreated the table, and moved on.

Then a few days later, while reviewing the integration logs in the Test company database, we saw an error indicating that a call to a custom stored procedure was failing.

An unexpected error occurred in InsertCustomerEmailOptions: Could not find stored procedure 'cssp_New_Customer_Email'

Sure enough, that procedure was missing from the Test database.

These objects had been in both databases for weeks, if not several months, and we hadn't touched them.  We certainly didn't have any processes or scripts that would delete them.

The client said they couldn't think of any way the objects would be deleted.

These mysteries are always difficult to research after the fact.  The ideal solution is to have third party SQL Server auditing software that records this activity and lets you review it later when a problem occurs.  But since that level of SQL administration is relatively rare with GP clients, we usually have limited tools for researching these issues.

After some Googling, I found a few articles on how you can query a database transaction log to determine when a database object was dropped and who dropped it.

But there are two big caveats:

1.  The fn_dblog and fn_dump_dblog functions are not documented or supported, have been shown to have bugs, and can result in some unexpected consequences.  So you should use them very cautiously.

2.  The fn_dblog and fn_dump_dblog functions read database transaction log activity.  So if the database has had one or more full backups since the drop, you are likely out of luck and will not find any information about the dropped objects.

Technically it is possible to read from a database backup file, but such files typically do not have much log data to work with, so the odds of finding the drop data in a backup file are slim.

Also, technically it is possible to use the functions to read directly from transaction logs, but I don't think I've ever seen a GP client that intentionally or systematically backs up their SQL transaction logs, so that is typically a long shot as well.  Usually, once a full DB backup is performed, the transaction logs get cleared.

But, aside from those rather significant limitations, I was still pretty impressed that it is possible to determine when an object was dropped, and who dropped it.  I'm guessing it will not be very helpful in a real environment where you may not know an object was dropped for a few days, but if you discover the problem quickly, you can try it and see if it works.

Below is the script that I used to test with the TWO database.  I create a test table and insert some rows.  I then back up the database and restore it to a different database name.  I then drop the table.

After the table is dropped, I can query the transaction log to see that an object was dropped and who dropped it, but I can't tell which object was dropped (since it no longer exists in the database).

To get the name of the dropped object, you have to restore a backup of the database, then use the dropped object ID to query the database (where the object still exists), to see the object name.

--Create table
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[TestTable]') AND type IN (N'U'))
CREATE TABLE [dbo].[TestTable](
[FirstName] [varchar](30) NOT NULL,
[LastName] [varchar](30) NOT NULL,
[RowID] [int] IDENTITY(1,1) NOT NULL
)
--Populate table
INSERT INTO TestTable (FirstName, LastName) VALUES  ('Chuck', 'Norris')
INSERT INTO TestTable (FirstName, LastName) VALUES  ('George', 'Washington')
INSERT INTO TestTable (FirstName, LastName) VALUES  ('Britney', 'Spears')
--Verify table
SELECT * FROM TestTable

--Drop the table in TWO
DROP TABLE TestTable

--Get info on drop from transaction log
SELECT
[Begin Time],
[Transaction Name],
SUSER_SNAME([Transaction SID]) AS UserName,
[Transaction Id],
[Transaction SID],
(SELECT TOP (1) [Lock Information] FROM fn_dblog (NULL, NULL)
WHERE [Transaction Id] = fndb.[Transaction Id]
AND [Lock Information] LIKE '%SCH_M OBJECT%') AS ObjectID
FROM fn_dblog (NULL, NULL) AS fndb
WHERE [Transaction Name] = 'DROPOBJ'

--In the prior query, review the ObjectID field values.  The object ID is the numeric value at the end, in between colons
--HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 6:1554182124:0
--In this example, the object ID is:  1554182124

--Insert numeric object ID and run on restored copy of DB
SELECT OBJECT_NAME(1554182124) AS ObjectName
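If the transaction log query returns many DROPOBJ rows, extracting the numeric object ID from each Lock Information value can be scripted. A sketch in Python, based on the lock string format shown in the comments above:

```python
def object_id_from_lock_info(lock_info):
    """Extract the numeric object ID from an fn_dblog [Lock Information]
    value such as 'HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 6:1554182124:0'.
    The object ID is the second-to-last colon-delimited field."""
    parts = lock_info.split(":")
    return int(parts[-2])

# The example value from the comments above
print(object_id_from_lock_info(
    "HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 6:1554182124:0"))  # 1554182124
```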


Monday, February 9, 2015

How do you back up your Hyper-V virtual machines?

By Steve Endow

So, how do you back up your Hyper-V virtual machines?  You DO back them up, don't you?

Today I spent a rare day working on my "data center infrastructure".

Specifically, I was working on my Hyper-V backups.  For the last several years, I've used batch files and VBS scripts to save my VMs and then copy the VHDs to a file server.  While not fancy, this actually worked okay for me for several years.

But since I retired my Windows file server and replaced it with a Synology NAS, I stopped compressing my VHD files and started implementing offsite backups as well.  So my VHD files get copied to the Synology, then the Synology has a backup job to copy the VHDs to external drives that are rotated each week.  Since I was no longer compressing the VHD files, my 2TB external drives were filling up, so I finally had to come up with a better system.

While copying VHD files seems adequate for basic situations, there are some challenges.  The obvious downside is the lack of compression.  Even if you have plenty of storage, having terabytes of backups sitting around isn't ideal.  And in my case, my limitation was not my main storage, but the external drives that I rotate each week.  In addition to capacity, managing multiple backup versions on the external drives was a hassle.

I looked into using Windows Backup on each VM, but if you attempt to backup to a network share (a NAS, in my case), Windows Backup only supports a single full backup.  It will not do differential backups, and it will not retain multiple backups on a share.  So that option was quickly ruled out.

I then searched for Hyper-V backup software.  After perusing the search results and a few forum posts, I probably found a dozen different names mentioned.  There are a lot of options and I couldn't possibly evaluate them all, or even a half dozen.  But two names seemed to have a focus on Hyper-V, and seemed to have positive feedback.

The two I chose to evaluate are Veeam and Altaro.

I've done some initial testing of both products, and both vendors offer a free limited version, as well as a full trial version.  Both appear to work well, but they seem to be pretty different products in terms of their features and presumed target market.

Veeam seems to be focused on "higher end" features and enterprise requirements with a full set of configuration options for nearly everything, while Altaro has a very good set of features and a simpler design that can get you running backups in a few minutes with less configuration.

Veeam seems to install a ton of "stuff"--its installer is over 800 MB, with a 139 MB service pack.  The Veeam solution takes quite a while to install, requiring .NET 4 and SQL Server Express, among other things.  Altaro is a relatively svelte 144 MB, with a very fast, simple installation.  It seems obvious that the Veeam installation is larger and more involved because it has some pretty advanced and powerful features--a few of which I don't understand.  The question I have to answer is whether I want those features and whether the potential overhead is worthwhile.

I don't believe that my comparison of the two products will be focused on whether one product is "better" than the other.  It will be a question of whether they can completely fulfill my current requirements, fulfill any possible future requirements I can think of, how easy they are to use and manage, and how much they cost.

From the list prices I've seen, Veeam is a little more expensive than Altaro, but offers some additional features and more flexibility, such as the configuration of multiple backups and destinations.  The difference is only a few hundred dollars, so price alone isn't a primary driver for me between the two.

My initial impressions are that, objectively, Altaro is probably the more appropriate solution for me.  While I may like a few of the more advanced features and configurability of Veeam, those items are probably not essential for my business.  Nice to have?  Sure.  Critical?  No.  Overhead that I probably don't need to be thinking about given my simple requirements?  Probably.

But would I like just a little more configurability in Altaro?  Yes.  And while it is probably not important technically, I find Veeam's backup files and folder structure to be fantastically simple, while Altaro produces a messy pile of directories and files that look baffling.

My guess is that this difference is due to how the two manage their backup logs and histories.  Veeam appears to use SQL Server Express, so it can maintain metadata in SQL and have a very clean backup file and folder structure, whereas my guess is that Altaro stores the backup metadata on the backup drive, so that's why you see dat files and GUIDs on the backup drive.  Once I do some restores I suspect I'll have a better feel for the merits of each approach.

I just started evaluating the two products today, so I'm going to compare them over the next two weeks to see which one might be a better fit.

UPDATE: 3/10/2015:  After testing both Veeam and Altaro, I ended up purchasing Veeam.

To my surprise, the biggest difference that I noticed between the two was performance.  Veeam was significantly faster, particularly with differential backups that required a compare process.  Also, Veeam seems to have very good deduplication logic when backing up multiple VMs, so if you have 20 GB of identical files on 5 VMs, it doesn't back up 5 copies of the same files, resulting in very efficient use of disk space.

And as I suspected, Veeam offers much more detailed control over backup destinations, archive destinations, and schedules, which makes it more flexible and configurable.

When I went to compare the final prices of the two, Veeam was only slightly more expensive, so the price difference was negligible for me.

I now have almost all of my VMs set up for backup to my main NAS and rotating external drives, and it has been working flawlessly.  I'm very impressed, and I wish I had made this purchase years ago.

UPDATE: 1/16/2015:  I've been using Veeam for months now, and honestly, it works so well that I've forgotten about it, and completely forgotten about my Hyper-V backups.  I recently purchased a new NAS, so I had to move my backup files and then open Veeam to point it to the new NAS share.  But other than that, I haven't touched Veeam in months.

I did find one minor limitation with Veeam.  It has a feature that allows you to archive backups to a second backup destination.  This is a really nice feature that lets you write one copy of the backup to your main backup repository, and a second copy off site or to a set of backup drives that are rotated off site.  The issue I had was that Veeam expects all backup destinations to be online at all times.  So if I didn't have my external archive drives turned on, Veeam would constantly send me error messages warning me that the drives were not available.  After working with Veeam support over the course of a few days, the tech concluded that there was no way to change how Veeam checks for the backup destinations.  It was a little surprising that my small environment had a configuration that stumped the software, but I suspect that my external USB archive drives (that are typically powered off) are not the typical configuration for Veeam, as it is more enterprise oriented.

Overall, I think it's a great solution, and I have recommended it to others who want carefree VM backups.  And for those who have more demanding VM management needs, the features of the Enterprise version of Veeam, such as Instant Recovery, are very impressive.


Thursday, February 5, 2015

The value of logging and diagnostics in Dynamics GP integrations

By Steve Endow

I developed a custom web service for a customer that allows one of their internal web sites to create customers in GP and store customer credit card data in a third party credit card module for GP.

During the testing process, we had to troubleshoot various issues with code, APIs, data, etc.  As part of that process I used my standard logging library, which writes messages and errors to a simple daily text log file on the server.  I used the log to troubleshoot issues, then I resolved them, and eventually the project went live and I pretty much forgot about the detailed logging that I had setup.

Today, the customer did some additional testing on their Test web server, and he said that he encountered a problem.  He sent a screen shot from his web site, but it didn't provide any information or error text.

I then reviewed the GP web service log file and was surprised to see all of the detailed logging, which is automatically enabled in the test environment.

2/5/2015 10:56:18 AM: UpdateCC Instantiating UpdateCCResponse
2/5/2015 10:56:18 AM: UpdateCC Calling ValidateUpdateCCRequestHMAC
2/5/2015 10:56:18 AM: UpdateCC Calling ProcessUpdateCCRequest
2/5/2015 10:56:18 AM: UpdateCC Calling UpdateCustomer
2/5/2015 10:56:23 AM: UpdateCC Calling GetCustomerOpenBalance
2/5/2015 10:56:23 AM: UpdateCC UpdateCC Setting regRequest values
2/5/2015 10:56:23 AM: UpdateCC Getting company ID
2/5/2015 10:56:23 AM: UpdateCC Getting customer profile ID
2/5/2015 10:56:24 AM: UpdateCC Getting CC expiration
2/5/2015 10:56:24 AM: UpdateCC Getting customer payment profile ID
2/5/2015 10:56:24 AM: UpdateCC Getting customer payment profile info
2/5/2015 10:56:24 AM: UpdateCC Updating Authnet payment profile
2/5/2015 10:56:29 AM: UpdateCC Calling ImportCC
2/5/2015 10:56:29 AM: UpdateCC Saving update CC history
2/5/2015 10:56:29 AM: UpdateCC Checking amount paid and approval code
2/5/2015 10:56:29 AM: UpdateCC completed ProcessUpdateCCRequest
2/5/2015 10:56:29 AM: UpdateCC Process UpdateCCRequest returned True
2/5/2015 10:56:29 AM: WARNING: UpdateCC elapsed time: 10.57

Reviewing the entries, there were no errors, and it looks like everything worked properly.  But on the last line, I saw the warning indicating that the process took over 10 seconds.  That likely explained the problem the customer experienced.  I had added a timer to each method in the web service to write a warning to the log any time a request takes longer than 10 seconds to process.
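The timer itself is a small amount of code. Here is a sketch of the idea in Python (my actual web service is .NET; the function and method names here are made up for illustration):

```python
import time

SLOW_THRESHOLD_SECONDS = 10.0

def log(message):
    # Stand-in for the daily text file logger
    print(message)

def timed_call(name, func, *args):
    """Run func with args, and log a warning if the call takes
    longer than the threshold to complete."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_THRESHOLD_SECONDS:
        log("WARNING: %s elapsed time: %.2f" % (name, elapsed))
    return result

# Hypothetical usage inside a request handler:
# response = timed_call("UpdateCC", process_update_cc_request, request)
```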

If you have ever worked with eConnect, you likely know that the first time you call an eConnect method, it can take 5-10 seconds for it to respond.  Once the eConnect service is 'alive', it responds very quickly, but eventually it shuts down again, and there will be a delay when it processes the next request.

Since the log file told me that this was the first request of the day, I suspect that eConnect took several seconds to respond when I updated the customer information, and after 10 seconds, the customer's web site timed out, thinking that the GP request had failed.  Looking at the time stamps in the log, you can see that the eConnect customer update started at 10:56:18 and completed 5 seconds later at 10:56:23.  We can also see that updating the Authnet payment profile took another 5 seconds, pushing the process over the 10 second threshold.

The eConnect delay occurred in the test environment because there isn't much regular activity.  In the production environment, we have a process that checks the status of eConnect, SQL, and the third party credit card module every minute.  Because that process talks to eConnect every minute, the eConnect service always stays 'alive' and we avoid the startup delay.

Just a quick example of the value of logging and some basic diagnostics in a Dynamics GP integration.
