Friday, November 10, 2017

Free Precipio SFTP file transfer and data export tool - New Version 1.41 released

By Steve Endow

I have released a new version of my free SFTP file transfer and data export tool for Dynamics GP.

The new version 1.41 can be downloaded from my web site:

http://precipioservices.com/sftp/


Version 1.41 includes the following enhancements:


  • Add support for optional SQLTimeout setting in config file to increase SQL command timeout
  • Set default SQLTimeout to 60 seconds if setting is not present in config file
  • Increase SFTP Connection Timeout from 5 seconds to 30 seconds, and Idle Timeout from 10 seconds to 30 seconds



The SQLTimeout setting allows for longer-running queries, or for queries that produce larger export files.

The SFTP Connection Timeout was increased to accommodate SFTP servers that may not complete the connection process within 5 seconds.
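
To illustrate how the new SQLTimeout setting behaves, here is a minimal sketch in C#, assuming a hypothetical appSettings-style entry named SQLTimeout (the exact config file format is covered in the tool's documentation, not here): read the value if it is present, fall back to 60 seconds if it is missing or invalid, and apply it to the SQL command timeout.

using System.Configuration;
using System.Data.SqlClient;

class SqlTimeoutExample
{
    static void ExportData(string connectionString, string query)
    {
        // Fall back to 60 seconds if the setting is missing or not a valid number,
        // mirroring the default described in the 1.41 release notes
        int sqlTimeout;
        string configValue = ConfigurationManager.AppSettings["SQLTimeout"];
        if (string.IsNullOrEmpty(configValue) || !int.TryParse(configValue, out sqlTimeout))
        {
            sqlTimeout = 60;
        }

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(query, conn))
        {
            cmd.CommandTimeout = sqlTimeout;  // value is in seconds
            conn.Open();
            // ... execute the query and write the results to a CSV file ...
        }
    }
}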


If you use the SFTP application, please let me know! I'd love to hear how you are using it and if it is working well for you.












Wednesday, November 1, 2017

Beware of UTC time zone on dates when importing data into Dynamics GP!

By Steve Endow

Prior to this year, I rarely had to deal with time zones when developing integrations for Dynamics GP.

The customer was typically using GP in a US time zone, the SQL Server was on premises in that time zone, and all of their data usually related to that same time zone.  Nice and simple.

Dynamics GP then introduced the DEX_ROW_TS field to several tables, and I would regularly forget that field used a UTC timestamp.  That was relatively minor and easy to work around.

But with the increasing popularity of Software as a Service (SaaS) platforms, I'm seeing more and more data that includes UTC timestamps.  I didn't think too much about this until today, when I found an issue with how a SaaS platform provided transaction dates in their export files.

Here is a sample date value from a file that contains AP invoices:

    2017-09-05T14:26:05Z

This is a typical date time value, provided in what I generically call "Zulu time" format.  Apparently this format is defined in ISO 8601.

The format includes date and time, separated by the letter T, with a Z at the end, indicating that the time is based on the UTC time zone.

So why do we care?

Until today, I didn't think much of it, as my C# .NET code converts the full date time string to a DateTime value based on the local time zone, something like this:

string docDate = header["invoice-date"].ToString().Trim();
DateTime invoiceDate;
// DateTime.TryParse converts the "Z" (UTC) value to the machine's local time zone
bool success = DateTime.TryParse(docDate, out invoiceDate);
if (!success)
{
    Log.Write("Failed to parse date for invoice " + docNumber + ": " + docDate, true);
}

This seemed to work fine.

But after a few weeks of using this integration, the customer noticed that a few invoices appeared to have the incorrect date.  So an 8/1/2017 invoice would be dated 7/31/2017.  Weird.

Looking at the data this morning, I noticed this in the SaaS data file for the Invoice Date field:

2017-08-25T06:00:00Z
2017-08-21T06:00:00Z
2017-08-23T06:00:00Z


Do you see the problem?

The SaaS vendor is taking the invoice date that the user in Colorado enters, and is simply appending "T06:00:00Z" to the end of all of the invoice dates.

Why is that a problem?

Well, when a user in Colorado enters an invoice dated 8/25/2017, they want the invoice date to be 8/25/2017 (UTC-7 time zone).  When the SaaS vendor adds an arbitrary time stamp of 6am UTC time, my GP integration will dutifully convert that date into 8/24/2017 11pm Colorado time.

For invoices dated 8/25, that may not matter too much, but if the invoice is dated 9/1/2017, the date will get converted to 8/31/2017 and post to the wrong fiscal period.

To make things even more fun, I found that the SaaS vendor is also storing other dates in local time.

2017-09-05T08:24:36-07:00
2017-09-05T08:26:22-07:00
2017-09-05T08:28:13-07:00


So I have to be careful about which dates I convert from UTC to local time, which dates I truncate to just the date portion, and which dates are already in local time.  In theory, the .NET date parsing should handle the conversion properly, assuming the time zone information is correct, but I now know that I have to keep an eye on the vendor's data.
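
One defensive option for the invoice date field (just a rough sketch of the idea, using a hypothetical helper method, not necessarily how any particular integration handles it) is to parse the value as a DateTimeOffset and explicitly keep the UTC date for date-only fields, so the appended "T06:00:00Z" can't push the date back a day during the conversion to local time:

using System;
using System.Globalization;

class InvoiceDateHelper
{
    // Sketch only: treat a date-only field (like invoice date) as "the date the
    // user entered" by keeping the UTC date instead of converting to local time.
    static DateTime GetDateOnly(string rawValue)
    {
        DateTimeOffset parsed = DateTimeOffset.Parse(rawValue, CultureInfo.InvariantCulture);

        // "2017-09-01T06:00:00Z" stays 9/1/2017 here; converting it to
        // local time (UTC-7) would have turned it into 8/31/2017.
        return parsed.UtcDateTime.Date;
    }
}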

I will be contacting the vendor to have them fix the issue with the invoice dates--there is no good reason why they should be appending "T06:00:00Z" to dates.

Expect to see a lot more of this date format and related date issues as more customers adopt cloud-based solutions and services.







Tuesday, October 24, 2017

Why I don't accept foreign checks (aka North American banking is a mess)

By Steve Endow

Several years ago, I received a paper check, in the mail, from a Dynamics partner in Canada.  The partner was paying my US dollar invoice, and thought they were doing me a favor by drafting the check from their US dollar bank account at their Canadian bank.

Send a check in US dollars to pay a USD invoice--makes sense, right?

Nosiree.

I attempted to deposit the check at my local Bank of America branch using the ATM.  The ATM would not accept the check.  So I went inside the bank, stood in line, and then told the teller I wanted to deposit the check.  The teller looked at the check, and confusion ensued.

Eventually a manager came over and explained to me, with full confidence, and in no uncertain terms, that they were unable to accept the check.  He explained that the problem was not that the check was from a Canadian bank.  He said that the problem was that the Canadian check was issued in US Dollars.  He claimed that because the country of origin did not match the check currency, the branch could not accept the check.  That's the policy. (no it isn't)

So...how can I deposit the check?

The manager handed me a special envelope and a triplicate carbon copy form.  He said I needed to fill out the form and mail it with the check to a super special obscure department at Bank of America called "Foreign Clean Collections"--whatever that means.  Once the check is received by that department, it will review the check and coordinate with the foreign bank to get the funds transferred.  This process will take 6-8 WEEKS.

You're kidding me, right?  Nope.

So, being curious about this banking train wreck, I gave it a try.  I filled out the form and mailed the USD $1,000 check off to the super special department.

A few weeks later, a deposit shows up in my account for $800.  Yup, $200 less than the check.  In addition to having to wait several weeks for the deposit, I was charged $200 in bank fees!

After that nightmare, I stopped accepting any foreign checks.  I put a big red note on my invoice that says that I only accept credit cards and wire transfers from non-US customers. And guess what: That process has been working just fine for years.

This week, a Canadian partner didn't read my invoice, and didn't read my email with the invoice, and they mailed me a paper check.  The check is from their Canadian bank, issued in US Dollars.  Great.

So I contacted a colleague who regularly receives Canadian checks, and she said that she routinely deposits Canadian checks issued in USD at her local BofA branch without any issues.  Huh.

But having paid my $200 entrance fee to the Bank of America Foreign Clean Collections Club, I wasn't about to just deposit this new check, wait several weeks, and see how much I get charged.

So I did the obvious thing:  I called my local Bank of America branch.

First customer service rep:  "Sorry, I don't deal with those things. Let me transfer you to our back office."  Apparently the back office doesn't have voicemail and is out to lunch at 9am, as the phone rang for 3 minutes with no answer.  I tried calling the branch back, but this time nobody answered and I got a voice response system.  So the local bank branches are useless when inquiring about these things.

So I then called the main BofA customer service 800 number.  I spoke with someone who tried very hard to help, but she was unable to find any information and her computer and phone were unable to contact the department who might be able to help.  So she gave me the phone number to the Bank of America Foreign Exchange Call Center.

I then directly called the illustrious Foreign Exchange Call Center and spoke with someone who, for the first time, sounded like he understood the mysterious process of depositing foreign checks with Bank of America.

"Can I deposit this Canadian check drafted in US Dollars at my local California branch?", I asked

"Every check is reviewed on a case by case basis.", he replied

What?  What does that even mean?

"Every check is reviewed on a case by case basis.", he replied

So you have no consistent policy about depositing foreign checks?

"Yes, we have a very consistent policy that I just explained to you.  Every check is reviewed on a case by case basis.", he replied


After speaking with him for several minutes and apparently annoying him, here is my understanding of the official Bank of America policy / procedure for foreign checks.

1. Acceptance of a foreign check is completely up to the discretion of the BofA branch, and the inconsistent and incorrect training that a teller or branch manager may have received.  The branch can simply say they don't accept foreign checks. Or they can conjure up an excuse as to why they can't accept the check, like "the country of origin does not match the check currency".

2. If the branch is willing to try to accept the check, they can scan the check in their "system".  This "system" then determines if Bank of America is willing to accept the check at that branch.  Apparently this involves super secret algorithms about my "relationship" with the bank, the physical check, the bank that issued the check, the country of origin, the currency, the amount, etc. 

3. If the "system" determines that the branch can accept the specific check, apparently the check will be deposited in a fairly normal manner.

4. If the "system" determines that the branch cannot accept the check, then the magical process with the Foreign Clean Collections department kicks in, and you get the multi-part form, special envelope, a 6-8 WEEK processing time, and hundreds of dollars in fees that you will not be able to determine in advance.

5. The representative claimed that Bank of America only charges a flat $40 for the Foreign Clean Collections process, but that the issuing bank can charge their own fees for having to process the foreign check.  In my case, I was charged around USD $150 by the issuing Canadian bank just for the honor of cashing their USD check.  There is realistically no way for you to know how much the foreign bank will charge in advance.

6. I asked the representative how I was supposed to accept payments given the uncertainty and fees involved in this process.  He told me that they recommend wire transfers for foreign payments, and basically told me not to accept foreign checks.

What a shocking conclusion.

Naturally, I have received several responses from people saying that they accept foreign checks all the time at their bank and never have an issue.  Good for you, I say, enjoy the 1900s!  The Pony Express loves you!

I rarely receive such checks, don't want to have to drive to the bank to deposit them, and don't want to deal with clueless bank employees and the nightmare game-of-chance process outlined above.

Checks are a vestigial organ of banking and are a testament to the absurdly anachronistic North American banking system.  Talk to someone from any country with a modern banking system and ask them how many checks they issue.  "Checks?  What?" will be the response.  People from Singapore and Australia literally laugh in disbelief when I mention that the US still uses paper checks.

Wire transfers have been well established since the late 1800s and now provide same-day international funds transfers, usually for a reasonable fixed fee.  Credit cards are a de facto payment method for a massive volume of transactions in many countries, and have benefits like fraud protection and points, and the merchant pays the fees for those transactions--which I am happy to do.

And services like the excellent TransferWise provide very low cost EFT funds transfers to dozens of countries with an excellent exchange rate.

The only explanation I have for why North American consumers and businesses cling to checks is that our backwards banking system does not (yet) charge fees to shuffle around millions of pieces of paper with ink on them, pay the postage to mail them, scan those papers into digital images, and then perform an electronic funds transfer behind the scenes.  But banks do charge a fee if customers initiate a payment electronically through EFT / ACH or a wire transfer and no paper is involved.  It's crazy.

So, after wasting a few more hours researching this topic, I now have a clear decree, straight from the heart of Bank of America, and will continue to accept only credit card and wire transfer payments from non-US customers.  If it's good enough for the rest of the world, it's good enough for me.





Thursday, September 28, 2017

Back up your Dynamics GP SQL Server databases directly to Azure Storage in minutes!

By Steve Endow

At the GP Tech Conference 2017 in lovely Fargo, ND, Windi Epperson from Advanced Integrators had a great session about Disaster Recovery. One topic she discussed was the ability to use the Dynamics GP Back Up Company feature to save SQL backups directly to Azure.


I think doing SQL backups to Azure is a great idea. There are countless tales of SQL backups not being done properly or being lost or not being retained, and having an option to send an occasional SQL backup to Azure is great.

But this option is a manual process from the Dynamics GP client application, it is not scheduled, and it does not use the "Copy-only backup" option, so the backups will be part of the SQL backup chain if the customer also has a scheduled SQL backup job.  So as Windi explained, it may be a great option for very small customers who can reliably complete the task manually on a regular basis.

But how about setting up a backup job in SQL Server that will occasionally send a backup to Azure?

It turns out that the process is remarkably easy and takes just a few minutes to set up and run your first backup to Azure Storage.

NOTE: From what I can tell, SQL backups to Azure are supported in SQL 2012 SP1 CU2 or later.  And it appears that the backup command syntax may be slightly different for SQL 2012 and 2014, versus a newer syntax for SQL 2016.
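
To give a sense of what the backup command itself looks like, here is a rough sketch using the SQL 2012/2014-style syntax, where a SQL credential holds the storage account name and access key.  The storage account, container, and credential names are placeholders, the TWO database is just the GP sample company, and I'm running it from C# here to match the other examples--the same BACKUP statement works from a SQL Agent job.  Note the COPY_ONLY option, so this backup doesn't interfere with an existing scheduled backup chain.

using System.Data.SqlClient;

class AzureBackupExample
{
    static void BackupToAzure(string connectionString)
    {
        // Placeholder storage account, container, and credential names; the
        // credential must already exist and contain the storage account key.
        string sql = @"
            BACKUP DATABASE [TWO]
            TO URL = 'https://mystorageaccount.blob.core.windows.net/sqlbackups/TWO.bak'
            WITH CREDENTIAL = 'AzureBackupCredential',
                 COPY_ONLY, COMPRESSION, STATS = 10;";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            cmd.CommandTimeout = 0;  // backups can run well past the default 30 second timeout
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}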


The hardest part is setting up your Azure account and creating the appropriate Azure Storage account.  It took me a few tries to find the correct settings.

First, you have to have an Azure account, which I won't cover here, but it should be a pretty simple process.  Here is the sign up page to get started:  https://azure.microsoft.com/en-us/free/

Once you have your Azure account setup and have logged in to the Azure Portal (https://portal.azure.com), click on the "More Services" option at the bottom of the services list on the left.  In the search box, type "storage" and a few options should be displayed.

I chose the newer "Storage Accounts" option (not "classic").  To pin this to your services list, click the star to the right.


Tuesday, September 26, 2017

Free SFTP file transfer and data export tool for Dynamics GP file-based integrations

By Steve Endow

A somewhat common requirement for file-based integrations between Dynamics GP and external services or SaaS solutions involves uploading or downloading files from an SFTP server (SSH File Transfer Protocol, completely different from the similarly named FTP or FTPS).  SFTP has some technical quirks, so it is often a hassle for customers to automate SFTP file transfers as part of their Dynamics GP integrations.

Some of those integrations also involve exporting data from Dynamics GP to a CSV file and uploading that data to an SFTP server.

To handle this task, I have developed an application that can export data from GP, save it to a CSV file, and upload it to an SFTP server.  It can also download files from an SFTP server.  The tool is fully automated, can be scheduled using Windows Task Scheduler, and it includes file archiving, logging, and email notification in case of errors.

If you use Blackline, Coupa, IQ BackOffice, or any other provider or service that requires upload or download of files with an SFTP server, this tool may be helpful.  It can be used in place of WinSCP or similar tools that require command line scripting.
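
For context, here is roughly what an automated SFTP upload looks like in .NET--a sketch only, not the tool's actual implementation--using the open-source SSH.NET library (the Renci.SshNet NuGet package), with placeholder host, credentials, and paths:

using System.IO;
using Renci.SshNet;

class SftpUploadExample
{
    static void UploadFile(string host, string user, string password,
                           string localPath, string remotePath)
    {
        // Placeholder connection details; port 22 is the usual SFTP/SSH port
        using (var client = new SftpClient(host, 22, user, password))
        {
            client.Connect();
            using (FileStream fileStream = File.OpenRead(localPath))
            {
                // Third argument allows overwriting an existing remote file
                client.UploadFile(fileStream, remotePath, true);
            }
            client.Disconnect();
        }
    }
}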

I am offering this tool for free to the Dynamics GP community.  It can be downloaded from my web site at:

https://precipioservices.com/sftp/

The download includes a user guide and sample configuration file.  There are quite a few configuration settings, so please make sure to review the documentation to understand how the settings are used.

If you end up using the Precipio SFTP tool, I would love to hear which system or service you are using it with and how it ends up working for you.

I started a thread on the GPUG Open Forum if you want to discuss the SFTP tool:

https://www.gpug.com/communities/community-home/digestviewer/viewthread?GroupId=247&MessageKey=d6f5ce8b-1fdd-4fb1-abcc-9e7e529ce013&CommunityKey=4754a624-39c5-4458-8105-02b65a7e929e&tab=digestviewer


If you have questions or encounter issues, you can contact me through my web site at:

https://precipioservices.com/contact-us/












Wednesday, September 20, 2017

The 10th and 11th ways you can lose your SQL data...

By Steve Endow

Brent Ozar has an excellent post where he shares 9 stories about how customers lost some or all of their SQL Server data.

https://www.brentozar.com/archive/2015/02/9-ways-to-lose-your-data/


What's great about his stories is that as I read each one, I thought "Yep, I can totally see that happening."  A simple oversight, a small mistake, one person making a change without realizing it affected other systems, or simply forgetting to change back a single setting in SQL Server.  The one about invalid SQL SMTP settings preventing error emails from going out reminded me of my recent Synology drive failures, as I also had invalid SMTP settings and hadn't received the hundreds of error emails telling me I had a problem--so I am certain that is a common symptom.

While stories about hurricanes, floods, tornadoes, or fires may provide great drama for discussion about disaster recovery, I suspect that there are far more disasters that are caused by a few clicks of a mouse, followed by "Ooops." (or "OH MY GOD WHAT HAVE I DONE???")

I have two data loss stories of my own to add to the SQL data loss lore.


Pulling the Wrong Drive

Many years ago, I was a "business systems consultant" for a Big 6 (at the time) consulting firm and somehow ended up helping a customer with their Solomon IV implementation after their sole IT employee quit.  I knew Solomon IV, knew VB, knew SQL, and knew hardware, so I was juggling everything and helping them finish their implementation.

Their Hewlett Packard server that hosted the Solomon IV databases was having some issues with its RAID array.  The server had mirrored drives that hosted the database files, and occasionally that mirror would 'break' for no good reason.  Windows would mark one drive as inactive, and the server would run on one of the drives until we removed the inactivated drive, reinserted it, and repaired the array.  This had happened once or twice before, and I was on site at the customer when it happened again.  I checked Windows, checked the array, confirmed the mirror had broken.  I then pulled the drive, reinserted the drive, and then started the array rebuild.  No problem.

Shortly after that, a user noticed that a transaction they entered that morning was no longer available in Solomon.  Then another user.  Then another.  We eventually discovered that all of the transactions and data that had been entered that day were gone.  What happened?

After pondering for a while, I realized what I had done.  When the RAID mirror broke, Windows would say that one drive had been inactivated, but it wasn't always clear which drive had been inactivated.  You had to poke around to figure out if it was the drive on the left or the drive on the right--I don't remember the process, and it might have even been as high tech as watching to see which blinky light on one of the drives wasn't blinking.

I had either mis-read the drive info or not looked carefully enough, and I had pulled out the wrong drive.  The active drive.  The one that was working and had been saving the transactions and data that day.  After I reinserted the drive, I then chose the 'bad' drive, the one that hadn't been active at all that day, marked it as the primary, and then rebuilt the mirror with the old data from that drive.  Thereby losing the data that had been entered that day.

This was pre-SQL Server, so we didn't have transaction log backups, and even if we had a full backup from the prior evening, it wouldn't have helped, as it was only that day's data that was lost.  Fortunately, I think it was only mid-day, so the users only lost the data from that morning and were able to reconstruct the transactions from paper, email, and memory.

Ever since I made that mistake, I am extremely paranoid about which physical drive is mapped to RAID arrays or Windows drive letters.  If you've built a PC or server in the last several years, you may know that Windows will assign drive letters semi-randomly to SATA drives.  And when I had two bad drives in my Synology, I double and triple checked that the drive numbers provided by the Synology did in fact map to the physical drives in the unit, from left to right.

I'm hoping that I never pull the wrong drive again.


Test vs. Production

In Brent's blog post, he shared a story about someone logging into the wrong server--they thought they had logged into a test environment, but were actually dropping databases in production.

I have a similar story, but it was much more subtle, and fortunately it had a happier ending.

I was testing a Dynamics GP Accounts Payable integration script.  I must have been testing importing AP invoices, and I had a script to delete all AP transactions from the test database and reload sample data.  So I'm running my scripts and doing my integration testing, and a user calls me to tell me that they can't find an AP transaction.  We then start looking, and the user tells me that transactions are disappearing.  What?

As we were talking, all of the posted AP transactions disappeared.  All AP history was gone.

Well, that's weird, I thought.

And then it hit me.  My script.  That deletes AP transactions.  That I ran on the Test database.

But how?

Somehow, I apparently ran that script against the production company database.  I was probably flipping between windows in SQL Management Studio and ended up with the wrong database selected in the UI.  And the customer had so much AP data that it took several minutes to delete it all, as I was talking to the user, and as we watched the data disappear.

You know that gut wrenching feeling of terror when your stomach feels like it's dropped out of your body?  Followed by sweat beading on your brow?  That's pretty much how I felt once I guessed that I had probably accidentally run my Test Delete script on the production database.  Terror.

In a mad scramble that amazes me to this day, I somehow kept my sanity, figured out what happened, and came up with an insane plan to restore the AP data.  Fortunately, the customer had good SQL backups and had SQL transaction logs.  For some reason, I didn't consider a full database restore--I don't recall why--perhaps it was because it would require all users to stop their work and we would have lost some sales data.  So I instead came up with the crazy idea of reading the activity in the SQL log files.  Like I said, insane.

So I found an application called SQL Log Rescue by RedGate Software that allowed me to view the raw activity in SQL Server log files.  I was able to open the latest log file, read all of the activity, see my fateful script that deleted all of the data.  I was also able to view the full data of the records that were deleted and generate SQL scripts that would re-insert the deleted data.  Miraculously, that crazy plan worked, and SQL Log Rescue saved me.  I was able to insert all of the data back into the Accounts Payables tables, and then restart my heart.

Thinking back on it, I suspect that the more proper approach would have been to do a SQL transaction log backup and then perform a proper point-in-time recovery of the entire database.  Or I could have restored to a new database and then copied the data from the restore into production.  But as Brent's stories also demonstrate, we don't always think clearly when working through a problem.


So when you're planning your backup routines and disaster recovery scenarios, review the stories that Brent shares and see if your backup plans would handle each of them.  And then revisit them occasionally to make sure the backups are working and you are still able to handle those scenarios.


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.


You can also find him on Twitter, YouTube, and Google+



Tuesday, September 19, 2017

eConnect error: The target principal name is incorrect. Cannot generate SSPI context.

By Steve Endow

A customer recently encountered this error with a Dynamics GP eConnect integration:


The target principal name is incorrect. Cannot generate SSPI context.

Just before this error was reported, a new version of a custom Dynamics GP AddIn had been deployed, so I got the support call, as the partner and customer thought the error was related to the new AddIn.

But this error is related to the eConnect user authentication with SQL Server, so deploying a new DLL shouldn't have affected that authentication.

I recommended that the customer's IT team check the status of the eConnect Windows service on the terminal server and try restarting it.  The eConnect service was running, but when they restarted the service, they received a login error.

It seems that some other process on the client's network was attempting to use the Active Directory account assigned to the eConnect service on the terminal server.  That other process, whatever it is, apparently has an invalid or old password for the domain account, so it was failing to log in and locking the Active Directory account.

Once the account was locked, the eConnect service on the terminal server would begin receiving the SSPI context errors, as its authentication with SQL Server would fail once the account was locked.

The IT team had previously tried to reset the eConnect account password, but it would just get locked out again by the mystery app or process that was still trying to use the same domain account.  So I recommended that they create a new dedicated domain account for use by the eConnect Windows service on the terminal server.

Once they set up the new domain account and updated the eConnect Windows service to use the new account, the problem went away.

However, this morning the error seemed to occur again, but restarting the eConnect service appears to have resolved it.  Given this odd recurrence, there may be some other cause or detail contributing to the problem.

Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+