Wednesday, February 14, 2018

My latest rookie SQL mistake...

By Steve Endow

I just discovered a fun mistake that I made in a SQL script.  It's a rookie mistake, but a somewhat subtle one that I think is easily missed in many projects.

I developed a Dynamics GP SOP Invoice import for a customer using .NET and eConnect.  It has been in use for over 3 years and has worked great, but the customer recently hit a scenario that finally uncovered a latent bug.

After reviewing my code and looking at the data that triggered the bug, I found that I had a design flaw in a SQL statement.  The flaw wasn't discovered during testing because I never anticipated a specific use case and boundary condition, so I never tested the scenario, and it took over 3 years for the customer to encounter it.

The customer is unique in that they will import an invoice, such as invoice number 123456, that relates to contract number 123456.  Then a few days later they will need to make an adjustment to the contract, so they will issue a related invoice to add services to the contract.  To help track the related transaction, the new invoice is imported into GP with a suffix on the invoice number, such as 123456-1.  A few days later, they will issue a credit memo to decrease the contract amount, and that CM will be imported as document number 123456-2, etc.  These numeric suffixes are added to the document number by the eConnect import.
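To put the suffix logic in concrete terms, assigning the next suffix amounts to finding the highest suffix already used for the base document number.  A minimal sketch of that lookup, using the SOP work and history header tables and the example number above, might look like this (illustrative only, not the actual import code):

-- Hypothetical sketch: find the next available suffix for base document number 123456.
-- Checks both the SOP work header (SOP10100) and the history header (SOP30200).
DECLARE @BaseDoc varchar(21) = '123456'

SELECT ISNULL(MAX(CAST(SUBSTRING(SOPNUMBE, LEN(@BaseDoc) + 2, 20) AS int)), 0) + 1 AS NextSuffix
FROM (
    SELECT RTRIM(SOPNUMBE) AS SOPNUMBE FROM SOP10100 WHERE SOPNUMBE LIKE @BaseDoc + '-%'
    UNION
    SELECT RTRIM(SOPNUMBE) FROM SOP30200 WHERE SOPNUMBE LIKE @BaseDoc + '-%'
) docs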

Last week, the customer emailed me with a problem.  They were getting this eConnect error:

Wednesday, January 31, 2018

Are you "closing the loop" with your Dynamics GP system integrations?

By Steve Endow

I've been developing system integrations for so long that I sometimes forget that some parts of system integration design may not be obvious to customers I work with.

I have been working with a company that is integrating their document management solution into Dynamics GP.  Their software will capture a scan of a document, such as a vendor invoice, their workflow will route the document image, a user will verify the vendor and the expense accounts, and once the workflow is complete, the software will import the invoice data into Dynamics GP using eConnect.  Once the invoice is imported, a GP user can view the Payables Transaction invoice in Dynamics GP, and also open the scanned invoice image directly from GP.

Okay, that sounds pretty straightforward...

The initial version of the integration with Dynamics GP is asynchronous, using XML files.  The document management system exports an XML file containing the "metadata" for the vendor invoice, and the GP import is a scheduled task that regularly picks up those XML files and imports the data into Dynamics GP as transactions.  (Aside: This design was chosen for its simplicity, as the initial version of the integration was a prototype--a proof of concept to explore how the system could be integrated with Dynamics GP.)
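For example, the metadata file for a vendor invoice might look something like this (a hypothetical layout, not the document management vendor's actual schema):

<PayablesInvoice>
  <VendorID>ACME001</VendorID>
  <DocNumber>INV-10422</DocNumber>
  <DocDate>2018-01-15</DocDate>
  <DocAmount>1250.00</DocAmount>
  <ExpenseAccount>01-6100-00</ExpenseAccount>
  <ImageLink>\\docserver\images\INV-10422.pdf</ImageLink>
</PayablesInvoice>

The scheduled import reads these fields and maps them to the corresponding eConnect payables transaction fields.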

Okay, so...your point is...?

Saturday, January 20, 2018

Are SQL Server subqueries bad? Let's find out!

By Steve Endow

For the past several years, I've noticed that I have generally avoided using subqueries, based on a suspicion that they are probably less efficient than a JOIN.  I do still use them for one-time or ad-hoc queries where performance isn't a concern, but I have been avoiding them for any production queries that I release.  But I haven't done any research to support my suspicion.

This Saturday morning, while working on another SQL query optimization issue, I figured I would try a test to compare the performance of a simple subquery vs. a JOIN.

What do you think?  Do you think that subqueries are typically slower than JOINs?  If so, how much slower?

Here's a YouTube video where I test my sample queries and review the results.

Before doing my test, I searched the interwebs and found a post (unfortunately the images aren't displaying for me) that appears to definitively demonstrate that correlated subqueries perform much worse than equivalent JOIN queries.  The explanation made sense.

To qualify my test setup: these queries were run on SQL Server 2014 SP1 using Management Studio 17.4 against the GP 2016 Fabrikam/TWO database.  It's also worth noting that I have 73,752 records in my SOP30300 table--probably quite a bit more than a vanilla TWO database.  I suspect this matters, as the results may differ on other SQL Server versions and may vary based on the number of tables and records.
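To make the comparison concrete, here is the general shape of the two approaches, sketched against the SOP history tables (these are illustrative queries, not necessarily the exact ones from the video):

-- Correlated subquery version: count the lines for each historical SOP document
SELECT h.SOPTYPE, h.SOPNUMBE, h.CUSTNMBR,
    (SELECT COUNT(*) FROM SOP30300 l
     WHERE l.SOPTYPE = h.SOPTYPE AND l.SOPNUMBE = h.SOPNUMBE) AS LineCount
FROM SOP30200 h

-- JOIN version of the same question
SELECT h.SOPTYPE, h.SOPNUMBE, h.CUSTNMBR, COUNT(l.SOPNUMBE) AS LineCount
FROM SOP30200 h
LEFT JOIN SOP30300 l ON l.SOPTYPE = h.SOPTYPE AND l.SOPNUMBE = h.SOPNUMBE
GROUP BY h.SOPTYPE, h.SOPNUMBE, h.CUSTNMBR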

Wednesday, January 17, 2018

Dynamics GP Integrations: Eliminate the need for perfection

By Steve Endow

I had a call with a customer this morning to review an error they were seeing with their AP Invoice integration.  The custom integration is moderately complex, importing invoices from 90 retail locations into 30 different Dynamics GP companies, with Intercompany distributions.

There's an interesting twist to this integration.  The invoice data contains a retail store number, but it doesn't tell the import which GP company the invoice belongs to.  And there is nothing in the GP companies indicating which retail stores belong to which database.  Some retail stores have their own dedicated GP company database, while other retail stores are managed together in a GP company database.  The users just know which retail store belongs to which GP company.

So how does the integration figure out which company an invoice belongs to?

We could have created a mapping table, listing each retail store ID number and the corresponding GP company database.  But the problem with mapping tables is that they have to be maintained.  When a new retail store is opened, or a new GP database is created, users will invariably forget to update the custom mapping table.

So for this integration, I tried something new.  The one clue in the invoice data file is a GL account number.  The first segment of the GL account is a two-digit number that uniquely identifies the Dynamics GP company.  Like this:

01 = Company A
03 = Company B
06 = Company C
25, 31, 49 = Company D

So, the integration reads the GL expense account assigned to the invoice, and uses that to determine which company the invoice belongs to.

When the integration launches, it queries all of the GP databases to determine which Segment 1 values are used in each database.

DECLARE @INTERID varchar(10) = ''
DECLARE @SQL varchar(MAX) = ''

-- Build one UNION'd SELECT per company database listed in the DYNAMICS..SY01500 company master
DECLARE company_cursor CURSOR FOR SELECT RTRIM(INTERID) FROM DYNAMICS..SY01500
OPEN company_cursor
FETCH NEXT FROM company_cursor INTO @INTERID
WHILE @@FETCH_STATUS = 0
BEGIN
       IF @SQL <> '' BEGIN SET @SQL += ' UNION '; END
       SET @SQL += ' SELECT ''' + @INTERID + ''' AS INTERID, (SELECT COUNT(DISTINCT ACTNUMBR_1) FROM ' + @INTERID + '..GL00100) AS Segment1Values, (SELECT TOP 1 ACTNUMBR_1 FROM ' + @INTERID + '..GL00100) AS CompanyID';
       FETCH NEXT FROM company_cursor INTO @INTERID
END
CLOSE company_cursor
DEALLOCATE company_cursor

-- Run the assembled query to get one row per company
EXEC (@SQL)
It is then able to use this "mapping" to match invoices to databases based on the GL expense account.

But, this scheme is based on the critical assumption that in Company A, every single GL account will always have a first segment value of 01.  And Company B will always have a segment 1 value of 03.  Or Segment 1 value of 25, 31, and 49 will only ever exist in Company D.  For every account.  No exceptions.

I'll let you guess what happens next.

A user enters a "06" account in Company A.  And another user enters a "01" account in Company B.

Despite the customer's insistence that this would never happen, and that they always make sure that only one unique Segment 1 value is used in each company, someone ends up entering a Segment 1 value in the wrong company.

Am I surprised by this?  Not at all.  Whenever the word "never" is used during integration design discussions, that's always a clue.  I substitute it with "usually" or "mostly".  There are almost always exceptions, whether intentional or unintentional.

So now what?  If the program can't ensure that the Segment 1 values are unique to each company, what can it do?

Well, the second layer is that during the import process, the integration checks the destination company database to verify that the GL account exists.  If it queries Company A for a 06 GL account and doesn't find it, it logs an error and that invoice isn't imported.  This is the error that was logged this morning.
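A check like that boils down to a single lookup against the destination company's account master, along these lines (the database name and account string here are hypothetical):

-- Hypothetical sketch of the existence check: if no row comes back, the account
-- doesn't exist in that company and the invoice is logged as an error.
SELECT ACTINDX FROM CompanyA..GL00105 WHERE ACTNUMST = '06-5555-00'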

But then what?  The customer insists, again, that they only use the 06 accounts in Company C, so the import must be wrong.  So we run the above query again and find that someone accidentally entered a 06 account in Company A, which confused the import.  And the customer is shocked that such a mistake could happen.  For the third time.

But I'm not satisfied with this process.  Because 6 months from now, it's going to happen again.  And they'll blame the integration again.  And we'll have to manually run the query again and find which account was added to the wrong company.

So let's just assume that this mistake is going to continue to happen and deal with it.  I'm thinking that I need to modify the integration to have it review the results of the query above.  If it finds that 06 is present in more than one GP database, it needs to log an error and let the user know.

"Hey, I found account 06-5555-00 in Company A. That doesn't look right. Please look into it."

This will proactively identify that an issue exists, identify the specific account, identify the company, and give the user enough information to research and resolve the problem.
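The check itself is simple.  Assuming the per-company Segment 1 values gathered by the query above are collected into a working table (the #Segment1Map name is just for illustration), it could be something like:

-- Hypothetical sketch: flag any Segment 1 value that appears in more than one company database.
-- #Segment1Map holds one row per company (INTERID) and distinct Segment 1 value (ACTNUMBR_1).
SELECT ACTNUMBR_1, COUNT(DISTINCT INTERID) AS CompanyCount
FROM #Segment1Map
GROUP BY ACTNUMBR_1
HAVING COUNT(DISTINCT INTERID) > 1

Any rows returned by a query like this would be logged and surfaced to the user before the import continues.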

It assumes the mistake will happen. It eliminates the need for perfection in a complex process in a moderately complex environment, where employees have 90 other things on their minds.  And it should only take a few lines of code--a one time investment that will save time for years into the future.

So why not do this for other possible exceptions and issues?  If you can identify other potential mistakes or errors, why not just code for all of them?  Because there are endless possible exceptions, and it would cost a fortune to handle all of them, most of which will never occur.

I usually release an initial version of an integration, identify the exceptions, and quickly handle the errors that do occur.  When a new exception comes up, I handle it.  In my experience it's usually a surprisingly small number, something like 3 to 5 different data issues that cause problems.

So that's my philosophy: Eliminate the need for perfection in integrations whenever possible or practical.

Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+

Friday, December 29, 2017

Implementing an Inbox Zero workflow using Outlook on Windows and iPhone

By Steve Endow

Uncle.  I give up.  I have lost the fight. 

Email has won.  I am defeated.

What was once a great tool for communication has become an overbearing hassle that has destroyed my productivity.

I receive around 50 to 75 emails every weekday.  On a very bad day, I'll hit 100 emails.  I've determined that 100 inbound emails a day is completely unmanageable for me.  With my current processes (or lack thereof), I cannot possibly be productive with that many emails coming at me.  The number of responses and tasks from 100 emails prevents me from doing any other work.

If all I did was "manage" my email all day, and do nothing else, I could probably wrangle my Inbox, but I wouldn't get any "real work" done.  When I focus on doing real work and ignore my email for a day, my Inbox explodes.

It isn't just the emails themselves.  It's also that many of the emails have some type of commitment attached to them.

"Hey Steve, please review this thread of 30 cryptic replies below and let me know what you think."

"Here's the 15 page document I created, please proofread it."

"When can you schedule a call?"

"We are getting an error.  What is causing this?"

"Here are links to a forum post and KB article. Does this explain the error I'm getting?"

"How many hours will it take you to do X?"

"I sent you an email earlier?  Did you get my email?  Can you reply to my email?"

People seem to expect a relatively prompt reply to their emails--because they think their request is the most important, naturally, and because I don't have any other work to do, right?

This week, a link to this article appeared in my Twitter feed:

One-Touch to Inbox Zero
By Tiago Forte of Forte Labs

I had heard of Inbox Zero before, but I dismissed it as a bit of a gimmick without fully understanding it.

This time, I actually read the article by Tiago Forte and his explanation finally clicked for me.  His examples and analogies made sense, and his emphasis on email as the first step of a more comprehensive communication and productivity workflow helped me build a new interpretation of Inbox Zero.

Thursday, December 21, 2017

Accepting help from experts and offering help as an expert

By Steve Endow

I've recently had two situations where someone asked for help with Dynamics GP, and when I provided guidance, the requester indicated that my suggestions were not relevant.  Without considering my suggestions or trying them, the requester immediately ruled them out.

They were simple suggestions, such as "please try making this change and perform the process again to see if that resolves the error", or "have you traced your source data to verify that it isn't the cause of the incorrect transaction that was imported?".

"That can't be the cause." was one response.

"My custom stored procedure that imports data into GP verifies everything, so I know it worked properly." was another response.

Another common response I receive when troubleshooting issues is, "We've already checked that and it's not the cause of the problem."

I don't consider myself an "expert" at anything, but there are some topics where I've done enough work to build up a certain level of knowledge, intuition, and skill, such that I'm generally able to narrow down the causes of problems and know some good places to start looking.  I've had enough success solving problems in certain areas that my approach seems to generally work.

When someone asks for help and then immediately dismisses my initial recommendations without even trying them, how can I help them?  Maybe they don't know who I am or what experience I have, and they're skeptical of my suggestions.  What can I do then?

Do I gently explain that I've worked with over 400 customers in this specific domain, and that my anecdotal statistics would not support the assertion that their integration is infallible or that Dynamics GP is at fault?  Is it my job to convince them that I tend to have a fairly good grasp of the subject matter and that they should reconsider my suggestion?  Is there any point in arguing with someone who has asked for help, but isn't accepting my help?

"Experts" don't know everything and can't always immediately pinpoint causes or solutions.  But if they ask questions, ask for more information, or ask you to test something, isn't it in your best interest to at least try working with them?  If you're not willing to work with an expert, what are your alternatives?

Instead of immediately ruling out suggestions, welcome them as opportunities to learn. Collect new data. Make new assessments. Understand what they are thinking.

Be inquisitive and curious and humble. Don't be defensive or righteous. This applies to the person asking for help, as well as the expert being asked.

Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+

Wednesday, December 20, 2017

Building a Dynamics GP test environment on a B-series Azure Virtual Machine: Not so fast!

By Steve Endow

With the recent release of Dynamics GP 2018, I wanted to set up a new virtual machine that I could use for testing and development.

I currently run my own Hyper-V server, which serves up 20 different virtual machines, and it has been very low cost and extremely fast.  I would be happy to outsource my VMs to the "cloud", but having looked into the cost several times over the last few years, it just isn't economical for me.  I previously estimated it would cost me over $300 a month to host just a few VMs.  That cost, on top of having to severely limit the number of VMs I can run, just didn't make sense for hosting my internal development VMs.

But recently, fellow MVP Beat Bucher told me about a new, lower-cost Azure VM option:  the B-Series "burstable" VMs.

Beat explained that he was able to run two of the B4ms machines continuously for a cost of roughly $150 per month.  I was intrigued.

After reviewing the different sizes, I set up a new B2ms virtual machine on Azure, running Windows Server.  The provisioning process was very simple and fast, and I had a VM a few minutes later.

I then downloaded and installed SQL Server and SQL Management Studio.  There were a few subtle hints that something wasn't quite right, but at the time the machine seemed great.

I then downloaded the 1.6 GB Dynamics GP 2018 DVD as a zip file.  Just as with the SQL Server download, I noticed that the Chrome browser didn't show the download status for the GP 2018 zip file.  When I opened Windows File Explorer, nothing showed up in the download directory during the download, or even after the download appeared to complete.  It took quite a while for Windows File Explorer to show the downloaded file.

I noticed Windows File Explorer seemed unresponsive as well.  It just didn't feel right, but I hadn't yet pieced together the clues.

I then tried to unzip the GP 2018 file.  That's when it was clear something was wrong.

This status window appeared, showing that it would take over 30 minutes to extract the 1.6 GB zip file.  What??  1.36 MB/s?

I then did dozens of other tests, simply copying large (1GB+) files on the C: drive and between the C: and D: temporary drive.  The performance was abysmal.

After several tests, I noticed that on average, the file copies were clearly being throttled around 21-22MB/s.

What in the world was going on?

The B-Series VMs are supposed to have "Premium SSD" storage, and 21MB/s is definitely not SSD performance.

I submitted an Azure support case and after several days, received a response.  The support rep admitted that because the B-Series VMs were relatively new, he didn't have much experience with them and would need me to do some tests to narrow down the cause.  No problem.

He first had me "redeploy" the Azure VM, which apparently pushes the VM to a new "node" or physical host machine.  I completed that process and tested again, but got the same results: file copies were still painfully slow.

He then had me install the Performance Insights plugin on the VM, which apparently runs some automated performance tests and automatically submits the results to the support case (a very cool feature).  I completed that process and a few days later, he emailed me with an explanation for the slow disk performance I was seeing.

This is the critical information that I overlooked when selecting the B-Series VM:

Notice that the B2ms size has a maximum disk speed of 22.5 MB/s.  That is the maximum.

The B4ms offers 35MB/s and the B8ms tops out at 50MB/s.  50 sounds a lot better than 22.5, but even 50MB/s is horrifically slow compared to any competent modern storage.

Even if you add an additional high performance Premium SSD, such as a 1023 GB drive with 5,000 IOPS and 200 MB/s throughput (which is VERY expensive), attaching it to a B2ms VM means you will still be limited to 22.5 MB/s.

For comparison, my local Hyper-V server can copy files at 100MB/s from my NAS, and the limiting factor is the gigabit network connection between the NAS and the server, not my NAS or the SSDs in my server.

Local file copies on the SSDs on my Hyper-V server can be as high as 1GB/s!! It's so fast that I had a very hard time getting a screen shot while copying the 1.6GB Dynamics GP 2018 zip file.
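To put those numbers in perspective, copying a 1.6 GB file works out to roughly 70 seconds of pure transfer time at 22.5 MB/s, about 16 seconds at 100 MB/s, and under 2 seconds at 1 GB/s.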

If you are used to even half-decent disk performance on a server, can you live with 22.5 or 35 MB/s on an Azure B-Series VM?

And am I willing to spend an extra hour or two setting up an Azure B-Series VM, due to its brutally slow disk IO, for a Dynamics GP 2018 test environment?  Am I confident that once I set it up and don't have to do many large file copies, that the disk performance will be sufficient for my needs?

Can SQL Server actually run well enough on a disk throttled at 22.5 MB/s?  Now that I see the disk specs, I am pretty sure that the B-Series was never intended to run SQL Server.

And I'm not willing to waste my time to find out.  Those disk speeds are so slow that I am not confident that the B-Series VM will meet my needs even for a test + development server.  Even if I used the B4ms, that's roughly $75 a month for a potentially painfully slow VM.

So, I have ruled out the B-Series Azure VMs for now, and would have to look at the "standard" VMs, which would likely still cost $150-$300 per month for 1-2 non-production VMs.

Since I have a very fast Hyper-V server in my office that can easily host 20 VMs at a marginal cost of $0 per month per VM, it seems that I will be sticking with an on-premises server for at least a few more years.

Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+