Friday, August 11, 2017

Tales of Dynamics GP backups and ransomware

By Steve Endow

At the excellent Dynamics GP Tech 2017 Conference this week, I heard a few interesting stories about ransomware at Dynamics GP customers.

One partner told me a very interesting story about ransomware at a customer that encrypted everything, including the customer's Dynamics GP database backups.  The Dynamics GP partner was called in and he assessed the catastrophe.  Nothing was recoverable.

But he noticed something strange.  Dynamics GP was still working.  He logged into the SQL Server, and he saw that the Dynamics GP databases were still intact and were not encrypted.  He speculated that because SQL Server tenaciously locks the MDF and LDF files, the ransomware was apparently unable to encrypt the live database files.

He was able to stop the SQL Service, quickly copy all of the database files, and attach them on a clean SQL Server.  Luckily, that copy process worked and the ransomware was either inactive at that point, or it didn't have time to encrypt the unlocked database files.  In hindsight, I think I would probably first try doing full backups of all of the databases to ensure the MDF and LDF files remained locked, but saving the backup files to a clean location that can't be accessed by the ransomware would probably still be tricky.
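For reference, the "full backups first" idea would look something like this in T-SQL; the database names and the UNC path are placeholders, and the destination would need to be a share the ransomware can't write to:

```sql
-- Back up the live Dynamics GP databases to a clean location while
-- SQL Server still holds its locks on the MDF/LDF files.
-- DYNAMICS is the GP system database; TWO is the sample company database.
BACKUP DATABASE DYNAMICS
    TO DISK = '\\CLEANSERVER\GPBackup\DYNAMICS.bak'
    WITH INIT, COMPRESSION, CHECKSUM;

BACKUP DATABASE TWO
    TO DISK = '\\CLEANSERVER\GPBackup\TWO.bak'
    WITH INIT, COMPRESSION, CHECKSUM;

-- Verify each backup is readable before trusting it.
RESTORE VERIFYONLY FROM DISK = '\\CLEANSERVER\GPBackup\TWO.bak';
```

The WITH CHECKSUM option makes SQL Server validate page checksums as it writes the backup, which is cheap insurance in a recovery scenario like this one.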

Next, during her "Microsoft Azure: Infrastructure, Disaster Recovery, and Backups" session, Windi Epperson shared some harrowing stories about tornadoes in Oklahoma.

Some of Windi's customers have had entire buildings vaporized by a tornado, so even the best on-site backup would have been insufficient.  Windi discussed the Azure Backup service, which I didn't even know about, as a flexible and economical way to get all types of backups off site.  She also demonstrated the Dynamics GP backup to Azure feature that she recommends for small customers who don't have the IT staff to handle off site backups.

I currently have a lot of my data backed up on Backblaze B2 storage through my Synology NAS device, but that is only a connected sync process, not a true archive backup.  I've been looking for a more traditional disconnected off-site backup storage service that is reasonably priced, so I'm going to look into Azure Backup and see if I can set up a process that automatically backs up what I need.

You can also find him on Twitter, YouTube, and Google+

Adding or dropping SQL indexes temporarily on production database tables

By Steve Endow

During one of my presentations on Optimizing SQL Scripting at the GP Tech 2017 Conference this week at the Microsoft campus in Fargo, one of the attendees asked an interesting question:

Suppose you need to run a complex query just a few times, and you find that the query would benefit greatly from adding new indexes to one or more tables.  Rather than adding 'permanent' indexes, would it be prudent to temporarily add the indexes so that you could run the query faster, and then remove the indexes when you no longer need them?

I think it is a great question.  I immediately thought of one likely real-world scenario for this, and coincidentally, I shared a Lyft back to the airport with a consultant who described a second scenario where a slightly different index management process was required.

Scenario 1:  Imagine that at the end of each financial quarter, dozens of large, complex financial and sales analysis reports are run against dozens of Dynamics GP company databases.  If some reports take a minute to run, and a few indexes can be added to reduce the report run times to a few seconds, that time savings could really add up.  I could definitely see the value of adding indexes to speed up this process.

But is it worth adding permanent indexes to tables to support the quarterly reports?  Or is it better to add the indexes once per quarter, run the reports, and then remove the indexes?

I don't currently know how to assess the actual costs vs. benefits of those situations, but given that Dynamics GP is already drowning in SQL indexes, and given that the indexes may be dropped during a GP upgrade (and it's easy to forget to recreate them), I think that creating the indexes temporarily seems like a reasonable solution for this hypothetical example.

The one concern I expressed about the solution was the potential for the CREATE INDEX process to lock the tables while the indexes were being built.

I did some research, and confirmed my concern that a table will be locked and inaccessible during the CREATE INDEX process.

This is mentioned in the "Performance Considerations" section of this Books Online page:

Most Dynamics GP customers use SQL Server Standard Edition, so indexes are created "offline", and the table is locked until the create index operation completes.

SQL Server Enterprise edition does have an "online" indexing option, but from what I have been able to find, even that feature doesn't provide 100% accessibility of the table during the indexing operation, so there may be some challenges in very high volume environments.

If the temporary indexes make sense, my recommendation would be to add the indexes during a maintenance window, such as late at night, then run the queries the next day (or next few days), and remove the indexes when they are no longer needed.
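If you wanted to try the temporary index approach, the shape would be something like this; the index name, table, and column list are illustrative only, not a tuning recommendation for a specific GP table:

```sql
-- Before the quarterly reporting run: add a covering index.
-- SOP30300 is the GP historical sales line table; the column choices
-- here are an example only.
CREATE NONCLUSTERED INDEX IX_TMP_SOP30300_ITEMNMBR
    ON dbo.SOP30300 (ITEMNMBR)
    INCLUDE (QUANTITY, XTNDPRCE);
-- On Enterprise edition, appending WITH (ONLINE = ON) reduces blocking.

-- After the reports are done: remove the index so it doesn't slow
-- inserts or get silently dropped during the next GP upgrade.
DROP INDEX IX_TMP_SOP30300_ITEMNMBR ON dbo.SOP30300;
```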

Scenario 2:  A Dynamics GP consultant told me a story about a prior job where he had to bulk load millions of records into a table on a regular basis.  The bulk load had a very limited time window, so the import had to be completed as quickly as possible.

In order to speed up the import process, they dropped the indexes on the table, imported the millions of additional records into the table, and then added the indexes back to the table.  I hadn't considered that scenario before, but he explained it worked very well.

I was able to find this Books Online article with recommendations on when to drop or keep indexes for bulk load operations.  It provides guidance depending on whether the table is empty and how much new data is being imported.
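The drop-load-recreate pattern he described looks roughly like this; the staging table, index name, and file path are hypothetical:

```sql
-- Disable the nonclustered index before the bulk load so SQL Server
-- doesn't have to maintain it row by row during the insert.
-- (Only disable nonclustered indexes--disabling the clustered index
-- makes the table inaccessible.)
ALTER INDEX IX_Staging_CustomerID ON dbo.ImportStaging DISABLE;

-- Run the bulk load.
BULK INSERT dbo.ImportStaging
    FROM 'C:\Temp\import.dat'
    WITH (TABLOCK, BATCHSIZE = 100000);

-- Rebuild the index once, over the complete data set.
ALTER INDEX IX_Staging_CustomerID ON dbo.ImportStaging REBUILD;
```

One rebuild over millions of rows is typically much faster than millions of incremental index updates during the load, which is why this pattern works so well.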

So I learned a few interesting things myself during my session.  Hope this was helpful!


Wednesday, June 21, 2017

Common GP integration error: Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=XX.0.0.0

By Steve Endow

If you have a .NET integration for Dynamics GP that uses the eConnect .NET assemblies, this is a fairly common error:

Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=


This usually indicates that the integration was compiled with an older (or different) version of the eConnect .NET assemblies.

Why does this happen?

In my experience, there are two situations where you will usually see this.

1. You upgraded Dynamics GP to a new version, but forgot to update your .NET eConnect integrations.  For instance, if you upgraded from GP 2013 to GP 2016, you would see the "Version 12" error message when you run your integration, as the integration is still trying to find the GP 2013 version of eConnect.

2. You are working with an application or product that is available for multiple versions of GP, and the version you have installed doesn't match your GP version.

The good news is that this is simple to resolve.  In the first case, the developer just needs to update the Visual Studio project to point to the proper version of the eConnect DLLs.  Updating the .NET project shouldn't take very long--maybe 1-4 hours to update and test, depending on the complexity of the integration.  Or if you're using a product, you just need to get the version of the integration that matches your GP version.

If you have a custom .NET integration, the potential bad news is that you, or your developer, or your GP partner, needs to have the .NET source code to update the integration.  Some customers encounter this error when they upgrade to a new version of GP, and realize that the developer who wrote the code left the company 3 years ago and they don't know where the source code might be.  Some customers change GP partners and didn't get a copy of the source code from their prior partner.

If you can't get a copy of the source code, it is theoretically possible to decompile most .NET applications to get some or most of the source code, but in my limited experience as a novice user of such tools, decompilation just doesn't provide a full .NET project that can be easily updated and recompiled.  Or if it does, the code is often barely readable, and would be very difficult to maintain without a rewrite.


Saturday, June 10, 2017

ASP.NET Core and EF Core with Dynamics GP: Trim trailing spaces from char fields

By Steve Endow

UPDATE: A kind soul read this post, nodded his head in commiseration, and then offered a better solution.  See option 5 below for the best approach I've seen so far.

Anyone who has written SQL queries, built integrations, or otherwise had to deal with Dynamics GP data certainly has warm feelings about the colonial-era use of the char data type for all string fields.

This has the lovely side effect of returning string values with trailing spaces that you invariably have to deal with in your query, report, application, XML, JSON, etc.

In the world of SQL queries, you can spot a Dynamics GP consultant a mile away by their prolific use of the RTRIM function in SQL queries.  .NET developers will similarly have Trim() statements thoroughly coating their data access code.
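For example, a typical query against RM00101, the GP customer master table, ends up looking like this:

```sql
-- Without RTRIM, the char columns come back padded with spaces
-- to their full declared width.
SELECT RTRIM(CUSTNMBR) AS CUSTNMBR,
       RTRIM(CUSTNAME) AS CUSTNAME
FROM RM00101
ORDER BY CUSTNMBR;
```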

But in this bold new age of Microsoft development tools, where everything you have spent years learning and mastering is thrown out the window, those very simple solutions aren't readily available.

I am developing an ASP.NET Core web API for Dynamics GP, and being a sucker for punishment, I'm also using EF Core for data access.  In one sense, EF Core is like magic--you just create some entities, point it to your database, and presto, you've got data.  Zero SQL.  That's great and all if you have a nice, modern, clean, well designed database that might actually use the space age varchar data type.

But when you're dealing with a relic like a Dynamics GP database, EF Core has some shortcomings.  It isn't really designed to speak to a prehistoric database.  Skipping past the obvious hassles, like exposing the cryptic Dynamics GP field names, one thing you'll notice is that it dutifully spits out the char field values with trailing spaces in all of their glory.

When you convert that to JSON, you get this impolite response:

"itemnmbr": "100XLG                         ",
"itemdesc": "Green Phone                                                                                          ",

"itmshnam": "Phone          "

Yes, they're just spaces, and it's JSON--not a report output, so it's not the end of the world.  But in addition to looking like a mess, the spaces are useless, bloat the response, and may have to be trimmed by the consumer to ensure no issues on the other end.

So I just spent a few hours trying to figure out how to deal with this.  Yes, SpaceX is able to land freaking rockets on a floating barge in the middle of the ocean, while I'm having to figure out how to get rid of trailing spaces.  Sadly, I'm not the only one--this is a common issue for many people.

So how can we potentially deal with this?

1. Tell EF Core to trim the trailing spaces.  As far as I can tell, this isn't possible as of June 2017 (v1.1.1).  EF Core apparently doesn't have a mechanism to call a trim function, or any function, at the field level. It looks like even the full EF 6.1+ framework didn't support this, and you had to write your own code to handle it--and that code doesn't appear to work in EF Core as far as I can tell.

2. Tell ASP.NET Core to trim the trailing spaces, somewhere, somehow.  There may be a way to do this in some JSON formatter option, but I couldn't find any clues as to how.  If someone has a clever way to do this, I'm all ears, and I'll buy you a round at the next GP conference.

3. Use the Trim function in your class properties.  Ugh.  No.  This would involve using the old school method of adding backing fields to your DTO class properties and calling Trim in every property getter.  This is annoying in any situation, but to even propose this with ASP.NET Core and EF Core seems like sacrilege.  And if you have used scaffolding to build out your classes from an existing database, this is just crazy talk.  I'm not going to add hundreds of backing fields to hundreds of string properties and add hundreds of Trim calls.  Nope.

4. Use an extension method or a helper class.  This is what I tried initially.  (See option 5 for a better solution.)  This solution may seem somewhat obvious, but in the world of ASP.NET Core and EF Core, it feels like putting wagon wheels on a Tesla.  It's one step up from adding Trim in your classes, but looping through object properties and trimming every field is far from high tech.  Fortunately it was relatively painless, requires very minimal code changes, and is very easy to rip out if a better method comes along.

There are many ways to implement this, but I used the code from this post:

I created a new static class called TrimStrings, and I added the static extension method to the class.

    public static class TrimStrings
    {
        public static TSelf TrimStringProperties<TSelf>(this TSelf input)
        {
            var stringProperties = input.GetType().GetProperties()
                .Where(p => p.PropertyType == typeof(string));

            foreach (var stringProperty in stringProperties)
            {
                string currentValue = (string)stringProperty.GetValue(input, null);
                if (currentValue != null)
                    stringProperty.SetValue(input, currentValue.Trim(), null);
            }
            return input;
        }
    }
I then modified my controller to call TrimStringProperties before returning my DTO object.

    var item = _itemRepository.GetItem(Itemnmbr);

    if (item == null)
    {
        return NotFound();
    }

    var itemResult = Mapper.Map<ItemDto>(item);
    itemResult = TrimStrings.TrimStringProperties(itemResult);

    return Ok(itemResult);

And the new JSON output:

  "itemnmbr": "100XLG",
  "itemdesc": "Green Phone",
  "itmshnam": "Phone",
  "itemtype": 1,

  "itmgedsc": "Phone",

Fortunately this works, it's simple, and it's easy.  I guess that's all that I can ask for.

When using ASP.NET Core and EF Core, you end up using a lot of List<> objects, so I created an overload of TrimStringProperties that accepts a List.

    public static List<TSelf> TrimStringProperties<TSelf>(this List<TSelf> input)
    {
        List<TSelf> result = new List<TSelf>();

        foreach (var obj in input)
        {
            result.Add(TrimStringProperties(obj));
        }
        return result;
    }


5.  Use AutoMapper!  Less than 24 hours after posting this article, kind reader Daniel Doyle recognized the problem I was trying to solve and keenly observed that I was using AutoMapper.  AutoMapper is a brilliant NuGet package that automates the mapping and transfer of data from one object to another similar object--it's the Mapper.Map call in the controller code above.

Daniel posted a comment below and suggested using AutoMapper to trim the string values as it performed its mapping operation--something I would have never thought of, as this is the first time I've used it.  After some Googling, it looks like AutoMapper is commonly used for this type of data cleanup, and it seems like it is well suited to the task.  It's processing all of the object data as it maps from one class to another, so it seems like a great time to clean up the trailing spaces on the strings.

Daniel suggested a syntax that uses the C# "null coalescing operator" (double question marks), which makes the statement an extremely compact single line of code.

And with a single line added to my AutoMapper configuration block in Startup.cs, all of my objects will have the trailing spaces trimmed automatically.  No extra looping code, no extra method calls in each of my Controllers every time I want to return data.

    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.CreateMap<Item, Models.ItemDto>();
        cfg.CreateMap<ItemSite, Models.ItemSiteDto>();
        cfg.CreateMap<SalesOrder, Models.SalesOrderDto>();
        cfg.CreateMap<SalesOrderLine, Models.SalesOrderLineDto>();
        cfg.CreateMap<SalesOrderResponse, Models.SalesOrderResponseDto>();
        //Trim trailing spaces from all string values during mapping
        cfg.CreateMap<string, string>().ConvertUsing(str => (str ?? string.Empty).Trim());
    });


This is super clean, and very easy.  No more wagon wheels on the Tesla!

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.


Changing Visual Studio keyboard shortcut for Comment and Uncomment

By Steve Endow

A very handy feature in Visual Studio is the Comment / Uncomment editing option.

There are two buttons that allow you to comment or uncomment code with a single click.

While those buttons are handy, they require you to use the mouse, and that can sometimes be tedious if you are having to also make multiple code selections with the mouse.

Visual Studio does have keyboard shortcuts for Comment and Uncomment, but they are the unfortunate double-shortcut combinations:  Ctrl+K, Ctrl+C to comment, and Ctrl+K, Ctrl+U to uncomment.

I find those shortcuts to be pretty annoying, as they require me to use both hands to press those key combinations.  It's not much of a "shortcut".

After several years of this nagging me, I finally bothered to look up a better alternative.  Fortunately, Visual Studio allows you to add your own keyboard shortcuts.  If you click on Tools -> Options, and then select Environment -> Keyboard, you can select a command and assign a new keyboard shortcut.

The one challenge is finding a decent keyboard shortcut that isn't already taken.

I entered the word "comment" and it displayed the relevant commands.  I then selected Edit.CommentSelection, set "Use new shortcut in" to Text Editor, pressed Alt+C, and clicked Assign.

Now I can comment a selection using the nice and simple Alt+C shortcut.  Big improvement.

I don't uncomment as much, so for now I haven't assigned a custom shortcut to Edit.UncommentSelection, but at least I now know it's very easy to do.

Keep on coding...and commenting...


Thursday, June 1, 2017

Filtering log entries in ASP.NET ILoggerFactory logs

By Steve Endow

I have two new projects that require web service APIs, so rather than actually use a tried and true tool that I am familiar with to develop these new projects, I am plunging into the dark depths of ASP.NET Core.

If you've played with ASP.NET Core, you may have noticed that Microsoft has decided that everything you have learned previously about developing web apps and web services should be discarded, making all of your prior knowledge and experience worthless.  And if you choose to venture into the new world of ASP.NET Core, you will be rewarded by not knowing how to do anything.  At all.  Awesome, can't wait!

One of those things that you'll likely need to re-learn from scratch is logging.  ASP.NET Core has a native logging framework, so rather than write your own or use a third party logging package, you can now use a built-in logger.  This sounds good, right?

Not so fast.  At this point, I have come to understand that nothing is easy or obvious with ASP.NET Core.

This article provides a basic overview showing how to perform logging in ASP.NET Core.

One thing it doesn't clearly explain is that if you want to have your logs capture Information level entries, it will quickly be filled with hundreds of entries from the ASP.NET Core engine / web server itself.  You will literally be unable to find your application entries in the log file if you log at the Information level.

So the article helpfully points out that ILoggerFactory supports filtering, allowing you to specify that you only want warnings or errors from the Microsoft tools/products, while logging Information or even Debug messages from your application.

You just add this WithFilter section to your Startup.cs Configure method:

    loggerFactory.WithFilter(new FilterLoggerSettings
    {
        { "Microsoft", LogLevel.Warning },
        { "System", LogLevel.Warning },
        { "ToDoApi", LogLevel.Debug }
    });

Cool, that looks easy enough.

Except after I add that to my code, I see the red squigglies of doom:

Visual Studio 2017 is indicating that it doesn't recognize FilterLoggerSettings. At all.

Based on my experience with VS 2017 so far, it seems that it has lost the ability (that existed in VS 2015) to identify missing NuGet packages.  If you already have a NuGet package installed, it can detect that you need to add a using statement to your class, but if you don't have the NuGet package installed, it can't help you.  Hopefully this functionality is added back to VS 2017 in a service pack.

After many Google searches, I finally found this StackOverflow thread, and hidden in one of the post comments, someone helpfully notes that the WithFilter extension requires a separate NuGet package, Microsoft.Extensions.Logging.Filter.  If you didn't know that, you'd spend 15 minutes of frustration, like I did, wondering why the very simple Microsoft code sample doesn't work.

Once you add the Microsoft.Extensions.Logging.Filter NuGet package to your project, Visual Studio will recognize both WithFilter and FilterLoggerSettings.

And here is my log file with an Information and Warning message, but no ASP.NET Core messages.

And several wasted hours later, I am now able to read my log file and actually work on the real project code.

Best of luck with ASP.NET Core.  You'll need it.


Thursday, May 11, 2017

Resolving the Veeam "Backup files are unavailable" message when restoring a VM

By Steve Endow

I'm a huge fan of the Veeam Backup and Replication product.  I've used it for several years now to backup my Hyper-V virtual machines to a Synology NAS, and it has been a huge improvement over the low-tech script based VM backups I was suffering with previously.

One quirk I have noticed with Veeam is that it seems to be very sensitive to any loss of connectivity with the backup infrastructure.  With a prior version, if Veeam was running but my file server was shut down, I would get error notifications indicating that it couldn't access the file server--even though backups were not scheduled to run.  I haven't noticed those messages lately, so I'm not sure if I just turned them off, or if I haven't been paying attention to them.

Since I don't need my servers running 24x7, I have them scheduled to shutdown in the evening, and then automatically turn on in the morning.  But sometimes if I wrap up my day early, I may shut down all of my servers and my desktop at, say, 8pm.  If I shut down my Synology NAS first, and Veeam detects that the file server is not accessible, it may log a warning or error.

Normally, this isn't a big deal, but I found one situation where this results in a subsequent error message.  I recently tried to restore a VM, and after I selected the VM to restore and chose a restore point, I received this error message.

Veeam Error:  Backup files are unavailable

When I first saw this message I was concerned there was a problem, but it didn't make sense because Veeam was obviously able to see the backup files and it even let me choose which restore point I wanted.  So I knew that the backup files were available and were accessible.

I could access the network share on my NAS file server and browse the files without issue.  I was able to click on OK to this error message, complete the restore wizard, and successfully restore my VM.  So clearly the backup files were accessible and there wasn't really an issue.

So why was this error occurring?

I submitted a support case to Veeam and spoke with a support engineer who showed me how to resolve this error.  It seems that whenever Veeam is unable to access the file share used in the Backup Infrastructure setting, it sets a flag or an error state in Veeam to indicate that the backup location is not available.  After this happens, you have to manually tell Veeam to re-scan the backup infrastructure in order to clear the error.  Fortunately, this is very simple and easy.

In Veeam, click on the Backup Infrastructure button in the bottom left, then click on the Backup Repositories page.  Right click on the Backup Repository that is giving the error, and select Rescan.

The Rescan will take several seconds to run, and when it is done, the "Backup files are unavailable" message will no longer appear when you perform a restore.  Or at least that worked for me.

Overall, I'm incredibly pleased with Veeam Backup and Replication and would highly recommend it if it's within your budget.


Wednesday, May 10, 2017

Extract and Save Dynamics GP Document Attach Files Using .NET

By Steve Endow

In a prior post, I showed how to use BCP to extract and save Dynamics GP Document Attach files.  I decided to explore this further and use .NET to extract the GP Doc Attach files.

It is very easy to export the file attachments from SQL using .NET, but because you need some data access commands to read the data from the VarBinary field and then write the data to a file using a stream, it's quite a few more lines than the very simple single-line BCP statement.

This MSDN forum post has the basic code sample, which is 9 lines.  So in theory you could have something that simple.  But in reality any .NET application of value is going to have a fair amount of additional code for a data access layer and other plumbing, allowing for parameters, processing multiple records, error handling, etc.

Very impressive SQL script for running multi-company Dynamics GP queries

By Steve Endow

I've previously posted some queries for retrieving values from multiple company databases.  I tried a few query designs, but ended up settling on cursors to loop through the different company databases and assemble the results.  Here are two examples:

Given the importance of the queries and how infrequently I used them, I didn't worry about the cursor or spend much additional time trying to figure out an alternative.

While these ad hoc, one-time queries are probably just fine using cursors, many SQL folks (myself included) are not fans of cursors, and it's often a fun challenge to figure out a way to use a proper set-based query to replace an iterating cursor in a query.

Well, Ian Grieve has come up with a very clever technique for assembling a query that can run against multiple Dynamics GP company databases.  Rather than executing a query multiple times and storing the results each time, he's figured out an intriguing method for dynamically assembling the queries for multiple companies, and then running that entire dynamic query string using sp_executesql.

Here is his blog post:

While I understand the general technique, I am not familiar with some of the commands and syntax--particularly the STUFF and FOR XML PATH statements--so it is going to take me a while to deconstruct and fully understand the entire query.

But hats off to Ian for the creativity!  I think he's come up with a new standard template for multi-company Dynamics GP queries!
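I haven't deconstructed Ian's exact query yet, but the general shape of a STUFF / FOR XML PATH multi-company query looks something like this sketch (the per-company customer SELECT is just a placeholder):

```sql
-- Build one UNION ALL query string across every GP company database
-- listed in SY01500 in the DYNAMICS system database, then run it once.
DECLARE @SQL NVARCHAR(MAX);

SELECT @SQL = STUFF((
    SELECT ' UNION ALL SELECT ''' + RTRIM(INTERID) + ''' AS CompanyDB, '
         + 'RTRIM(CUSTNMBR) AS CUSTNMBR, RTRIM(CUSTNAME) AS CUSTNAME '
         + 'FROM ' + RTRIM(INTERID) + '..RM00101'
    FROM DYNAMICS..SY01500
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'),
    1, 11, '');  -- STUFF removes the leading ' UNION ALL '

EXEC sp_executesql @SQL;
```

FOR XML PATH('') concatenates the per-company SELECT fragments into one string, and sp_executesql runs the assembled query as a single set-based statement, with no cursor in sight.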


Tuesday, May 9, 2017

Extract Dynamics GP Document Attach Files From SQL Server and Save To Disk

By Steve Endow

Today I was asked if there was a way to read an image file from the Dynamics GP Document Attach table so that the image can be added to a report.  For instance, suppose a customer wants to display item images on an invoice.

I have done very little work with the blob / varbinary field type in SQL Server, so I didn't know how difficult it would be to do this.  I quickly did some research and testing, and while I don't have a complete solution for inserting images onto a report, I did test one method for extracting files from the Dynamics GP Document Attach table and saving them to disk.

From what I could tell, there are at least three standard SQL Server tools/commands that you can use to extract an image or file from a varbinary field.

1. OLE Automation procedures – This looks pretty sketchy and appears to be very poorly documented.  I tried some samples and couldn't get it to work.

2. CLR (.NET) inside of SQL – Appears to be a viable option, but it requires enabling CLR on SQL Server, which I personally would try to avoid on a GP SQL Server if possible, so I didn't try this one.

3. BCP – I was able to get this to work and it was surprisingly easy, but I don't know how easy it will be to integrate into a report process.

Since I was able to quickly get the BCP option to work, here are the commands I used.  If you are able to shell out to run BCP as part of a report or other process, this should work.  

For my testing, I attached a text file and a JPG image to a customer record, and was able to use these BCP commands to successfully extract both files and save them to disk.  

When I ran the BCP command, it asked me four questions, and then prompted me to save those settings to a format file.  I accepted all of the default options except for the header length—that needs to be set to 0, and then saved the file as the default bcp.fmt.

BCP "SELECT BinaryBlob FROM TWO..coAttachmentItems WHERE Attachment_ID = '88d9e324-4d52-41fe-a3ff-6b3753aee6b4'" queryout "C:\Temp\DocAttach\TestAttach.txt" -T -f bcp.fmt

BCP "SELECT BinaryBlob FROM TWO..coAttachmentItems WHERE Attachment_ID = '43e1099f-4e2b-46c7-9f1c-a155ece489fa'" queryout "C:\Temp\DocAttach\TestImage.jpg" -T -f bcp.fmt

I found that BCP was able to extract the files from the Document Attach table and save them, and I was able to then immediately open and view the files.  Dynamics GP does not appear to be compressing or otherwise modifying the files before saving them to the varbinary field, so no additional decoding is required.

Given how simple BCP is and how well it seems to work, I would probably opt for this approach over using .NET, either inside SQL or outside of SQL.  But if you are already developing a .NET app, then it's probably better to use .NET code to perform this extraction.



Friday, May 5, 2017

Updating .NET apps to support TLS 1.2 for Credit Card and ACH integrations

By Steve Endow

The payment gateway my customer uses is disabling support for TLS 1.0 and 1.1 in production as of September 18, 2017.  As of that date, it will only support TLS 1.2.  You can read more here:

I have a customer using a .NET integration with the gateway, so I reviewed my code and did some research on which protocols .NET supports and uses.

I reviewed my code and confirmed that I was not explicitly setting the TLS protocol version.  So I researched which versions of TLS were supported by which .NET versions, and how the version was chosen.

After reading a few posts on StackOverflow, I confirmed that .NET 4.5 does support TLS 1.2.  So that was good news.  After reading a few more posts, my understanding was that .NET auto-negotiates the protocol with the server, so if the server requires TLS 1.2, I thought that my .NET app should work fine.

So I tested against the developer sandbox, which has already been set to require TLS 1.2, and to my surprise, I received a connection error.  I researched the error and confirmed that it was due to the TLS 1.2 requirement.  But if .NET 4.5 supports TLS 1.2 and is able to negotiate the protocol version, why would my connection fail?

My only guess is that, in addition to requiring TLS 1.2, the gateway has configured its systems to detect protocol negotiation requests and deny those connections.  My assumption is that this may be a measure to prevent surreptitious protocol downgrades, such as the POODLE vulnerability, which makes sense.

So I updated my application to include the following line of code:

System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

(You will also need a "using System.Net" directive in your class)

This explicitly tells System.Net to only use TLS 1.2.  After I added this line, my connections to the developer sandbox started working again.

Given this finding, I will need to prepare a new release and will have to work with the customer to deploy the new application before September 2017.

And one other small downside to this approach is that my application is now hard coded to TLS 1.2.  But in practical terms, I am not concerned about this, as TLS 1.2 is the latest version available in .NET 4.5.  If a new version of TLS is released, the gateway will probably require that specific protocol, and I'll need to update the .NET version of my application anyway, so I'll have to prepare a new release regardless.
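As an alternative sketch (not what I deployed), if you would rather not restrict the application to a single protocol, you can add TLS 1.2 to the currently enabled protocols with a bitwise OR instead of replacing them, so any other endpoints the app calls can still negotiate older protocols:

```csharp
// Add TLS 1.2 to the currently enabled protocols rather than
// replacing the entire protocol list.
// (Still requires "using System.Net" in the class.)
System.Net.ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
```

The tradeoff is that this leaves TLS 1.0 and 1.1 enabled for other connections, which may or may not be what you want.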

UPDATE: Coincidentally, one hour after posting this article, I saw a discussion about TLS v1.3, which is apparently in the works:

So I learned a few things from this process, and fortunately the fix turned out to be very easy.

You can also find him on Twitter, YouTube, and Google+

Wednesday, April 26, 2017

Automatically generate and create batch IDs in Dynamics GP using VBA

By Steve Endow

This has probably been done a few thousand times by a few hundred people over the years, but I don't recall ever doing it before.

A customer wants to automatically generate batch IDs in Dynamics GP.  They don't want to have to type the batch ID or create batches when entering transactions.  They want to be able to open the Payables Transaction Entry window, have the Batch ID automatically generated, and just start entering a transaction.

Here is a link to a Word document with the instructions and the script.


Once the code is installed, if you open the Payables Transaction Entry window, a new batch is automatically created and the batch ID value is populated in the Batch ID field.

The customer didn't want to use the date, but wanted a sequential numeric batch ID that would be unique.  I used the username, plus the two-digit year, plus the day of the year.  This ensures a unique batch ID for each user each day, and it's a way of using the date in the batch ID without using the full date.

Make sure to follow the instructions in the Word doc and add a VBA reference to the Microsoft ActiveX Data Objects 2.8 Library in the VBA Editor.

Option Explicit

Dim conn As ADODB.Connection
Dim cmd As New ADODB.Command

Private Sub Window_BeforeOpen(OpenVisible As Boolean)
  Set conn = UserInfoGet.CreateADOConnection
  conn.DefaultDatabase = UserInfoGet.IntercompanyID
  cmd.ActiveConnection = conn
  cmd.CommandType = 1  'adCmdText
End Sub

Private Sub Window_AfterOpen()

    Dim interID As String
    Dim userID As String
    Dim shortYear As String
    Dim dayNumber As String
    Dim currentDate As Date
    Dim sqlDate As String
    Dim batch As String
    Dim sql As String
    interID = UserInfoGet.IntercompanyID  'Get the database name (INTERID)
    userID = UCase(UserInfoGet.userID)  'Get the GP user ID
    currentDate = DateTime.Now
    sqlDate = Format(currentDate, "yyyy-MM-dd")

    shortYear = Right(DateTime.year(currentDate), 2) '2 digit year
    dayNumber = DatePart("y", currentDate) 'Get day number (out of 365 / 366)
    batch = userID & "-" & shortYear & dayNumber  'Ex:  jdoe-17134
    'Create the batch if it does not already exist
    sql = "EXEC " & interID & "..taCreateUpdateBatchHeaderRcd '" & batch & "', '', 4, '" & sqlDate & "', 'PM_Trxent', 0, 1, 0, '', 0, 0, 1, 0, 0, 0, 0, '" & userID & "', 1, 0, 0, 0, '1900-01-01', '', '', '', '', '', '', '', 0, ''"
    cmd.CommandText = sql
    'MsgBox (sql)
    cmd.Execute  'Run the stored procedure to create the batch
    'Assign the new batch ID to the batch field on the window
    batchID.Value = batch
End Sub

You can also find him on Twitter, YouTube, and Google+

Enhancing the reliability of complex Dynamics GP integrations

By Steve Endow

I had a call today with a customer who is implementing several complex Dynamics GP integrations.  One of the integrations involves the following steps:

1. A SmartConnect map imports a large AP Invoice batch once per day
2. Once the batch is fully imported, SmartConnect inserts a database record telling Post Master Enterprise that the batch is ready to post
3. Post Master Enterprise posts the AP Invoice batch
4. Once the batch is finished posting, Post Master Enterprise automatically fires a stored procedure where the customer will have custom SQL script
5. The stored procedure will verify the batch posted successfully and then call a SQL command to trigger a second SmartConnect map
6. SmartConnect will then import AP Manual Payments and apply the payments to the posted AP Invoice batch
7. Once SmartConnect is done importing the Payments and Applications, it will then insert another database record telling Post Master to post the AP Manual Payment batch

In theory, this process will work just fine, as SmartConnect and Post Master can be configured to coordinate with each other in this manner.


In any system integration with many steps or processes, there is a chance that one of the steps may not work properly, may encounter an error, may fail, may not fire, etc.

And in any system integration that is synchronous and tightly coupled like this one, any such problem in the chain of events can prevent the rest of the process from completing.  What if in Step 2, the SmartConnect map has a problem telling Post Master that the batch is ready to post?  What if in Step 3, Post Master finds that the Fiscal Period in Dynamics GP is closed, and it is unable to post the batch?  What if the batch fails to post due to a data error?  What if 682 out of 700 Invoices post successfully, but 18 fail to post due to invalid distributions? What if the custom SQL script in Step 4 encounters an error or table lock that prevents it from completing successfully?  What if SmartConnect encounters an error applying the Manual Payments to the posted Invoices?

In isolation, each of these issues is relatively small, but collectively, there are probably at least a dozen such minor problems that could prevent the entire sequence from completing successfully.  Between the seven-plus steps in this system integration and the dozen-plus potential errors that could cause a failure, I would anticipate that this integration will have some reliability issues over time.

But that's okay.  That's what happens with complex systems--you often have equally complex failures and reliability challenges.

So how do you deal with this?  How do you manage such a complex integration?  And how do you increase reliability?

While speaking with the customer, a few things came to mind.

1. Error notifications.  Both SmartConnect and Post Master Enterprise can be configured to send notifications in case of an error.  As soon as a problem occurs, an email should be sent to a power user or administrator that has the tools to resolve the problem.

2. Proactive monitoring.  Sometimes problems occur, but notifications don't get sent or received, or are simply missed in the torrent of email that we all receive daily.  To supplement the error notifications, a monitoring job can be created that independently checks the status of the process.  Did the Invoices get imported by 1pm?  Did the Invoices get posted by 2pm?  Did the Manual Payments get imported and applied by 3pm?  Did the Manual Payments get posted by 4pm?  Each of these checks is just a simple SQL query against the Dynamics GP company database, and these checks serve as a second layer of notification in case there is a problem or delay.
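For example, a monitoring check for the posting step might be a query like the following sketch.  It uses the standard Dynamics GP batch header table SY00500, where a batch row remains until the batch posts; the "TWO" database name and "PM_Trxent" batch source are illustrative:

```sql
-- List AP transaction batches that have not yet posted.
-- An empty result by the cutoff time means posting completed.
SELECT BACHNUMB, BCHSOURC, NUMOFTRX, BCHSTTUS
FROM TWO..SY00500
WHERE BCHSOURC = 'PM_Trxent'
```

A SQL Agent job could run a check like this on a schedule and email an administrator if any rows are returned after the expected posting time.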

3. Asynchronous design.  In my experience, synchronous, tightly coupled system integrations tend to be less reliable than asynchronous, loosely coupled integrations.  So an integration could potentially be modified or redesigned to decouple one or more tasks in the chain, adopting more of a queue-based integration rather than a real-time integration.  In this particular integration, that would likely be challenging and would require a redesign.

4. Integration "Supervisor".  Rather than SmartConnect and Post Master working independently and simply handing off messages to each other, a "Supervisor" process or solution could be utilized that manages, or supervises, the entire process.  The Supervisor asks SmartConnect to run a map, monitoring that task until the import is complete.  It then hands a request to Post Master, monitoring that task until the posting is complete.  Rather than having to monitor two different applications and get notifications from each, this central supervisor service could potentially reduce the administration and monitoring of the overall process.

While these steps may help manage the process, they won't directly resolve the issues that cause failures or reliability issues in each step.  To increase reliability, I think you have to target the specific failures and eliminate the causes.  Increase data validation, add error handling, design each step to be able to proceed if some errors occur, and also utilize diagnostics to quickly track down the causes of errors.

There isn't a single answer for managing complex system integrations or improving reliability, but I think it is always a good idea to approach such complex integrations with an understanding of the potential errors and failures and an expectation that some additional measures may be required to increase reliability.

What have you done to make your complex system integrations easier to manage?  Have you used any techniques to increase overall reliability?

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+