Wednesday, December 28, 2016

What Do You Know About Your Customizations?

When I first started consulting over 15 years ago, it seemed like we didn't come across that much customization.  Maybe it was because we didn't have a developer on staff, maybe it was the nature of the clients we served, or maybe it was just indicative of the time.  But over the past 15 years, I've seen a growth in customization and integration on even the smallest of projects.  I attribute this to a number of things, including the growing sophistication (and therefore expectations) of clients (even on the smaller end), the release of additional development tools that decrease the effort involved, and even a changing mindset that customization can help "unleash" the potential of your software.  Whatever the reason, it seems like customization at some level has become a norm of sorts.

Let me also add that when I say "customization" in this post, I am including integrations and interfaces between systems as well.

With clients we have implemented, and those we have picked up over time, the tendency seems to be to "trust the professionals" with the customizations.  While I agree with this in terms of who should be doing the actual development, I would also emphasize that every client/power user needs to understand their customizations on several levels.  This due diligence on the client/power user side can help ensure that the customization...

  • works in a practical sense for your everyday business
  • is built on technology that you understand at a high level
  • can grow with your business
  • is understood in a way that can be communicated throughout the user base (and for future users)
I often find that over time, the understanding of a customization can become lost in an organization.  Give it 5 years, and current users will bemoan that they don't understand...

  • why they have customization
  • what the customization does
  • how the customization can be adjusted for new needs
Meanwhile, the IT admins will bemoan a lack of understanding of how to support the customization effectively and/or perpetuate misunderstandings regarding the technology and its capabilities.

None of this is necessarily anyone's fault, but it does emphasize the need for due diligence anytime you engage a consultant or developer for a customization (and it is even a worthwhile endeavor to review your existing customizations).  What are the key parts of the due diligence I would recommend?  Well, you KNEW I was going to get to that!  So here you go...

  1. Every customization you have should have a specification. It doesn't have to be fancy, but it does need to be a document that contains an explanation of the functionality of the customization as well as the technology to be used to develop it.  Ideally, it should also contain contact information for the developer.  I'll be honest, I run into clients wanting to skip this step more often than consultants and developers. I suppose this has to do with not seeing the value of the step, or seeing it as a way for consultants to bill more.  But this step has the greatest impact on a client's ability to understand what they are paying for, and on minimizing miscommunication and missed expectations.  If you don't have specs for your existing customizations, ask for them to be written now (either internally or by the consultants/developers). As a side note, the actual process for using the customization should be documented somewhere as well. Sometimes this is added to the spec, sometimes it is added to other process documentation. But make sure it happens, so that the customization is included in internal training efforts.
  2. Understand at a high level what technology is being used to develop the customization.  Why do you need to know this?  Well, you need to understand what is involved in upgrading the customization for new versions.  How about adjusting or adding functionality?  Will it mean writing code, or something simpler?  How about source code?  Does the customization have source code, and who owns it (in most cases the developer/company, not the client, retains ownership)?  What does that mean if the developer stops working for the company?  Or if you change companies?  Will there be a detailed technical design document to be used if other developers need to be engaged? And is the technology specialized (e.g., Dexterity) or more common (e.g., VB, .NET, etc.)? All are important questions that impact the longevity and flexibility of the customization.
  3. Conduct full testing with a method for collecting feedback internally, so that you can ensure that the customization indeed meets expectations and enhances the user experience.  It is not uncommon for a customization to be developed per the specification but in practice still need adjustments to make it truly useful for the users.  When this happens, clients will sometimes "stall out" out of fear of additional costs, even though, in the long run, the additional costs incurred at this stage could save frustration as well as the future replacement costs incurred when a customization is abandoned.  Just make sure that during this point in the project, the spec and process documentation are updated with any changes.
What else would you add to the due diligence for clients and customizations? Let me know!

Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Tuesday, December 27, 2016

Source code control is a process, and processes are prone to mistakes

By Steve Endow

I previously thought of source code control as just another piece of software--an application that manages the versions of your code.  SourceSafe, SVN, Git, Team Foundation Server--there are many options for software or services that will take care of source code control / version control for you.  As long as you "use" one of those solutions, you're all set.

But today I learned a pretty significant lesson.  It is probably obvious to many people, but it was a bit of a wake-up call for me.

"Source code control" is not Git or Team Foundation Server or VisualSVN.  Those are just tools that are just one piece of a larger process.  And regardless of which tool you use or how great that tool is, if you don't have a solid, resilient process surrounding that tool, you will likely experience breakdowns.

Last week I sent out a new version of a Dynamics GP customization.  The new version only had a few minor changes to enhance error handling--I added several try/catch blocks to try to track down an intermittent error that the user was seeing.

The customer tried the new version today, but they quickly noticed that a feature that was added over 3 months ago was gone. Uh oh.

While making the latest error handling changes, I noticed some oddities.  The last release was version 1.30, but the projects in Visual Studio were still set as version 1.21.  I checked the version 1.30 files that were released, and they were versioned properly, so something happened that caused the Visual Studio projects to revert to version 1.21.  I wouldn't have done that manually.

I then checked my Documentation.cs file that I maintain on all of my projects.  The last note I had was for version 1.21.  No notes on version 1.30.  That's not like me, as that's usually my first step when updating a project.

I then checked the Git branch of the project.  Visual Studio was using branch 1.31, but it was only a local branch and hadn't been published to BitBucket.  1.30 was published, but it didn't have any notes on version 1.30 in my local repository or on BitBucket.

I checked the Commit log on BitBucket, and that left me even more puzzled. I didn't seem to have any commits acknowledging the version 1.30 release.

I see check-ins for v1.2 and v1.21, and the new v1.31 release, but nothing for v1.30.

Somehow I had produced a version 1.30, with correct version numbers in Visual Studio, which produced properly versioned DLLs, which got released to the customer, but I have the following problems:

1. I either didn't update my Documentation.cs file, or it somehow got reverted to a prior release, causing my changes to be wiped

2. Somehow my Visual Studio project version numbers got reverted from 1.30 to 1.21

3. I can't find any record in the code for the version 1.30 changes

4. Despite having v1.30 and v1.31 branches in Git, I didn't see any changes when comparing them to each other, or to v1.21.

5. I can't find any evidence of a version 1.30 release in BitBucket

The only evidence I have of a version 1.30 release is the separate release folder I maintain on my workstation, where I did document it in the release notes.

And I see that the DLLs were definitely version 1.30, so I'm not completely imagining things.

So somehow, I managed to make the following mistakes:

1. Potentially reverted my code to a prior release and lost some changes

2. Didn't clearly perform a v1.30 check-in, or if I did, my commit comments did not indicate the version number like I usually (almost always) do

3. Created a v1.31 branch, for an unknown reason, that I didn't publish and didn't document.

4. Somehow made what is likely a series of several small mistakes that resulted in the versioning problem that I'm trying to fix today.

The most frustrating part is that it isn't obvious to me how such a rollback could have happened.

And all of this despite the fact that I'm using an excellent IDE (Visual Studio), an amazing version control system (Git), and a fantastic online source code management service (BitBucket).

My problems today have nothing to do with the tools I'm using.  They clearly stem from one or more breakdowns in my process.  And this was just me working on a small project.  Imagine the complexities, the mistakes, and the issues that come up when there are 15 people working on a complex development project.

So today I learned that I have a process issue.  Maybe I was tired, maybe I was distracted, but clearly I forgot to complete multiple steps in my process, or somehow managed to revert my code and wipe out the work that I did.

I now see that I need to invest in my process.  I need to automate, reduce the number of steps, reduce the number of manual mistakes I can make, and make it easier for me to use the great tools that I have.
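As a purely hypothetical example of the kind of automation I have in mind (not my actual process--the file names, version formats, and helper function here are all illustrative assumptions), a small pre-release consistency check could refuse to produce a release until the Visual Studio project version, the Documentation.cs release notes, and the Git branch all agree:

```python
import re

def release_check(assembly_info: str, documentation_cs: str, branch: str) -> list:
    """Compare version numbers across the project files and the Git branch.

    Inputs are plain strings so the check is easy to test:
    - assembly_info:    contents of AssemblyInfo.cs (AssemblyVersion attribute)
    - documentation_cs: contents of Documentation.cs (release notes with "v1.30" style entries)
    - branch:           current Git branch name, e.g. "1.30"

    Returns a list of problems; an empty list means the release is consistent.
    """
    problems = []

    # Major.minor version from the AssemblyVersion attribute
    m = re.search(r'AssemblyVersion\("(\d+\.\d+)', assembly_info)
    asm_ver = m.group(1) if m else None
    if asm_ver is None:
        problems.append("No AssemblyVersion found in AssemblyInfo.cs")

    # Last version mentioned in the release notes
    doc_vers = re.findall(r"v(\d+\.\d+)", documentation_cs)
    latest_doc = doc_vers[-1] if doc_vers else None
    if latest_doc is None:
        problems.append("No release notes found in Documentation.cs")

    # Everything must agree before a release goes out the door
    if asm_ver and latest_doc and asm_ver != latest_doc:
        problems.append(f"AssemblyInfo is v{asm_ver} but last documented release is v{latest_doc}")
    if asm_ver and branch != asm_ver:
        problems.append(f"AssemblyInfo is v{asm_ver} but Git branch is {branch}")

    return problems
```

Run as a pre-build or pre-release step, a check like this would have flagged my v1.30 release immediately: the project said 1.21, the notes stopped at 1.21, and the branch said something else entirely.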

I don't know how to do that yet, but I'm pretty sure that much smarter people than I have had this same issue and come up with some good solutions.


Wednesday, December 21, 2016

Approaching Acquisitions with Dynamics GP

Sometimes in this line of work, you get so used to doing things one way that it takes a bit of a jolt to remind you that there are other ways to approach things.  Sometimes that jolt comes from a coworker's comment, or a client asking a question.  I like those moments, because they encourage innovative, creative thinking.  And innovative, creative thinking challenges me and, honestly, makes this job a whole lot more fun!

One example of this sort of situation is with acquisitions.  Specifically, situations where a company on GP is acquired but plans to stay on GP through the transition.  Typically, this means the following...

  1. Identify closing date of acquisition
  2. Set up a new company database
  3. Transfer over acquired assets and balances as of the transition date

This approach works well when you are dealing with a single company, or maybe a couple.  It works because it's...

  1. Straightforward
  2. Relatively simple
  3. Clean (the client can keep the history of the former company in the old company while the new company starts fresh)
Where this process doesn't work so well is when we start talking about...

  1. Lots o' companies
  2. Lots o' data/modules/customizations/integrations
In these cases, the idea of setting up multiple brand new companies, copying data, and ensuring that customizations/integrations work can be a bit daunting in the midst of an acquisition.  This is doubly true if the customizations/integrations support critical day-to-day business operations.  Those of you that know me know that I don't believe we should avoid something just because it is "daunting."  But these "daunting" things do mean we have to approach the project with a higher level of due diligence in advance, as well as project management during the project, to mitigate risks.

So what about another option?  Can we avoid setting up all these new companies?  Yes, we can.  It just requires a bit more creative thinking.  As an alternative, we can approach it like this...

  1. Continue forward with same companies
  2. Backup companies at transition date to create historical companies
  3. Remove history as needed from live companies
  4. Enter manual closing entries as of the transition date (assuming fiscal year is not changing, and transition is not fiscal year end)
  5. Reset assets and any other balances as needed (this can be the tricky step, involving scripts to set original cost basis = net cost basis, etc., to move forward)

Now, the process above does require due diligence in advance as well to make sure all transition needs are identified and planned for.  But it can save effort and reduce risk in some cases.  So...a solution to consider.  What other creative/innovative approaches have you seen to handling acquisitions in Dynamics GP?  I'd love to hear from you!


Tuesday, December 13, 2016

Things to Consider- Chart of Accounts

During the course of a new implementation of Dynamics GP, we usually have a discussion surrounding the chart of accounts.  Do you want to change it? If so, how?  How well does it work for you today?  And clients sometimes vary in their willingness to explore changing it.  Some are open to discussion, to see how they might tweak it to better support their needs, while others are satisfied with what they use today.  From time to time, we also find ourselves discussing the chart of accounts structure with clients who have been on Dynamics GP for a number of years or even decades.  In those cases, the company may have grown and the reporting needs have also changed.

I thought it might be worthwhile to share some of my own discussion points when exploring the chart of accounts structure with both new and longtime Dynamics GP users.  So where do I start? I always start with the desired end result...Reporting! So let's start there, and then toss in all my other typical considerations...

  • What are the current and desired reporting needs?  How are reports divided/segmented (departmental, divisional, etc)?  Are the lowest levels for reporting represented in the chart of accounts today?  How about summary levels?  Do the summary levels change in terms of organization over time (so maybe they shouldn't be in the chart of accounts structure)? Is there reporting and/or other tracking in Excel that should be accommodated by the chart of accounts structure so that the reporting can be automated?
  • What about budgeting?  What level does budgeting occur at?  Is that represented? 
  • What about other analytics? Are the components available in the chart of accounts?  Are there statistical variables?  Are they in Dynamics GP as unit accounts?
  • How does payroll flow to the general ledger, does it align to the chart of accounts (e.g., departments, positions, codes, do they match up)?  Is there an expectation of payroll reporting from the general ledger in terms of benefit costs, employee costs, etc?  Are those levels represented in the chart of accounts?
  • Are your segments consistent?  Does a value in department mean the same thing across all accounts?  Or do you need to look at multiple segments to determine the meaning (e.g., department 10 with location 20 means something different than department 10 with location 40)?  Consistency is a goal whenever possible to facilitate reporting.
  • How about your main accounts?  Review a distinct list of them.  Are they logical, in order, and do they follow the norm (e.g., expenses in the 6000s)?  Is there room to add main accounts?  Are there duplicated/inconsistent main accounts?
  • Do you do allocations?  If so, how and by what factors?  Can we use fixed or variable allocations to facilitate in GP?  Do we have the needed components in the chart of accounts to determine what to allocate from and to?  Do you want to offset the allocation in separate accounts to see the in/out of the allocation?

Anything I missed?  Thoughts, comments?  Please share and I will update the list!


Monday, December 12, 2016

Rare eConnect taPMTransactionInsert error 311 and 312: Tax Schedule ID does not exist

By Steve Endow

Here are two obscure eConnect errors that you should never encounter.  Unlike my customer, who did encounter them.

Error Number = 311 Stored Procedure= taPMTransactionInsert Error Description = Misc Tax Schedule ID (MSCSCHID) does not exist in the Sales/Purchse Tax Schedule Master Table – TX00102
MSCSCHID = Note: This parameter was not passed in, no value for the parameter will be returned.

Error Number = 312 Stored Procedure= taPMTransactionInsert Error Description = Freight Tax Schedule ID (FRTSCHID) does not exist in the Sales/Purchases Tax Schedule Master Table – TX00102
FRTSCHID = Note: This parameter was not passed in, no value for the parameter will be returned.

Notice that the error says that the tax schedule ID does not exist, but then says that no value was passed in for the tax schedule ID.

So, if you are sending in a blank tax schedule ID value to eConnect, how can it be invalid, and thus cause this error?

As with many eConnect errors like this, the error is not caused by what you send to eConnect.  It's caused by some value or configuration option buried deep in Dynamics GP that is impossible to figure out based on the eConnect error alone.

Here is the validation script that triggers the 311 error:

IF ( @I_vMSCSCHID <> '' )
    IF NOT EXISTS ( SELECT  1
                    FROM    TX00102 (NOLOCK)
                    WHERE   TAXSCHID = @I_vMSCSCHID )
        BEGIN
            SELECT  @O_iErrorState = 311;
            EXEC @iStatus = taUpdateString @O_iErrorState,
                @oErrString, @oErrString OUTPUT,
                @O_oErrorState OUTPUT;
        END

This would seem to make sense--if a Misc Tax Schedule ID value was passed in, verify that it exists in the TX00102 tax table.

But...what if you aren't passing in a Misc Tax Schedule ID--which our error message above indicates?

Well, we then need to dig a little deeper to find out where a value is being set for @I_vMSCSCHID.  And we find this:

SELECT  @I_vPCHSCHID = CASE WHEN ( @I_vPCHSCHID = '' )
                            THEN PCHSCHID
                            ELSE @I_vPCHSCHID
                       END,
        @I_vMSCSCHID = CASE WHEN ( @I_vMSCSCHID = '' )
                            THEN MSCSCHID
                            ELSE @I_vMSCSCHID
                       END,
        @I_vFRTSCHID = CASE WHEN ( @I_vFRTSCHID = '' )
                            THEN FRTSCHID
                            ELSE @I_vFRTSCHID
                       END
FROM    PM40100 (NOLOCK)

So what does this tell us?  If no tax schedules are passed into taPMTransactionInsert, eConnect tries to get default Tax Schedule IDs from the Payables Setup table, PM40100.  Once it gets those Tax Schedule IDs, it validates them.  Could that cause the error we're seeing?

Figured it out yet?

The only way the default Tax Schedule IDs in PM40100 could cause the error would be if those default Tax Schedule IDs are INVALID!

Wait a minute.  How could the default tax schedule IDs in the Payables Setup Options window be invalid, you ask?  The Payables Setup Options window validates those at the field level--the window won't let you enter an invalid value or save an invalid value.

So, that leaves either a direct SQL update to set an invalid value in PM40100, or perhaps more likely, someone ran a SQL delete to remove records from TX00102.  My guess is that someone figured they didn't need a bunch of pesky tax schedules, or wanted to change some tax schedule IDs, and they didn't realize that the PM40100 was also storing the tax schedule IDs.

I've asked the consultant to run this query to check the tax schedule IDs set up in PM40100.

SELECT  pm.PCHSCHID,
        CASE WHEN tx1.TAXSCHID IS NULL THEN 0 ELSE 1 END AS PCHSCHIDExists,
        pm.MSCSCHID,
        CASE WHEN tx2.TAXSCHID IS NULL THEN 0 ELSE 1 END AS MSCSCHIDExists,
        pm.FRTSCHID,
        CASE WHEN tx3.TAXSCHID IS NULL THEN 0 ELSE 1 END AS FRTSCHIDExists
FROM    PM40100 pm
        LEFT JOIN TX00102 tx1 ON tx1.TAXSCHID = pm.PCHSCHID
        LEFT JOIN TX00102 tx2 ON tx2.TAXSCHID = pm.MSCSCHID
        LEFT JOIN TX00102 tx3 ON tx3.TAXSCHID = pm.FRTSCHID

If the tax schedules have values, but the "IDExists" fields have a value of 0, then that means there are no matching records in TX00102, and that the values are invalid.

And that is the solution to your mystery eConnect error of the week!


Thursday, December 8, 2016

Do you EFT but still print remittances?

Sometimes when it rains, it pours.  It seems like requests come in waves, and in the last two weeks I have had 4 separate clients ask about or implement emailing remittances.  It seems like such an obvious thing, because if you are avoiding printing checks, why would you want to keep printing remittances?

The good news is that it is super simple to set up.  Assuming you want the emails to be routed directly through Exchange (and not through a local email client), you first need an account to be used for sending the emails.  Second, your Exchange server needs to have the autodiscover option enabled.  Then it is really as simple as the following 4 steps...

1. Admin-Setup-System-System Preferences, select Exchange for the email option
2. Admin-Setup-Company-Email Message Setup, create a message ID and message for the emails
3. Admin-Setup-Company-Email Settings, set options for emails (including document type) and then click Purchasing Series to enable and specify the email message ID for remittances
4. Cards-Purchasing-Vendor, enter email addresses using the Internet Addresses (blue/green globe) for the remit to address (use the To, CC, and BCC fields as appropriate) and then enable the email remittance under the Email Settings button for the vendor

Once you have these steps completed, it is as simple as choosing to email remittance forms when you are in the Process Remittance window (Transactions-Purchasing-Process Remittance).  Keep in mind, I definitely recommend trying this first with a single vendor using your own email address, as you may want to tweak the format and/or the email message.

Happy emailing!


Tuesday, November 29, 2016

Multiple Fixed Asset Calendars

So, many of you may already be aware that GP now has the capability to handle different calendars for different fixed asset books.  For example, your corporate book could be based on your fiscal year while your tax books could be based on a calendar year.  The calendars are managed in Fixed Assets under Setup, then Calendar.  The system comes with a Default calendar that is assigned to books by default.  However, until you run depreciation, you can change the calendar associated with a book (or set up new ones). Once you run depreciation, however, you will have to set up a new book if you want to change the assigned calendar.

With the multi-calendar functionality, dealing with short or long years due to a fiscal year change has become much simpler.  In the calendar setup window, you now have options for these situations:

If the selected year needs to be either short or long, simply mark the option for that year (make sure you have the correct year selected).  Then you need to specify how much depreciation you want to take in the shortened or elongated year (100% would be the norm for a 12-period year).  So, for example, if you extended the year by 6 months, you might enter 150%.  Or if you have a short year of 6 months, you would enter 50% of the full year depreciation.  Easy Peasy Lemon Squeezy, right?
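The percentage is just the ratio of the number of periods in the short/long year to a normal year.  A quick illustrative sketch of that arithmetic (the helper name is made up; GP simply asks you to enter the resulting percentage):

```python
def year_depreciation_percent(periods_in_year: int, normal_periods: int = 12) -> float:
    """Depreciation percentage to enter for a short or long year.

    100% corresponds to a normal 12-period year.
    """
    return periods_in_year / normal_periods * 100

# An 18-period (long) year and a 6-period (short) year:
print(year_depreciation_percent(18))  # 150.0
print(year_depreciation_percent(6))   # 50.0
```

So an 18-month year gets 150%, and a 6-month year gets 50%, matching the examples above.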

The calendar setup window also includes options to build your future years based on the fiscal period setup.  You will want to do this so that they are synced to your new fiscal calendar, including the prior year setup, the short/long year, and the future year setup (just make sure you have a future year set up with the normal fiscal year).

Assuming that these changes do not actually alter the depreciation to be taken in a period that has already been processed in Fixed Assets, there is no need to run a reset on the assets.


Monday, November 21, 2016

The value of proactive integration logging and error notifications

By Steve Endow

Logging is often an afterthought with Dynamics GP integrations.  Custom integrations often do not have any logging, and while integration tools may log by default, they often log cryptic information, or log lots of events that are not meaningful to the user.

A few years ago I developed a custom RESTful JSON web service API for Dynamics GP that would allow a customer to submit data from their PHP based operational system to Dynamics GP.  They needed to save new customer information and credit card or ACH payment information to Dynamics GP, and they wanted to submit the data in real time.

I originally developed the integration with my standard logging to daily text log files.  While application logging purists (yes, they do exist) would probably criticize this method, my 12+ years of experience doing this have made it very clear that the simple text log file is by far the most appropriate solution for the Dynamics GP customers that I work with.  Let's just say that Splunk is not an option.

The GP integration worked great, and the logging was dutifully working in the background, unnoticed.  After a few months, the customer observed some performance issues with the GP integration, so I enhanced the logging to include more detailed information that would allow us to quickly identify performance issues and troubleshoot them.  In addition to enhancing the detail that was logged, I added some proactive measures to the logging.  I started tracking any delays in the GP integration, which were logged, and I added email notification in case any errors or delays were encountered.

The logging has worked very well, and has allowed us to identify several very complex issues that would have been impossible to diagnose without detailed, millisecond level logging.
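The pattern behind that enhancement is simple: timestamp every step, log the elapsed time, and queue a notification whenever a threshold is crossed.  A minimal sketch of the idea in Python (the class name and threshold here are illustrative assumptions; the actual integration is a .NET web service):

```python
import time
from datetime import datetime

TIMEOUT_THRESHOLD = 10.0  # seconds of total elapsed time before a warning is raised

class TimedLog:
    """Accumulates timestamped log lines with elapsed time since the request started."""

    def __init__(self):
        self.start = time.monotonic()  # monotonic clock: immune to wall-clock changes
        self.lines = []                # log lines, newest last
        self.warnings = []             # messages to include in a notification email

    def log(self, message: str):
        """Append a log line in the '<timestamp>: (<elapsed>) <message>' format."""
        elapsed = time.monotonic() - self.start
        stamp = datetime.now().strftime("%m/%d/%Y %H:%M:%S.%f")[:-3]
        self.lines.append(f"{stamp}: ({elapsed:.2f}) {message}")

    def check_threshold(self, operation: str):
        """Record a warning (to trigger an email) if the operation ran too long."""
        elapsed = time.monotonic() - self.start
        if elapsed > TIMEOUT_THRESHOLD:
            self.warnings.append(
                f"{operation} exceeded the timeout threshold: {elapsed:.2f} seconds")
            self.log(f"WARNING: {operation} elapsed time: {elapsed:.2f}")
```

A notification routine would then send the contents of `warnings` along with the last several entries of `lines`, which is essentially the shape of the email shown below.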

Today there was a great demonstration of the value of the integration logging, and more importantly, the proactive nature of the error notification process.

This is an email that was sent to the finance department at 9:18am Central time.  It notifies the users that an error has occurred, the nature of the error, and recent lines from the log to help me quickly troubleshoot the issue.  The user won't be able to understand all of the details, but they will know within seconds that there was a problem, and they will see the customer record that had the problem.

Subject: GP Web Service - Registration Error - PROD

The Dynamics GP Web Service encountered the following errors on 11/21/2016 9:18:22 AM: 

SubmitRegistration for customer Acme Supply Co exceeded the timeout threshold: 15.61 seconds

Here are the most recent lines from the log file:

11/21/2016 09:18:06.505: SubmitRegistration called for customer Acme Supply Co (client, Credit Card)
11/21/2016 09:18:06.505: (0.00) SubmitRegistration - ValidRegistrationHMAC returned True
11/21/2016 09:18:06.505: (0.00) RegistrationRequest started for customer Acme Supply Co
11/21/2016 09:18:06.739: (0.22) ImportCustomer returned True
11/21/2016 09:18:06.786: (0.28) InsertCustomerEmailOptions returned True
11/21/2016 09:18:22.43:        (15.53) Non-Agency ImportAuthNet returned True
11/21/2016 09:18:22.121: (15.60) Non-Agency ImportAzox returned True
11/21/2016 09:18:22.121: (15.60) RegistrationRequest completed
11/21/2016 09:18:22.121: (15.61) SubmitRegistration - RegistrationRequest returned True
11/21/2016 09:18:22.121: WARNING: SubmitRegistration elapsed time: 15.61

Just by glancing at the email, I was able to tell the customer that the delay was caused by the payment gateway: the log shows that a single call to the gateway took over 15 seconds to complete.  This pushed the total processing time over the 10 second threshold, which triggers a timeout error notification.

Subsequent timeout errors that occurred throughout the morning also showed delays with the same gateway.  We checked the gateway's status web page, but there were no issues listed.  We informed the client of the cause of the issue, and let them know that we had the choice of waiting to see if the problems went away, or submitting a support case with the gateway provider.

The client chose to wait, and sure enough, at 10:35am Central time, the gateway provider posted a status update on Twitter about the issue.

That was followed by further status updates on the provider's web site, with a resolution implemented by 11:18am Central time.

Because of the proactive logging and notification, the customer knew about an issue with one of the largest payment gateways within seconds, over an hour before the provider notified its customers.

We didn't have to panic, speculate, or waste time trying fixes that wouldn't resolve the issue (a sysadmin reflexively recommended rebooting servers).  The users knew of the issue immediately, and within minutes of receiving the diagnosis, they were able to adjust their workflow accordingly.

While in this case, we weren't able to directly resolve the issue with the external provider, the logging saved the entire team many hours of potentially wasted time.

So if you use integrations, particularly automated processes, meaningful logging and proactive notifications are essential to reducing the effort and costs associated with supporting those integrations.


Monday, November 7, 2016

Why Use The System Password in Dynamics GP?

Earlier today I had a client mention the use of the System Password.  I will admit that I tend to toss the concept of the System Password in Dynamics GP into the same pile as User Classes--used extensively in the past but not so much in new implementations.  For those of you who aren't familiar with it, the System Password in Dynamics GP (Admin Page..Setup..System Password) protects all system menus (Setup, Cards, Reports, etc.) with a password.  So to access the system windows, a user is prompted to enter the password (even if they technically have access to the window).

In older versions of Dynamics GP, this was particularly useful since security was optimistic with all users starting out with full access to all windows.  So enacting the System Password was a quick way to protect security level settings.  However, over time the usefulness has faded for a number of reasons...

1. Although there are several critical windows under System, many critical (and damaging) windows are available in the module-level setups, routines, and utilities-- so a comprehensive security setup must be in place if you truly want to protect your system
2. The System Password is all or nothing, so if you have a user who only needs access to a handful of useful windows (like exchange tables or SmartList options), the password will not allow them access
3. The password is easily recoverable, meaning someone with enough knowledge could find out what the password is even if they don't have access to the setup window

I believe that the System Password can give administrators a false sense of security, implying that they have "locked down" the most important aspects of the system when they actually have not.  When Dynamics GP introduced pessimistic role and task based security (by default, users only have access to log in to the system and nothing else), many existing users kept the System Password in place.  For some, it provides a simple double-check by prompting for the password.  I don't think this is a problem, as long as security is still well-defined and thought through (including the windows accessed through the security menus).  But if you have not considered the points above in your security strategy, I would encourage you to avoid using the System Password as your primary line of defense in your system.

Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Dynamics GP FP: Can't close Table! and FP: Couldn't close table! error messages

By Steve Endow

A Dynamics GP customer contacted me and asked why this error message occurred:

There are a few forum threads on this discussing possible causes, but I thought I would note my experience with this error message.

I see this quite a bit on my numerous Dynamics GP development VMs because of the testing and changes I perform on the machines.  I can easily produce the error with these steps:

1. Launch Dynamics GP and login to a company
2. Open a Dynamics GP window, such as Customer Maintenance
3. Select a record on the GP window, such as a customer, so that you are viewing the customer data
4. Restart the SQL Server service
5. After the SQL Server service is running again, close the Dynamics GP (Customer Maintenance) window (not the whole app)
6. The "FP: Can't close Table!" message should appear
7. Close Dynamics GP completely.  You may get one or more other error messages, and then another "FP: Couldn't close table!" error (slightly different wording, and "table" is not capitalized)

Given this, my initial explanation for this error message is that the Dynamics GP client application lost its connection with the SQL Server.  There may be other causes, but that is always the reason why I happen to see it on my servers.

If you don't think that the SQL Server was rebooted or the SQL service was restarted, then my next guess would be that there was a network interruption between the machine running GP and the SQL Server machine.

If this happens to a GP client running on the SQL Server (as it did with my customer), then that would typically rule out a physical network issue, and I would look into whether SQL was restarted.
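If you have access to SQL Server, one quick way to confirm whether the instance was restarted is the sys.dm_os_sys_info dynamic management view (SQL Server 2008 and later):

```sql
-- Returns the date/time the SQL Server service last started.
-- If this is more recent than expected, a restart is the likely
-- cause of the FP errors on connected Dynamics GP clients.
SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;
```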

There is the possibility of a deeper network configuration issue or an antivirus issue, as discussed in this post:

I personally can't remember the last time I saw the FP error at a customer site, so I can't speak to those as likely causes, but it wouldn't surprise me.


Thursday, October 27, 2016

Integration Designers and Developers Need Business Context

By Steve Endow

At the GPUG Summit 2016 conference in Tampa 2 weeks ago, I gave a presentation on Dynamics GP system integration design with Tim Wappat.

This is one of my final slides in the presentation:

Out of context, this slide may not seem as meaningful as it is in the presentation, but the idea is that when system integrations are not simplified or designed strategically, they are just glorified data imports that do little to move the business forward.

The next slide offers some goals:

Part of making system integrations successful and more valuable is designing them to meet business requirements.  It isn't just about pushing data into GP.  It's about ensuring that the entire purchasing process, for example, from requisition to PO to receipt to invoice, goes smoothly and with as little manual effort as possible.

To do that properly, the entire team needs to be aware of the context of the system integration and understand the business use cases and requirements.

Friday, October 21, 2016

Two lesser known SQL Server performance tips: Appropriate for Dynamics GP?

By Steve Endow

At the amazing GPUG Summit 2016 mega conference in gorgeous Tampa, his eminence John Lowther, MVP, hosted a session "SQL Administration Made Easy", where he discussed several great tips for simplifying SQL Server administration for Dynamics GP.

Sunny Tampa, October 2016

In his role at nJevity, John and his colleagues have to manage hundreds of SQL Server instances, so they've worked hard to optimize their SQL configurations to work well for Dynamics GP.

As a result of that session, I've kept my eye out for SQL optimization tips, and I have come across two lesser known configuration settings related to SQL Server.

Tuesday, October 18, 2016

Conversations from Summit: What do you call your partner/ISV/VAR/consultant?

As promised, I am continuing to blog about some of the great conversations I had last week at Summit in Tampa, Florida.  This one comes courtesy of multiple discussions over days, with clients as well as with coworkers and a few long time consulting friends.  But, first, a little background on me.  I did indeed become a consultant out of mere happenstance. I needed a job, I knew a little (very little indeed) about software, and took the leap.  Over the years, however, being a consultant has become part of who I am and I find it hard to imagine a different career. I love what I do for two primary reasons (among many lesser ones):

  • I enjoy helping clients get the most out of their systems and processes, becoming a "trusted advisor" that they seek out for advice, guidance, and perspective
  • I like building partnerships with clients, where I learn from them (about requirements, about business, about what keeps them up at night) and I (and my fabulous teammates of course) can impart some of my knowledge to them

The discussions I had last week at Summit circled around the client relationships I enjoy most.  And we kept coming back to the term "partner".  I love it.  It is my preferred term.  Because it suggests, obviously, a partnership of give and take and mutual benefit.  Not simply a business exchange of services for money (which is why terms like "vendor" or "contractor" don't appeal to me).  Of course, I am not talking about a word game but more the nature of the relationship between consultant and client.

Of course we have to start with a solid base of product knowledge and experience, but the truly great client/consultant partnerships extend into so much more, where we as consultants can benefit clients in ways that go beyond how to configure a module or cut an AP check.  And on the client side, they can contribute greatly to our understanding of their business, industry, and needs...which makes it that much easier (and more dynamic) to serve them well.

So I challenge you to look at your relationship with clients or with consultants (depending on your role), and see how you can contribute to making it a partnership.  One way to start is to make note of how you relate to them (either client or consultant): is it defensive or combative, always preparing for an issue?  Is it distanced, lacking in trust?  Then commit to moving towards a partnership-- sometimes that means a "clearing the air" conversation, or discussing specific expectations from both sides to improve the relationship, or simply reaching out regularly with questions that extend beyond error messages.  On both sides, it requires a fair amount of good faith, trust, and expertise.  Or at least it does in my opinion-- do you agree or disagree?

Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

.NET Error: Side-By-Side Configuration is Incorrect

By Steve Endow

A client recently installed a new .NET eConnect integration for Dynamics GP.  After configuring the integration, they received this error when they ran the EXE file:

They also saw this message in Event Viewer, which helped narrow down the cause.

I quickly reviewed the exe.config file, but couldn't find any obvious errors.  I tried opening the XML file in Internet Explorer as a quick test, and did see that it didn't load properly, so there did appear to be an issue with the file format.

I then opened the config file in UltraEdit and used its XML validation.  While it did report an error, it gave a misleading description, pointing to a node that was valid.

So I finally resorted to opening both the customer's config file and my dev config file and flipped between the two to compare line by line.  I finally found the problem.

One of the closing value tags had been accidentally deleted when the file was edited.
Once I added back the closing </value> tag, the import worked properly.
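To illustrate the kind of damage involved, here is a made-up example of a broken versus fixed setting (the setting name and server value here are hypothetical, not from the customer's file):

```xml
<!-- Broken: the closing </value> tag was accidentally deleted,
     so the config file is no longer well-formed XML -->
<setting name="ServerName" serializeAs="String">
    <value>GPSQL01
</setting>

<!-- Fixed: closing tag restored -->
<setting name="ServerName" serializeAs="String">
    <value>GPSQL01</value>
</setting>
```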


Friday, October 14, 2016

Conversations from Summit- Security and Asking Why?

Sitting on a plane flying home from Tampa is a great time to reflect. I have always enjoyed Summit for a variety of reasons beyond the sessions like...
  1. Reconnecting with long-time consulting friends, vendors I have known longer than I have been married, former students, and clients
  2. Inspired conversations about what we do, how we do it better, and why we are so collectively passionate about how software can help businesses grow and individuals prosper
  3. I always leave having learned something, some years it is functional/technical know-how and other years it might be a perspective change or a piece of software that solves a common issue
One of these inspired conversations happened this week when I was sitting with a group of MVPs and others in the Mekorma HUB discussing who is trusted in an organization.  So I wanted to share a few of those points, and expand the question a bit.

It is not uncommon for us (as consultants) to get requests to restrict security for users.  Totally common.  What gives us pause from time to time is the role of some of the individuals that are being restricted like...

  • Payroll processors
  • Business analysts
  • HR managers
  • Financial managers
It is not so unusual to have security in place to protect processes and demonstrate the process control that auditors like to see in organizations.  But sometimes it is the specifics of what is restricted that makes me think.

For example...
  • With payroll processors- restrict their ability to see any dollar values
  • For business analysts- restrict their ability to see certain financial numbers, or detail information behind them
  • For HR managers- restrict their ability to see payroll information
  • For Financial managers- restrict their ability to use reporting tools, like Smartlist
I don't mean to suggest that there are not valid reasons to do all of the above.  But I do think that if you find yourself doing (or requesting) these things, you should pause for a moment to ask "why" and do a little due diligence on the goal.  Oftentimes, when we have these "pause" conversations, we find that the reason falls into one of two primary categories: trust or training.  Here are some of the specific reasons we hear, categorized accordingly...

  • Trust
    • Concerns that the employee will not be discreet/act professionally with information
    • Executive level edicts that certain data be restricted to limited individuals
    • Internal behavior that suggests employee would be upset with what they learn
    • Employee is new to organization
  • Training
    • Concerns that the reporting that will be created will be incomplete/invalid
    • Experiences where individuals "messed up" the system inadvertently
    • Wanting to avoid unnecessary questions or "busy-body" behavior
In the cases above, I encourage organizations to address the root cause.  Limiting access doesn't address these gaps or behaviors; it just eliminates the opportunity.  If we are in the business of growing employees (which I think all organizations are), we need to have these coaching conversations and provide the training and guidance employees need to be better professionals-- which in turn benefits the business not only in outcomes, but also in loyalty.

More conversations from the Summit coming, but let me know your thoughts on this one.  Do you agree?  Disagree?

Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Thursday, October 6, 2016

How do you choose your passwords and keep them secure?

By Steve Endow

I've had a mini discussion on Twitter recently about passwords.  There was some joking about the stereotype of the password written on a Post-It Note stuck on a monitor, and about how common passwords tend to be re-used.

I noted that I preferred P-Touch labels for my monitor passwords, as they worked better than Post-It Notes.

I occasionally have calls with Dynamics partners who store all client information in a central system, such as MS CRM Online.  In addition to the standard customer information, they store VPN connection information, Windows logins, and SQL Server sa passwords in plain text in the CRM customer records.  Logistically this makes sense, as it allows all of the consultants to quickly access login info and assist customers.  But it poses a potential security risk, as a single compromised CRM login could expose all customer connection information, and full access to customer Dynamics GP SQL Servers and databases.

Monday, October 3, 2016

Why Care? A Case for Really Understanding Your Account Framework

It seems like most of us have had a time in our careers as consultants where we did it all.  We installed the software, configured and trained, migrated data, and even did an upgrade or two.  Over the years, my job has evolved where now I spend a lot more time working with a team.  I don't remember the last upgrade I did (maybe three or four years ago?), other than on my own machine.  And I don't recall the last new install I did. 

So it occurs to me that as we bring people in to consulting, the "do everything" approach is not really used much anymore from what I can tell.  Due to the increasing technical complexities of installing and supporting systems, and larger partner organizations, we tend to see people fall into one (or two) of three primary categories-- functional, technical, and developer.  And for this reason, there is sometimes a disconnect between things that each group should know and understand. 

Many years ago, I made the case for waiting to install the software until after we had completed our discovery.  It was common practice to schedule installation as soon as we had registration keys, which sometimes meant even before we had formally kicked off the project.  Why is this an issue?  Account Framework.  This thing that isn't an issue very often, but when it is-- it is a significant issue that can lead a customer to say "Why didn't you anticipate this?".  Which, by the way, is something project managers don't want to be asked.

I want to make a case that everyone on your team, and clients/users as well, needs to understand the significance of the Account Framework.  In many ways, it is the most critical decision (or lack of decision) encountered during the installation process.  Handled poorly, it can cost you money, time, and project goodwill.  At a high level, the Account Framework consists of three components-- the maximum number of segments, the maximum number of characters for the entire account string, and the maximum number of characters for each individual segment.

The Account Framework is defined during the creation of the Dynamics GP system database.  It is defined at this stage because it impacts the fields and sorting available for the chart of accounts in ALL COMPANY DATABASES.  Did you catch that?  ALL COMPANY DATABASES.  Now, let's contrast with the Account Format.  The Account Format is defined during the actual configuration (logging in and setting up) in each company.  The key here is that the Account Format can be different in each company, but all companies' Account Formats must individually fit within the overall Account Framework.


Well, let's take the standard example of 66 characters and 10 segments.  This is the largest possible Account Framework in Dynamics GP.  This allows us to grow to use 10 segments of 6-7 characters each (as it allocates the 66 over 10 segments).  So the following Account Formats would be possible with a "66 and 10" Account Framework:


What wouldn't be possible in this example?




It is important to keep in mind that the Account Framework is the largest the installation can GROW.  So maybe your initial Account Format for a company is XX-XXXX-X, so you decide to specify the same as the Account Framework (a practice we come across frequently).  This works great initially, but if the client ever wants to expand their Account Format (a fairly common request when someone is on the software for a number of years, or even sometimes in the course of an implementation when the need to add an additional segment comes to light), they will have to change the Account Framework (more on that later), which is not the flip of a switch.  Anecdotally, I have not seen evidence that a smaller Account Framework yields performance or other benefits.  But if you are reading this and you have seen such benefits, please share!
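The "fit" test is simple enough to sketch in code.  This helper is purely illustrative (it is not part of Dynamics GP, and real frameworks allocate each segment's maximum individually; this sketch simplifies that to a single per-segment limit):

```csharp
using System;
using System.Linq;

// Illustrative helper (not part of Dynamics GP): checks whether a
// proposed Account Format fits within an Account Framework.
public static class AccountFrameworkCheck
{
    public static bool Fits(int[] segmentLengths,
        int maxSegments, int maxTotalLength, int maxSegmentLength)
    {
        return segmentLengths.Length <= maxSegments            // segment count
            && segmentLengths.Sum() <= maxTotalLength          // total characters
            && segmentLengths.All(s => s <= maxSegmentLength); // each segment
    }
}
```

For example, an XX-XXXX-X format checked against a "66 and 10" framework (10 segments, 66 total characters, roughly 6-7 per segment) is AccountFrameworkCheck.Fits(new[] { 2, 4, 1 }, 10, 66, 6), which returns true; a format containing a 9-character segment would fail the per-segment check.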

When a client wants to change the Account Format, it is a fairly straightforward process.  They can update the Account Format Setup (Setup-Company-Account Format) and then, as needed, use the PSTL Account Modifier/Combiner tool to update segments.  But if they encounter the boundaries of the Account Framework when doing this, it becomes a much larger issue.  Technically, you can script the changes to the Account Framework to alter the necessary tables and components.  But the recommended approach would be to use the Reformatter tool from Corporate Renaissance (which effectively creates a new Dynamics database and ports over the needed data)...

But of course that involves buying a product, testing, and then deploying the fix.  Microsoft used to also offer a service to do this through support (for an additional fee), but that involves additional down time as the data is sent to Microsoft and returned.  In all cases, additional cost and time are involved.  And it takes a change that is intended to make the system flexible and user-friendly (e.g., being able to change the Account Format and use the PSTL Account Modifier/Combiner), and adds cost and potential consulting hours.

In my not so humble opinion, before an install happens, everyone on the project team should understand what framework is being requested and why.  If you want to check your account framework, one of the easiest places to do so is Setup-Company-Account Format where you can see the maximum number of segments, characters, and the max length for each segment per the framework.

For reference, when installing Dynamics GP for the first time, GP utilities will allow you to select either a Basic or Advanced installation.  The Basic installation will install with a default Account Framework of maximum 5 segments, 45 characters, and 9 characters per segment.  This cannot be changed.  An advanced installation will allow you to specify the framework.

Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Sunday, October 2, 2016

Supporting .NET 2.0 references in a 4.0+ class library at runtime

By Steve Endow

If you don't know what a .NET mixed mode assembly is, you can skip this post and save a few minutes of your life.  If you do know what a .NET mixed mode assembly is and understand why it can be a pain, you may want to read on.

This post is primarily to document this solution for myself so that I can find it again in the future without having to dig through 10 old projects (like I had to do yesterday).

This solution is something my colleague Andrew Dean and I have shared during our Dynamics GP .NET Development presentations at the reIMAGINE and GPUG Summit, as it is something we have had to use in our development projects with Dynamics GP.

In the case of Dynamics GP, the primary reason I've had to use this is because I regularly use GPConnNet.dll.  That dll is still only .NET 2.0 'compatible' (or whatever the term is).  So if you develop a new solution using .NET 4.5.1, you will get the famous mixed mode assembly error.

Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.

The standard solution to resolve this error is to modify your application config file.  Meaning your app.config / exe.config file.

That works fine if you are developing an EXE, but it won't work if you are developing a class library DLL that will be used by other applications that you can't control.  Like Dynamics GP.
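For reference, the EXE-side fix mentioned above is the useLegacyV2RuntimePolicy attribute on the startup element of the config file (the target framework sku shown here is just an example matching a 4.5.1 project):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup useLegacyV2RuntimePolicy="true">
    <!-- Allows .NET 2.0 mixed mode assemblies (like GPConnNet.dll)
         to load into the 4.0 runtime -->
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1" />
  </startup>
</configuration>
```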

This post by Reed Copsey Jr. provides an alternative, and brilliant, approach to setting the .NET legacy policy at runtime.  This is a life saver for DLL libraries that are referencing resources with multiple .NET versions.

using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

public static class RuntimePolicyHelper
{
    public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; }

    static RuntimePolicyHelper()
    {
        ICLRRuntimeInfo clrRuntimeInfo =
            (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject(
                Guid.Empty,
                typeof(ICLRRuntimeInfo).GUID);
        try
        {
            clrRuntimeInfo.BindAsLegacyV2Runtime();
            LegacyV2RuntimeEnabledSuccessfully = true;
        }
        catch (COMException)
        {
            // This occurs with an HRESULT meaning
            // "A different runtime was already bound to the legacy CLR version 2 activation policy."
            LegacyV2RuntimeEnabledSuccessfully = false;
        }
    }

    [ComImport]
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")]
    private interface ICLRRuntimeInfo
    {
        void xGetVersionString();
        void xGetRuntimeDirectory();
        void xIsLoaded();
        void xIsLoadable();
        void xLoadErrorString();
        void xLoadLibrary();
        void xGetProcAddress();
        void xGetInterface();
        void xSetDefaultStartupFlags();
        void xGetDefaultStartupFlags();

        [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
        void BindAsLegacyV2Runtime();
    }
}

Go forth and develop mixed mode assembly class libraries.


Saturday, October 1, 2016

Cool Visual Studio Immediate Window trick

By Steve Endow

I'm a big fan of the Immediate Window in Visual Studio.  When you are debugging code and trying to figure out a problem, the immediate window is a simple and quick way to check variable values, test operations at a breakpoint, and take a look at what is going on.

But there is one small issue.  If you are outputting lots of text, such as an eConnect XML document or eConnect exception message, the Immediate Window will output the entire XML document as a single line of text.  A really, really long line.  Worse, newlines will be displayed in the text as \n\r.  

This makes outputting XML to the Immediate Window a bit of a hassle--I have been copying the text, pasting it into UltraEdit, replacing the newline characters with actual CRLF line breaks, and then formatting the XML.  It's a hassle.

So, once again, my weariness of repeating that process finally overcame my laziness to actually do some research and see if there was a better solution.  Of course, there is a better way.  I just didn't know it for the last TEN YEARS.  I'm apparently a slow learner.

Big surprise, the answer was on Stack Overflow.

The trick is to type ",nq" after your Immediate Window command to request "no quotes".  Apparently this is a "format specifier".

Holy cow, what a huge improvement.  It might seem trivial, but it makes a huge difference.  Here's an example of before and after.

Notice the standard "? response" command outputs that variable text (an eConnect exception) as a single line that's a pain to read.  Once I typed "? response,nq", like magic, it output the text in a very nicely formatted manner.  Brilliant.  Soooo much more readable and it saves me a half dozen useless steps in a text editor.
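For a variable named response containing a short XML document, the two commands compare roughly like this (simplified, hypothetical content):

```text
? response
"<?xml version=\"1.0\"?>\r\n<eConnect>\r\n  <Message>Error</Message>\r\n</eConnect>"

? response,nq
<?xml version="1.0"?>
<eConnect>
  <Message>Error</Message>
</eConnect>
```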

There are at least 1 billion shortcuts and hidden gems in Visual Studio that I still don't know, but I'm finding them one at a time...  Pacing myself.


Thursday, September 29, 2016

Fix Dynamics GP scaling and font size issues on high DPI displays

By Steve Endow

If you have a relatively new Windows laptop with a high resolution display or if you have 4K monitors on a Windows desktop, you are likely familiar with the numerous software display issues related to these "high DPI" displays.

If you use a 4K monitor at full resolution, the fonts and text in Windows are microscopic.  To make text readable and buttons large enough to click, Windows uses "scaling", whereby it increases the size of text and graphics to make them readable and large enough to actually use.  (I'm sure there's a huge technical discussion about the details, but I don't care about any of that, I just want to get work done.)

So when you run Windows 10 with a high DPI display, it will typically recommend running at 200% scaling.  I prefer to use 175% to get more real estate on my displays.

This Scaling technique generally works well, and with many applications, you may not even realize that scaling is occurring...

Until you launch an application that isn't "high DPI aware" or high DPI compatible.  Then you'll see a complete mess. I use SugarSync, which does not handle scaling very well.  Notice the text on the left is very small compared to the folder names on the right.

It's annoying, but tolerable.

I thought that Dynamics GP 2015 displayed fine on my Surface Pro 4 when I first installed it, but I rarely use GP on my SP4, and when I launched it earlier this week, I saw this odd mix of font sizes.  A colleague told me the same thing--GP used to display properly on his Surface Pro 4, but recently the fonts went haywire.

Notice the menus at the top are very large, while the text on the left panes is tiny.

Similarly, SQL Server Management Studio 2014 is a hot mess on high DPI displays.  It was the SSMS 2014 issue that had me searching for a solution.  And I came across this brilliant post.

Clearly the author has a pretty good understanding of the high DPI settings in Windows and figured out how to change the scaling method on a per-application basis using a manifest file.

I don't understand any of the mechanics, but I followed his instructions to import the extra registry key, then saved the Ssms.exe.manifest file to my machine.  Like magic, SSMS 2014 displayed properly.  The bitmap scaling on my displays at 175% is a bit fuzzy, but I'm okay with that, since it makes the applications much more usable.  I can always try 150% or 200% scaling if necessary if the blurriness bothers me.

Since it worked for SQL Server Management Studio, I then wondered--could it work for Dynamics GP?

(Full credit goes to Gianluca Sartori for his article showing how to implement this fix--I didn't come up with any of this)

I created a Dynamics.exe.manifest file...and behold.  GP launched just fine and rendered "normally", without the funky font sizes.

On a roll, I made a manifest file for SugarSync, and was pleased to see that it fixed SugarSync's scaling as well!  SugarSync already had a manifest file, so I just edited it to add the additional settings from the SSMS 2014 sample.

I don't understand any of the settings in the manifest file, but it seems to be fairly generic and flexible, as I've just used the technique for 3 different applications.

If you want to give this a try, you do so at your own risk.  If you don't know how to edit the registry or fix things if your computer goes haywire doing this, you should "consult an expert" to assist you.

Save this text to a file called "Scaling.reg".

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide]
"PreferExternalManifest"=dword:00000001

Once you have created the "reg" file, double click on it to import it into the Windows registry on the machine where Dynamics GP is installed.

Then save this text to a file called Dynamics.exe.manifest.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="" processorArchitecture="*" publicKeyToken="6595b64144ccf1df" language="*"/>
    </dependentAssembly>
  </dependency>
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.21022.8" processorArchitecture="amd64" publicKeyToken="1fc8b3b9a1e18e3b"/>
    </dependentAssembly>
  </dependency>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
  <asmv3:application>
    <asmv3:windowsSettings xmlns="">
      <ms_windowsSettings:dpiAware xmlns:ms_windowsSettings="">false</ms_windowsSettings:dpiAware>
    </asmv3:windowsSettings>
  </asmv3:application>
</assembly>

Then copy your new Dynamics.exe.manifest file to the GP application directory where the Dynamics.exe file is located.

Give it a try and let me know if it works for you.  I'm curious if it fixes some, most, or all of the situations where GP is apparently displayed using small fonts.

UPDATE: It appears that GP 2013 may handle scaling differently than GP 2015.  With GP 2013, I'm seeing a completely different mess of font sizes, and the GP window fonts are extremely small.  I used the Dynamics.exe.manifest file and it now displays properly.


After adding the manifest file:

Huge difference.

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.
