
Update: Microsoft Dynamics GP 2013 R2 updates to Report Writer Functions – RW_CoAddrIDInfo broken

David Meego - Click for blog homepage

Following on from my post: Microsoft Dynamics GP 2013 R2 updates to Report Writer Functions, I have to admit that I made a mistake.

The Problem

The update to the RW_CoAddrIDInfo() report writer function has broken the code. It still works when no Address ID is supplied, returning the default address information stored on the Company Master table, but when an Address ID is provided it fails to return any results from the Location Master table.

I looked at the code and identified the issue and have logged the proposed fix with the development team as TFS Bug 79342. Hopefully, we can get the fix into the next GP 2013 R2 release soon.

In the meantime, I have created a couple of workaround fixes which can be used until the code is fixed by development. They are attached to the bottom of this article.

 

The Fixes 

Using Support Debugging Tool

The first fix uses the Support Debugging Tool (http://aka.ms/SDT) to create a custom Report Writer function which can be called using the following syntax:

FUNCTION_SCRIPT( rw_TableHeaderString  5261  "RW_COADDRIDINFO"  "Address ID"  0  X )

where the Address ID can be specified as a string constant or from a variable and X is the data you want returned.

The Runtime Execute script and an example custom report are in the archive for you to use.

If you already have the SDT installed in the Recommended Configuration (with a shared setup file), you can deploy the fix by importing the configuration settings xml file once on any workstation and updating your reports to use the function call above.

 

Using Dexterity

The second fix is a small Dexterity chunk file which triggers a corrected version of the RW_CoAddrIDInfo() function after the original and so provides the correct results. It can be used without making changes to existing reports; however, the chunk will need to be deployed on all workstations that need the fix. The chunk file and its source code are provided in the archive for you to use.

 

Both fixes should be removed once the original code is fixed.

See I am human, and not the Alien that Mark Polino thinks I am. 

David

August 22, 2014 · David Musgrave

Violation of Primary Key constraint. Cannot insert duplicate key in object error when using Item List

Today, I have another blog post based on a recent support case.

The partner logged a case with us where the customer was receiving the error “Violation of Primary Key constraint. Cannot insert duplicate key in object” when they used the Item List in Microsoft Dynamics GP.

The table referenced in the error was a temporary table. Below is the screenshot of the error:

[Microsoft][SQL Server Native Client 10.0][SQL Server]Violation of PRIMARY KEY constraint 'PK##1955155'. Cannot insert duplicate key in object 'dbo.##1955155'.

I logged on my system and found that the list data is populated by the IV_ItemList stored procedure.

BEGIN
    DECLARE @stored_proc_name char(19)
    DECLARE @retstat int
    DECLARE @param2 int
    set nocount on
    SELECT @stored_proc_name = 'TWO.dbo.IV_ItemList'
    EXEC @retstat = @stored_proc_name '##0602745', @param2 OUT
    SELECT @retstat, @param2
    set nocount on
END

Looking at the stored procedure, we can see it gets its data from IV00101 and IV00102, using the following select statement to populate the table.

exec(' select IV00101.ITEMNMBR,IV00101.ITMCLSCD,IV00101.ITEMDESC,IV00101.ITEMTYPE,IV00101.DECPLQTY,IV40201.BASEUOFM,'
    + ' IV00101.ITMSHNAM, IV00101.ITMGEDSC,IV00101.LOCNCODE,'
    + ' (ISNULL(IV00102.QTYONHND,0.00000) - ISNULL(IV00102.ATYALLOC,0.00000)) - ISNULL(IV00102.QTYCOMTD,0.00000), '
    + ' IV00101.USCATVLS_1,IV00101.USCATVLS_2,IV00101.USCATVLS_3,IV00101.USCATVLS_4,IV00101.USCATVLS_5,IV00101.USCATVLS_6,'
    + ' IV00102.ATYALLOC, IV00102.QTYONHND, IV00102.QTYONORD,' + ' 0,0'
    + ' FROM IV00101 LEFT OUTER JOIN IV00102 ON IV00101.ITEMNMBR = IV00102.ITEMNMBR AND IV00101.MSTRCDTY = IV00102.RCRDTYPE '
    + ' LEFT OUTER JOIN IV40201 ON IV00101.UOMSCHDL = IV40201.UOMSCHDL')

 

 

I modified the query so we can look for duplicates in the data set

exec('select ITEMNMBR from (select IV00101.ITEMNMBR,IV00101.ITMCLSCD,IV00101.ITEMDESC,IV00101.ITEMTYPE,IV00101.DECPLQTY,IV40201.BASEUOFM,'
    + ' IV00101.ITMSHNAM, IV00101.ITMGEDSC,IV00101.LOCNCODE,'
    + ' (ISNULL(IV00102.QTYONHND,0.00000) - ISNULL(IV00102.ATYALLOC,0.00000)) - ISNULL(IV00102.QTYCOMTD,0.00000) as c10, '
    + ' IV00101.USCATVLS_1,IV00101.USCATVLS_2,IV00101.USCATVLS_3,IV00101.USCATVLS_4,IV00101.USCATVLS_5,IV00101.USCATVLS_6,'
    + ' IV00102.ATYALLOC, IV00102.QTYONHND, IV00102.QTYONORD,' + ' 0 as c20,0 as c21'
    + ' FROM IV00101 LEFT OUTER JOIN IV00102 ON IV00101.ITEMNMBR = IV00102.ITEMNMBR AND IV00101.MSTRCDTY = IV00102.RCRDTYPE '
    + ' LEFT OUTER JOIN IV40201 ON IV00101.UOMSCHDL = IV40201.UOMSCHDL) A group by ITEMNMBR having count(*) > 1')

 

 

With a little more research, we were able to confirm that there were duplicate records in the IV_Item_MSTR_QTYS (IV00102) table. These duplicates in turn caused the temporary table used by the Item List query to have duplicates and generate the error.

The questions had to be asked: “How could the table get duplicate records in it?” and “Aren’t the indexes meant to stop duplicate records from being created?”

Well, using SQL Server Management Studio, we looked at the indexes on the table and found that there were none defined; they were missing!

Before we could successfully recreate the indexes, we needed to locate and remove the duplicates.

The IV00102 table should have 4 indexes of which the first 3 do not allow duplicates. The key fields for the 3 indexes are:

  • ITEMNMBR, RCRDTYPE, LOCNCODE
  • LOCNCODE, ITEMNMBR
  • PRIMVNDR, ITEMNMBR, LOCNCODE

I created the following queries to look for duplicate records for the 3 indexes.  

Scripts to locate Duplicates for 3 indexes

-- Identify duplicates for Key 1
select I.ITEMNMBR, I.RCRDTYPE, I.LOCNCODE, I.DEX_ROW_ID
from IV00102 I
join
(
       select ITEMNMBR, RCRDTYPE, LOCNCODE, count(0) as COUNT, min(DEX_ROW_ID) as DEX_ROW_ID
       from IV00102
       group by ITEMNMBR, RCRDTYPE, LOCNCODE
       having count(*) > 1
) D on D.ITEMNMBR = I.ITEMNMBR and D.RCRDTYPE = I.RCRDTYPE and D.LOCNCODE = I.LOCNCODE
 
-- Identify duplicates for Key 2
select I.LOCNCODE, I.ITEMNMBR, I.DEX_ROW_ID
from IV00102 I
join
(
       select LOCNCODE, ITEMNMBR, count(0) as COUNT, min(DEX_ROW_ID) as DEX_ROW_ID
       from IV00102
       group by LOCNCODE, ITEMNMBR 
       having count(*) > 1
) D on D.LOCNCODE = I.LOCNCODE and D.ITEMNMBR = I.ITEMNMBR

-- Identify duplicates for Key 3
select I.PRIMVNDR, I.ITEMNMBR, I.LOCNCODE, I.DEX_ROW_ID
from IV00102 I
join
(
       select PRIMVNDR, ITEMNMBR, LOCNCODE, count(0) as COUNT, min(DEX_ROW_ID) as DEX_ROW_ID
       from IV00102
       group by PRIMVNDR, ITEMNMBR, LOCNCODE
       having count(*) > 1
) D on D.PRIMVNDR = I.PRIMVNDR and D.ITEMNMBR = I.ITEMNMBR and D.LOCNCODE = I.LOCNCODE

 

  

Then I used the DEX_ROW_ID identity column to identify the original (first created) record, so we could keep that one and remove the rest of the duplicates. The following queries show the records which will be deleted for the 3 indexes.
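The keep-the-first rule is simple enough to sketch outside SQL as well. Below is a minimal Python illustration of the same logic (the sample rows and values are hypothetical, not taken from a real IV00102 table):

```python
from collections import defaultdict

def rows_to_remove(rows, key_columns):
    """Return the duplicate rows to delete, keeping the row with the
    smallest DEX_ROW_ID (the first-created record) in each key group."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[c] for c in key_columns)].append(row)
    doomed = []
    for group in groups.values():
        group.sort(key=lambda r: r["DEX_ROW_ID"])
        doomed.extend(group[1:])  # everything after the first-created row
    return doomed

# Hypothetical sample: two records share the Key 1 columns.
rows = [
    {"ITEMNMBR": "100XLG", "RCRDTYPE": 1, "LOCNCODE": "", "DEX_ROW_ID": 10},
    {"ITEMNMBR": "100XLG", "RCRDTYPE": 1, "LOCNCODE": "", "DEX_ROW_ID": 42},
    {"ITEMNMBR": "128SDRAM", "RCRDTYPE": 1, "LOCNCODE": "", "DEX_ROW_ID": 11},
]
duplicates = rows_to_remove(rows, ["ITEMNMBR", "RCRDTYPE", "LOCNCODE"])
# duplicates contains only the DEX_ROW_ID 42 record
```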

Scripts to locate the records to remove for 3 indexes

-- Identify records to remove for Key 1 Duplicates
select I.ITEMNMBR, I.RCRDTYPE, I.LOCNCODE, I.DEX_ROW_ID
from IV00102 I
join
(
       select ITEMNMBR, RCRDTYPE, LOCNCODE, count(0) as COUNT, min(DEX_ROW_ID) as DEX_ROW_ID
       from IV00102
       group by ITEMNMBR, RCRDTYPE, LOCNCODE
       having count(*) > 1
) D on D.ITEMNMBR = I.ITEMNMBR and D.RCRDTYPE = I.RCRDTYPE and D.LOCNCODE = I.LOCNCODE
where I.DEX_ROW_ID <> D.DEX_ROW_ID
 
-- Identify records to remove for Key 2 Duplicates
select I.LOCNCODE, I.ITEMNMBR, I.DEX_ROW_ID
from IV00102 I
join
(
       select LOCNCODE, ITEMNMBR, count(0) as COUNT, min(DEX_ROW_ID) as DEX_ROW_ID
       from IV00102
       group by LOCNCODE, ITEMNMBR 
       having count(*) > 1
) D on D.LOCNCODE = I.LOCNCODE and D.ITEMNMBR = I.ITEMNMBR
where I.DEX_ROW_ID <> D.DEX_ROW_ID

-- Identify records to remove for Key 3 Duplicates
select I.PRIMVNDR, I.ITEMNMBR, I.LOCNCODE, I.DEX_ROW_ID
from IV00102 I
join
(
       select PRIMVNDR, ITEMNMBR, LOCNCODE, count(0) as COUNT, min(DEX_ROW_ID) as DEX_ROW_ID
       from IV00102
       group by PRIMVNDR, ITEMNMBR, LOCNCODE
       having count(*) > 1
) D on D.PRIMVNDR = I.PRIMVNDR and D.ITEMNMBR = I.ITEMNMBR and D.LOCNCODE = I.LOCNCODE
where I.DEX_ROW_ID <> D.DEX_ROW_ID

 

 

Once the duplicate records had been identified, we could remove them to leave just a single record. Below are the scripts to remove the duplicates.

Script to remove duplicate records for 3 indexes

-- Remove records for Key 1 Duplicates
delete I
from IV00102 I
join
(
       select ITEMNMBR, RCRDTYPE, LOCNCODE, count(0) as COUNT, min(DEX_ROW_ID) as DEX_ROW_ID
       from IV00102
       group by ITEMNMBR, RCRDTYPE, LOCNCODE
       having count(*) > 1
) D on D.ITEMNMBR = I.ITEMNMBR and D.RCRDTYPE = I.RCRDTYPE and D.LOCNCODE = I.LOCNCODE
where I.DEX_ROW_ID <> D.DEX_ROW_ID


-- Remove records for Key 2 Duplicates
delete I
from IV00102 I
join
(
       select LOCNCODE, ITEMNMBR, count(0) as COUNT, min(DEX_ROW_ID) as DEX_ROW_ID
       from IV00102
       group by LOCNCODE, ITEMNMBR 
       having count(*) > 1
) D on D.LOCNCODE = I.LOCNCODE and D.ITEMNMBR = I.ITEMNMBR
where I.DEX_ROW_ID <> D.DEX_ROW_ID

 

-- Remove records for Key 3 Duplicates
delete I
from IV00102 I
join
(
       select PRIMVNDR, ITEMNMBR, LOCNCODE, count(0) as COUNT, min(DEX_ROW_ID) as DEX_ROW_ID
       from IV00102
       group by PRIMVNDR, ITEMNMBR, LOCNCODE
       having count(*) > 1
) D on D.PRIMVNDR = I.PRIMVNDR and D.ITEMNMBR = I.ITEMNMBR and D.LOCNCODE = I.LOCNCODE
where I.DEX_ROW_ID <> D.DEX_ROW_ID
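As a precaution (my addition, not part of the original scripts), each delete can be run inside an explicit transaction so the number of rows removed can be checked against the duplicate count found earlier before committing:

```sql
BEGIN TRANSACTION;

-- run one of the three delete statements above here, then immediately:
SELECT @@ROWCOUNT AS RowsDeleted;

-- if RowsDeleted matches the number of duplicates identified earlier:
-- COMMIT TRANSACTION;
-- otherwise:
-- ROLLBACK TRANSACTION;
```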

 

 

So now that we had cleaned up the data, the best and simplest way to get the indexes and the associated zDP stored procedures created was to back up the data, recreate the table using the SQL Maintenance feature from inside Microsoft Dynamics GP, and then copy the data back. The following blog post provides the method you can use to perform these steps:

So the partner completed the steps and tested the system and it still allowed new duplicate records to be created. The only way this could happen is if the indexes were not created properly. So I asked the partner to check that they existed by looking in SQL Server Management Studio. The screenshot below is how it should look:

 

The only reason that I could think of for the index creation to fail is if indexes of the same name already existed. So I asked the partner to also see if the indexes could be found elsewhere. He came back with the following screenshot:

 

As you can see, the indexes on the IV00102 table were missing, but there was a backup copy of the table with the same index names as the indexes we were trying to create.

I asked the partner to remove those indexes and go through the backup/recreate/restore process again. This time the indexes were created, the system no longer allowed duplicate records, and without the duplicates the Item List worked properly.

 

So the moral of this story: do not back up a table by renaming it, as this leaves the original indexes (with their original names) in place, which will prevent those indexes from being recreated when the table itself is recreated.

 

If you do rename the table, then rename its indexes to match, or remove them. Better still, you can use SELECT * INTO <new table> FROM <old table> instead to copy the data to a new table; SELECT ... INTO creates the new table without any indexes, so there are no index names to clash.

I hope you find this information useful.

David

May 12, 2014 · David Musgrave

Capture Logins Script from KB 878449 generates Invalid object name error

This week I had an interesting case from a partner trying to use the Capture Logins script from Knowledge Base (KB) article 878449.

When they ran the script on their SQL Server 2012 system, they received the following error:


Invalid object name 'master..sysxlogins'.

Looking at the source code for the script where the error occurs, I could see that the code has two branches. For the newer SQL Server versions it is meant to call the temporary stored procedure seeMigrateSQLLogins, otherwise it should call the temporary stored procedure sp_help_revlogin.

So why was it running the wrong branch of the code?

The branch executed is decided by an if statement which checks the version number information returned from the @@version system variable.  When I tested the code using print statements to display what version number the if statement was comparing against on my SQL Server 2012 system, it displayed “P1) ” when the code was expecting “11.0”.

The problem occurs because the @@version for SQL 2012 SP1 shifts the version number six characters to the right, causing the version number check to fail and the wrong branch of the code to execute.
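The shift is easy to demonstrate. In the Python sketch below, the two @@version strings are illustrative examples of the SQL Server 2012 RTM and SP1 formats, and the slice emulates the T-SQL substring(@@version, 29, 4) call (T-SQL substring is 1-based):

```python
def version_at_offset(version_string):
    # Emulates T-SQL substring(@@version, 29, 4): four characters
    # starting at 1-based position 29.
    return version_string[28:32]

# Illustrative @@version strings (first line only) for 2012 RTM and SP1.
v2012_rtm = "Microsoft SQL Server 2012 - 11.0.2100.60 (X64)"
v2012_sp1 = "Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64)"

print(version_at_offset(v2012_rtm))  # -> "11.0"
print(version_at_offset(v2012_sp1))  # -> "P1) "  ("(SP1) " shifts the number six characters right)
```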

Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64)
       Dec 28 2012 20:23:12
       Copyright (c) Microsoft Corporation
       Enterprise Edition (64-bit) on Windows NT 6.2 (Build 9200: ) (Hypervisor)

The original code below cannot handle this change. In fact, it has to be changed for every version of SQL Server, as the layout of the information returned by the @@version command might have changed.

Original script Excerpt

declare @version2005 char(5)
declare @version2008 char(5)
declare @version2008R2 char(5)

--Get the current version of SQL Server running
select @version2005 = substring(@@version,29,4)
select @version2008 = substring(@@version,35,4)
select @version2008R2 = substring(@@version,38,4)
 
if (@version2005 in ('9.00','11.0') or @version2008 = '10.0' or @version2008R2 = '10.5')

 

 

We could just keep using the same method and extend it to work with SQL Server 2012 SP1 (and tidy up the SQL Server 2012 handling as well). However, this will still need to be updated for each new SQL Server version, see below: 

Modified Script Excerpt

declare @version2005 char(5)
declare @version2008 char(5)
declare @version2008R2 char(5)
declare @version2012 char(5)
declare @version2012SP1 char(5)

--Get the current version of SQL Server running
select @version2005 = substring(@@version,29,4)
select @version2008 = substring(@@version,35,4)
select @version2008R2 = substring(@@version,38,4)
select @version2012 = substring(@@version,29,4)
select @version2012SP1 = substring(@@version,35,4)
 
if (@version2005 = '9.00' or @version2008 = '10.0' or @version2008R2 = '10.5' or @version2012 = '11.0' or @version2012SP1 = '11.0')

 

 

A better method is to make the code smart and version independent. We can achieve this by using the patindex() function looking for the string #.# (Number, Full stop (period), Number) in the version details and then starting one character before the location returned (to capture two digit major version numbers) and grab the following 4 characters using substring(). By trimming off the left hand spaces using ltrim() we can remove the space at the beginning when the major version number is only a single digit (ie. version 9.0). Finally using cast() to change the data type allows for a numeric comparison of the version to check if it is version 9.0 or greater.
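The same technique translates directly to other languages. Here is a Python sketch of the logic, using a regular expression in place of patindex() (the sample @@version strings are illustrative):

```python
import re

def sql_major_minor(version_string):
    """Mirror the patindex approach: find the first digit-period-digit
    sequence, step back one character to catch two-digit major versions,
    take four characters, strip the leading space, and convert for a
    numeric comparison."""
    match = re.search(r"[0-9]\.[0-9]", version_string)
    start = max(match.start() - 1, 0)
    return float(version_string[start:start + 4].lstrip())

print(sql_major_minor("Microsoft SQL Server 2005 - 9.00.5000.00 (X64)"))       # -> 9.0
print(sql_major_minor("Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64)"))  # -> 11.0
```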

Below is the version proof code which will work for any version and does not need changes unless the method used needs to change:

Final Script Excerpt

if cast(ltrim(substring(@@version, patindex('%[0-9].[0-9]%', @@version)-1, 4)) as decimal(4,2)) >= 9.0

 

 

Try the following on your system. This will display the major and minor version numbers for your SQL Server.

print ltrim(substring(@@version, patindex('%[0-9].[0-9]%', @@version)-1, 4))

 

The archive attached to the bottom of this article contains three script files: the original script (KB878449_Capture_Logins_Original.sql), the modified version (KB878449_Capture_Logins_Modified.sql) and then the final version (to replace the original: KB878449_Capture_Logins.sql) using the patindex() approach.

Hope you find this useful.

David

PS: I have requested that the script file associated with the KB article be updated with the new version I created.

May 2, 2014 · David Musgrave

More on SQL Server Connection issues with Microsoft Dynamics GP

I have been working on a recent support case that had a situation that makes me want to raise this topic again.

In the past, I have posted a number of articles discussing reasons for connections to SQL Server to be lost. When Microsoft Dynamics GP loses its connection to SQL Server, the Dexterity Runtime automatically creates a new connection to minimise disruption. However, this reconnection has a new SPID and is not linked in any way to the previous connection.

This means that any SQL temporary tables created in the original connection will not be available in the new connection. When Microsoft Dynamics GP attempts to access or close the temporary tables it believes exist, you will get errors.

Below are some example errors (all screenshots from the same case):


A remove range operation on table ‘XXX’ cannot find the table.

 


A save operation on table ‘XXX’ cannot find the table.

 

 
FP: Couldn’t close table!

 

It is important to note that once a Dexterity script generates an error, the script will abort and any tasks later in the script will not be completed.

Sometimes these table errors can cause other errors to be generated.


All call stacks are in use. Cannot start script.

 

Please note that ignoring any error in Microsoft Dynamics GP is a risk (see my blog articles on the Importance of Reporting Errors below). 


 

What stood out as different with this case was the following error dialog, which I had never seen before:


 A SQL network connection error occurred and your connection was cleared.
This probably occurred because all available connections to the server are in use. Please try again later.

 

This error message made me wonder what was happening with the connections to the SQL Server. The customer’s System Administrator obtained graphs of the connections; below is an example showing how the connections keep increasing during the day until the server forcibly closes them.


Connections increasing to a maximum of 2303 connections during half a day

 

So, what could cause Microsoft Dynamics GP to continuously use more and more connections?

I had one idea about what could be the cause, so I asked the question to the partner: “Does the site have VBA code that uses ADO to access SQL Server data?”

The answer was “Yes”.

So then I asked about how the ADO (ActiveX Data Objects) connection was being closed at the end of the VBA (Visual Basic for Applications) code (when exiting the form or report). We looked at the VBA project and the code used to clean up looked something like:

    ' Close ADO Connection
    Set cn = Nothing
    Set rst = Nothing
    Set cmd = Nothing

Instead, the code should look something like the following example:

    ' Close ADO Connection
    If rst.State = adStateOpen Then rst.Close
    If cn.State = adStateOpen Then cn.Close
    Set cn = Nothing
    Set rst = Nothing
    Set cmd = Nothing

The problem is that the code was destroying the connection object “cn” before actually closing the connection, leaving the connection to the SQL Server open with no way to close it anymore.

Best practice for VBA used in reports and forms when using ADO is to close the result set and connection before destroying the objects. Please see the blog posts below for details of the best practice methods for using VBA with ADO on Forms and Reports.
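The same discipline applies in any language that holds database connections. As a minimal Python sketch (using sqlite3 purely as a stand-in for an ADO connection), the close happens in a finally block so it runs even if the query fails, before the references are released:

```python
import sqlite3

def run_query(db_path, sql):
    # Open a connection, run a query, and always close the connection
    # explicitly -- the equivalent of calling rst.Close / cn.Close before
    # Set cn = Nothing in the VBA example above.
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.cursor()
        cursor.execute(sql)
        return cursor.fetchall()
    finally:
        conn.close()  # never rely on the object being garbage collected

rows = run_query(":memory:", "SELECT 1 AS answer")
```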

After a code review and making the changes needed, the connections on the customer’s system levelled out, with a maximum of 539 connections. 

 
Connections rising a small amount as users log in, but staying fairly constant throughout the day

 

So the lessons learned here are: 

  1. Always make sure you close any SQL Server connections you open (regardless of the development tool being used: Dexterity, VBA, or VSTools).
     
  2. Never ignore an error dialog (as it means code has not completed executing and the data could be in an unknown state).
     
  3. Always report the first error received as any errors that follow are usually the result of that first error.

 

For more information see the following blog posts:

 

I hope you find this information useful. 

David

April 23, 2014 · David Musgrave

Business Portal Requisitions Management Approval Hierarchy List: Server Error in ‘/’ Application

I have seen this error pop up on a few Microsoft Dynamics GP 2013 systems, so I thought I would post some information to help you resolve it... well, at least one possible cause of it.

When using Business Portal 6.0 for Microsoft Dynamics GP 2013, and you click on the Approval Hierarchy List in Requisition Management, you receive the following error:

Server Error in ‘/’ Application.

Depending on the system settings you may or may not get the detail of the error itself, see the screenshots below:

 


Screenshot without error detail

 

To show better error information from Business Portal (BP), use the following steps on the Internet Information Server (IIS) machine:

  1. Navigate to the folder where BP's IIS site is stored (e.g. C:\inetpub\wwwroot\wss\VirtualDirectories\80)
  2. Make a backup copy of the Web.config file
  3. Open the Web.config in Notepad
  4. Search for the following line:
     
        CallStack="false"
      
    and change it to:
     
        CallStack="true"
     
  5. Search for the following line: 
     
        CustomError="On"
     
    and change it to: 
     
        CustomError="Off"
     
  6. Save and close the Web.config
  7. Close and re-open IE if it's open, then navigate to the BP Home page.

 


Screenshot with error detail

 

You will notice that the error with the detail references Line 151 in the source file /BusinessPortal/Applications/ReqMgmt/ApprovalTree/ApprovalTree.aspx

Looking at that line in ApprovalTree.aspx file, we can see that the CssClass property is missing the opening quote.

 

Editing the ApprovalTree.aspx file to add the opening quote fixes the issue:

CssClass="BrandingAndStyleActionButton" runat="server" Text="Modify">

 

This is logged in the system as Problem Report 72313 and has been fixed in the code. However, it has not been made available as there has not been an update for Business Portal 6.0 released.

You can fix the issue yourself, either by editing the file with Notepad.exe or by using the fixed file attached as an archive to the bottom of this article.

 

Hope this helps

David

January 30, 2014 · David Musgrave

SQL Server xact abort setting breaks Microsoft Dynamics GP

Happy New Year. Wishing everyone a very successful 2014.

I have been back from leave for two weeks, but have been really busy with support cases.

One recent case highlighted a SQL Server setting that I have never used before and the negative consequences of changing the setting from its default value of “off”.

 

Background

The customer reported that when Payables Management transactions were fully applied and should have been moved to history, the transactions were being removed from the Open tables but were failing to be added to the History tables. The end result was that the transactions would just disappear from the system.

We used the Support Debugging Tool (http://aka.ms/SDT) to capture logs. From the Dexterity side nothing looked unusual, but from the DEXSQL.LOG the system showed a couple of unusual behaviours.

  1. We were seeing many more of the Invalid column name ‘desSPRkmhBBCreh’ errors than would be expected. These errors are not a problem in themselves (see blog post), but the frequency that they occurred at was unusually high.
     
  2. We were also seeing many commands in the DEXSQL.LOG repeated. Calls to stored procedures and select statements being issued multiple times when they would normally only occur once.

Reviewing the case notes, I found that it was mentioned in passing that the problem started occurring AFTER the Microsoft Dynamics GP system was migrated to a new server.

So we needed to look at the Database configuration and Server Configuration to see what differences could be responsible for this behaviour

 

The Problems

Further investigation using the sp_helpdb command did highlight that the Dynamics GP system and company databases were set to compatibility level 90 (SQL Server 2005) and the database owner was not set to DYNSA. These settings were easily fixed using the Options page of the Database Properties window to change the Compatibility Level, and the T-SQL command sp_changedbowner 'DYNSA' to change the owner for each of the databases. While these settings were wrong, I doubt they were responsible for the issues we were seeing.

I then received an email which explained that the customer’s IT Administrator had been comparing settings on the SQL Server between the old and new servers and noticed some differences on the Server Properties on the Connections page.

 

The differences noticed were that on the old server the Remote Query Timeout value was set to 1200 seconds instead of the default 600 seconds, and on the new server the xact abort option was checked (the default value is unchecked).

The Remote Query Timeout being 1200 would not cause any problems for normal operation; I suggested they move that setting to the new server, as there may have been a reason it was increased.

The xact abort setting, however, was one that I had never come across before, so I did some research to understand what it means and how it can affect behaviour.

 

The Solution

Looking at SQL Server Books Online for the Server Properties Connections page, it explains the xact abort setting as “Rolls back a transaction if a Transact-SQL statement raises a run-time error.”

I also found an interesting article about the option: Use Caution with Explicit Transactions in Stored Procedures.

While the blog article suggests that it would be safe to leave the option on, the author does not know the Dynamics GP application. Dynamics GP does generate a number of expected runtime errors (such as the desSPRkmhBBCreh column error) and ignores or handles them as appropriate.

However, if the xact abort option is enabled, it will cause transactions to be rolled back for the slightest error, even ones that Dynamics GP expects.
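A small T-SQL sketch (my illustration with a hypothetical table, not taken from Dynamics GP code) shows the difference in behaviour:

```sql
SET XACT_ABORT ON;

BEGIN TRANSACTION;
UPDATE SomeTable SET SomeColumn = 1 WHERE ID = 1;  -- hypothetical update
SELECT CONVERT(int, 'abc');                        -- run-time conversion error
COMMIT TRANSACTION;                                -- never reached

-- With XACT_ABORT OFF (the default), an error like this does not
-- automatically roll back the open transaction, so an application that
-- expects the error can handle it and carry on. With XACT_ABORT ON, the
-- same error rolls back the entire transaction and terminates the batch,
-- including the update above.
```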

After unchecking the option and restarting the SQL Server, the problems stopped occurring.

 

Hope you find this useful.

David

January 17, 2014 · David Musgrave

Developers: Microsoft Dynamics GP 2013 Service Pack 2 moves the Vendor and Customer Combiner & Modifier into core code

Now before you ask... the answer is no, I have not spent all week playing Xbox One (as mentioned on the last post). Yes, we did get one, and yes, the kids have been using it, but the reason I have not posted on the blog is just that I have been very busy with support cases.

Talking of support cases...

I had a case today where a developer was having issues when using the Customer Combiner. This functionality was part of PSTL (Professional Services Tools Library) and as of Microsoft Dynamics GP 2013 Service Pack 2 is now part of core code.

Note: During this article I will be referring to the Customer ID; this is the ‘Customer Number’ (CUSTNMBR) field. The same information is applicable to the ‘Vendor ID’ (VENDORID) field.

 

The Problem

The developer was receiving a “Violation of Primary Key Constraint” error when using the Customer Combiner. Using a SQL Profiler trace, they worked out that the problem was being caused when the Combiner’s code issued an update statement against their custom table, which generated the error.

UPDATE <Custom Table> SET CUSTNMBR = '<New Customer ID>' WHERE CUSTNMBR = '<Old Customer ID>'

This update statement will work perfectly if you want to move a transaction table record from the old Customer ID to the new combined Customer ID. However, it will fail when applied to a master table record as it will create a duplicate record. This duplicate issue can occur with any table that has the Customer ID field as part of a primary key or another key that does not allow duplicates.

 

The Solution

So you ask, “How can I avoid this issue with my custom tables in my custom code?” Well, if the Customer ID is used in a custom table which is a transactional table, there is no problem. If the custom table is a master table, then you will need to add some code into your customisation to handle the situation.

Note: The information below is based on how the code incorporated into the DYNAMICS.DIC for GP 2013 SP2 works; the details for the PSTL version of the code are different.

To understand how to add our extra custom code we need to understand how the original code works.


Customer Combiner and Modifier window

From the Customer Combiner and Modifier window, when you click Process, depending on the mode selected (Combiner or Modifier) it will execute a Dexterity global procedure which will in turn call a SQL Stored Procedure. The SQL Stored Procedure will then disable some triggers, call a second stored procedure and re-enable the triggers. The second stored procedure is where the actual work is done. This stored procedure will update all the tables in the core products (from Microsoft) as required and then run a script which will run an additional update statement against all tables in the database which have the column CUSTNMBR in them (except the core products tables already updated).

Now here is the cool bit: just before the stored procedure makes changes, and just after it has made them, it calls Pre and Post scripts which developers can use to add additional code. This is similar to the Pre and Post scripts used when working with eConnect.

The table below shows the scripts involved:

                        Customer Combiner         Customer Modifier        Vendor Combiner        Vendor Modifier
Dexterity Procedure     rmCustomerCombiner        rmCustomerModifier       pmVendorCombiner       pmVendorModifier
1st Stored Procedure    rmCustomerCombiner        rmCustomerModifier       pmVendorCombiner       pmVendorModifier
2nd Stored Procedure    rmCustomerCombinerMaster  N/A                      pmChangeVendorMaster   N/A
Pre Stored Procedure    rmCustomerCombinerPre     rmCustomerModifierPre    pmVendorCombinerPre    pmVendorModifierPre
Post Stored Procedure   rmCustomerCombinerPost    rmCustomerModifierPost   pmVendorCombinerPost   pmVendorModifierPost

Note: The Modifier scripts don’t use a second stored procedure; all the code is in the first stored procedure. Use the CoreParams_SP2.txt file in the SDK (Software Development Kit) or an SP2 source dictionary to obtain the parameter list for the Dexterity global procedures.

There are two solutions to this problem:

  • Either use a Dexterity Trigger to run before the Dexterity Procedures listed above to update your custom tables, so that when the SQL code runs against your tables there will be no records to update.
     
  • Or, if you cannot use Dexterity triggers, you can update the Pre Stored Procedure to make the changes for your tables.

If you need to perform any actions after the updates have occurred, you can use an after Dexterity Trigger or the Post Stored Procedure.
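For the second approach (updating the Pre Stored Procedure), the change might look something like the sketch below. The table name MY00100 and the parameter names are hypothetical placeholders; the actual parameter list comes from the CoreParams_SP2.txt file mentioned above. The idea is to resolve the custom master record before the generic update statement runs:

```sql
-- Hypothetical example: MY00100 is a custom customer master table.
-- If a record already exists for the new Customer ID, remove the old
-- record so the generic UPDATE has nothing to collide with; otherwise
-- move the old record to the new ID ourselves.
IF EXISTS (SELECT 1 FROM MY00100 WHERE CUSTNMBR = @NewCustomerID)
    DELETE FROM MY00100 WHERE CUSTNMBR = @OldCustomerID
ELSE
    UPDATE MY00100 SET CUSTNMBR = @NewCustomerID
    WHERE CUSTNMBR = @OldCustomerID
```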

  

The Recommendation

Now that the Combiner and Modifier functionality is part of the core code, it means that ALL Dynamics GP sites will have this feature and so it will be up to developers to ensure that their customisations are compatible. In the past not every site used the PSTL features, but that will change now that the code has become part of the core functionality.

If you can use Dexterity Triggers, this will be simpler, as many Dexterity developers can trigger on the same scripts without clashing with each other. Changing the SQL Pre and Post Stored Procedures will need to be a manual process, as they might already have been modified by another developer or system administrator.

Start updating your apps.

Hope you found this useful. 

David

December 2, 2013 · David Musgrave