In the previous blog post we explored how to tell how many records we have per company account using a T-SQL query. But AX has a concept that plain SQL cannot easily cope with, which makes it a bit harder to tell the active record count in ValidTimeState tables.
During the cleanup exercise of our code upgrade to Dynamics 365 Finance and Operations we noticed that many of our custom tables did not have the CacheLookup property configured. This can have a negative performance impact on your system, so it is best practice to have it configured properly. Below you can find a job to find tables without the CacheLookup property set.
You can find more details about caching in the documentation.
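As a rough illustration of what such a check can look like against the file-based metadata store, here is a minimal C# sketch using the same DiskMetadataProvider approach described later in this post. The package name MyPackage is a placeholder, and the assumption that AxTable exposes a CacheLookup property (defaulting to None when nothing is configured) should be verified against your metadata assemblies.

using System;
using Microsoft.Dynamics.AX.Metadata.Storage;
using Microsoft.Dynamics.AX.Metadata.Providers;
using Microsoft.Dynamics.AX.Metadata.MetaModel;

class FindTablesWithoutCacheLookup
{
    static void Main()
    {
        // Point the disk metadata provider at the file-based metadata repository
        string packagesLocalDirectory = @"C:\AosService\PackagesLocalDirectory";
        IMetadataProvider provider = new MetadataProviderFactory().CreateDiskProvider(packagesLocalDirectory);

        // MyPackage is a placeholder; list and read every table in that package
        foreach (string tableName in provider.Tables.ListObjects("MyPackage"))
        {
            AxTable table = provider.Tables.Read(tableName);

            // Assumption: CacheLookup left at its default (None) means no caching is configured
            if (table.CacheLookup.ToString() == "None")
            {
                Console.WriteLine(tableName);
            }
        }
    }
}

Running something along these lines against your own packages gives a quick worklist of tables to review and assign an appropriate caching level to.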
During our code upgrade we identified that the SysDeletedObjects configuration keys were still turned on, and that many DEL_-prefixed fields were still referenced or used in various places. We had to come up with a way to find objects marked for deletion.
The documentation gives you an idea of what to do with such elements.
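As a hedged starting point for the field side of that exercise, the DEL_ naming convention can be scanned for over the file-based metadata store. The sketch below is illustrative rather than the exact tooling we used; MyPackage is a placeholder package name, and enumerating the Fields collection of AxTable is an assumption about the metadata classes.

using System;
using Microsoft.Dynamics.AX.Metadata.Storage;
using Microsoft.Dynamics.AX.Metadata.Providers;
using Microsoft.Dynamics.AX.Metadata.MetaModel;

class FindDelPrefixedFields
{
    static void Main()
    {
        // Point the disk metadata provider at the file-based metadata repository
        string packagesLocalDirectory = @"C:\AosService\PackagesLocalDirectory";
        IMetadataProvider provider = new MetadataProviderFactory().CreateDiskProvider(packagesLocalDirectory);

        // MyPackage is a placeholder; walk every table in that package
        foreach (string tableName in provider.Tables.ListObjects("MyPackage"))
        {
            AxTable table = provider.Tables.Read(tableName);

            // Report any field that follows the DEL_ deprecation naming convention
            foreach (AxTableField field in table.Fields)
            {
                if (field.Name.StartsWith("DEL_", StringComparison.OrdinalIgnoreCase))
                {
                    Console.WriteLine(tableName + "." + field.Name);
                }
            }
        }
    }
}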
As part of our code upgrade to Dynamics 365 Finance and Operations we removed some processes, for which the security roles had to be refreshed en masse. For this I decided to modify the D365FO metadata using reflection, to see how it can be done with the new file-based repository.
First I created a new C# project and referenced the following 3 dynamic-link libraries from the AOSService\PackagesLocalDirectory\bin folder:
Then the following code was developed, which shows how to manipulate the metadata using the new DiskMetadataProvider classes. In the example below we are working against the JADOperation package and AxSecurityRole objects.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Dynamics.AX.Metadata.Storage;
using Microsoft.Dynamics.AX.Metadata.Core.Collections;
using Microsoft.Dynamics.AX.Metadata.Providers;
using Microsoft.Dynamics.AX.Metadata.MetaModel;

namespace ModifyAOT
{
    class Program
    {
        // Removes a privilege reference from the role if it is present, and reports whether a change was made
        static bool removeNode(
            AxSecurityRole _role,
            KeyedObjectCollection<AxSecurityPrivilegeReference> _privs,
            string _node
            )
        {
            bool isModified = false;

            if (_privs.Contains(_node))
            {
                Console.WriteLine("Removing " + _role.Name + "." + _node);
                _privs.Remove(_node);
                isModified = true;
            }

            return isModified;
        }

        static void Main(string[] args)
        {
            // Point the disk metadata provider at the file-based metadata repository
            string packagesLocalDirectory = @"C:\AosService\PackagesLocalDirectory";
            IMetadataProvider diskMetadataProvider = new MetadataProviderFactory().CreateDiskProvider(packagesLocalDirectory);

            // Enumerate all security roles in the JADOperation package
            var l = diskMetadataProvider.SecurityRoles.ListObjects("JADOperation");
            var le = l.GetEnumerator();

            while (le.MoveNext())
            {
                bool isModified = false;

                AxSecurityRole r = diskMetadataProvider.SecurityRoles.Read(le.Current);
                KeyedObjectCollection<AxSecurityPrivilegeReference> privs = r.Privileges;

                // Strip the privilege references that belong to the removed processes
                isModified = removeNode(r, privs, "JADCCDelayCaptureBatchProcess") || isModified;
                isModified = removeNode(r, privs, "ProfitAccountStatistics_CustItemGenerate") || isModified;
                isModified = removeNode(r, privs, "CCMOrderPadLoginManagementMaintain") || isModified;

                if (isModified)
                {
                    // Save the role back to the model it originates from
                    ModelSaveInfo model = diskMetadataProvider.ModelManifest.ConstructSaveInfo(diskMetadataProvider.ModelManifest.GetMetadataReferenceFromObject("JADOperation", "AxSecurityRole", r.Name).ModelReference);
                    diskMetadataProvider.SecurityRoles.Update(r, model);
                }
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }
    }
}
And this is all you need to modify D365FO metadata using reflection.
You can find some other use cases of doing metadata reflection in previous blog posts as well:
The document attachments were stored in our AX 2012 database and were occupying several hundred GBs. As part of the migration to D365FO we had to export these files in bulk. The most efficient way is to run the batch tasks in parallel. But this data is skewed in such a way that if you grouped the tasks based on their creation date, the first tasks would barely have any records to process while the last ones would go on forever. The solution for this is called data histogram equalization.
It would be a big challenge to code this in X++, but SQL Server has a function that does exactly this: NTILE.
The following direct query processes the data histogram and returns 10 buckets of record identifier (RecId) ranges of roughly equal size:
WITH documents (Bucket, RecId)
AS (
    SELECT NTILE(10) OVER (ORDER BY docuRef.RecId) AS Bucket
        ,docuRef.RecId
    FROM docuRef
    INNER JOIN docuType
        ON (docuType.DataAreaId = docuRef.RefCompanyId OR docuRef.RefCompanyId = '')
        AND docuType.TypeId = docuRef.TypeId
    INNER JOIN docuValue
        ON docuValue.RecId = docuRef.ValueRecId
    WHERE docuType.TypeGroup = 1 -- DocuTypeGroup::File
        AND docuType.FilePlace = 0 -- DocuFilePlace::Archive
        AND docuType.ArchivePath <> ''
        AND docuValue.Path = ''
)
SELECT Bucket
    ,COUNT(*) AS Count
    ,(SELECT MIN(RecId) FROM documents D WHERE D.Bucket = documents.Bucket) AS RecId_From
    ,(SELECT MAX(RecId) FROM documents D WHERE D.Bucket = documents.Bucket) AS RecId_To
FROM documents
GROUP BY Bucket
In our case the query returned 10 buckets with roughly 57,230 rows in each, which we used for spawning 10 batch job tasks to do parallel execution based on the RecId surrogate key index. This allows you to evenly distribute the load for data processing in parallel.
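To give an idea of how such ranges can feed parallel workers, here is a minimal C# sketch outside the AX batch framework; the bucket boundaries below are placeholders, and ExportRange is a hypothetical stand-in for the per-bucket work (in our case, a batch task exporting attachments).

using System;
using System.Threading.Tasks;

class ParallelRangeProcessing
{
    // Hypothetical stand-in for the real per-bucket work
    static void ExportRange(long recIdFrom, long recIdTo)
    {
        Console.WriteLine($"Processing RecId range {recIdFrom} .. {recIdTo}");
    }

    static void Main()
    {
        // One (RecId_From, RecId_To) pair per bucket returned by the histogram query; placeholder values
        var buckets = new (long From, long To)[]
        {
            (5637144576, 5637201805),
            (5637201806, 5637259035),
            // ... remaining buckets
        };

        // Process the buckets side by side, mirroring the 10 parallel batch tasks
        Parallel.ForEach(buckets, bucket => ExportRange(bucket.From, bucket.To));
    }
}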
If we exported our document attachments sequentially, it would take roughly 40 hours in total. By utilizing data histogram equalization we could get it down to under 3 hours. That is amazing!
It is a great way to split ranges of data for parallel processing, and one to keep in mind for the future.