Dynamics AX - Adding tables to DatabaseLog programmatically in AX 2009 - axapta

I'm looking for a way to enable logging of changes for certain tables.
I have tried adding tables to the database log programmatically, but with varying success so far: sometimes it works, sometimes it doesn't (mostly it does not). It seems that simply inserting rows into the DatabaseLog table doesn't quite do the trick.
What I have tried:
Adding a row with the proper tableId, fieldId, logType and domainId. The domain has been assigned as 'Admin', the main company, an empty value and subcompanies, all with the same result.
I have created a class that handles the inserts; its two main methods are:
public static void InsertBase(str tableName, DomainId _domain = 'Admin')
{
    // Base logging for insert, delete and update on fieldId = 0.
    DatabaseLog     dbDict;
    TableId         _tableId;
    DatabaseLogType _logType;
    FieldId         _fieldId = 0;
    List            logTypes;
    ListEnumerator  enumerator;
    ;
    _tableId = tableName2id(tableName);

    logTypes = new List(Types::Enum);
    logTypes.addEnd(DatabaseLogType::Insert);
    logTypes.addEnd(DatabaseLogType::Update);
    logTypes.addEnd(DatabaseLogType::Delete);
    logTypes.addEnd(DatabaseLogType::EventInsert);
    logTypes.addEnd(DatabaseLogType::EventUpdate);
    logTypes.addEnd(DatabaseLogType::EventDelete);

    enumerator = logTypes.getEnumerator();
    while (enumerator.moveNext())
    {
        _logType = enumerator.current();

        select * from dbDict
            where dbDict.logTable == _tableId
               && dbDict.logField == _fieldId
               && dbDict.logType  == _logType;

        if (!dbDict) // the setup row does not exist yet
        {
            dbDict.logTable = _tableId;
            dbDict.logField = _fieldId;
            dbDict.logType  = _logType;
            dbDict.domainId = _domain;
            dbDict.insert();
        }
    }
    info("Success");
}
and the method that lists every field and adds it with logType::Update:
public static void init(str tableName, DomainId domain = 'Admin')
{
    DatabaseLogType logType;
    int             i;
    container       kk, ll;
    DatabaseLog     dbLog;
    TableId         _tableId;
    FieldId         _fieldId;
    ;
    logType = DatabaseLogType::Update;

    // holds a container of the table fields not yet added to DatabaseLog
    kk = BLX_AddTableToDatabaseLog::buildFieldList(logType, tableName);

    for (i = 1; i <= conlen(kk); i++)
    {
        ll       = conpeek(kk, i);
        _tableId = tableName2id(tableName);
        _fieldId = conpeek(ll, 1);

        info(strfmt("%1 %2", conpeek(ll, 1), conpeek(ll, 2)));

        dbLog.logType  = logType;
        dbLog.logTable = _tableId;
        dbLog.domainId = domain;
        dbLog.logField = _fieldId;
        dbLog.insert();
    }
}
result:
What am I missing?
EDIT with some additional info:
It does not work for SalesTable, SalesLine, or WMSBillOfLading.
I couldn't add logging for SalesTable and SalesLine using the wizard in the Administration panel, but my colleague somehow did (she did exactly the same things as me). We also tried to add logging to various other tables and often found that she could while I could not, and vice versa (and sometimes neither of us managed, as in the case of the WMSBillOfLading table).
The inconsistency of this mechanism is what drove me to write this code, which I hoped would solve all the problems.

After doing your setup changes you probably have to call
SysFlushDatabaseLogSetup::main();
in order to flush any caches.
This method is also called in the standard AX code in the form method SysDatabaseLogTableSetup\Methods\close and in the class method SysDatabaseLogWizard\doRun.
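For illustration, a small job along these lines could run the setup and then flush the cache in one go (a sketch only; BLX_AddTableToDatabaseLog and its methods are the helper class from the question, and the table is just an example):
static void AddDatabaseLogAndFlush(Args _args)
{
    // Set up base logging plus field-level update logging for one table.
    // BLX_AddTableToDatabaseLog is the question's own helper class.
    BLX_AddTableToDatabaseLog::InsertBase(tableStr(SalesTable));
    BLX_AddTableToDatabaseLog::init(tableStr(SalesTable));

    // Flush the cached database log setup so the kernel starts honouring
    // the new DatabaseLog records on all tiers.
    SysFlushDatabaseLogSetup::main();
}
Without the flush, the AOS keeps serving the old log setup from its cache, which would explain why plain inserts into DatabaseLog sometimes appear to have no effect.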

Related

How to migrate DynamoDB data on a major table change?

During development, structures and requirements change. Key and index settings need to be changed, which might break an incremental table update. So my solution so far is to delete the table and recreate it from the CloudFormation stack.
But how do you solve this problem for a production deployment? Is it possible to automate DynamoDB deployment as follows?
Create new table
Migrate data from old table to new table
Delete old table
Yes, it is perfectly possible to automate such a deployment structure. As long as you have code to create a table, it should be fairly straightforward to get all of the data from the old table, change the data, and then upload it all to a new table without any drop in uptime. If you say what language you would like to do this in, I can help a bit more.
I've done this before, and I've added below a small, generic code sample showing how you could do it in Java.
Java method for creating a table given the class of the object type stored in dynamo:
/**
 * Creates a single table with its appropriate configuration (CreateTableRequest).
 */
public void createTable(Class tableClass) {
    DynamoDBMapper mapper = createMapper(); // you'll need your own function to do this
    ProvisionedThroughput pt = new ProvisionedThroughput(1L, 1L);
    CreateTableRequest ctr = mapper.generateCreateTableRequest(tableClass);
    ctr.withProvisionedThroughput(new ProvisionedThroughput(1L, 1L));
    // Provision throughput and configure projection for secondary indices.
    if (ctr.getGlobalSecondaryIndexes() != null) {
        for (GlobalSecondaryIndex idx : ctr.getGlobalSecondaryIndexes()) {
            if (idx != null) {
                idx.withProvisionedThroughput(pt).withProjection(new Projection().withProjectionType("ALL"));
            }
        }
    }
    TableUtils.createTableIfNotExists(client, ctr);
}
Java method to delete a table:
private static void deleteTable(String tableName) {
    AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
    DynamoDB dynamoDB = new DynamoDB(client);
    Table table = dynamoDB.getTable(tableName);
    try {
        System.out.println("Issuing DeleteTable request for " + tableName);
        table.delete();
        System.out.println("Waiting for " + tableName + " to be deleted...this may take a while...");
        table.waitForDelete();
    } catch (Exception e) {
        System.err.println("DeleteTable request failed for " + tableName);
        System.err.println(e.getMessage());
    }
}
I would scan the whole table, load all of the content into a List, map over that list converting the objects into your new type, create a new table of that type with a different name, push all of your new objects, and then delete the old table after switching any references you might have from the old table to the new one. Unfortunately, this does mean that everything consuming your tables needs to be able to switch between the two staging tables.

How to refresh Factbox

I have a form; when I click my button, it adds a record to table A (which my factbox shows). Is it possible to refresh the factbox with X++ code? I can't figure out how to refresh my infopart or the query the factbox uses.
For an infopart, you can trigger an update of the data source of the infopart's form run:
void clicked()
{
    PartList       partList;
    int            i;
    FormRun        infoPartFormRun;
    FormDataSource infoPartDataSource;

    super();

    partList = new PartList(element);

    for (i = 1; i <= partList.partCount(); i++)
    {
        infoPartFormRun = partList.getPartById(i);

        if (infoPartFormRun.name() == identifierStr(MyInfoPart))
        {
            infoPartDataSource = infoPartFormRun.dataSource();

            if (infoPartDataSource)
            {
                infoPartDataSource.research();
            }
        }
    }
}
I added the check for infoPartDataSource because I first tested this with a cue group fact box, which does not have a data source (or at least I could not figure out how to get the data source of one of the cues in the cue group; since you asked about an infopart fact box, I did not investigate further).
Update: The issue seems to be popular at the moment; Martin Dráb also wrote about it in his blog: Refreshing form parts

How to identify advanced query or dynamic joins from query window?

In the query window that pops up, if a user right-clicks, chooses "1:n" and selects a table, how can one detect and use that table? I have a good sample job and screenshots that should demonstrate what I'm trying to accomplish.
I wrote this sample job that dumps out the AOT query objects but not the dynamically joined table/range/value.
static void InventSumQuery(Args _args)
{
    Query           query = new Query(queryStr(InventDimPhys));
    QueryRun        qr    = new QueryRun(query);
    QueryBuildRange queryRange;
    DictField       dictField;
    int             i, n;

    if (qr.prompt())
    {
        for (n = 1; n <= query.dataSourceCount(); n++)
        {
            for (i = 1; i <= query.dataSourceNo(n).rangeCount(); i++)
            {
                queryRange = query.dataSourceNo(n).range(i);
                dictField  = new DictField(query.dataSourceNo(n).table(),
                                           fieldName2id(query.dataSourceNo(n).table(), queryRange.AOTname()));
                info(strFmt("%1.%2", tableId2name(dictField.tableid()), dictField.name()));
            }
        }
    }
    info("Done");
}
Of course, I figured out my own answer. Query objects are static, and the query form actually just modifies the query when you make the change.
So you need to modify the code above to:
if (qr.prompt())
{
    query = qr.query();
This gets the modified query. The advanced querying actually is just a function of the form itself that ultimately modifies the query.
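Putting it together, a minimal variation of the job above that reads joins and ranges from the post-prompt query might look like this (a sketch; the DictField resolution from the original job is left out for brevity):
static void InventSumQueryModified(Args _args)
{
    Query    query = new Query(queryStr(InventDimPhys));
    QueryRun qr    = new QueryRun(query);
    int      n, i;

    if (qr.prompt())
    {
        // Pick up the query as the user left it in the dialog,
        // including any dynamically added 1:n joins and ranges.
        query = qr.query();

        for (n = 1; n <= query.dataSourceCount(); n++)
        {
            info(strFmt("Data source: %1", tableId2name(query.dataSourceNo(n).table())));

            for (i = 1; i <= query.dataSourceNo(n).rangeCount(); i++)
            {
                info(strFmt("  Range %1 = %2",
                            query.dataSourceNo(n).range(i).AOTname(),
                            query.dataSourceNo(n).range(i).value()));
            }
        }
    }
}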

Query to fetch table names from AX takes too long

I am using the following code in X++ to get table names:
client server public static container tableNames()
{
    TableId    tableId;
    int        tableCounter;
    Dictionary dict = new Dictionary();
    container  tableNamesList;

    for (tableCounter = 1; tableCounter <= dict.tableCnt(); tableCounter++)
    {
        tableId = dict.tableCnt2Id(tableCounter);
        tableNamesList = conIns(tableNamesList, 1, dict.tableName(tableId));
    }
    return tableNamesList;
}
Business Connector code:
tablesList = (AxaptaContainer)Global.ax.
    CallStaticClassMethod("Code_Generator", "tableNames");

for (int i = 1; i <= tablesList.Count; i++)
{
    tableName = tablesList.get_Item(i).ToString();
    tables.Add(tableName);
}
The application hangs for 2-3 minutes while fetching the data. What could be the cause? Are there any optimizations?
Rather than using conIns(), use +=; it will be faster:
tableNamesList += dict.tableName(tableId);
conIns() has to work out where in the container to place the insert; += just appends to the end.
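For instance, the lookup method from the question could be rewritten with += roughly as follows (a sketch; the logic is otherwise unchanged, except that it appends instead of inserting at position 1):
client server public static container tableNames()
{
    Dictionary dict = new Dictionary();
    container  tableNamesList;
    int        tableCounter;

    for (tableCounter = 1; tableCounter <= dict.tableCnt(); tableCounter++)
    {
        // += appends in place at the end of the container,
        // avoiding the copy that conIns() makes on every call.
        tableNamesList += dict.tableName(dict.tableCnt2Id(tableCounter));
    }
    return tableNamesList;
}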
As mentioned before avoid conIns() when appending elements to a container because it makes a new copy of the container. Use += instead to append in place.
Also, you may want to check for permissions and leave out temporary tables, table maps, and other special cases. Standard Ax has a method to build a table name lookup form that takes these things into account. Check the method Global::pickTable() for details.
You could avoid some calls through the business connector as well and build the entire list in Ax in a similar way and return that in a single function call.
If you are using Dynamics AX 2012, you could skip the TreeNode stuff and use the SysModelElement table to fetch the data and return it immediately as a .NET ArrayList, to make things easier on the other side:
public static System.Collections.ArrayList FetchTableNames_ModelElementTables()
{
    SysModelElement              element;
    SysModelElementType          elementType;
    System.Collections.ArrayList tableNames = new System.Collections.ArrayList();
    ;
    // The SysModelElementType table contains the element types,
    // and we need the RecId for the next selection.
    select firstonly RecId
        from elementType
        where elementType.Name == 'Table';

    // With the RecId of the table element type,
    // select all of the elements with that type (hence, all of the tables).
    while select Name
        from element
        where element.ElementType == elementType.RecId
    {
        tableNames.Add(element.Name);
    }

    return tableNames;
}
Alright, I have tried a lot of things and, in the end, I decided to create a table containing all the table names. A job populates this table, and I fetch records from it.
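For reference, such a populating job could look roughly like this (a sketch; TableNameStore and its Name field are hypothetical names for the custom table):
static void PopulateTableNameStore(Args _args)
{
    TableNameStore store; // hypothetical custom table with a single Name field
    Dictionary     dict = new Dictionary();
    int            i;

    ttsbegin;
    // Rebuild the name cache from scratch on every run.
    delete_from store;

    for (i = 1; i <= dict.tableCnt(); i++)
    {
        store.clear();
        store.Name = dict.tableName(dict.tableCnt2Id(i));
        store.insert();
    }
    ttscommit;
}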

SQL statement placeholders that are not replaced lead to "Cannot update '#columnName'; field not updateable"

I'm writing some code that updates a database with a SQL statement containing some placeholders, but it doesn't seem to replace these placeholders.
I got the following error:
Cannot update '#columnName'; field not updateable
Here is the method:
public void updateDoctorTableField(string columnName, string newValue, string vendorNumber) {
    sqlStatement = "update Doctor set #columnName = #newValue where `VENDOR #` = #vendorNumber;";
    try {
        _command = new OleDbCommand(sqlStatement, _connection);
        _command.Parameters.Add("#columnName", OleDbType.WChar).Value = columnName;
        _command.Parameters.Add("#newValue", OleDbType.WChar).Value = newValue;
        _command.Parameters.Add("#vendorNumber", OleDbType.WChar).Value = vendorNumber;
        _command.ExecuteNonQuery();
    } catch (Exception ex) {
        processExeption(ex);
    } finally {
        _connection.Close();
    }
}
Not all parts of the query are parameterisable.
You can't parametrise the name of the column; this needs to be specified explicitly in your query text.
If this is sent via user input, you need to guard against SQL injection. In fact, in any event it would be best to check it against a whitelist of known valid column names.
The reason the language does not allow parameters for things like table names, column names and such is exactly the same reason your C# program does not allow substitution of variable names in the code. Basically, your question can be rephrased like this in a C# program:
class MyClass
{
    int x;
    float y;
    string z;

    void DoSomething(string variableName)
    {
        this.#variable = ...
    }
}

MyClass my = new MyClass();
my.DoSomething("x"); // expect this to manipulate my.x
my.DoSomething("y"); // expect this to manipulate my.y
my.DoSomething("z"); // expect this to manipulate my.z
This obviously won't compile, because the compiler cannot generate the code. The same goes for T-SQL: the compiler cannot generate the code to locate the column "#columnName" in your case. And just as in C# you would use reflection for this kind of trick, in T-SQL you would use dynamic SQL to achieve the same.
You can (and should) use the QUOTENAME function when building your dynamic SQL to guard against SQL injection.
