AX 2012 R2: Lookup query takes too long, lookup never opens

I have an AX 2012 R2 CU6 environment (build & client 6.2.1000.1437, kernel 6.2.1000.5268) with the following problem:
On AP > Journals > Invoices > Invoice Journal > Lines (form LedgerJournalTransVendInvoice), when I select Vendor as the account type and then activate the lookup on the Account field, AX freezes for a couple of minutes, and when it recovers the lookup is closed/never opened. This happens every time with account type Vendor; other account types work just fine.
I debugged this down to LedgerJournalEngine.accountNumLookup() --> VendTable.lookupVendor, at the line
formSegmentedEntryControl.performFormLookup(formRun);
This call is where all the time is spent.
Any ideas before I hire an exorcist?

There is a known KB for this for R3; look for it on Lifecycle Services:
KB 3086961 "Performance issue of VendorLookup on the volume data, during the GFM Bugbash 6/11 took over 30 minutes"
Even though the fix is for R3, it should be easy to backport, as the changes are described as:
The root cause seemed to be the DirPartyLookupGridView, which had
around 14 joins on views and tables. This view is used in many places
and hence seemed to have grown quite a lot over time.
The changes in the hotfix remove the view and add only the required
datasources - dirpartytable and logisticsaddress to the
VendTableLookup form.
The custtableLookup is not using the view and using custom datasource
joins instead, so no changes there.
Try implementing that change and see what happens.
I'm not sure this will fix your issue: in your execution plan, the only operation that seems really expensive is the sort operator, which needs to spill to tempdb (you might need more memory to solve that). But the change to the datasources could have the effect of removing the sort operator from the execution plan, since the data may already be sorted by an index.

Probably SQL Server chose the wrong query plan.
First check that you have not disabled any indexes on the involved tables, then do a synchronize on them.
If it is still a problem, run UPDATE STATISTICS on the involved tables (including the tables behind the view).
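As a rough sketch, the statistics update could look like the following from SQL Server Management Studio; the table names below are the standard AX 2012 ones behind the vendor lookup, so treat the list as an assumption and adjust it to whatever your execution plan actually touches:
-- Rebuild optimizer statistics on the tables involved in the vendor lookup.
-- Table list is an assumption; add the other tables from your execution plan.
UPDATE STATISTICS dbo.VENDTABLE WITH FULLSCAN;
UPDATE STATISTICS dbo.DIRPARTYTABLE WITH FULLSCAN;
UPDATE STATISTICS dbo.LOGISTICSPOSTALADDRESS WITH FULLSCAN;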

To speed up data refresh should I create a view or use query folding?

I have an SQLite database of 5 GB which gets updated a few times a day and is used to refresh a Power BI dashboard. While it was below 1 GB I could refresh the dashboard in under a minute, but now it takes around 20 minutes.
Should I create views so merges, joins, etc. are made in the view instead of loading the table itself and using Power Query to perform data manipulation?
Should I use incremental refresh? Is it possible in SQLite?
I would implement views on the database side in this case. This is, in my eyes, the benefit of having control of the DB yourself: pushing these data transforms to the DB instead of tinkering in Power Query is a major win.
When the views are set up, work on incremental refresh if the SQLITE3 connector supports it; if it doesn't go through the "regular" SQL Database connector, it may not, according to the official documentation.
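As a hedged illustration of the view approach (the table and column names here are made up, since the actual schema isn't shown), a SQLite view that pre-joins and aggregates the data might look like this:
-- Hypothetical view: do the joins and aggregation in SQLite so Power Query
-- only has to load already-shaped rows.
CREATE VIEW IF NOT EXISTS v_order_summary AS
SELECT o.order_id,
       o.order_date,
       c.customer_name,
       SUM(oi.quantity * oi.unit_price) AS order_total
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
JOIN order_items oi ON oi.order_id = o.order_id
GROUP BY o.order_id, o.order_date, c.customer_name;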
5 GB is tiny, and your refreshes should not be taking that long. How complicated are your queries and what kind of transformations are you doing?
I would first look at making sure you're not breaking query folding. Seemingly simple steps do not always support folding, but by rearranging them you can ensure folding happens, which can have a drastic effect on refresh times. Check at what stage folding is breaking and see if you can move things around.
Next, I'd look at moving your transformations upstream to a view (Roche's maxim).
Incremental refresh will also work, so it is really up to you to pick from the available options.

Can you get a log file of 'reads' on specific RECID(Tablename) in Progress-4GL/Openedge at RunTime without access to Source Code?

I want to know which tables are being read by a query.
for each Customer where CustomerID = 12345.
Eventually this customer will be found by the example above, but Progress must 'read' many tables before getting to customer 12345.
How do I know exactly which tables are read (By CustomerID), prior to getting to customer 12345?
*NOTE: I do not have access to modify the code being run for this selection. Ideally I would run a separate set of code that is executed at the same time as the customer query above to track the reads.
EDIT: More clearly - Can you track reads from a given program (.p) OR ProcessID and output either a RECID or the PrimaryKey to a file?
I understand the information is being read off the Disk and probably stored in a database buffer. So how would I get at the information in the database buffer?
You seem to be mixing up a few different things.
In a situation like your example, where you FIND a specific record in one, and only one, table, there is just a single record read. Progress will find that record by first scanning a relevant index. That might be 2 or 3 "logical reads" of the b-tree to get to the proper node. The record block and index blocks may, or may not, be read from disk - that depends on what has happened previously.
There are "Virtual System Tables" available that can tell you how many READ operations take place against a particular table or index. But they do not trace the specific ROWID or other identifying data. _TableStat and _IndexStat are aggregates for all users on the system, _UserTableStat and _UserIndexStat are specific to a particular user's activity. You do need to set the -tablerangesize and -indexrangesize parameters adequately to take advantage of these.
If you have enabled the table and index statistics then you can use a tool like ProTop - http://protop.wss.com to get insight into this activity. Or you can write your own code.
OpenEdge Auditing does not track reads. That would be prohibitively expensive.
It's probably not really a good idea but, in theory, you could write FIND triggers for the tables you are interested in. That doesn't require access to the application source but you would need a development license. It will probably kill performance to do this though - so unless this is a non-production test environment that you just want to fiddle with I wouldn't really do that.
You mention wanting to know how you got to that point. That sounds more like you might need to have a "4gl trace". One easy way to get the stack trace of a running process is to execute:
$DLC/bin/proGetStack PID (UNIX)
or
%DLC%\bin\proGetStack PID (Windows)
This command will generate a "protrace.pid" file containing a 4gl stack trace and other interesting information.
There are also more complicated ways to get that info like using PROMON and the "client statement cache" or setting various log entry types at session startup. But proGetStack is pretty convenient and requires no code or scripting changes.
Some great options from Tom above, and all of them may be relevant to you. The one option he only skirts around is logging. I feel obliged to expand on this because I'm giving a talk on it in a couple of weeks!
Assuming you are running a modern version of Progress, or even 10.2B08, then you have client logging available to you. Start your session with these additional options:
-clientlog "\somefolder\somefile.txt"
-logentrytypes "QryInfo:3"
This will log all the info of all the queries in your session to the file you specified above. If you navigate to the point in the system where you want to analyse your query and empty the logfile and save it, you can then run the offending query and see all the detail you need.
The output tells you all sorts of useful info, including the number of reads on each table, compared with the number returned to the user. You also get the index selected.
Using Tom's advice and/or this will get you what you need.

Teradata DELETE ALL vs DROP+CREATE

I've been recently assigned on a project using Teradata.
I've been told to strictly use DROP+CREATE instead of DELETE ALL, because the latter "leaves some space allocated someway". This is counter-intuitive to me, and I think it's probably wrong. I searched the web for a comparison between the two methods, but I found nothing.
This only reinforces my belief that DELETE ALL doesn't suffer from the issue above.
However, if this is the case, I must prove it (both practically and theoretically).
So, my question is: is there a difference in space allocation between the two methods? If not, is there an official document (user guide, technical specification, whatever else) that proves it?
Thank you!
There's a discussion here: http://teradataforum.com/teradata/20120403_105705.htm about the very same subject (although it does not really answer the "leaves some space allocated someway" part). They actually recommend DELETE ALL but for other (performance) reasons:
I'll quote just in case the link goes dead:
"Delete all" will be quicker, although being practical there often isn't a lot of difference in the performance of them.
However, especially for a process that is run regularly (say a daily batch process) then I recommend the "delete all" approach. This will do less work as it only removes the data and leaves the definition in place. Remember that if you remove the definition then this requires accessing multiple dictionary tables, and of course you then have to access those same tables (typically) when you re-create the object.
Apart from the performance aspect, the downside of the drop/create approach is that every time you create an object Teradata inserts "default rows" into the AccessRights table, even if subsequent access to the object is controlled via Role security and/or database level security. As you may well know the AccessRights table can easily get large and very skewed. In my experience many sites have a process which cleans this table on a regular basis, removing redundant rows. If your (typically batch) processes regularly drop/create objects, then you're simply adding rows into the table which have previously been removed by a clean process, and which will be removed in the future by the same process. This all sounds like a complete waste of time to me.
Your impression is correct: you didn't find any reference to "DELETE leaves some space allocated" anywhere because it's simply wrong :-)
DELETE ALL is similar to a TRUNCATE in other DBMSes and in most cases uses fast-path processing.
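For illustration, the two approaches being compared look roughly like this (the database, table and column names are made up):
-- Unconditional delete: keeps the table definition and simply empties it.
DELETE FROM mydb.mytable ALL;

-- Drop/create: removes the object from the dictionary and re-registers it.
DROP TABLE mydb.mytable;
CREATE TABLE mydb.mytable
( id    INTEGER,
  descr VARCHAR(100)
)
PRIMARY INDEX (id);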
First of all, you cannot do DROP/CREATE in one transaction in Teradata (in Oracle there are other problems with everyday DDL), so when ETL processes become complicated you can end up with dependencies where more important business processes depend on less important ones (for example, you might see the customers table empty just because the interest rates were not refreshed, or because one minor column received a varchar value that was too long).
My opinion: Use transactions and modular programming. In Teradata this means avoiding DDL where possible and using DELETE/UPDATE/MERGE/INSERT instead of DROP/CREATE.
We have a slightly different situation in Postgres where DDL statements are transactional.

Solution for previewing user changes and allowing rollback/commit over a period of time

I have asked a few questions today as I try to think through to the solution of a problem.
We have a complex data structure where all of the various entities are tightly interconnected, with almost all entities heavily reliant/dependant upon entities of other types.
The project is a website (MVC3, .NET 4), and all of the logic is implemented using LINQ-to-SQL (2008) in the business layer.
What we need to do is have a user "lock" the system while they make their changes (there are other reasons for this which I won't go into here that are not database related). While this user is making their changes we want to be able to show them the original state of entities which they are updating, as well as a "preview" of the changes they have made. When finished, they need to be able to rollback/commit.
We have considered these options:
Holding open a transaction for the length of time a user takes to make multiple changes stinks, so that's out.
Holding a copy of all the data in memory (or cached to disk) is an option, but there is a heck of a lot of it, so that seems unreasonable.
Maintaining a set of secondary tables, or attempting to use session state to store changes, but this is complex and difficult to maintain.
Using two databases, flipping between them by connection string, and using T-SQL to manage replication, putting them back in sync after commit/rollback. I.e. switching on/off, forcing snapshot, reversing direction etc.
We're a bit stumped for a solution that is relatively easy to maintain. Any suggestions?
Our solution to a similar problem is to use a locking table that holds locks per entity type in our system. When the client application wants to edit an entity, we do a "GetWithLock" which gets the client the most up-to-date version of the entity's data as well as obtaining a lock (a GUID that is stored in the lock table along with the entity type and the entity ID). This prevents other users from editing the same entity. When you commit your changes with an update, you release the lock by deleting the lock record from the lock table. Since stored procedures are the API we use for interacting with the database, this allows a very straightforward way to lock/unlock access to specific entities.
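The answer doesn't show its actual schema, but as a hedged sketch, the lock table and a "GetWithLock"-style procedure could look something like this in T-SQL (all names, including dbo.Customers, are hypothetical):
-- Hypothetical lock table: one row per locked entity.
CREATE TABLE dbo.EntityLock (
    LockId     UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    EntityType NVARCHAR(50)     NOT NULL,
    EntityId   INT              NOT NULL,
    LockedBy   NVARCHAR(50)     NOT NULL,
    LockedAt   DATETIME2        NOT NULL DEFAULT SYSUTCDATETIME(),
    CONSTRAINT UQ_EntityLock UNIQUE (EntityType, EntityId)  -- only one lock per entity
);
GO

-- Hypothetical "GetWithLock": inserts the lock row (fails if someone else already
-- holds it) and returns the current data in the same transaction. Error handling omitted.
CREATE PROCEDURE dbo.GetCustomerWithLock
    @CustomerId INT,
    @UserName   NVARCHAR(50),
    @LockId     UNIQUEIDENTIFIER OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SET @LockId = NEWID();
    BEGIN TRAN;
        INSERT INTO dbo.EntityLock (LockId, EntityType, EntityId, LockedBy)
        VALUES (@LockId, N'Customer', @CustomerId, @UserName);
        SELECT * FROM dbo.Customers WHERE CustomerId = @CustomerId;
    COMMIT;
END;
GO

-- Releasing the lock on commit/cancel:
-- DELETE FROM dbo.EntityLock WHERE LockId = @LockId;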
On the client side, we implement IEditableObject on the UI model classes. Our model classes hold a reference to the instance of the service entity that was retrieved on the service call. This allows the UI to do a Begin/End/Cancel Edit and do the commit or rollback as necessary. By holding the instance of the original service entity, we are able to see the original and current data, which would allow the user to get that "preview" you're looking for.
While our solution does not implement LINQ, I don't believe there's anything unique in our approach that would prevent you from using LINQ as well.
HTH
Consider this:
Long transactions make the system less scalable. If you run an UPDATE command, the update locks last until commit/rollback, preventing other transactions from proceeding.
Secondary tables/databases can be modified by concurrent transactions, so you cannot rely on the data in them. The only way around that is to lock them => see no. 1.
Serializable transactions in some database engines use versions of the data in your tables, so after the first command is executed the transaction sees exactly the data that was available at execution time. This might help you show the changes made by the user, but you have no guarantee you can save them back into storage.
DataSets contain old/new versions of the data, but that is unfortunately outside your chosen technology.
Use a set of secondary tables.
The problem is that your connection should see two versions of data while the other connections should see only one (or two, one of them being their own).
While it is possible theoretically and is implemented in Oracle using flashbacks, SQL Server does not support it natively, since it has no means to query previous versions of the records.
You can issue a query like this:
SELECT *
FROM mytable
AS OF TIMESTAMP TO_TIMESTAMP('2010-01-17', 'YYYY-MM-DD')
in Oracle, but not in SQL Server.
This means that you need to implement this functionality yourself (placing the new versions of rows into your own tables).
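As a hedged sketch of that "your own tables" idea (table and column names are invented for illustration), pending versions could be staged alongside a reference to the live row so the UI can show original vs. preview:
-- Hypothetical staging table for proposed changes to a live dbo.Orders table.
CREATE TABLE dbo.PendingOrderChange (
    ChangeId  INT IDENTITY(1,1) PRIMARY KEY,
    OrderId   INT           NOT NULL,            -- key of the live row
    ChangedBy NVARCHAR(50)  NOT NULL,
    ChangedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    NewStatus NVARCHAR(20)  NULL,                -- proposed values, one column per editable field
    NewAmount DECIMAL(18,2) NULL
);
-- On commit, apply the pending rows to the live table; on rollback, just delete them.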
Sounds like an ugly problem, and raises a whole lot of questions you won't be able to go into on SO. I got the following idea while reading your problem, and while it "smells" as bad as the others you list, it may help you work up an eventual solution.
First, have some kind of locking system, as described by #user580122, to flag/record the fact that one of these transactions is going on. (Be sure to include some kind of periodic automated check, to test for lost or abandoned transactions!)
Next, for every change you make to the database, log it somehow, either in the application or in a dedicated table somewhere. The idea is, given a copy of the database at state X, you could re-run the steps submitted by the user at any time.
Next up is figuring out how to use database snapshots. Read up on these in BOL; the general idea is you create a point-in-time snapshot of the database, do whatever you want with it, and eventually throw it away. (Available only in SQL Server 2005 and up, Enterprise edition.)
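A minimal sketch of the snapshot mechanics, assuming a database called MyDb with logical data file MyDb_Data (both names made up):
-- Create a point-in-time snapshot of MyDb.
CREATE DATABASE MyDb_Snapshot
ON ( NAME = MyDb_Data, FILENAME = 'D:\Snapshots\MyDb_Snapshot.ss' )
AS SNAPSHOT OF MyDb;

-- Throw all changes away by reverting the source database to the snapshot...
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snapshot';

-- ...and drop the snapshot once it is no longer needed.
DROP DATABASE MyDb_Snapshot;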
So:
A user comes along and initiates one of these meta-transactions.
A flag is marked in the database showing what is going on. A new transaction cannot be started if one is already in process. (Again, check for lost transactions now and then!)
Every change made to the database is tracked and recorded in such a fashion that it could be repeated.
If the user decides to cancel the transaction, you just drop the snapshot, and nothing is changed.
If the user decides to keep the transaction, you drop the snapshot, and then immediately re-apply the logged changes to the "real" database. This should work, since your requirements imply that, while someone is working on one of these, no one else can touch the related parts of the database.
Yep, this sure smells, and it may not apply too well to your problem. Hopefully the ideas here help you work something out.

How do I implement "pessimistic locking" in an asp.net application?

I would like some advice from anyone experienced with implementing something like "pessimistic locking" in an asp.net application. This is the behavior I'm looking for:
User A opens order #313
User B attempts to open order #313 but is told that User A has had the order opened exclusively for X minutes.
Since I haven't implemented this functionality before, I have a few design questions:
What data should I attach to the order record? I'm considering:
LockOwnedBy
LockAcquiredTime
LockRefreshedTime
I would consider a record unlocked if the LockRefreshedTime < (Now - 10 min).
How do I guarantee that locks aren't held for longer than necessary but don't expire unexpectedly either?
I'm pretty comfortable with jQuery so approaches which make use of client script are welcome. This would be an internal web application so I can be rather liberal with my use of bandwidth/cycles. I'm also wondering if "pessimistic locking" is an appropriate term for this concept.
It sounds like you are most of the way there. I don't think you really need LockRefreshedTime though, it doesn't really add anything. You may just as well use the LockAcquiredTime to decide when a lock has become stale.
The other thing you will want to do is make sure you make use of transactions. You need to wrap the checking and setting of the lock within a database transaction, so that you don't end up with two users who think they have a valid lock.
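For instance, the check-and-set could be a single guarded UPDATE inside a transaction. This is only a sketch using the lock columns discussed above and an invented Orders table:
-- Sketch: acquire the lock only if it is free or stale (older than 10 minutes).
DECLARE @OrderId INT = 313, @UserName NVARCHAR(50) = N'UserA';

BEGIN TRAN;
UPDATE dbo.Orders
SET    LockOwnedBy       = @UserName,
       LockAcquiredTime  = GETUTCDATE(),
       LockRefreshedTime = GETUTCDATE()
WHERE  OrderId = @OrderId
  AND ( LockOwnedBy IS NULL
        OR LockRefreshedTime < DATEADD(MINUTE, -10, GETUTCDATE()) );

-- @@ROWCOUNT = 1 means this user now owns the lock; 0 means someone else does.
SELECT @@ROWCOUNT AS LockAcquired;
COMMIT;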
If you have tasks that require gaining locks on more than one resource (i.e. more than one record of a given type, or more than one type of record), then you need to apply the locks in the same order wherever you do the locking. Otherwise you can get a deadlock, where one bit of code has record A locked and wants to lock record B, while another bit of code has B locked and is waiting for record A.
As to how you ensure locks aren't released unexpectedly: make sure that any long-running process that could run past your lock timeout refreshes its lock during its run.
The term "explicit locking" is also used to describe this type of locking.
I have done this manually.
Store the primary key of the record in a lock table, and mark the record's mode attribute as "edit".
When another user tries to select this record, indicate to that user that the record is read-only.
Have a maximum time set up for how long records can stay locked.
Refresh page data for locked records. While one user is allowed to make changes, all other users are only allowed to view.
The lock table should have a design similar to this:
User_ID, //who locked
Lock_start_Time,
Locked_Row_ID(Entity_ID), //this is primary key of the table of locked row.
Table_Name(Entity_Name) //table name of the locked row.
Remaining logic is something you have to figure out.
This is just an idea which I implemented about 4 years ago on the special request of a client. Since then no client has asked me to do anything similar, so I haven't pursued any other method.
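Rendering the column list above as T-SQL, purely as a hedged sketch (the types and the unique constraint are assumptions):
-- Hypothetical DDL for the lock table described above.
CREATE TABLE dbo.RecordLock (
    User_ID         INT           NOT NULL,  -- who locked
    Lock_Start_Time DATETIME2     NOT NULL,
    Locked_Row_ID   INT           NOT NULL,  -- primary key of the locked row
    Table_Name      NVARCHAR(128) NOT NULL,  -- table name of the locked row
    CONSTRAINT UQ_RecordLock UNIQUE (Table_Name, Locked_Row_ID)  -- one lock per row
);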
