Background
I have an issue where, roughly once a month, the AIFQueueManager table is populated with ~150 records relating to messages that were sent to AX over 6 months ago, where they "successfully failed" (i.e. errored due to violation of business rules, but returned an exception as expected).
Question
What tables are involved in the AIF inbound message process, and what order do events occur in? e.g. the XML file is picked up and recorded in the AifDocumentLog, the data is extracted and added to the AifQueueManager and AifGatewayQueue tables, records from here are then inserted into the AifMessageLog, etc.
Thanks in advance.
There are 4 main AIF classes; I will be talking about inbound only, focusing on the included file system adapter and flat XML files. I hope this makes things a little less hazy.
AIFGatewayReceiveService - Uses adapters/channels to read messages in from different sources, and dumps them in the AifGatewayQueue table
AIFInboundProcessingService - This processes the AifGatewayQueue table data and sends to the Ax[Document] classes
AIFOutboundProcessingService - This is the inverse of #2. It creates XMLs with the relevant metadata
AIFGatewaySendService - This is the inverse of #1, where it uses adapters/channels to send messages out to different locations from the AifGatewayQueue
For #1
So #1 basically fills the AifGatewayQueue, which is just a queue of work. It loops through all of your channels and finds the relevant adapter by ClassId. The adapters are classes that implement AifIntegrationAdapter and AifReceiveAdapter; these are what you would implement if you wanted to make your own custom one. As it loops over the different channels, it loops over each "message" and tries to receive it into the queue.
If it can't process the file for some reason, it catches the exception and logs it in the SysExceptionTable [Basic>Periodic>Application Integration Framework>Exceptions]. These messages are scraped from the infolog, and are generated mostly by the receive adapter, which would be AifFileSystemReceiveAdapter in my example.
For #2
So #2 processes the inbound messages sitting in the queue (ready/in process). AifRequestProcessor\processServiceRequest does the work.
From this method, it will make:
Various calls to Classes\AifMessageManager, which put records in the AifMessageLog and the AifDocumentLog.
This key call: responseMessage = AifRequestProcessor::executeServiceOperation(message, endpointActionPolicy);, which actually performs the operation against the Ax[Document] classes by eventually getting to AifDispatcher::callServiceMethod(...).
It gets the return XML, packages it into an AifMessage called responseMessage, and returns it so it can be logged. It also takes that return value and, if there is a response channel, submits it back into the AifGatewayQueue.
AifQueueManager is actually cleared and populated on the fly by calling AifQueueManager::createQueueManagerData();.
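In other words, the table's contents are transient and rebuilt on demand, which can explain stale-looking records appearing there. As a sketch, an X++ job that triggers the rebuild yourself (the job name here is my own invention):

static void RebuildAifQueueManagerData(Args _args)
{
    // Clears and repopulates the AifQueueManager table on demand
    AifQueueManager::createQueueManagerData();
}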
Related
I am trying to write a recursive web API call in PBI to collect all 27,515 records; the OData feed has a limit of 1,000 rows per request. I need this data to be refreshable in the PBI service, and PBI only allows static (non-dynamic) data sources for refresh within the service, so these 28 requests cannot be formulated dynamically in M code. Below I share two pieces of M code: (1) one that is considered a dynamic data source (not what I need, but it pulls all 27,515 records correctly), and (2) one that is a static data source (which is returning an incorrect count of 19,000 records, but is the type of data source I need for this refreshing problem).
Noteworthy: upon the initial API call I receive a table named "d" with two rows: one titled "results", which contains all of the data (1,000 rows) I need per request, and a second titled "__next", which holds the next API URL with an embedded skiptoken from the current call's worth of data. This skiptoken tells the API which rows to skip so that the next request doesn't deliver the data we have already collected.
(Image: table d, the initial response table)
M Code for Dynamic Data Source: This dynamic data source is pulling the correct number of records in 28 requests (up to 1,000 records per request) totaling 27,515 rows.
= List.Generate( ()=> Json.Document(Web.Contents("https://my_instance/odata/v2/Table?$format=JSON&$paging=snapshot"))[d],
each Record.HasFields(_, "results")= true,
each try Json.Document(Web.Contents(_[__next]))[d] otherwise [df=[__next="dummy_variable"]])
M Code for Static Data Source: This static data source is the type that I need for refreshing in the PBI service (I confirmed it does refresh in the service), but it is returning an incorrect number of rows: 19,000 versus 27,515, i.e. it makes 19 requests instead of the needed 28. I believe the error lies in the Query portion, where I am attempting to call the next API URL with the skiptoken from the previous request.
= List.Generate( ()=> Json.Document(Web.Contents("https://my_instance/odata/v2/Table?$format=JSON&$paging=snapshot"))[d],
each Record.HasFields(_, "results")= true,
each try Json.Document(Web.Contents("https://my_instance/odata/v2/Table?$format=JSON&$paging=snapshot", [Query=[q=_[__next]]]))[d] otherwise [df=[__next="dummy_variable"]])
Does anyone see an error in the static code for iteratively calling each new request from the table [d], which has one row labeled [results] (all the data) and another row labeled [__next] containing the next URL with the skiptoken from the previous API call?
To be clear: in Web.Contents the url argument must be static, but you can freely use dynamic components in the optional RelativePath option (as in this simple example function). This is how you can generate dynamic web API queries that work in the service without the dynamic-data-source error you are seeing.
(current_page as text) =>
let
    data = Web.Contents(
        "https://my_instance/api/v2/endpoint", // static!
        [
            RelativePath = "?page=" & current_page // dynamic!
        ]
    )
in
    data
So if you can split out the relative path of your __next parameter and feed it into such a function, it will be OK for automatic refreshes in the Power BI service; a sketch follows.
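As a minimal sketch of that idea, assuming the service root and entity path stay constant across pages so that only the query string (including the skiptoken) changes; GetNextPage is a name I made up, and Uri.Parts does the splitting:

GetNextPage = (next_url as text) =>
let
    // Split the dynamic __next URL; only its query string varies between pages
    parts = Uri.Parts(next_url),
    response = Json.Document(
        Web.Contents(
            "https://my_instance/odata/v2/Table", // static literal, so the service can validate it
            [Query = parts[Query]] // carries $format, $skiptoken, etc. dynamically
        )
    )
in
    response[d]

Your List.Generate step would then call GetNextPage(_[__next]) in place of the inline Web.Contents call in the static version above.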
I have asked another question with almost the same scenario: A catch 22 in generating 999 file.
Basically, I am inbounding HIPAA 837 files and am required to generate a 999 response file.
Today I inbounded a file with the ST02 element missing.
The TA1 was created with Accept status, because it only cares about the ISA-IEA level, and that part is good.
BizTalk inbounded the file, found the issue, and actually generated a 999 message, but it failed to send it out as a physical file because:
Unable to read the stream produced by the pipeline.
Details: Error: 1 (Field level error)
SegmentID: AK2
Position in TS: 3
Data Element ID: AK202
Position in Segment: 2
Data Value:
1: Mandatory data element missing
So here's the catch-22: a 999 should be created to report the error for this incoming 837 file, and the 999's AK202 is a required field that references the incoming file's transaction set control number defined in ST02.
But the error in the incoming file is precisely that it is missing this ST02.
Now, for this scenario, it ends up with an accepted TA1 and a pending message in the BizTalk MessageBox.
From our trading partner's view, they send a file and ONLY get a TA1 response with Accept status.
My questions are:
1. Which is the right file to report this kind of error (ST02 missing), TA1 or 999?
2. Is there any way to bypass this error and have the 999 created?
There's an RFI on this at x12.org: http://rfi.x12.org/Request/Details/55?stateViewModel=WPC.RFI.Models.ViewModels.RequestViewModel
The TLDR version: you should reject the entire functional group, and use the control identifier from the functional group in AK202.
Here's the relevant text:
Description
What Segments/Data Elements should be used in the 997 when reporting an error in ST02 (Transaction Set Control Number) when the error is related to syntax or min/max? If you attempt to create a 997 back to the submitter with the inbound data from ST02 in AK202 of the 997 you would be creating an invalid 997 transaction. It appears there may be a gap in the 997 standard for reporting errors at this level. If we have misinterpreted the use of the transaction and it can be reported, please let us know how.
Response
Data elements AK102 and AK202 located within transaction set 997 and transaction set 999 are to be used to convey the values of control numbers in the functional group or transaction sets being acknowledged. If including a copy of the value of a data element in the 997 or 999 would cause a syntax violation in the 997 or 999, then if the violation is to be reported at the level at which it was found it must be reported at the next higher level.
Recommendation
When reporting errors after the syntactic analysis of the transaction set, the data analyzed must be able to be reported within the acknowledgment. While data element AK404 supports reporting the value of a data element that fails syntactic analysis without violating the syntax of the 997, the same does not apply to AK202. There are two generally accepted methods of acknowledging transaction sets: 1) acknowledge all transaction sets within the functional group or 2) acknowledge only those transaction sets containing errors. It is not recommended to accept a functional group with errors if the transaction set control number in error cannot be reported in AK202. For the example in your request, the appropriate action is to reject the entire functional group containing the ST02 value which when echoed in AK202 would create a syntactically invalid 997. In addition, the same logic applies to the functional group control number; the appropriate action is to reject the entire interchange containing the syntactically invalid data.
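To illustrate the recommendation, a minimal made-up sketch of a 999 that rejects the whole functional group rather than echoing the broken ST02 (envelope segments omitted; control numbers and version codes are placeholders, so check your implementation guide):

ST*999*0001*005010X231A1~
AK1*HC*123456789*005010X222A1~
AK9*R*1*1*0~
SE*4*0001~

AK102 echoes the group control number from the sender's GS06 (which is valid data), and AK9*R rejects the group: 1 transaction set included, 1 received, 0 accepted. No AK2 loop is needed, so the unusable ST02 value is never echoed.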
So I have a basic
$.ajax({
    url: 'MyController/MyAction',
    method: 'POST',
    async: true,
    ...
});
that can be called very frequently as it is event-driven. Like it may be called 50 times in 1 second if the user is being obnoxious. It updates values in the database.
My friend told me that it's possible that the updates may be sent to the database in the wrong order. Is this true? This is causing me major cognitive dissonance and I can't sleep tonight.
I should mention that these values being updated in the database are associated with a user. In particular, the data looks like
data : { userId : '21EC2020-3AEA-4069-A2DD-08002B30309D',
answerId : '69',
val : 'd' }
where the only values changing in rapid succession are answerId and val.
Unfortunately, my understanding is: "YES. The order is not guaranteed."
You send the HTTP request 50 times in 1 second and the server saves each one to the DB.
When the network is good and the server is strong, they may well be saved to the DB in order.
But if the HTTP server is busy or the network is interrupted, there is no guarantee the data will be stored in the sequence in which it actually happened; e.g., one or two records may end up swapped in the DB.
My suggestion is: if the order is critical and the update traffic is heavy, you should add a happened-at time (or sequence number) to the HTTP data and save it to the DB.
When you select data from the DB, order by that value; this guarantees the order matches the order in which the events actually happened, avoiding the mis-ordering caused by a busy server or network. A sketch of this follows.
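A minimal client-side sketch of that suggestion; the clientSeq field name is made up, and the server would need to store it and ORDER BY it when reading the rows back:

var clientSeq = 0; // monotonically increasing, stamped before each request leaves

function sendUpdate(userId, answerId, val) {
    $.ajax({
        url: 'MyController/MyAction',
        method: 'POST',
        data: {
            userId: userId,
            answerId: answerId,
            val: val,
            clientSeq: ++clientSeq // the "happen time" for this update
        }
    });
}

A counter is used here instead of Date.now() because at 50 calls per second two updates can easily share the same millisecond.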
I use TWAIN 2.3 (TWAINDSM.DLL) in my application with an HP Scanjet 200 (TWAIN Protocol 1.9).
My TWAIN calls are:
OpenDSM: DG_CONTROL, DAT_PARENT, MSG_OPENDSM
OpenDS: DG_CONTROL, DAT_IDENTITY, MSG_OPENDS
EnableDS: DG_CONTROL, DAT_USERINTERFACE, MSG_ENABLEDS
ProcessDeviceEvent: DG_CONTROL, DAT_EVENT, MSG_PROCESSEVENT
and as a result of the last call I always get TWRC_NOTDSEVENT instead of TWRC_DSEVENT.
Could please someone help with this?
Once you use DG_CONTROL / DAT_EVENT / MSG_PROCESSEVENT, all messages from the application's message loop must be sent to the data source for processing. Receiving TWRC_NOTDSEVENT means the forwarded message isn't for the source, so the application should process it as normal.
Keep forwarding all messages to the source until you receive MSG_XFERREADY, which means there is data to transfer. Once the transfer is finished and you have sent MSG_DISABLEDS, that's when you can stop forwarding messages to the source. A rough sketch of that loop follows.
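For reference, a sketch in C of the forwarding loop, assuming appId and dsId are the TW_IDENTITY structures you used for MSG_OPENDSM / MSG_OPENDS:

#include <windows.h>
#include "twain.h"

extern TW_IDENTITY appId, dsId; // set up during MSG_OPENDSM / MSG_OPENDS

void MessageLoop(void)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TW_EVENT twEvent;
        twEvent.pEvent    = (TW_MEMREF)&msg;
        twEvent.TWMessage = MSG_NULL;

        // Offer every message to the data source first
        TW_UINT16 rc = DSM_Entry(&appId, &dsId,
                                 DG_CONTROL, DAT_EVENT, MSG_PROCESSEVENT,
                                 (TW_MEMREF)&twEvent);

        if (rc == TWRC_NOTDSEVENT)
        {
            // Not a source event: handle it like any normal window message
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        if (twEvent.TWMessage == MSG_XFERREADY)
        {
            // The source has data: begin the transfer
            // (DG_IMAGE / DAT_IMAGENATIVEXFER / MSG_GET, later MSG_DISABLEDS)
        }
    }
}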
TWAIN is a standard, and when many companies implement that standard, not all of them do it the same way. Along the way to supporting TWAIN, we learn and adjust the code to handle all the different implementations.
I experienced this situation before, and this is my workaround:
Instead of placing the (rc == TWRC_DSEVENT) check at the beginning of the code (which would skip the MSG_XFERREADY processing that follows), move the comparison to the end, after the MSG_XFERREADY processing, so that MSG_XFERREADY is always checked and processed.
(rc == TWRC_DSEVENT) only determines whether we should forward the window message or not.
I don't know your specific situation, but I ran into a similar issue because I called OpenDSM with an HWND/wId from another process. You should call OpenDSM with the HWND of the active window/dialog that is owned by the current process. Roughly:
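A C sketch of that call; GetActiveWindow is just one way to obtain a handle the current process owns:

HWND hwnd = GetActiveWindow(); // must belong to the current process
TW_UINT16 rc = DSM_Entry(&appId, NULL,
                         DG_CONTROL, DAT_PARENT, MSG_OPENDSM,
                         (TW_MEMREF)&hwnd);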
Is there any way (C# or native) of counting the messages in a message queue (subqueue)?
Using a queue name of "DIRECT=OS:slc11555001\private$\file-queue;retry"
I want to know how many messages are in the subqueue. At the moment I can see, using the management console, that there are in fact messages in that queue. If the OS can do it, so should I be able to.
MQMgmtGetInfo returns 0xc00e0020 (which is not a documented error code).
I am therefore confused.
I am using code from here:
http://functionalflow.co.uk/blog/2008/08/27/counting-the-number-of-messages-in-a-message-queue-in/
The error is as follows (from http://support.microsoft.com/kb/304287):
MQ_ERROR_UNSUPPORTED_FORMATNAME_OPERATION (0xC00E0020).
MQMgmtGetInfo won't understand the subqueue format name.
The retry subqueue only really exists as a logical division of the private$\file-queue queue.
You can get a count on file-queue itself, but not on subqueues within it.
Cheers
John
From the MSDN page on subqueues:
Subqueues are created implicitly, so only the following APIs can be used with subqueues: MQOpenQueue, MQCloseQueue, MQCreateCursor, MQReceiveMessage, MQReceiveMessageByLookupId, MQHandleToFormatName, MQMoveMessage, and MQPathNameToFormatName. Calling any of the other Message Queuing APIs returns an error.
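Since MQCreateCursor and MQReceiveMessage (peek) are on that allowed list, one workaround is to open the subqueue and count by peeking with a cursor rather than asking the management API. A sketch in C#, assuming .NET Framework 4+, whose System.Messaging added subqueue support:

using System;
using System.Messaging;

class SubqueueCount
{
    static void Main()
    {
        // Open the subqueue by format name; GetMessageEnumerator2 peeks with a
        // cursor (MQCreateCursor + MQReceiveMessage), both allowed on subqueues.
        var queue = new MessageQueue(
            @"FormatName:DIRECT=OS:slc11555001\private$\file-queue;retry");

        int count = 0;
        using (MessageEnumerator e = queue.GetMessageEnumerator2())
        {
            while (e.MoveNext())
                count++;
        }
        Console.WriteLine("Messages in subqueue: {0}", count);
    }
}

Note this is O(n) per count and only a snapshot: messages can arrive or move between subqueues while you are enumerating.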