I'm looking to loop through data received from SQL Server via the WCF-SQL adapter.
I use a for loop and the following:
itostring = i.ToString();
MessageOne = xpath(MessagePolling, "/*[local-name()='MainData' and namespace-uri()='http..'][" + itostring + "]");
so that on each iteration the XPath addresses the i-th record of the received message, i.e. path[i].
Is this the correct way?
There are two ways^ to loop over multiple records contained within an XML message received by BizTalk:
Envelope Schemas
When you define the schema that represents the message, mark it as an Envelope Schema. This tells the Receive Pipeline Disassembler to create (and publish) one message to the BizTalk Message Box for each record in the incoming message (in your case from the WCF-SQL Adapter). This will cause a single Orchestration instance to be started for each record in your incoming message.
Richard Seroter has a great blog post on doing this from the WCF-SQL Adapter - http://seroter.wordpress.com/2010/04/08/debatching-inbound-messages-from-biztalk-wcf-sql-adapter/
Be aware that with this approach, you don't want to be de-batching tens of thousands of records from the incoming message as BizTalk will grind to a halt :-)
XPath Inside an Orchestration
If you do not use an Envelope Schema, a single Orchestration instance will be started for the incoming message (containing multiple records). Within an Expression Shape in your Orchestration, you can use XPath (and some other magic) to loop around each record and extract each one into an Orchestration variable (which you can then map on, etc.).
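As an illustration, here is a minimal sketch of that pattern across a Loop Shape and Expression Shapes (the element names, namespace, and variable names are assumptions; substitute those from your own schema):

// Expression Shape before the Loop Shape: count the repeating records.
// recordCount and i are orchestration variables of type System.Int32.
recordCount = System.Convert.ToInt32(xpath(MessagePolling,
    "count(/*[local-name()='MainData']/*[local-name()='Record'])"));
i = 1;

// Loop Shape condition: i <= recordCount

// Inside the loop, a Message Assignment shape (within a Construct Message
// shape for MessageOne) extracts the i-th record:
xpathExpr = System.String.Format(
    "/*[local-name()='MainData']/*[local-name()='Record'][{0}]", i);
MessageOne = xpath(MessagePolling, xpathExpr);
i = i + 1;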
Take a look at the following links that will help you with extracting via XPath:
http://social.technet.microsoft.com/wiki/contents/articles/6944.biztalk-orchestrations-xpath-survival-guide.aspx
http://www.biztalkgurus.com/biztalk_server/biztalk_blogs/b/biztalk/archive/2004/10/25/using-xpath-inside-biztalk-orchestrations.aspx
http://blog.eliasen.dk/2006/11/05/LoopingAroundElementsOfAMessage.aspx
http://www.codeproject.com/Articles/534627/BizTalk-Looping-through-repeating-message-nodes-in
^There is also a third way to achieve this as of BizTalk Server 2009 (I think - it seems like so long ago) whereby you can execute a Receive Pipeline within an Orchestration, so you could perform your Envelope de-batching in an Orch, instead of a Receive Location's Receive Pipeline.
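A minimal sketch of that third approach, assuming a receive pipeline called MyProject.EnvelopeReceivePipeline (the names are assumptions; note the call must sit inside an atomic Scope Shape, as ReceivePipelineOutputMessages is not serializable):

// Expression Shape: execute the receive pipeline against the inbound message.
// pipelineOutput is an orchestration variable of type
// Microsoft.XLANGs.Pipeline.ReceivePipelineOutputMessages.
pipelineOutput = Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteReceivePipeline(
    typeof(MyProject.EnvelopeReceivePipeline), InboundMessage);

// Loop Shape condition: pipelineOutput.MoveNext()

// Inside the loop, a Message Assignment shape fetches each debatched record:
SingleRecord = null;
pipelineOutput.GetCurrent(SingleRecord);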
Related
I am not new to BizTalk; however, this situation is somewhat new to me. I have the following situation in a BizTalk orchestration:
I get the path of a flat file from some other source.
I want to load this file in the orchestration and disassemble it by executing a pipeline.
I searched a lot, but almost everyone talks about feeding an XML document into a pipeline inside an orchestration.
I found the links below too, but I couldn't get a working solution so far:
Calling FlatFile pipeline inside orchestration
4 Different ways to process an XLANGMessage
When I implemented the solution given at the above links, I got the error "No Disassemble stage components can recognize the data."
I also don't want to create dynamic receive locations because of performance constraints.
Below is my code so far:
1. Load the file content into a stream.
2. Create a CustomBTXMessage instance as suggested in link two.
3. Load the stream as below:
// Build an XLANGMessage by hand (CustomBTXMessage derives from BTXMessage,
// as described in link two above).
customBTXMessage = new CustomBTXMessage("MyMessageName",
    Service.RootService.XlangStore.OwningContext);
// Add a single part named "Body" and load the flat-file stream into it.
customBTXMessage.AddPart(string.Empty, "Body");
customBTXMessage[0].LoadFrom(ms);
return customBTXMessage.GetMessageWrapperForUserCode();
I don't think this situation is new in the BizTalk world. Anyone who has done this should be able to help me quickly.
Here's what I would do...or at least try first.
1. Create a Receive Port and Receive Location for each flat file type you get.
2. Get the list of files.
3. In the Orchestration, move the file to the appropriate Receive Location.
4. Flat-file disassemble the file in the port's pipeline as normal.
5. Receive the file into the Orchestration with an Ordered Delivery Port bound to the Receive Port from step 1.
6. Loop on receiving the files, checking for BTS.LastInterchangeMessage (see the sketch below).
7. When it is True, exit that loop and go back to step 3.
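A sketch of the check in step 6, assuming the debatched record is received into a message named SingleRecord (note that BTS.LastInterchangeMessage is written to the message context rather than promoted, so read it in an Expression Shape after the Receive Shape instead of routing on it):

// Expression Shape immediately after the Receive Shape inside the loop;
// isLastMessage is an orchestration variable of type System.Boolean.
isLastMessage = (BTS.LastInterchangeMessage exists SingleRecord) &&
    SingleRecord(BTS.LastInterchangeMessage);

// Loop Shape condition: !isLastMessage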
Within my BizTalk 2010 solution I have the following orchestration, which is started by the receipt of a courier update message. It then makes a couple of calls to AX's WCF AIF via two solicit-response ports: a Find request and an Update request.
For this application we are meeting audit requirements through use of the tracking database to store the message body. We are able to link to this from references provided in BAM when we use the TPE. The result for the customer is nice: they get a web portal from which they can view BAM data such as message timings, but they can also click a link to pull up a copy of the message payloads from the tracking db. Although this works well and makes use of out-of-box functionality for payload storage, it has led to relatively complex jobs for the archiving of the tracking db (but that's another story!).
My problem relates to continuation. I have created the following Tracking Profile:
I have associated the first of the orchestration's two solicit-response ports with the continuation RcvToOdx based on the interchange Id, and this works; I get the following single record written to the Completed activity table:
So, in this case we can assume that an entry was first written on receipt of the inbound message, with the TimeReceivedIntoBts column populated by the physical file receive port. The FindRequestToAx column was then populated by the physical WCF send port. Because this was bound to the RcvToOdx continuation Id and used the same interchange Id as the previously mentioned file receive message, the update was made to the same activity. The resulting response was likewise recorded on the same activity record, in the FindResponseFromAx column.
My problem is that I would also like BAM to record a timestamp for the subsequent UpdateRequestToAx. Because this request will have the same interchange Id as the previous messages I would expect to be able to solve this problem by simply binding the AxUpdate send port (both send and receive parts of it) to the same continuation id, as seen in the following screen grab:
I also map the UpdateRequestToAx milestone to the physical Ax_TrackAndTraceUpdate_SendPort (Send) and the OrchestrationCompleted milestone to Ax_TrackAndTraceUpdate_SendPort (Receive)
Unfortunately, when I try this I get the following result:
Two problems can be seen from the above db screen grab:
1. Date for the update send port was inserted into a new activity record
2. The record was never completed
I was surprised by this because I'd thought that since the update port was enlisted to use the same continuation, and the single InterchangeId was being used by all ports as the continuation Id, all the data milestones would be applied to a single activity.
In looking for a solution to this problem I came across the following post on Stack Overflow suggesting that the continuation must be closed from the BAM API: BAM Continuation issue with TPE. So, I tried this by calling the following method from an expression shape in my orchestration:
// Requires a reference to the BAM event observation API
// (Microsoft.BizTalk.Bam.EventObservation); CARRIER_ORDER_ACTIVITY_NAME is a
// constant holding the BAM activity name.
public static void EndBAMContinuation(string continuationId)
{
    OrchestrationEventStream.EndActivity(CARRIER_ORDER_ACTIVITY_NAME, continuationId);
}
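For context, the helper might be invoked from an Expression Shape like this (a sketch; the class and message names are assumptions, and the "RcvToOdx" prefix reflects how the TPE composes a continuation Id from the continuation name plus the mapped value):

// Expression Shape, after the final milestone has been written:
MyCompany.Bam.BamHelper.EndBAMContinuation(
    "RcvToOdx" + InboundCourierUpdate(BTS.InterchangeID));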
I can be sure the method was called ok because I wrapped the call with a log entry from the CAT framework which I could see in debug view:
I checked the RcvToOdx{867… continuation Id against the non-closed activity and confirmed they matched:
This would suggest that perhaps the request to end the continuation is being processed before the milestone of the received message from the UpdateAx call?
When I query the Relationships tables I get the following results:
Could anyone please advise why the UpdateToAx activity is not being completed?
I realise that I may be able to solve the problem using only the BAM API but I really want to exhaust any possibility of the TPE being fit for purpose first since the TPE is widely used in other BizTalk solutions of the organisation.
To solve this problem I created a 2nd continuation in the TPE.
"RcvToOdx" continuation for the Find and "OdxToUpdate" continuation for the update - source is InterchangeId on the initial receive port - UPS_TrackAndTrace (same as for other "RcvToOdx" continuation), dest Id is the InterchangeId mapped to update send port.
We receive many large data files daily in a variety of formats (e.g. CSV, Excel, XML). In order to process these large files, we transform the incoming data into one of our standard 'collection' message classes (using XSLT and a pipeline component - either built-in or custom), disassemble the large transformed message into individual 'object' messages, and then call a series of SOAP web service methods to handle business logic and database operations.
Unlike the other files we receive, the latest file contains all data rows every day; therefore, we have to handle the differences to prevent identical records from being re-processed each day.
I have a suitable mechanism for handling inserts and updates but am currently struggling with the deletes (where the record exists in the database but not in the latest file).
My current thought process is to flag the deleted records in the database using a 'cleanup' task at the end of the entire process but this would require a method to be called once all 'object' messages from the disassembled file have completed.
Is it possible to monitor individual messages from a multi-record file and call a method on completion of the whole file? Currently, all research is pointing to an orchestration with some sort of 'wait' but is this the only option?
Example: File contains 100 vehicle records. This is disassembled into 100 individual XML messages which are processed using 100 calls to a web service method. Wish to call cleanup operation when all 100 messages are complete.
The best way I've found to handle the 'all rows every day' scenario is to pre-stage the data in SQL Server where it's easier to compare the 'current' set to the 'previous' set. The INTERSECT and EXCEPT operators make it pretty easy in most cases.
Then drain the records with a Polling statement.
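For illustration, a minimal sketch of the delete-detection step once the data is staged (the table, column, and helper names are all assumptions):

using System.Data.SqlClient;

public static class VehicleDeltaHelper
{
    // Rows present in the previous load but absent from the current one
    // are the deletes to flag.
    public static void FlagDeletes(string connectionString)
    {
        const string sql = @"
            INSERT INTO dbo.Vehicle_Deletes (VehicleId)
            SELECT VehicleId FROM dbo.Vehicle_Previous
            EXCEPT
            SELECT VehicleId FROM dbo.Vehicle_Current;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}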
The component that does the de-batching would need to publish a start of batch message with the number of individual records and a correlation key.
The components that do the insert and update would need to publish a completion message with the same correlation key when they have completed processing.
The start-of-batch message would spin up an Orchestration that would listen for completion messages with that correlation key and count them; either after it has received the correct number, or after a timeout period, it would call the cleanup or raise an exception (see the sketch below).
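A sketch of the counting logic in that orchestration (XLANG/s; the message, field, and correlation names are assumptions, and RecordCount is assumed to be a distinguished field):

// Receive Shape: StartOfBatchMsg arrives and initializes a correlation set
// on the batch's correlation key.
expectedCount = StartOfBatchMsg.RecordCount;
receivedCount = 0;

// Loop Shape condition: receivedCount < expectedCount
// Inside the loop, a Listen Shape:
//   Branch 1: Receive CompletionMsg (follows the correlation set), then an
//             Expression Shape: receivedCount = receivedCount + 1;
//   Branch 2: Delay Shape set to the timeout; on firing, raise an exception
//             (or send an alert) instead of waiting forever.

// After the loop completes normally, construct and send the message that
// triggers the cleanup operation.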
I have a BizTalk receive port monitoring an FTP location. I expect a file to arrive at least once per day in that location and for BizTalk to pick it up and kick off an orchestration. This part is working fine.
However, sometimes the sender fails to send a message during a day, in which case I want an email to be sent to notify the users that something is amiss.
I could solve this outside of BizTalk, by creating a daily job that looks in our database for processed files and makes sure there is at least one in any given day. However, I'd prefer to solve this "in line" with the BizTalk solution that is already in place, and not deploy a separate, unrelated job which will increase maintenance headaches.
Is there any functionality in BizTalk that would allow me to send a notification if a receive port doesn't receive something in a given timeframe?
Short answer: Not really.
The logic you want to implement would require a customised version of the FTP Adapter. It depends on how comfortable you are rolling up your sleeves and getting into the Adapter SDK.
If you wanted to keep your solution "Purely BizTalk", you could set up a secondary Orchestration using a SQL Receive Location tied to a stored procedure. This stored procedure executes regularly and looks for records in your "Processed File" table received in the past (business) day. If none are found, it fabricates a record and returns it via the SQL Receive Location. This would be your trigger to send the email notification.
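Since that watchdog logic lives in T-SQL, here is a sketch of what the procedure might look like (the table, column, and procedure names are assumptions), kept here as a constant alongside deployment code; the SQL receive location would simply execute dbo.CheckDailyFileReceived on its polling interval:

public static class WatchdogSetup
{
    // Fabricates an alert row only when nothing was processed in the
    // past day, so the receive location stays quiet on good days.
    public const string CreateProcSql = @"
CREATE PROCEDURE dbo.CheckDailyFileReceived
AS
BEGIN
    IF NOT EXISTS (SELECT 1 FROM dbo.ProcessedFile
                   WHERE ReceivedAt >= DATEADD(DAY, -1, GETDATE()))
        SELECT GETDATE() AS CheckedAt, 'NoFileReceived' AS Status
        FOR XML PATH('Alert'), ROOT('Alerts');
END";
}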
One solution, though not elegant, is to have a secondary FILE receive location with a schedule window outside your cutoff time.
Failure scenario:
In this FILE receive location, you have an intelligent/dummy message conforming to the same schema as the FTP receive. The intelligent part is that one of its fields tells us the last time we received the file from FTP. The rest of the content is dummy.
Within your orchestration, you check where you received your file from. If it's the secondary receive location (using the context property BTS.ReceiveLocationName), you check the date field of this dummy/intelligent message; if it shows that nothing has arrived from FTP in the past 24 hours (or similar logic), send an email notifying the users that you did not receive the file from the upstream FTP process, and also save a copy of the dummy message back to the secondary FILE receive location unchanged (see the sketch after the overview below).
Success Scenario:
Apart from normal processing, you save a copy of the dummy/intelligent message to the secondary FILE receive location, with the datetime field reflecting when you processed the file received from the FTP receive location.
Initialising:
You start with a dummy/intelligent message in the secondary FILE receive location with the datetime field value well in the past (assuming we never received the file from FTP) or with yesterday's date (assuming we received a file successfully from FTP the day before).
Overview:
Your orchestration has two trigger points.
When you receive a file via FTP
A scheduled FILE receive location, triggered after the cut-off time.
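As mentioned in the failure scenario above, the branching might look like this (an XLANG/s sketch; the location name is an assumption, and LastFtpReceivedOn is assumed to be a distinguished field on the dummy message's schema):

// Decide Shape condition: did this message arrive via the secondary
// (watchdog) FILE receive location rather than FTP?
InboundMsg(BTS.ReceiveLocationName) == "Watchdog_FILE_ReceiveLocation"

// In that branch, an Expression Shape checks how stale the last FTP
// receipt is:
lastReceived = System.Convert.ToDateTime(InboundMsg.LastFtpReceivedOn);
isStale = System.DateTime.Compare(lastReceived,
    System.DateTime.Now.AddHours(-24)) < 0;
// isStale == true -> send the notification email and re-emit the dummy
// message with its timestamp unchanged.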
I'll try to provide as much information as possible:
No error message.
The instance stays in the "ready service instances".
The receive location has the same parameters (except URI, the three polling queries, user account/pw and receive pipeline) as another receive location that points to another database/table which works.
The pipeline expects the correct schema.
The port surface and the receive location both expect the correct schema.
In my test example, there are only 10 lines being returned.
The message, which contains those 10 lines, validates against the schema.
I tried leaving the instance alone, to no avail: 30+ minutes later there was no change in its condition.
I had also tried suspending and then resuming it, which places the instance in the "dehydrated orchestrations" list. Again, with no error message.
I'm able to get the message by looking at the body of the message that's in the "ready to run" service instance. (This is the message that validates against the schema I use in Visual Studio.)
How might something like this arise?
Stupid question, but I have to ask... Is the corresponding host instance running?