BizTalk WCF-SQL Polling

I have a strange issue with my WCF-SQL receive location polling on BizTalk Server 2010. The polling runs every minute and executes a stored procedure that selects records from a table and updates the selected records' status to something like 'Processing':
SELECT TOP 10 * FROM ProcessingTable WHERE Status = 'New'
UPDATE ProcessingTable SET Status = 'Processing' WHERE Status = 'New'
The receive pipeline is XMLReceive, which debatches the records and routes them to another orchestration for processing. At the end of the orchestration, a send port updates the status to 'Processed'.
Here comes the issue: during maintenance, when the BizTalk DB/application servers are brought down, the host instances are stopped and these records sit in the 'New' state. After the maintenance, once the host instances are initialized, these records get picked up immediately and have their status updated to 'Processing'. The strange thing is that they get stuck at this status and never proceed to 'Processed'. This happens only for the top 10 records (the first pull after restart); all remaining 'New' records subsequently get picked up and processed successfully. Currently the workaround is to monitor for records stuck in 'Processing' and update them back to 'New' to retrigger the processing. Does anyone have an answer to this problem?

Have you used a singleton-pattern orchestration for this? If not, try it once and see whether you get the same problem; I suspect you are facing a race condition.
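One place the race can live is in the polling procedure itself: written as two statements, the UPDATE flips every 'New' row to 'Processing', not just the ten rows the SELECT returned, and a second poll or a restart can slip in between the two statements. A minimal sketch of an atomic claim, reusing the table and column names from the question (locking hints illustrative):

UPDATE TOP (10) p
SET p.Status = 'Processing'
OUTPUT inserted.*  -- the claimed rows become the result set returned to BizTalk
FROM ProcessingTable p WITH (ROWLOCK, UPDLOCK, READPAST)
WHERE p.Status = 'New';

Because the claim and the read happen in one statement, the rows picked up on the first poll after a restart are exactly the rows marked 'Processing'.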

Related

RabbitMQ: message is not consumed by consumer, but publisher is able to publish message

I am using RabbitMQ for messaging in my service. Suppose there are two microservices, A and B.
There are 3 more exchanges, each with a respective queue, in between.
A is the publisher and B is the consumer here. When sending a message from A, the queue updates successfully (I can see in the console that the queue count increases). But the consumer is not able to receive messages. Previously it was working.
For the other exchanges and queues, the consumer is working fine.
I tried purging the queue and restarting the application; it didn't help. There were always 4 unacked messages in the queue and the rest were ready to go. Finally I deleted the queue, exchange, and respective routing key and recreated them, and then everything worked fine.
Can anyone help me understand what happened here? Why didn't it work?
Sometimes, when some failure occurs during message processing and we throw an error, the message gets requeued and goes into an infinite loop (queue -> processing -> queue -> ...) if the method keeps throwing the error.
For the other messages in the queue, increasing the batch concurrency lets us process them, but the unacked message stays there; it only goes away if someone stops the consumer.
Now I have one question: can I set a retry limit for unacked message processing? If someone knows about it, please help.

BizTalk 2006 Event Log Warnings - Cannot insert duplicate key row in object 'dta_MessageFieldValues' with unique index 'IX_MessageFieldValues'

We have been seeing the following 'warnings' in the event log of our BizTalk
machine since upgrading to BTS 2006. They seem to occur
randomly 6 or 8 times per day.
Does anyone know what this means and what needs to be done to clear it up?
We have only one BizTalk server, which is running on only one machine.
I am new to BizTalk, so I am unable to find how many tracking host instances are running for the BizTalk server. Also, can you please let me know whether we can configure only one instance per server/machine?
Source: BAM EventBus Service
Event: 5
Warning Details:
Execute batch error. Exception information: TDDS failed to batch execution
of streams. SQLServer: bizprod, Database: BizTalkDTADb.Cannot insert
duplicate key row in object 'dta_MessageFieldValues' with unique index
'IX_MessageFieldValues'.
The statement has been terminated..
I see you got a partial answer in your MSDN Post
Go to the BizTalk Admin Console and check Platform Settings -> Hosts; in the list of hosts on the right, confirm that only a single host has the Tracking column marked as Yes.
As to your other question: yes, you can run a single host instance on a single server, although when your server starts to come under a bit of load you may want to consider setting up some more hosts so you can balance the workload better.
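If you would rather check from SQL Server than click through the console, a quick query against the management database lists each host and whether it is a tracking host (a sketch assuming the default database name, BizTalkMgmtDb):

-- HostTracking = 1 marks a host as a tracking host.
SELECT Name, HostTracking
FROM BizTalkMgmtDb.dbo.adm_Host;

If the advice above holds, exactly one row should have HostTracking = 1.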

De-batching and asynchronous processing of the messages

Problem Description:
1. There is a BizTalk application receiving a formatted/zipped data file containing > 2 million data records.
2. A pipeline component processes the file and de-batches these 2 million records into smaller slice messages of ~2,000 records each.
3. Slice messages are sent to a SQL port and processed by a stored procedure. Each slice message contains the file name and a batch id.
Questions:
A. What would be the best way to know that all slice messages have been received and processing of the whole file has completed on the SQL side?
B. Is there any way in a BizTalk port to say "do not send messages of type B until all messages of type A have been sent" (message priority)?
Here are the possible solutions I've tried:
S1. Add a specific 'end of file' tag to the last slice message, saying the whole file has been sent; the stored procedure receives this marker and marks the file as completed.
But because messages are delivered asynchronously, the last message can arrive at SQL earlier than other messages, producing a false 'completed' event.
So this solution only works with ordered-delivery ports, and those perform poorly because they send only one message at a time.
S2. Put the total record count into every slice message and run a COUNT() SQL statement after every slice message is received.
Because the table where the data is stored is huge, even running the count with the file name as a parameter takes time.
I'm wondering if there is a better solution for knowing that all messages have been received?
Have your pipeline component emit a "batch" message that contains the count of the records in the batch and some unique identifier linking it back to the slice-message records.
Have both the stored procedure that processes the slice messages and the one that processes the batch message check whether the batch total (if it exists yet) matches the processed total; if they match, you've finished processing them all.
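A minimal sketch of that check in T-SQL, with hypothetical BatchTotals and SliceProgress tables keyed by batch id (all names are illustrative, not from the original solution):

CREATE PROCEDURE dbo.CheckBatchComplete
    @BatchId uniqueidentifier
AS
BEGIN
    SET NOCOUNT ON;
    -- BatchTotals.ExpectedCount is written when the batch message arrives;
    -- SliceProgress.ProcessedCount is incremented by the slice procedure.
    -- Both procedures call this check, so it fires no matter which message
    -- happens to arrive last.
    IF EXISTS (SELECT 1
               FROM dbo.BatchTotals bt
               JOIN dbo.SliceProgress sp ON sp.BatchId = bt.BatchId
               WHERE bt.BatchId = @BatchId
                 AND sp.ProcessedCount = bt.ExpectedCount)
    BEGIN
        UPDATE dbo.FileStatus
        SET Status = 'Completed'   -- whole file confirmed processed
        WHERE BatchId = @BatchId;
    END
END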
Here's how I would approach this.
Load the 2MM records into a SQL Server table or tables with SSIS.
Drain the table at whatever rate gives you an acceptable performance profile.
Delete records as they are processed (completed).
When no more records for "FILE001.txt" exist, SQL Server will return a flag
saying "FILE001.txt complete".
Do further processing.
When the staging table is empty, the Polling SP can either return nothing (the Adapter will silently ignore the response) or return a flag that says "nothing to do" and you handle that yourself.
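A sketch of what that polling procedure could look like, against a hypothetical Staging table loaded by SSIS (all names illustrative):

CREATE PROCEDURE dbo.PollStaging
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM dbo.Staging)
    BEGIN
        -- Hand the next slice to BizTalk. In practice you would also
        -- flag or delete the claimed rows so they are not re-polled.
        SELECT TOP (2000) * FROM dbo.Staging;
    END
    ELSE
    BEGIN
        -- Table drained: return an explicit flag. The alternative is to
        -- return no result set at all, which the adapter silently ignores.
        SELECT 'nothing to do' AS PollResult;
    END
END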

BizTalk TPE continuations and uncompleted activities

Within my BizTalk 2010 solution I have the following orchestration that is started by the receipt of a courier update message. It then makes a couple of calls to AX's WCF AIF via two solicit-response ports: a Find request and an Update request.
For this application we meet audit requirements by using the tracking database to store the message body. We are able to link to this from references provided in BAM when we use the TPE. The result for the customer is nice: they get a web portal from which they can view BAM data such as message timings, but they can also click a link to pull up a copy of the message payloads from the tracking db. Although this works well and makes use of out-of-the-box functionality for payload storage, it has led to relatively complex jobs for archiving the tracking db (but that's another story!).
My problem relates to continuation. I have created the following Tracking Profile:
I have associated the first of the orchestration's two solicit-response ports with the continuation RcvToOdx, based on the interchange Id, and this works: I get the following single record written to the Completed activity table:
So, in this case we can assume that an entry was first written on receipt of the inbound message, with the TimeReceivedIntoBts column populated by the physical file receive port. The FindRequestToAx column was then populated by the physical WCF send port. Because this was bound to the RcvToOdx continuation and used the same interchange Id as the previously mentioned file receive message, the update was made to the same activity. Notification of the resulting response was also written to the same activity record, into the FindResponseFromAx column.
My problem is that I would also like BAM to record a timestamp for the subsequent UpdateRequestToAx. Because this request will have the same interchange Id as the previous messages, I would expect to be able to solve this problem by simply binding the AxUpdate send port (both the send and receive parts of it) to the same continuation id, as seen in the following screen grab:
I also map the UpdateRequestToAx milestone to the physical Ax_TrackAndTraceUpdate_SendPort (Send) and the OrchestrationCompleted milestone to Ax_TrackAndTraceUpdate_SendPort (Receive)
Unfortunately, when I try this I get the following result:
Two problems can be seen from the above db screen grab:
1. Date for the update send port was inserted into a new activity record
2. The record was never completed
I was surprised by this because I'd thought that since the update port was enlisted to use the same continuation, and the single InterchangeId was being used by all ports for the continuation Id, all the data milestones would be applied to a single activity.
In looking for a solution to this problem I came across the following post on Stack Overflow suggesting that the continuation must be closed from the BAM API: BAM Continuation issue with TPE. So, I tried this by calling the following method from an expression shape in my orchestration:
// Requires a reference to Microsoft.BizTalk.Bam.EventObservation
// (the namespace that provides OrchestrationEventStream).
public static void EndBAMContinuation(string continuationId)
{
    // CARRIER_ORDER_ACTIVITY_NAME is a class constant holding the BAM activity name.
    OrchestrationEventStream.EndActivity(CARRIER_ORDER_ACTIVITY_NAME, continuationId);
}
I can be sure the method was called ok because I wrapped the call with a log entry from the CAT framework which I could see in debug view:
I checked the RcvToOdx{867… continuation Id against the non-closed activity and confirmed they matched:
This would suggest that perhaps the request to end the continuation is being processed before the milestone of the received message from the UpdateAx call?
When I query the Relationships tables I get the following results:
Could anyone please advise why the UpdateToAx activity is not being completed?
I realise that I may be able to solve the problem using only the BAM API but I really want to exhaust any possibility of the TPE being fit for purpose first since the TPE is widely used in other BizTalk solutions of the organisation.
To solve this problem I created a second continuation in the TPE:
the "RcvToOdx" continuation for the Find and an "OdxToUpdate" continuation for the Update. The source is the InterchangeId on the initial receive port, UPS_TrackAndTrace (the same as for the "RcvToOdx" continuation); the destination Id is the InterchangeId mapped to the update send port.

Send notification if an expected message did not arrive in BizTalk

I have a BizTalk receive port monitoring an FTP location. I expect a file to arrive at least once per day in that location and for BizTalk to pick it up and kick off an orchestration. This part is working fine.
However, sometimes the sender fails to send a message during a day, in which case I want an email to be sent to notify the users that something is amiss.
I could solve this outside of BizTalk, by creating a daily job that looks in our database for processed files and makes sure there is at least one in any given day. However, I'd prefer to solve this "in line" with the BizTalk solution that is already in place, and not deploy a separate, unrelated job which will increase maintenance headaches.
Is there any functionality in BizTalk that would allow me to send a notification if a receive port doesn't receive something in a given timeframe?
Short answer: Not really.
The logic you want to implement would require a customised version of the FTP Adapter. Depends on how comfortable you are rolling up your sleeves and getting into the Adapter SDK.
If you wanted to keep your solution "Purely BizTalk", you could set up a secondary Orchestration using a SQL Receive Location tied to a stored procedure. This stored procedure executes regularly and looks for records in your "Processed File" table received in the past (business) day. If none are found, it fabricates a record and returns it via the SQL Receive Location. This would be your trigger to send the email notification.
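A sketch of that watchdog procedure, assuming a hypothetical ProcessedFiles table that the existing solution writes to (all names illustrative):

CREATE PROCEDURE dbo.CheckDailyFileArrived
AS
BEGIN
    SET NOCOUNT ON;

    -- If nothing arrived in the past day, fabricate a record; the SQL
    -- receive location turns it into the trigger message for the
    -- notification orchestration.
    IF NOT EXISTS (SELECT 1
                   FROM dbo.ProcessedFiles
                   WHERE ReceivedOn >= DATEADD(day, -1, GETDATE()))
    BEGIN
        SELECT GETDATE() AS CheckedOn, 'NoFileReceived' AS Alert;
    END
    -- Otherwise return nothing, and no trigger message is generated.
END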
One solution, though not elegant, is to have a secondary FILE receive location with a schedule window outside your cutoff time.
Failure scenario:
In this FILE receive location, you have an intelligent/dummy message conforming to the same schema as the FTP receive. The intelligent part is that one of the fields in the message tells us when we last received the file from FTP. The rest of the content is dummy.
Within your orchestration, you check where your file came from. If it is the secondary receive location (using the context property BTS.ReceiveLocationName), you check the date field of this dummy/intelligent message, and if it is not within the past 24 hours (or similar logic) you send an email notifying that you did not receive the file from the upstream FTP process, and also save a copy of the dummy message back to the secondary FILE receive location unchanged.
Success Scenario:
Apart from normal processing, you save a copy of the dummy/intelligent message to the secondary FILE receive location, with the datetime field reflecting when you processed the file received from the FTP receive location.
Initialising:
You start with a dummy/intelligent message in the secondary FILE receive location with the datetime field value well in the past (assuming we never received the file from FTP) or with yesterday's date (assuming we received a file successfully from FTP the day before).
Overview:
Your orchestration has two trigger points.
When you receive a file via FTP
A scheduled FILE receive location, triggered after the cut-off time.
