Can a BizTalk receive location process files one by one?

Is there a way in BizTalk to process messages dropped in a folder location (using the file adapter) one at a time? I do not want all the messages in the folder to be picked up at once.

Not using the native file adapter in BizTalk.
You would have to write a custom file adapter using the sample project in the SDK, which can be found under <BizTalkDirectory>\SDK\Samples\AdaptersDevelopment\FileAdapter.

I can understand why you want this.
If a lot of files arrive in your intake folder at once and processing a file in the pipeline takes a very long time, it is easy for all the files to get stuck in the pipeline.
If you don't want to write a custom adapter, an alternative solution I have used is the Loopback adapter.
What I did before was keep the file receive port as-is and give it a PassThrough pipeline. A loopback send port then subscribes to the messages from this file receive port. You can apply your intended pipeline on the loopback port's receive side and enforce one-by-one message processing by enabling Ordered Delivery.
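If neither a custom adapter nor the Loopback approach fits, another workaround is to throttle outside BizTalk. The following is a minimal sketch, assuming a staging folder that sending systems write to and a separate pickup folder that the BizTalk receive location watches (both paths and the poll interval are illustrative, not from the answer above):

using System;
using System.IO;
using System.Linq;
using System.Threading;

// Releases staged files into the BizTalk pickup folder one at a time.
// A file is only released once the pickup folder is empty, i.e. the
// FILE adapter has consumed (deleted) the previous file.
class FileThrottler
{
    const string Staging = @"C:\Drop\Staging";       // systems write here
    const string Pickup  = @"C:\Drop\BizTalkPickup"; // receive location watches here

    static void Main()
    {
        while (true)
        {
            if (!Directory.EnumerateFiles(Pickup).Any())
            {
                // Oldest file first, to preserve arrival order.
                string next = Directory.EnumerateFiles(Staging)
                                       .OrderBy(File.GetCreationTimeUtc)
                                       .FirstOrDefault();
                if (next != null)
                {
                    // Move is atomic on the same volume, so BizTalk never
                    // sees a partially written file.
                    File.Move(next, Path.Combine(Pickup, Path.GetFileName(next)));
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}

Releasing the next file only after the pickup folder is empty gives the one-at-a-time behaviour the question asks for, at the cost of an extra hop and a small polling delay.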


BizTalk: Outbound logical ports are not getting removed from orchestration

I have an orchestration that has 3 WCF-Custom adapters (with sqlBinding, calling stored procedures) configured on 3 send ports. My requirement was to replace the WCF-Custom adapter calls with C# method calls.
I wrote the C# methods to call the stored procs and removed the 3 send ports from the orchestration. I am using a command-line tool (BTSTask ExportApp) to create an MSI, and I imported it in the admin console.
However, I can still see all 3 logical ports in the orchestration's Bindings tab. I have tried cleaning the solution and rebuilding several times, but they just won't go away. Am I missing something here?
I am using BizTalk 2013R2.

Where does ActiveMQ Artemis Console store address and queue definitions?

I created the broker folder inside /var/lib on Ubuntu 18.04. Inside /var/lib/[broker]/etc there is the broker.xml file that you can use to define addresses and queues. However, I used the administration console to create an address with a couple of queues, and this file does not update. In fact, no files inside the broker directory or the Artemis home update.
So where is the administration console storing the definitions?
Also, is it better practice to create the addresses and queues in the broker.xml file instead of through the console?
Definitions for addresses and queues created at runtime are stored in binary form in the broker's journal, specifically in the "bindings" journal which is separate from where messages are stored. In your configuration the bindings journal would be in /var/lib/[broker]/data/bindings by default.
As far as best practices go, it really depends on the use-case. Some users like to have the address & queue definitions in broker.xml, which can be updated at runtime so that the broker deploys newly configured addresses & queues. However, other users don't like to manually edit broker.xml and would rather use the management API instead, either through the web console or through another management interface (e.g. HTTP via Jolokia, JMX, management messages, etc.). Still others don't manage addresses or queues at all but simply allow the broker to auto-create the resources their applications need.
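For reference, a statically defined address and queue in broker.xml look like the following (the names here are illustrative):

<addresses>
   <address name="example.address">
      <anycast>
         <queue name="example.queue"/>
      </anycast>
   </address>
</addresses>

The same definition created at runtime through the console ends up in the bindings journal instead of this file, which is why broker.xml never changes.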

Does BizTalk Server support exchanging large files over Azure File Shares when a 3rd party system is using the REST API?

"Starting with BizTalk Server 2016, you can connect to an Azure file
share using the File adapter. The Azure storage account must be
mounted on your BizTalk Server."
source: https://learn.microsoft.com/en-us/biztalk/core/configure-the-file-adapter
So at first glance, this would appear to be a supported thing to do, and until recently we had been using Azure File Shares with BizTalk Server with no problems. However, we are now looking to exchange larger files (approx. 2 MB). BizTalk Server is consuming the files without any errors, but the file contains only NUL bytes. (The message in the tracking database is the correct size but is filled with NUL bytes.)
The systems writing the files (Azure Logic Apps, Azure Storage Explorer) are seeing the following error:
{
  "status": 409,
  "message": "The specified resource may be in use by an SMB client.\r\nclientRequestId: 4e0085f6-4464-41b5-b529-6373fg9affb0",
}
If we try uploading the file to the mounted drive using Windows Explorer (thus using the SMB protocol), the file is picked up without problems by BizTalk Server.
As such, I suspect the BizTalk Server File adapter is not supported when the system writing or consuming the file is using the REST API rather than the SMB protocol.
So my questions are:
Is this a documented caveat of BizTalk Server's support for Azure File Shares?
Is there anything we can do to make this work?
Or do we just have to use a different way of exchanging files?
We have unsuccessfully investigated/tried the following:
I cannot see any settings in the Azure File Storage connector (as used by Logic Apps) that would ensure files are locked until they are fully written.
We tried the File adapter's advanced property "rename files while reading", but this did not solve the problem.
We looked at the SFTP-SSH connector. It does message chunking for a total file size of 1 GB or smaller, and it provides a Rename file action, which renames a file on the SFTP server.
With an ISE environment you could potentially leverage a total file size of 5 GB.
Here is the solution we have implemented, with justifications for this choice.
Chosen Option: We stuck with Azure File Shares and implemented the signal file pattern.
The integrated system's Logic Apps write a signal file to the same folder where the message file is created. The signal file has the same filename but with a .done extension, e.g. myfile.json.done.
In the BizTalk solution, a custom pipeline component has been written to retrieve the related message file for the signal file (a sketch of this approach follows below).
Note: one concern is that the Azure Files connector is still in preview.
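To illustrate the BizTalk side of the signal file pattern, here is a minimal sketch of such a pipeline component. The class name, the body-swap details, and the absence of error handling are simplifications for illustration, not our production code; it assumes the receive location picks up only the .done files and that the FILE adapter's promoted ReceivedFileName property is available.

using System;
using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Receive-side component: the message body on entry is the tiny ".done"
// signal file; this component swaps it for the related data file. The
// IBaseComponent/IComponentUI/IPersistPropertyBag members (and the GUID
// and component-category attributes) a deployable component needs are
// omitted for brevity.
public class SignalFileResolver : IComponent
{
    public IBaseMessage Execute(IPipelineContext context, IBaseMessage msg)
    {
        // The FILE adapter promotes the full path of the received file.
        string signalPath = (string)msg.Context.Read(
            "ReceivedFileName",
            "http://schemas.microsoft.com/BizTalk/2003/file-properties");

        // myfile.json.done -> myfile.json
        string dataPath = Path.ChangeExtension(signalPath, null);

        // Replace the body with the data file's contents. The stream must
        // stay open until the pipeline has consumed it, so hand it to the
        // context's resource tracker for cleanup.
        var dataStream = new FileStream(dataPath, FileMode.Open, FileAccess.Read);
        msg.BodyPart.Data = dataStream;
        context.ResourceTracker.AddResource(dataStream);

        return msg;
    }
}

In this sketch, deleting the data and signal files after successful processing is left to a later cleanup step.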
Discounted Option 1: Logic Apps use the BizTalk Server connector
Whilst this would work, I was keen to keep a layer of separation between the system and BizTalk. This allows BizTalk applications to be deployed without downtime of the endpoints exposed to the system.
It also restricts the load-levelling (throttling) capabilities of BizTalk Server. Note: we have a custom file adapter to restrict the rate at which files are picked up.
This option also requires setup of the On-Premises Data Gateway.
Discounted Option 2: Use of File System connector
Logic Apps writes the file in chunks of 2 MB and releases the lock on the file between chunks. This enables BizTalk to pick up the file instantly; when the connector tries to write the next 2 MB chunk, the file is no longer available and the write fails with a 400 status error: "The requested action could not be completed. Check your request parameters to make sure the path '//test.json' exists on your file system."
File sizes are limited to 20 MB.
Requires setup of the On-Premises Data Gateway. Note: we also considered this a good time to introduce an Integration Service Environment (ISE) to host Logic Apps within the vNET, the thinking being that this would keep file exchanges between the system and BizTalk within the network. However, there is currently no ISE-specific connector for the File System.
Discounted Option 3: Use of SFTP connector
Our expectation is that Logic Apps using SFTP will experience similar chunking issues while writing files.
The Azure SFTP connector has no rename action.
We were keen to avoid use of this ageing protocol.
We were keen to avoid extra infrastructure and software needed to support SFTP.
Discounted Option 4: Logic Apps Renames the File once written
There is no rename action in the File Storage REST API or File connector, only a Copy action. Our concern with Copy is that the file still needs time to be written, so the same chunking problem remains.
Discounted Option 5: Logic Apps use of Service Bus Connector
The maximum size of a message is 1MB.
Discounted Option 6: Using Azure File Sync to Mirror files to another location.
File Sync only happens once every 24 hours, so it was not suitable for our integration needs. Microsoft is planning to build change notifications into Azure File Shares to address this.
Microsoft have just announced "Azure Service Bus premium tier namespaces now support sending and receiving message payloads up to 100 MB (up from 1MB previously)."
https://azure.microsoft.com/en-us/updates/azure-service-bus-large-message-support-reaches-general-availability/
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-premium-messaging#large-messages-support
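With that limit raised, Discounted Option 5 may become viable for files of our size (approx. 2 MB). As a rough sketch, sending a file's content as a single Service Bus message with the Azure.Messaging.ServiceBus client could look like the following (connection string, queue name, and file path are placeholders, and a premium-tier namespace is assumed for payloads over 1 MB):

using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class SendFileToServiceBus
{
    static async Task Main()
    {
        // Placeholders: supply your own namespace connection string and queue.
        string connectionString = Environment.GetEnvironmentVariable("SB_CONNECTION");
        string queueName = "biztalk-intake";

        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender(queueName);

        // Read the whole file and send it as one message body.
        byte[] payload = await File.ReadAllBytesAsync(@"C:\Drop\myfile.json");
        await sender.SendMessageAsync(new ServiceBusMessage(payload));
    }
}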

EDI Receive Pipeline performance issue

I have a file receive location with the EdiReceive pipeline configured to receive incoming HIPAA 5010 837 files.
The normal incoming file size is 4 to 6 megabytes, containing 3K to 5K records. The deployed 837 schema is the "multiple" version, which has subdocument_break="yes", so each processed file generates 3K to 5K messages.
The pipeline works fine and splits the file into multiple messages as expected. For a single file, BizTalk takes less than 5 minutes to process it.
The problem is that when more than 10 files are put in the incoming folder at the same time, BizTalk starts processing them in parallel, but it takes hours to process these files and the BizTalk host consumes more than 10 GB of memory.
Some other info:
The BizTalk host is a dedicated 64bit receive host
No file lock by other applications found
Batching setting in file adapter is Num of Msgs in a batch = 1; Max batch size = 10240000
Rename file while reading is checked.
My question is: Is this performance normal? how can I improve it?
You are correct: 5K messages is not really the issue; it's 5 batches of 5K messages arriving at the same time that's causing the problem.
To serialize the debatching, you can use an Ordered Delivery two-way send port with a Loopback adapter which debatches the EDI on the receive side. In this case, the initial receive location would be a PassThrough.
You can find several Loopback Adapters here: http://social.technet.microsoft.com/wiki/contents/articles/12824.biztalk-server-list-of-custom-adapters.aspx#jjj
BizTalk isn't really made to process multiple large files at once, and the file adapter doesn't have any built-in way to limit how many files it will pull at once.
There's a commercial solution available to help handle this (disclosure: I work for Tallan and work on this solution) called the T-Connect EDI Splitter (https://www.tallan.com/products/t-connect/edi-file-splitter/). The use case is splitting the files on a pipeline into more manageable chunks to be consumed elsewhere. This is not a trivial task, unfortunately.
If your files are small enough to process without splitting them before they hit the EDI receive pipeline (you don't need to split them further, you just need to process them one at a time), you'll have to come up with a more complicated messaging flow to deal with that - receive them using a PassThrough pipeline, send them somewhere they can sit, then poll them using a second receive location that offers more precise control of polling.
You could also just write your own adapter that offers polling and interval settings, but that's much more complicated and messy.

BizTalk Archiving Pipeline Component Consideration

In my scenario, I have a Pipeline that (1) decrypts and then (2) disassembles a flat file on a receive port.
My requirement is to capture the file, and put it on a local fileshare, between (1) and (2).
My initial approach was to introduce an Archive component between these, but I have run into issues with it. The archiving component uses direct access to storage to dump the file, which is poor methodology: per BizTalk principles, writing to storage is a function of a send port/send adapter. So if, for example, the archiving destination is an FTP host, the archiving component is useless.
Hence two ideas come to mind:
A) Somehow configure the archiving component to use a Send Port (if that's even possible)
B) Abandon the idea of the archiving component and just use BizTalk's native functionality as follows:
- Receive the file using a decrypt-only pipeline
- Send the file to temporary local storage using a Send Port
- Subscribe to the receive port to send the file to an archive
- Pick up the file from local storage using the Disassemble pipeline (second receive port)
- Use an orchestration to process the file from the second receive port
Are there any issues with Option B)?
If NOT, then what's the point of even using an archive component?
Other options also include
C) Have an archive send port and a loop-back send port subscribe to the receive port; the loop-back send port would have the flat file disassembler on its receive side.
D) Have an archive send port and an orchestration that subscribes to the receive port. Call the disassemble pipeline in the orchestration.
We've used both of these scenarios for different solutions.
If you are using native BizTalk functionality, setting up send ports that subscribe to the message type for archiving is sufficient.
If you are using the BizTalk ESB Toolkit, it is very difficult to split the message for archiving since you are executing in the pipeline context. Using an orchestration in your itinerary will allow you to split the message, but that of course requires the itinerary to leave the pipeline and drop the message on the MessageBox. If you are just doing simple message archiving, this solution may be overkill.
You can use a custom pipeline component such as the one below. It is reusable, works in a BizTalk ESB Toolkit scenario (very handy if you want the original message before it is transformed), archives to file or SQL, and works in both inbound and outbound pipeline scenarios.
BizTalk Archiving - SQL and File
You will only be responsible for cleaning up old/unwanted messages to avoid bloat.
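To make the discussion concrete, the heart of any such archiving component is a stream tee. Below is a minimal sketch (class name, archive folder, and .bin naming are illustrative assumptions; this is not the linked component's actual code):

using System;
using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Copies the message body to an archive folder, then hands the pipeline
// a rewound copy of the stream so downstream components (e.g. the flat
// file disassembler) still see the full body. The IBaseComponent/
// IComponentUI/IPersistPropertyBag plumbing is omitted for brevity.
public class ArchiveComponent : IComponent
{
    public string ArchiveFolder { get; set; } = @"\\fileshare\archive";

    public IBaseMessage Execute(IPipelineContext context, IBaseMessage msg)
    {
        // Buffer the body so it can be both archived and passed on.
        var buffer = new MemoryStream();
        msg.BodyPart.GetOriginalDataStream().CopyTo(buffer);

        // Write the archive copy.
        string path = Path.Combine(ArchiveFolder, Guid.NewGuid() + ".bin");
        buffer.Position = 0;
        using (var archive = File.Create(path))
        {
            buffer.CopyTo(archive);
        }

        // Rewind and give the buffered copy back to the pipeline.
        buffer.Position = 0;
        msg.BodyPart.Data = buffer;
        context.ResourceTracker.AddResource(buffer);
        return msg;
    }
}

Note that buffering the body in a MemoryStream is only reasonable for small messages; a production component (like the one linked above) would more likely archive from a forward-only wrapper stream so large messages are never fully loaded into memory.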
