I know it is possible to create a map that takes one input message and outputs multiple messages within an orchestration.
When you do the reverse of this, i.e. merge many messages into one, the 'wizard' creates the map and the input schema. However, when doing the above (one message in, many out), only a map is created; the schema is in-line.
Is there a way around this? I would like to create my own output schema and map without spinning up an orchestration. If I try to do this I am unable to assign multiple messages to the output even if I copy the in-line schema structure from a generated map.
The answer to this question is: not possible. When mapping to multiple output messages, the schema has to be in-line.
I want to issue something e.g. a new option. Inside the flow where I'm issuing this new option, I need to get info from two separate oracles that need to provide data for the output state.
How should I do this? Should I have one output and 3 commands: a command with data from oracle 1, a command with data from oracle 2, and then the issue command? Or can this be done with one command?
It's entirely up to you - your command can contain whatever data you want, so in theory, you could do the whole thing with one command.
Having said that, I would probably split it out into at least two commands, for clarity and privacy. The privacy element is that you can build a filtered transaction for the oracle to sign that only contains the oracle command.
If you don't mind the two oracles seeing the data sent to each for signing, you could encapsulate the data in one command e.g.
class OracleCommand(val spotPrice: SpotPrice, val volatility: Volatility) : CommandData
Where one oracle attests to spotPrice and the other to Volatility.
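For reference, building that filtered view is only a few lines. This is a rough sketch rather than production code: it reuses the OracleCommand class above and assumes you already hold the oracle's Party.

import net.corda.core.contracts.Command
import net.corda.core.identity.Party
import net.corda.core.transactions.FilteredTransaction
import net.corda.core.transactions.SignedTransaction
import java.util.function.Predicate

// Sketch only: expose to the oracle just the command(s) it has to sign,
// keeping the rest of the transaction hidden from it.
fun filterForOracle(stx: SignedTransaction, oracle: Party): FilteredTransaction =
    stx.buildFilteredTransaction(Predicate<Any> { elem ->
        elem is Command<*> && oracle.owningKey in elem.signers && elem.value is OracleCommand
    })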
However, you would find it hard to determine what part of the data they attested to since they will both sign over the entire filtered transaction.
Unless you know the oracle's design can specifically pick out the correct data, you're probably better off going with three separate commands.
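If you do go the separate-command route, the shape is as simple as this (a sketch, reusing the same placeholder SpotPrice and Volatility types as above). Each oracle's filtered transaction then only needs to expose its own command.

import net.corda.core.contracts.CommandData

// Placeholder names for illustration only.
class SpotPriceCommand(val spotPrice: SpotPrice) : CommandData    // attested and signed by the spot-price oracle
class VolatilityCommand(val volatility: Volatility) : CommandData // attested and signed by the volatility oracle
class Issue : CommandData                                         // the business command, signed by the issuer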
I have multiple flat files (CSV), each with multiple records, and the files will be received in random order. I have to combine the records using unique ID fields.
How can I combine them, if there is no common unique field for all files, and I don't know which one will be received first?
Here are some example files:
In reality there are 16 files.
There are many more fields and records than in this example.
I would avoid trying to do this purely in XSLT/BizTalk orchestrations/C# code. These are fairly simple flat files. Load them into SQL, and create a view to join your data up.
You can still use BizTalk to pick up and load the files. You can also still use BizTalk to execute the view or procedure that joins the data up and sends your final message.
There are a few questions that might help guide how this would work here:
When do you want to join the data together? What triggers that (a time of day, a certain number of messages received, a certain type of message, a particular record, etc)? How will BizTalk know when it's received enough/the right data to join?
What does a canonical version of this data look like? Does all of the data from all of these files truly get correlated into one entity (e.g. a "Trade" or a "Transfer" etc.)?
I'd probably start by defining my canonical entity, and then work towards getting a "complete" picture of that canonical entity, using SQL for this kind of case.
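To make the canonical-entity idea concrete, here is a very rough sketch in code. The entity name, the idea of a single linking id and the "one section per file type" shape are assumptions for illustration, not details from the question, and in practice this would more likely live in SQL as a table or view.

// Illustrative only: made-up names and shape.
data class CanonicalOrder(
    val linkingId: String,
    val sections: Map<String, Map<String, String>> = emptyMap() // one entry per source file type
) {
    // "Complete" once every expected file type has contributed its records.
    fun isComplete(expectedFileTypes: Set<String>) = sections.keys.containsAll(expectedFileTypes)
}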
Good time of day.
Hoping to get some help with a BizTalk solution we're working through.
I've generated the adapter by using the WCF-SQL wizard and choosing typed polling. It worked out of the box: I was able to create a FILE-type send port where my message was dropped as an XML batch. As the next step I debatched the messages by modifying the schema of the generated entity, changing it to an envelope and configuring the leaf node. Great, now I have a bunch of files, one per message, sitting in my send port's file folder.
Now I am trying to create a map against the newly created messages. That's where the problems begin. If I create a map based on the same schema that was generated for me by the WCF-SQL wizard, then I drag in the whole structure of the Envelope -> Array -> Message, which of course does not match the structure of a single message, and the map does not work. If I create a new schema based on a single XML message from the send port's file directory, the schema it generates shares its name with the existing Envelope schema, and BizTalk Server throws an error as a result.
I was thinking that maybe I could accomplish one of the following:
Split the WCF-SQL generated schema into two: the Envelope + Array, and the Message. Not sure if that's possible, and something about this idea doesn't sit well with me.
Somehow change the namespace of the debatched message. Not sure how to achieve this.
Any ideas are welcome. Thank you!
It sounds like you have all the hard stuff done already, so here are a couple of hints that will get you what you're looking for:
The Envelope Schema does not need to represent the entire Message Structure, just down to the Body element from where the debatched children are taken.
The debatched message would then be its own separate Schema.
The Envelope Schema does not even need to reference the Message Schema in any way.
It's usually better to use a custom Pipeline with the XmlDisassembler that has the Envelope Schemas and Document Schemas properties set at Design Time.
The WCF binding actually generates the Schemas already separated for you. When you create the Map, you would choose StoredProcedureResultSet0 instead of ArrayOfStoredProcedureResultSet0.
If you set the Root Reference, you would not get that choice, so unset it if it is set.
Consider a situation where you have 2 receive locations, each receiving its own unique message type. There is an orchestration with a parallel correlation happening, based on a shared unique value in each of these messages.
Once the correlation occurs, the orchestration runs and its job is to merge data from the 2 messages and create 1 from them. My idea was to use a map which takes in 2 input messages, 1 of each type in the correlation. The destination schema just happens to be the same as one of the input schemas (so we're basically just adding data to one of them from the other).
I can create the map and choose the 2 input message schemas and the destination schema. The mapper then opens up, and the source side looks like so:
Which is quite alright.
The problem comes when you start expanding the nodes, they only seem to go 1 level deep. For example, here is the source and destination side-by-side, the same schema, except one is Part 1 of a 2 part source and the other is the single destination part:
This is just one example, but compare EVN_5. On the left it doesn't have children, on the right it does. It's the same schema, but one is part of a multi-input source and the other is the destination.
Is there any way to fix this, or is it not possible? Doing a link by name/structure results in missing data because the source "thinks" it's not there.
Edit: I just wanted to add the detail that this problem of only showing one-level deep worth of elements in the mapper is happening for both of the input schemas.
Make sure you added a reference to the assembly containing the segments, tables and datatypes XSDs for the message.
Let's say I have a flat file containing incoming messages. Where would be the appropriate place to inject the logic that takes identifying information from the message and sets primary key properties to link it to internal record IDs? For example, to map a customer's version of an order ID to our internal order ID.
Sounds like you are looking to do a conversion of the incoming id to the internal id before sending the message further along.
There are multiple places to do this.
You could do it in a pipeline component that reads either directly from its run-time configuration or from a database. You could also do it in an orchestration.
The easiest and most suitable place, however, is probably a transformation map. Just make sure not to hard-code the transformation table (which external id maps to which of your internal ids), as these usually change a lot. Have the map do a lookup in a database, for example, to find the matching id.
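Conceptually the lookup is nothing more than a cross-reference query. Sketched outside of BizTalk (Kotlin with plain JDBC, against a made-up OrderIdXref table, so all names here are illustrative) it would look something like this:

import java.sql.DriverManager

// Sketch only: table, column and parameter names are made up for illustration.
fun toInternalOrderId(externalOrderId: String, jdbcUrl: String): String? =
    DriverManager.getConnection(jdbcUrl).use { conn ->
        conn.prepareStatement("SELECT InternalOrderId FROM OrderIdXref WHERE ExternalOrderId = ?").use { stmt ->
            stmt.setString(1, externalOrderId)
            stmt.executeQuery().use { rs ->
                if (rs.next()) rs.getString("InternalOrderId") else null
            }
        }
    }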
Doing these kinds of tasks in a map, compared to the other options, gives you a bit more flexibility, as you can then apply the map directly on a receive or send port. So if you don't need to do any workflow-based logic, you can use a messaging pattern and skip orchestrations (always preferable).
I would consider doing this type of conversion in a map.