A lot of our use cases for BizTalk involve simply mapping and routing HL7 2.x messages from one system to another. Implementing maps and associating them with send/receive ports is generally straightforward, but we also need to do some content-based filtering on the sending side.
For example, we may want to send ADT A04 and ADT A08 messages to system X only if the sending facility is any of 200 facilities (out of a possible 1000 facilities we have in our organization), but System Y needs ADT A04, A05 and A08 messages for a totally different set of facilities, and only for renal patients.
Because we're just routing messages and not really managing business processes, utilizing orchestrations solely to call out to the business rules engine seems like overkill, especially considering that we'd probably need a separate orchestration for each ADT type because of how the schemas work. Is it possible to implement filter rules like this without using orchestrations? The filter functionality of send ports looks a little too rudimentary for what we need, but at the same time I'd rather not develop and manage orchestrations.
You might be able to do this with property schemas...
You need to create a property schema and include the properties (from your other schemas) that you want to use for routing. Once you deploy the property schema, those properties become available as filter criteria on the send port. Start from there; you should be able to find examples easily.
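For instance, once a property schema exposing something like a SendingFacility property is deployed and the property is promoted into the message context, a send port filter can be built on conditions such as MyOrg.Hl7Properties.SendingFacility == FAC0042, grouped with a BTS.MessageType condition for the ADT types you care about (the schema name and facility code here are just placeholders for illustration).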
As others have suggested you can use a custom pipeline component to call the Business Rules Engine.
Rather than trying to create your own, there is already an open-source one available: the BizTalk Business Rules Engine Pipeline Framework.
By calling BRE from the pipeline you can create complex rules which then set simple context properties on which you can route your messages.
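To give a feel for the shape of that approach, here is a minimal sketch (not the framework's actual API; the policy name, fact class, HL7 property and property schema namespace are all assumptions) of the Execute method of such a component, with the rest of the pipeline component scaffolding omitted:

```csharp
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;
using Microsoft.RuleEngine;

// A simple short-term fact the policy can inspect and update.
public class RoutingFact
{
    public string SendingFacility { get; set; }
    public bool SendToSystemX { get; set; }
}

public class BreRoutingComponent // other pipeline component interfaces omitted
{
    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        var fact = new RoutingFact
        {
            // Assumed promoted HL7 header property; adjust to whatever your
            // HL7 solution actually promotes.
            SendingFacility = (string)pInMsg.Context.Read(
                "MSH4_1", "http://HL7Schemas.HeaderPropertySchema")
        };

        // Evaluate the deployed rule set; the policy mutates the fact.
        using (var policy = new Policy("Hl7RoutingPolicy")) // hypothetical policy name
        {
            policy.Execute(fact);
        }

        // Promote the decision so plain send port filters can subscribe to it.
        pInMsg.Context.Promote("SendToSystemX",
            "https://example.org/schemas/routing-properties", // assumed property schema namespace
            fact.SendToSystemX);

        return pInMsg;
    }
}
```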
Full disclosure: I've worked with the author of that framework when we were both at the same company.
I'm reading all over the net that you should separate your "external schemas" from your "internal schemas" and never expose the "internal schemas" to any external actor.
If my solution only acts as a message bus to create loose coupling between two existing systems, will I really need any internal schemas?
System A makes a request (message with SchemaA) to BizTalk
BizTalk maps SchemaA to SchemaB
BizTalk forwards the request of type SchemaB to System B
System B returns ResponseB
BizTalk maps ResponseB to ResponseA
BizTalk routes the result back to System A
I can't see the pros of having an internal schema and map:
SchemaA -> SchemaInternal -> SchemaB
?
The term canonical schema is often used to describe schemas internal to an integration mechanism such as BizTalk (SchemaInternal in your last example).
Use of canonical schemas is widely regarded as a best practice, as it decouples your BizTalk flow-control mapping from any 'other' system's schemas (an 'other' system here could be internal to your organisation or external to it, e.g. a supplier, customer or partner system). This way, if any of the systems integrated via BizTalk changes, only the external schemas and the maps to the canonical schemas need to change. It also prevents foreign conventions, naming and hierarchy differences inherent in external schemas from leaking into your internal BizTalk artefacts.
Generally, transformation of incoming messages to a canonical schema is done as early as possible, e.g. on receive, and similarly, transformation out of the canonical schema is done as late as possible, e.g. on a send port map.
A common scenario for Canonical Schemas (CS) is where a single orchestration or message flow is common to multiple trading parties (e.g. you may have many suppliers with different systems, yet all of them submit invoices for processing). In this case, each new supplier system just needs to be integrated with your CS; no new processing logic needs to be added or duplicated, so CS can actually reduce the overall effort in such instances. (The n x m problem is explained in detail here.) Another example of where CS are vital is where your business IS the switching of messages, e.g. a medical industry switch will have many doctor and practice systems sending authorisation requests and invoices, and these need to be mapped and routed to multiple medical fund (medical aid) systems.
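To put rough numbers on the n x m problem: with 10 supplier formats and 4 fund systems, point-to-point integration needs 10 x 4 = 40 maps, whereas mapping each format to and from a canonical schema needs only 10 + 4 = 14.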
And FWIW:
IMO CS make the most sense when BizTalk is the end-to-end solution in an EAI or ESB scenario, e.g. direct integration of two or more line-of-business systems. Otherwise, if BizTalk is just one endpoint on a larger corporate ESB, then it probably makes sense to use the corporate ESB schemas internally, and hence map external schemas directly to the ESB schemas (i.e. there is no need for another set of CS within BizTalk, provided that you have a good change management / version control mechanism across your enterprise).
If standard schemas (e.g. EDIFACT) exist for your industry, it is moot as to whether adopting these as internal CS should be a goal. In general they may conflict with the meaning of canonical as being 'simple', as industry schemas often need to be verbose in order to model all flavours and 'edge cases' of the document. Personally, I would ensure that I have a mapping to/from said industry schemas, but would use a custom schema internally.
In the described solution you don't need internal schemas. You could hide the schemas of System X from users of System Y, but that is not so important.
In this context, External = Public, meaning outside your organization.
The guidance is to protect internal implementation details, naming conventions and such, from others.
If both System A and System B are inside your organization then 'security' is less of an issue but your application can still offer an 'external' schema to consumers in order to protect them from internal changes to your application.
I am new to BizTalk. I got the requirement below.
Source: Oracle (table). I created a generated schema in BizTalk.
Target: a web service which receives an "object array" (the table of source records from BizTalk) as input.
Source and target systems have the same structure, hence no mapping should be implemented; the logic should be in pipelines or an orchestration.
Need info on below two topics:
How to incorporate the logic in pipeline or orchestration to map data from source schema to target WS schema.
This question was posed (now deleted) on the other big BizTalk forum. So I'll share my answer here.
What you're asking is simply not possible. It doesn't matter that the source and destination are logically the same. They are represented by two different schemas in BizTalk. There is no way around this except by developing the Web Service to accept the WCF Oracle message directly.
Because of that, you must transform from the source to the destination schema. Maps are how that is done. While there are technically other ways, they are harder to write, bug-prone, and would likely offer a less desirable performance profile.
A ban on Maps is just counter-productive and as a long time BizTalk Developer I could not accept a project with such a requirement.
It's not very clear what you are asking for, to be honest. Your requirement states that no mapping is required, but then you go on to ask how to incorporate mapping in pipelines or orchestrations.
A standard approach to delivering this would be:
1. Set up your input process from Oracle by using "Consume Adapter Service" from Visual Studio's "Add Generated Items". Use the Oracle binding, set up the connection properties for typed polling along with your query (see here for an example on MS SQL), change to a service contract type (for inbound operations), and you'll get a set of schemas representing your dataset plus a binding for your typed polling receive port.
2. Use "Consume WCF Service" to point at your "sending" web service, and you'll get the schemas, binding and a helpful orchestration with port types added to your project.
3. Create a simple map from your inbound Oracle recordset schema to your web service schema - this should be pretty straightforward if they are identical, although I suspect you'll have to deal with multiple sets of data, depending on your data.
4. Complete by wiring together your orchestration.
I appreciate this is a high-level view of what you need to do, but there are plenty of examples you can google to get you started. Hope that helps.
I'm a complete newbie at BizTalk and I need to create a BizTalk 2006 application which broadcasts messages in a specific way. I'm not asking for a complete solution, but for advice and guidelines on which capabilities of BizTalk I should use.
There's a message source; for simplicity, say, a directory where the user adds files to publish them. There are several subscribers, each having a directory in which it receives published files. The number of subscribers can vary over the lifetime of the application. There are also some rules which determine whether a particular subscriber needs to receive a particular file, based on the filename. For example, each subscriber has a pattern or mask which the filenames of the files it receives must match. Those rules (for example, the patterns) can change over time as well.
I don't know how to do this. Create a set of send ports at runtime, one for each destination? Is that possible? Use one port, changing its binding? Would that work correctly with concurrent sends? Are there other ways?
EDIT
I realized my question may be too obscure and general to prefer one answer over another to accept, so I just upvoted them.
You could look at using dynamic send ports to achieve this - if your subscribers are truly dynamic. This introduces a bit of complexity since you'll need to use an orchestration to configure the send port's properties based on your rules.
If you can, try to remove the complexity. If you know that you don't need to be truly dynamic when adding subscribers (i.e. a subscriber and its rules can be configured one time only) and you have a manageable number of subscribers, then I would suggest configuring each subscriber with its own send port and using a filter to create subscriptions based on message context properties. The beauty of this approach is that you don't need to create and deploy an orchestration, and it becomes a highly performant and scalable solution.
If the changes to the destinations are going to be frequent, you are right in seeking a more dynamic solution. One nice solution is using dynamic send ports and the Business Rules Engine. You create a rule set for the messages you are receiving. This could be based on a destination property or a customer ID in the message. Using these facts, the rules engine can return a bunch of information like the file mask, server name, IP address of the delivery server, etc. You can then use this information to configure the dynamic send port in the orchestration. The really nice thing here is that you can update the rule set in the rules engine without redeploying the whole solution. As a newb, these are some advanced concepts, but not as difficult as you may think.
For a simpler solution, you might want to look at setting the FILE send adapter's properties via its property schema (i.e. file name, directory, etc.). You could pull these values from a database with a helper class inside an expression shape. On each message going out, use the property schema to set where the message will be sent and what it will be named. This way, you just update the database as things change.
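As a hedged sketch of that simpler approach (the port, message and path names below are invented, and the values would really come from your database helper class), the assignments could live in a Message Assignment shape feeding a dynamic send port:

```csharp
// Inside a Message Assignment shape (XLANG/s, C#-like orchestration syntax).
// Port_DynamicSend, msgOut/msgIn and the paths are hypothetical names.
msgOut = msgIn;
msgOut(FILE.ReceivedFileName) = "Subscriber42_Report.xml"; // feeds the %SourceFileName% macro
Port_DynamicSend(Microsoft.XLANGs.BaseTypes.Address) = @"FILE://C:\Drops\Subscriber42\%SourceFileName%";
Port_DynamicSend(Microsoft.XLANGs.BaseTypes.TransportType) = "FILE";
```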
Good Luck!
I am not sure if I am asking the right question, but this is the scenario I am trying to handle:
Multiple files (an XML file and a few related files, its "attachments") have to get into BizTalk as a single message. I have looked into the existing adapters and don't see how to do that with any of them. To be more specific, the files are picked up from the file system. The files do not all arrive at the same time, but one at a time, and their order is not guaranteed. The XML (content) file is the one that knows which attachments (which other files) it is supposed to have.
We are looking into BizTalk 2009, and I was wondering whether that would be the responsibility of a custom adapter, or of something else. And where could I look for samples?
Thanks.
It is probably possible to do what you want using a custom adapter, though I'd recommend against it. You can achieve what you require using orchestration.
What you are looking for is likely a convoy, or at the least some use of correlation.
In BizTalk, a convoy is a messaging pattern (as opposed to a BizTalk feature) that allows groups of messages to be processed by a single orchestration.
You essentially use correlation on a receive port to group messages together in either a parallel (what you probably want) or sequential fashion.
There is an article [here](http://msdn.microsoft.com/en-us/library/ms942189(BTS.10\).aspx) by Stephen W. Thomas about convoys (it is for BizTalk 2004 but the concepts still hold), and there is a lot of additional information on the web and in books (Professional BizTalk Server 2006 has a subsection on them).
Without more details on your scenario it is hard to know exactly how the convoy would be built but below are two approaches to look at (also, I've not had a chance to properly use BT2009 so there may be extended support for correlation scenarios that help you out).
Flexible Correlation
If you don't know anything about the files listed in the context XML you will probably need a pattern like the one described by Charles Young in this post.
Non-uniform sequential convoy
If you do have a little bit of info before hand one way might be as follows (basically a Non-uniform sequential convoy):
This makes the assumption that there is some way of linking all the files together so you can correlate them.
Create a single orchestration that subscribes to your inbound receive port (the one that contains the file receive location).
This orchestration will have a single activation receive shape that is set up for your content file.
Once the orchestration is started by a content file, a second, correlated receive shape starts picking up the messages that match that content file. (This second receive could possibly be in a loop to allow for a variable number of files.)
You then pack them all together into a single outbound file of your design and send it out once the full number of files has been received.
Seems to me a better approach would be to implement the above requirements with a combination of a custom pipeline component and/or a custom adapter. I assume you do not really need to manipulate the incoming files - except for the content XML file - or that you couldn't since they are in binary format. This calls for a custom pipeline component.
What you can do is develop a custom BizTalk adapter to interact with the file system and to implement the listening and looping logic. Next you can develop a custom pipeline component to create a single BizTalk message perhaps with base64 data type in it for binary data. Additionally you could also promote messages right in this component to enable orchestration subscriptions.
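As a rough sketch of the packing idea (the envelope and element names are assumptions, not a prescribed format), such a component could wrap each binary file as base64 inside the outgoing XML:

```csharp
using System;
using System.IO;
using System.Xml.Linq;

static class AttachmentPacker
{
    // Wrap one binary attachment as base64 inside an XML element; the
    // element/attribute names here are invented for illustration.
    public static XElement Pack(string path)
    {
        byte[] raw = File.ReadAllBytes(path);
        return new XElement("Attachment",
            new XAttribute("name", Path.GetFileName(path)),
            Convert.ToBase64String(raw));
    }
}
```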
Orchestrations are more suited to implementing business workflow scenarios where the messages are already in XML format. That does not appear to be the case here. In any case, I think at the very least a custom pipeline component would be needed.
David's answer is the correct answer.
Even in cases where you know absolutely nothing about the contents of the expected attachments, surely you know their names and locations. Therefore you can use the Flexible Correlation linked to in David's answer like this:
The key to the solution is to correlate on the built-in BTS.ReceivedFileName property.
First, create a custom receive pipeline with a custom pipeline component that promotes the BTS.ReceivedFileName context property of the received messages. This simple custom component is fairly easy to write, but you can make it even more straightforward by using third-party frameworks such as (shameless plug here) my PipelineComponentBase class or the excellent BizTalk Server Pipeline Component Wizard.
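For illustration, the heart of such a component might look like the sketch below (the rest of the pipeline component scaffolding is omitted); the key detail is that the FILE adapter only writes BTS.ReceivedFileName into the context, so it has to be explicitly promoted before it can drive correlation:

```csharp
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class FileNamePromoter // other pipeline component interfaces omitted
{
    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        const string sysProps =
            "http://schemas.microsoft.com/BizTalk/2003/system-properties";

        // The FILE adapter writes this property but does not promote it.
        object fileName = pInMsg.Context.Read("ReceivedFileName", sysProps);
        if (fileName != null)
        {
            pInMsg.Context.Promote("ReceivedFileName", sysProps, fileName);
        }
        return pInMsg;
    }
}
```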
Now for the easy part:
Attachments are received in a specific location, designated by a path on the filesystem.
Create a receive location that listens to an alternate location, used only to control when files are actually swallowed by BizTalk.
In your orchestration, create a correlation type with the BTS.ReceivedFileName property and a correlation set based on this correlation type.
When you want to receive binary attachments, send a dummy message with the BTS.ReceivedFileName context property set to the filename of the binary attachment, but with the path matching the alternate location (the one used by the receive location). Initialize the correlation set on the send shape.
Use an expression shape to copy the binary file from its original location to the one used by the receive location.
Finally, use a receive shape bound to the receive port that contains the receive location whose custom receive pipeline will promote the BTS.ReceivedFileName property.
Notice that you actually need to send a message in order to initialize the correlation. It does not actually matter what message you send. What I'd do is send the message through a send pipeline that contains an 'empty' pipeline component; that is, a pipeline component that reads the message but returns null (so that the message vanishes into thin air before it reaches the adapter). A more elaborate solution would be to use a null adapter; that is, an adapter that reads the message but does not do anything with it.
These two solutions avoid having many files accumulate in a temporary location somewhere, just for the sake of initializing a correlation!
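For completeness, here is a sketch of the 'empty' send pipeline component following the description above (this mirrors the author's description rather than any shipped component, so treat it as a starting point):

```csharp
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class EmptyComponent // other pipeline component interfaces omitted
{
    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Drain the body stream so the messaging engine treats it as consumed.
        var body = pInMsg.BodyPart;
        if (body != null && body.Data != null)
        {
            var buffer = new byte[4096];
            while (body.Data.Read(buffer, 0, buffer.Length) > 0) { }
        }
        return null; // the message vanishes before the adapter sees it
    }
}
```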
We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing an ApplicationType parameter to every method - essentially an enum containing the name of the calling client application - which is then used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy, as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than to pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should always consider your layers as onion-like layers, and dependencies should always go inwards, never outwards.
So your GUI/app layer should depend on the business logic layer, the business logic layer should depend on the data access layer, and so on.
Unless you categorize the clients (web, win, wpf, cli), or generalize it with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of and dependent upon the outside layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Can't you create two different services, one for each application? The two services would share a lot of code, or call a single internal service with different parameterization depending on which outer service was called.
From a design perspective, this is no different from having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way because the credentials live on the thread (i.e. when you set the IPrincipal, that info rides on the thread; it is communicated along with each method call, but not as a parameter).
Maybe what you are looking for in terms of a more elegant design is an ApplicationIdentity attribute. You'd have to write a custom one; I don't know of one in the framework right now.
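A hypothetical sketch of what such a custom attribute could look like (the name and shape are invented; nothing like this ships in the framework):

```csharp
using System;

// Hypothetical sketch only: no such attribute exists in the .NET framework.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public sealed class ApplicationIdentityAttribute : Attribute
{
    public string ApplicationName { get; private set; }

    public ApplicationIdentityAttribute(string applicationName)
    {
        ApplicationName = applicationName;
    }
}
```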
This is a hard topic to discuss without a solid example.
You are right to feel that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases, knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request; something like "attended"/"secure" (agent client) vs "unattended"/"not secure" (web client). Then you exchange a session key (an HTTP cookie or an application-level session ID). Sessions obviously don't work if you need to be 100% stateless, especially if you want to scale out without session replication; if you have that requirement, send a structure in every request.
Think of requests like functions in your code. You wouldn't put a magic parameter that changes the behaviour of the function. You would create multiple functions that each behave differently. Whoever is using the function makes the decision which one to call.
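As an invented illustration of that analogy (these are not the poster's actual service operations):

```csharp
// Invented example, not the poster's actual service contract.
public enum ClientType { Web, Agent }
public class Report { }

public class ReportService
{
    // A magic parameter forks behaviour inside one operation...
    public Report GetReport(ClientType clientType)
    {
        return clientType == ClientType.Agent
            ? BuildAttendedReport()
            : BuildUnattendedReport();
    }

    // ...whereas intention-revealing operations let the caller decide.
    public Report GetAttendedReport()   { return BuildAttendedReport(); }
    public Report GetUnattendedReport() { return BuildUnattendedReport(); }

    private Report BuildAttendedReport()   { return new Report(); }
    private Report BuildUnattendedReport() { return new Report(); }
}
```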
So why is client type so wrong? Client type has no specific meaning on its own. It has many meanings and they may change over time. It's simply informational, which is why it is a handy thing to log. An operating environment, on the other hand, does have a specific meaning.
Here is a scenario to consider: what if a new client type is developed that is slightly different in a way that would break compatibility with the original request? Now you have two requests: two clients use Request A and one client uses Request B. If you pass in a client type to each request, the server is expected to work for every possible client type. That's much harder to test and maintain!