In my BizTalk Project, I need to have a Receive Pipeline that will disassemble four different flat files that each have a unique schema. That's to say, the pipeline must resolve the schema of the flat file sent through as 1 of 4 flat file schemas dynamically at runtime.
The best approach I have heard to do this is to just have 4 Flat File Disassemble shapes in the Disassemble stage of my pipeline. The logic behind this is that BizTalk will run through the disassemble shapes one by one until it matches the schema of the document to one of the schemas designated in the disassembler components - sort of like an if statement on the schema type. However, no matter which of the 4 documents I pass through, BizTalk seems to always want to go with the very first schema in line in the pipeline disassemble shapes.
So my question(s): Can someone explain in more detail exactly what happens when more than one flat file disassemble shape gets added to a pipeline? Is there a better alternative than to take this approach?
Exactly how the Flat File Disassembler probes messages is not well documented. However, it usually doesn't matter, because if it doesn't work, well, it just doesn't work in your case.
What you can do is wrap the Flat File Disassembler and implement your own, more robust, detection logic.
Here's an example: http://biztalkxin.blogspot.com/2012/11/biztalk-2010-create-dynamic-flat-file.html
Have you implemented the IProbeMessage interface? Its Probe method returns true or false, and that result determines whether pipeline execution moves on to the next pipeline component.
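A rough C# sketch of that idea follows. It is not a complete pipeline component (the actual disassembly, e.g. delegating to a wrapped FFDasmComp, and the remaining component interfaces are omitted), and the header literal is a placeholder for whatever uniquely identifies one of your four formats:

using System.IO;
using System.Text;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class ProbingFlatFileDasm : IProbeMessage
{
    private const string ExpectedHeader = "ORD01"; // hypothetical record tag

    public bool Probe(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Assumes the body stream is seekable; if it isn't, wrap it in a
        // ReadOnlySeekableStream first so the rewind below works.
        Stream body = pInMsg.BodyPart.GetOriginalDataStream();
        long start = body.Position;

        byte[] buffer = new byte[ExpectedHeader.Length];
        int read = body.Read(buffer, 0, buffer.Length);
        body.Position = start; // rewind so the disassembler sees the whole stream

        // Returning false tells the pipeline engine to try the next disassembler.
        return Encoding.ASCII.GetString(buffer, 0, read) == ExpectedHeader;
    }
}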
I am using BizTalk Deployment Framework (BTDF) for deploying my BizTalk solution. If I have made any changes to BizTalk bindings, I export them from BizTalk Administration Console and replace my PortBindingsMaster.xml bindings file (created by BTDF) with my exported bindings. I think many of you do the same. The problem is that when I do the export, BizTalk mixes up the order of XML nodes in bindings file, so when I am trying to merge with my source control, I'm getting over 9000 conflicts.
Is there any cool way to merge these BizTalk bindings?
Too much trouble trying to merge these. Get comfortable with the structure of a binding file and extract the parent node you need and copy it over to PortBindingsMaster.
The other problem with merging is that if you use the SettingsFileGenerator, you'll need to merge those settings too, and they can be scattered practically everywhere in PortBindingsMaster.
There's a cool way using the Notepad++ Pretty XML plugin: apply pretty XML formatting to both binding files before diffing them.
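If you'd rather script it than use an editor plugin, here is a minimal C# sketch of the same idea: canonicalize both files before diffing by sorting sibling elements and letting the serializer re-indent. This assumes element order is not semantically significant in your binding file, and the file names are placeholders:

using System.Linq;
using System.Xml.Linq;

class NormalizeBindings
{
    // Recursively sort child elements by element name, then by their Name
    // attribute, so two exports of the same bindings line up node-for-node.
    static void Sort(XElement element)
    {
        foreach (XElement child in element.Elements())
            Sort(child);

        var ordered = element.Elements()
            .OrderBy(e => e.Name.LocalName)
            .ThenBy(e => (string)e.Attribute("Name") ?? "")
            .ToList();
        element.ReplaceNodes(ordered); // note: drops non-element nodes such as comments
    }

    static void Main()
    {
        foreach (string path in new[] { "PortBindingsMaster.xml", "ExportedBindings.xml" })
        {
            XDocument doc = XDocument.Load(path);
            Sort(doc.Root);
            doc.Save(path); // Save re-indents, so both files get identical formatting
        }
    }
}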
I've got stuck with a problem. I have a BizTalk 2010 application which references a third-party schema DLL. Our architect told us not to reference it directly, as serializing this huge (around 9 MB) DLL will take more time and cause BizTalk to do more work.
Since this third-party DLL is a schema DLL, it will be deployed to the MgmtDb under one of the applications before any other app deployment. Our orchestration messages have message types which are referenced from this schema DLL.
What I want to know is where exactly the serialization of this external DLL takes place, given that the DLL has already been deployed and an orchestration instance can reference it for any request message that comes in.
Does serialization happen for each message that creates an orchestration instance?
Please share your thoughts.
Thanks.
While it's true that the referenced assembly will be added into MgmtDB, AFAIK it is only metadata about the assembly and the artifacts in it which is added, e.g.
use BizTalkMgmtDb
select * from dbo.bts_assembly
select * from dbo.bts_orchestration
select * from dbo.bt_DocumentSpec
etc.
Possibly he/she is referring to instances of messages created from schema classes in the assembly (which are stored in the MessageBox). But the size of those messages will be determined by the size of the data in them, not by the size of the assembly.
Since you seem to need the referenced message schemas, there isn't much option but to reference the assembly in your new project (unless, say, you have the source to the 3rd party assembly, in which case you could refactor it and split it into several smaller assemblies). The 3rd party assembly needs to be deployed on your BizTalk servers, signed, and GACed.
However, if this referenced schema assembly also contains other artifacts, such as custom classes used in orchestrations as variables, those classes will also need to be serializable as soon as the orchestration hits a dehydration point. (To avoid this you would need to scope the variables out before the dehydration and/or use an atomic scope to prevent BizTalk from dehydrating at all, but the latter is generally a bad idea as it limits scalability.)
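A minimal sketch of what that means in code, with illustrative names: any custom class used as an orchestration variable must be marked [Serializable] so its state can be persisted when the orchestration dehydrates:

using System;

[Serializable]
public class OrderContext
{
    public string OrderId;
    public decimal Total;

    // Members that cannot be serialized (connections, streams, caches, ...)
    // should be marked [NonSerialized] and re-created after rehydration.
    [NonSerialized]
    private object _cache;
}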
Your Architect has made an incorrect assumption about when BizTalk performs validation of a Document against its defined Schema.
Validating a large Document against a hefty Schema, such as an EDIFACT or OASIS, can take a lot of resources. BizTalk therefore will not validate an incoming Document against its relevant schema unless you explicitly ask it to do so in the Receive Pipeline. By default, most Pipeline components will have their 'ValidateDocument' property set to 'False'. BizTalk will therefore only perform Document recognition, based on the namespace and root node, and this is done while stream-reading the first couple of hundred bytes of the document stream.
So you can freely reference the third-party DLL; the only performance penalty will be at compile and deployment time. If, for some reason, you need to validate a Document against this Schema, you would need to have it in the Management DB regardless.
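For illustration only: document recognition essentially boils down to deriving the message type ("targetNamespace#rootNode") from the first element of the stream, roughly equivalent to this C# sketch:

using System.IO;
using System.Xml;

static class MessageTypeProbe
{
    public static string GetMessageType(Stream messageBody)
    {
        using (XmlReader reader = XmlReader.Create(messageBody))
        {
            reader.MoveToContent(); // reads only as far as the root element
            return reader.NamespaceURI + "#" + reader.LocalName;
        }
    }
}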
When starting a BizTalk project I generally follow the naming conventions found here, where you name your projects and assemblies something like:
MyCompany.MyProject.Orchestrations.dll
MyCompany.MyProject.Schemas.dll
MyCompany.MyProject.Pipelines.dll
MyCompany.MyProject.Transforms.dll
MyCompany.MyProject.PipelineComponents.dll
A couple of questions for other BizTalk folks:
1) I usually find myself having more than one project with schemas, or a need to separate schemas. Do you stick them in separate assemblies, and if yes, what convention do you then follow for naming the project/assembly? If no, do you stick them in a subfolder in one assembly?
2) I believe, though I could be wrong, that it's been sort of a BizTalk convention to name the project and assembly the same, like above. I've thought about getting away from naming the projects the same as the full assembly name, so I might have the project named Maps and its assembly named MyCompany.MyProject.Maps. Do others do this?
Starting with BTS 2009 we named our projects and assemblies according to the application they belong to plus an optional sub-application or concern scope:
MyCompany.Biz.MyFirstApp.dll
MyCompany.Biz.MyFirstApp.Util.dll
MyCompany.Biz.MyFirstApp.ConcernOne.dll
MyCompany.Biz.MySecondApp.dll
We took the path to keep orchestrations, schemas and maps together because multi-assembly dependencies can make deployment a real hassle.
Our main goal was to separate source and target systems to avoid direct references. We achieved this by introducing "core" components for all concerns we're dealing with:
BTS application MyFirstApp
MyCompany.Biz.MyFirstApp.OrderProcessing.dll
MyCompany.Biz.MyFirstApp.Util.dll
BTS application CORE
MyCompany.Biz.CORE.OrderProcessing.dll
BTS application MySecondApp
MyCompany.Biz.MySecondApp.OrderProcessing.dll
Both MyFirstApp and MySecondApp will reference schemas in CORE.OrderProcessing.
Update
MyCompany.Biz.MyFirstApp.OrderProcessing would contain the message schema for incoming order documents and a map for mapping those into the core order message schema (contained in MyCompany.Biz.CORE.OrderProcessing). If needed it could also contain an orchestration for receiving messages and (receive) pipeline components (when dealing with flat files for example).
MyCompany.Biz.MySecondApp.OrderProcessing would contain the message schema for outgoing documents and a map for mapping from the core order message schema (to outgoing).
In this basic layout CORE will merely be a container for your internal message schemas, but it will be the best location to add information to your order documents - for example an orchestration which awards a global discount for class A customers (Business Rules!). In short, it holds basically any step you'd otherwise do twice or more when sending or receiving messages, and that you do not want to touch when an incoming or outgoing message schema changes or a new application is added.
Here is a wonderful BizTalk Naming Conventions guide from Scott Colestock
I need to make a call to a web service written in .NET. The application making the call is written in ColdFusion. One of the parameters the web service expects is a DataSet object. I can't instantiate a .NET DataSet object in ColdFusion, how can I pass the web service something it will accept? I have no problem writing the SOAP request in raw XML, I just don't know what the XML for a DataSet object would look like.
All objects that .NET expects are serialized by Axis and are available to you. Unfortunately, ColdFusion does not make it easy to get to them.
To get to the stubs you must:
1) Access the WSDL in any way with ColdFusion.
2) Look in the CF app directory for the stubs. They are in a "stubs" directory, organized by WSDL, like:
c:\ColdFusion8\stubs\WS\WS-21028249\com\foo\bar\
3) Copy everything from "com" on down into a new directory that exists in the CF class path, or make one like:
c:\ColdFusion8\MyStubs\com\foo\bar\
4) If you created a new directory, add it to the class path and restart the CF services.
5) Use them like any other Java object with <cfobject> or CreateObject():
MyObj = CreateObject("java","com.foo.bar.MyObject");
Your DataSet object should be in there somewhere, in whatever Java format Axis decided it should be. Most likely you're going to need to do almost all of this in cfscript.
EDIT FOR QUESTIONS
The SOAP object will define the object structure, and Axis will create methods for manipulating it. Take a look at the Java object that Axis creates. Remember that you can use CFDUMP to look at the methods and properties.
Now I HAVE seen .NET objects that Axis gets confused by, like the dreaded non-generic collection that turns into an "ArrayOfAnyType". It's important for .NET developers to use Generics in their services so that Axis can define the arrays properly... if they don't, then it sucks and you may not be able to work with it in SOAP.
But have no fear, Obi-Wan... there is another way. You can always interact with .NET web services in an XML/RPC kind of style. It's not automatic, it's a lot of hand-parsing of XML, it sucks, but sometimes it's the only way to do it. You should be able to get some help from .NET by hitting up the .asmx file without the "?wsdl" on the end. If you do that, .NET will generate a bunch of documentation and examples of what the calls and the XML look like. In that case, you can just create the XML and pass it over the wire as specified, using cfhttp. Good luck!
P.S. I should also note that, as far as I know, there is no way to mix hand-rolled XML with the ColdFusion/Apache Axis objects, and there is also no way to model your own object for use with CF/Axis... you must use the stubs or nothing.
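If you go the hand-rolled XML route, one way to see what shape the service expects for the DataSet parameter is to serialize a sample DataSet from the .NET side and copy its structure. A minimal C# sketch with placeholder table and column names:

using System;
using System.Data;

class DumpDataSetXml
{
    static void Main()
    {
        DataSet ds = new DataSet("SampleDataSet");
        DataTable table = ds.Tables.Add("Orders");
        table.Columns.Add("OrderId", typeof(int));
        table.Columns.Add("Customer", typeof(string));
        table.Rows.Add(1, "Acme");

        // .NET web services exchange a DataSet as an inline XSD schema
        // followed by a diffgram; these two calls print both pieces.
        ds.WriteXmlSchema(Console.Out);
        ds.WriteXml(Console.Out, XmlWriteMode.DiffGram);
    }
}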
Could you use JSON?
http://json.org/
What method do you use to get a compile-time error when database schema changes occur in an ASP.NET project?
For example, if you have a GridView bound to a DataSource, I can only get runtime errors when a schema change occurs, not compile-time errors. IntelliSense works fine in the code-behind using DataSets, LINQ, etc., but I can't seem to get a compile-time error on an ASP.NET page when I change the schema.
Any advice?
Create a unit test that verifies the correctness of your data access layer, and make sure it covers all your DB-related code. Not everything can be caught at compile time...
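A minimal sketch of such a test, assuming NUnit and a hypothetical connection string and table; the point is just that the test run fails as soon as the live schema drifts from what the code expects:

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class SchemaTests
{
    [Test]
    public void OrdersTable_HasExpectedColumns()
    {
        using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        {
            conn.Open();
            // Fetch zero rows; we only care about the result set's shape.
            var cmd = new SqlCommand("SELECT OrderId, Customer FROM Orders WHERE 1 = 0", conn);
            using (var reader = cmd.ExecuteReader())
            {
                Assert.AreEqual("OrderId", reader.GetName(0));
                Assert.AreEqual("Customer", reader.GetName(1));
            }
        }
    }
}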
One way I can think of easily achieving this behavior would be to databind to a dynamic DAL. There are some tools that can help do this DAL generation, I'd recommend taking a look at SubSonic.
Once you have something like SubSonic in place you can bind to the resulting business objects. These business objects will automatically change in the case of a schema change in the database and this will break your binding code which will result in a compile time error.
Update
Assaf's recommendation to use unit tests is also a good idea. It doesn't solve your stated problem, but it is definitely something that should be in place and is a great tool for flagging these types of problems.
We use a modest system (XML to C++) to create schemas from an independent description. This system also generates names for the tables and columns that we use inside the code. When there is a change in the schema, the generated names change; since the names we originally used are no longer there, the compiler will flag an error.
You could probably configure a lot of the DAO generation tools to do something similar.
One solution would be to version your database and map an application build to a specific version (maybe in a properties file). In the entry point of your app, you can compare the expected version to the actual version and handle the error accordingly.
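A minimal sketch of that check, assuming a SchemaVersion table and an appSettings key for the expected version (both names are hypothetical):

using System;
using System.Configuration;
using System.Data.SqlClient;

static class SchemaVersionGuard
{
    public static void AssertSchemaVersion(string connectionString)
    {
        int expected = int.Parse(ConfigurationManager.AppSettings["ExpectedSchemaVersion"]);
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var cmd = new SqlCommand("SELECT MAX(Version) FROM SchemaVersion", conn);
            int actual = (int)cmd.ExecuteScalar();
            if (actual != expected)
                throw new ApplicationException(string.Format(
                    "Database schema is v{0} but this build expects v{1}.", actual, expected));
        }
    }
}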
I'm not sure what the ASP.NET equivalent is of Migrations in Rails or dbdeploy in Java for versioning your database. But any DB versioning tool that makes schema changes incremental and versioned, and tracks the current version in a Version table, will suit the purpose.
But if you want a compile-time error while building your app, you might as well upgrade your schema to the latest version as part of your build process, avoiding the possibility of a schema mismatch in the first place.