I have a legacy BizTalk app that has about 10 orchestrations and 20 maps built on external web service schemas. Now this old web service will be removed and replaced with a new web service with similar (almost the same) schemas.
What would be the best strategy to replace the schemas from the old web service in all orchestrations and maps? I could go through every orchestration and replace all message types, ports, and transformations manually.
Is there a better way?
Please advise.
ACK: I know that a more convenient way of building BizTalk applications is to create an internal type (XSD) and design all orchestrations and maps around that internal type, then create one map to transform from the external (web service) type to the internal one, so if the web service changes, only that one map has to change.
Unfortunately, this is not the way the legacy app was built.
UPD:
The problem is that the old web service types are used in a lot of orchestrations and maps. If I pull the old web service out and import the new one, I will get errors in all of them, so I would have to manually change all of them to use the new types. I'm trying to find a way to cheat and avoid changing them.
"new web service with similar (almost the same) schemas."
If that is indeed the case, you probably don't have to replace much, if anything. Just update the existing BizTalk app with the 'minor' changes needed to accommodate the new service.
However, if the current schema is used in multiple places, you can just use a Map on the Receive Port to transform the new message to the old one. It's perfectly fine even if the Root Element and Namespace are the same; all you would need to do is set the old document type explicitly in the XmlDisassembler, because Maps always resolve on the .NET type only.
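To make that concrete: the XMLReceive pipeline's XmlDisassembler component exposes a DocumentSpecName property that you can set per Receive Location in the BizTalk Administration Console. A sketch, with a made-up schema type and assembly name:

```
Receive Location -> Pipeline: XMLReceive -> XmlDisassembler (per-instance configuration):

DocumentSpecName = MyApp.Schemas.OldServiceRequest, MyApp.Schemas,
                   Version=1.0.0.0, Culture=neutral, PublicKeyToken=abcdef1234567890
```

With this set, incoming messages that share the old Root Element and Namespace are resolved against the old schema's .NET type, so the existing orchestrations and port-level Maps keep working unchanged.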
Related
I have inherited supporting and building BizTalk applications as part of my development role.
I'm a general C# developer so I was pleased to see I can create Classes and call the Methods from the Expression shape of an Orchestration.
This means I can do all the data manipulation using code I am familiar with, and faster, rather than learning the BizTalk ways.
At this point I am not concerned with if it's a good idea or not.
Are there any purely technical reasons I should not do this?
You would have to make sure that whatever external methods you are calling are multi-threading capable and can handle high throughput.
If you don't achieve the above, you will either get some very strange issues (caused by cross-thread contamination) or you will create a bottleneck in BizTalk which will reduce message throughput.
You also need to make sure that errors are handled, retried, and propagated back correctly to the calling Orchestration on failure. I came across one solution where the developer had, for some reason, decided to call a web service from an external class. Every so often this web service would throw an error, but the class would pass the error message back to the Orchestration as if it were a valid response message. This caused a failure later on in the Orchestration when it tried to use the message and it did not match the expected format. When I was allocated budget, I replaced this class with a properly configured send port, which also automatically retried the message when it encountered the web service error, after which it processed successfully.
Technically you are adding deployment and maintenance complexity with the external assemblies; for example, any change in some contract will require changes in both the assembly and the orchestration.
And you're losing all the advantages of the BizTalk mapping engine for data transformation, which is in general an easy part to learn.
I'll say this: it's very important for future maintainability to develop BizTalk apps in the "BizTalk Way".
For example, if you did message manipulation in an external class that should have been done in a Map or with XPath, I would fail that in Code Review and you'd have to refactor.
The reason is that whoever takes this over from you will expect a BizTalk app. I've seen situations such as you describe, and they make it harder to upgrade, enhance, and support new business requirements, because the developer now has to accommodate both the BizTalk and the external processes.
Technically, using .NET classes does not break anything in BizTalk. Lots of BizTalk components are based on the .NET Framework, such as adapters, pipeline components, etc. In my view, use the best of both BizTalk and .NET depending on the scenario. For example, if you need to map an inbound XML to an outbound XML, use BizTalk maps, as they are a lot easier and quicker to implement; in this scenario, using a .NET class is more tedious than using a map. I don't think there is a big learning curve in using BizTalk features such as maps, orchestrations, and pipeline components.
I'm working on a project structured in an N-Tier / 3-Tier architectural style. The layers (client-server) communicate with each other via HTTP and WCF. I want to use AppInsights to track everything in the project. However, I'm confused about where in the project to create or add the AppInsights packages. I have created two resource instances under one resource group on Azure. Then I configured the instrumentation key and the other settings for the projects in the solution. Although I overrode the handler attribute with mine (as explained in the AppInsights docs), I couldn't see all method calls and exception traces from client to server across the layers. Is it not possible to track exceptions in this type of project, or am I mistaken?
UPDATE for additional info to define the problem clearly: When I use one resource and one instrumentation key, AppInsights covers the whole project well and shows the relationships between my solution and its dependencies. But when a request fails due to an exception on the server, I can't reach the whole exception trace, just the last method where the request was made to the server. Because of that, I use two resource instances, one for the server and one for the client (not a real client; it's more like a middle layer).
If you are using multiple apps/keys across multiple layers, you might need to introduce custom code that sets and passes through something like a correlationId in the HTTP headers, plus code that reads it and uses it as Application Insights' "operation id". If you spread the work across different applications/ikeys, searching across the apps will become more complicated.
Instead, you might want to use just one ikey across all the layers, and use different ikeys for different environments, like dev/staging/prod.
There are some examples of reading/using the operation id, like this blog post, where someone does something similar with OWIN:
http://blog.marcinbudny.com/2016/04/application-insights-for-owin-based.html#.WBkL24WcFaQ
For exception tracking, if the exceptions are happening in one of the other ASP.NET layers, you might need to enable some extra settings to collect exceptions or more information about them:
http://apmtips.com/blog/2016/06/21/application-insights-for-mvc-and-mvc-web-api/
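A minimal sketch of the "read the header and use it as the operation id" idea from the answer above, for a classic ASP.NET layer. The header name "X-Correlation-Id" is made up here; use whatever header your calling layer actually sends:

```csharp
using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Telemetry initializers run for every telemetry item before it is sent,
// so they are the standard place to stamp a shared operation id.
public class CorrelationTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var httpContext = HttpContext.Current;
        if (httpContext == null) return;

        // Hypothetical header name; the upstream layer must set it on each call.
        var correlationId = httpContext.Request.Headers["X-Correlation-Id"];
        if (!string.IsNullOrEmpty(correlationId))
        {
            // Items sharing an Operation.Id are grouped together in the portal,
            // which is what lines up client and server traces for one request.
            telemetry.Context.Operation.Id = correlationId;
        }
    }
}
```

The initializer is registered either in ApplicationInsights.config or in code via TelemetryConfiguration.Active.TelemetryInitializers.Add(new CorrelationTelemetryInitializer()).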
I am new to BizTalk development. I have an existing SOAP web service with around 50 different operations. I want to connect this service to another application, but use BizTalk Server as an intermediary in this communication, so the service and the application do not know each other directly, BizTalk can log all messages going through, etc.
What is the best approach to make this work in BizTalk Server 2013?
So far I have tried to create a new BizTalk Application and import the SOAP web service there. However, it then seems that I need to create around 50 different orchestrations, each one just mapping the incoming message in BizTalk to the external service for one service operation. This seems very cumbersome. Also, publishing all those orchestrations becomes painful, as BizTalk cannot merge them into a single endpoint again. Ideally I would like to publish a single endpoint for BizTalk Server on IIS that uses the exact same WSDL as the target SOAP service, ideally without having to create any orchestrations at all. Is this possible?
Thanks!
So, yes, but... what you want is absolutely doable, although there are lots of ways to do it. Once you learn how things in BizTalk actually work, it becomes obvious how to do this.
For example, a single Receive Location (IIS endpoint) can receive any number of request types, provided they use the same protocol/format: SOAP or REST/JSON, for example. The only difference in the IIS site is the metadata, so just don't publish that. Message differentiation is done in the Pipelines, just like for any other BizTalk Message.
You don't really need Orchestrations for Maps; you can apply them at the Port level, provided there is a 1-1 relationship between the SOAP call and the Map.
Please try a few things; it'll become clear. You can always come back with any specific issues.
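To illustrate the message-differentiation piece: once the XmlDisassembler promotes BTS.MessageType, each operation can be routed by a simple subscription filter, with no orchestration per operation. A sketch with a made-up namespace:

```
Send port filter (BizTalk Administration Console):

BTS.MessageType == http://tempuri.org/CustomerService#GetCustomerRequest
```

Each of the ~50 operations resolves to a distinct message type ("namespace#rootelement"), so one receive location plus one filtered send port (or a small number of them) can fan the calls out without 50 orchestrations.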
I'm working on a web service that uses the ASP.NET security model (i.e. with AspNetCompatibilityRequirements set to Allowed). Like many others, I got an error saying that Anonymous access is required because the mexHttpBinding requires it, and that the only way to get around it is to remove the mex endpoint from each service, as described here:
WCF - Windows authentication - Security settings require Anonymous
I thought that by removing the mex endpoint I would no longer be able to generate WSDL or add a reference to the service from Visual Studio, but to my surprise everything still works. I quickly googled "mex binding", but most websites just say it's for "Metadata Exchange" without going into much detail on what it actually does.
Can anyone tell me what's the side effect of removing the mex binding?
If your WCF service does not expose service metadata, you cannot add a service reference to it from within Visual Studio (Add Service Reference), nor will any other client be able to interrogate your service about its methods and the data it needs.
Removing Metadata Exchange (mex) basically renders the service "invisible": a potential caller must find out some other way (e.g. by being supplied with a WSDL file, or by getting a class library assembly containing a client proxy) what the service can do, and how.
This might be okay for a high-risk environment, but most of the time, being able to interrogate the service and have it describe itself via metadata is something you want to have enabled. That's really one of the main advantages of a SOAP-based service: by virtue of metadata, it can describe itself, its operations, and all the data structures needed. That feature makes it really easy to call the service; you just point to the mex endpoint, and you can find out all you need to know about it.
Without metadata exchange, you won't be able to use svcutil.exe to automatically generate the proxy classes.
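For reference, a sketch of the relevant web.config pieces (service and contract names here are placeholders). It also explains the asker's surprise: WSDL over HTTP GET (?wsdl) is controlled by the serviceMetadata behavior, independently of the mex endpoint, so removing only the mex endpoint can leave metadata working:

```xml
<system.serviceModel>
  <services>
    <service name="MyNamespace.MyService" behaviorConfiguration="MyBehavior">
      <!-- Regular application endpoint: stays. -->
      <endpoint address="" binding="basicHttpBinding"
                contract="MyNamespace.IMyService" />
      <!-- Metadata exchange endpoint: this is the one being removed. -->
      <endpoint address="mex" binding="mexHttpBinding"
                contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MyBehavior">
        <!-- Serves WSDL via HTTP GET (?wsdl) even without a mex endpoint. -->
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

To hide metadata completely, you would remove the mex endpoint and set httpGetEnabled="false" (or drop the serviceMetadata behavior).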
I'm currently working with web services that return objects such as a list of files, e.g. a File array.
I wanted to know whether it's best practice to bind this type of object directly to my front-end code (for example a repeater/ListView), or whether to first parse it into my own list of "file" classes, e.g. CustomFile[].
If the web service changes, it will break my front-end code; however, if I create my own CustomFile class, then I would only need to change my code in one place to fix the issue. But it seems like a lot of extra work to recreate the same classes from a web service, so I wanted to know what the best practice is for this type of work.
There is a delicate balancing act in properly encapsulating implementation details. Too little encapsulation is a maintenance nightmare as small changes in any area break the application. Too many layers is a different kind of maintenance headache altogether.
In this particular case I would create a small layer in your application to encapsulate the web service calls. This will ease maintenance of both the application and the service, as they will be loosely coupled.
It sounds like you have already answered your own question. Best practice is to create your own custom class, for the reasons you point out, but it is significant extra work.
If the web service isn't likely to change, then just use the existing classes; but if you need to cater for change, create your own.
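A minimal sketch of that translation layer. ServiceFile stands in for the proxy type generated from the service reference, and CustomFile is the hypothetical class the front end binds to:

```csharp
using System.Collections.Generic;
using System.Linq;

// Stand-in for the generated web service proxy type; in the real project
// this class comes from the service reference, not your own code.
public class ServiceFile
{
    public string Name { get; set; }
    public long SizeBytes { get; set; }
}

// Your own type: the only thing the front end ever binds to.
public class CustomFile
{
    public string Name { get; set; }
    public long SizeBytes { get; set; }

    // The single place to fix if the service contract changes.
    public static CustomFile FromService(ServiceFile f)
    {
        return new CustomFile { Name = f.Name, SizeBytes = f.SizeBytes };
    }

    public static List<CustomFile> FromService(IEnumerable<ServiceFile> files)
    {
        return files.Select(FromService).ToList();
    }
}
```

The front end then binds to CustomFile.FromService(serviceResult) instead of the raw proxy array, so a contract change only touches the mapping methods.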
Returning a class is fine as long as your client knows how to deserialize it. If it's truly a web service, where you don't have control over both ends of the conversation, it's more common to start with schemas for the XML request and response streams. That decouples the client from the web service a bit more and makes any client that can send XML over HTTP and consume an XML response fair game.