I have seen lots of documentation on how agile ASP.NET request handling is. I want to know whether the same is true of WCF request handling. Can we rely on the thread that starts handling a WCF request being the one that finishes it?
I am maintaining a WCF application where ThreadStatic variables are used in lots of places. The code is working, but is it reliable? Is it worth changing, or should I keep it as it is?
When creating a WCF service you can set the threading and instancing behaviour by decorating the service implementation class with a ServiceBehavior attribute:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
class SingleCachingHttpFetcher : IHttpFetcher
The above code snippet is from http://msdn.microsoft.com/en-us/library/system.servicemodel.servicebehaviorattribute.concurrencymode.aspx
EDIT
I did a bit more research and found this article: http://blogs.microsoft.co.il/blogs/applisec/archive/2009/11/23/wcf-thread-affinity-and-synchronization.aspx. It basically says that no, you cannot be sure that the same thread that starts the request will be the one finishing it.
EDIT 2
This question has been discussed on StackOverflow before. It links to How to make a WCF service STA (single-threaded), which describes how to create an OperationBehavior that forces a single-threaded apartment. The example deals with calling GUI components, but it should work for other single-threaded requirements as well.
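The linked answer boils down to wrapping the dispatch invoker. A minimal sketch of that idea follows; the class names are mine, but IOperationBehavior and IOperationInvoker are the real WCF extensibility points.

using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.Threading;

[AttributeUsage(AttributeTargets.Method)]
public class STAOperationAttribute : Attribute, IOperationBehavior
{
    public void ApplyDispatchBehavior(OperationDescription description, DispatchOperation dispatch)
    {
        // Swap the default invoker for one that marshals onto an STA thread.
        dispatch.Invoker = new STAOperationInvoker(dispatch.Invoker);
    }

    public void AddBindingParameters(OperationDescription description, BindingParameterCollection parameters) { }
    public void ApplyClientBehavior(OperationDescription description, ClientOperation operation) { }
    public void Validate(OperationDescription description) { }
}

public class STAOperationInvoker : IOperationInvoker
{
    private readonly IOperationInvoker _inner;

    public STAOperationInvoker(IOperationInvoker inner)
    {
        _inner = inner;
    }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        object result = null;
        object[] outs = null;
        Exception error = null;

        // Run the real operation on a dedicated STA thread and block until done.
        var thread = new Thread(() =>
        {
            try { result = _inner.Invoke(instance, inputs, out outs); }
            catch (Exception ex) { error = ex; }
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
        thread.Join();

        if (error != null) throw error;
        outputs = outs;
        return result;
    }

    public object[] AllocateInputs() { return _inner.AllocateInputs(); }
    public bool IsSynchronous { get { return true; } }

    public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
    { throw new NotSupportedException("This sketch supports synchronous dispatch only."); }
    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
    { throw new NotSupportedException(); }
}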
I have inherited supporting and making BizTalk applications as part of my development role.
I'm a general C# developer, so I was pleased to see that I can create classes and call their methods from the Expression shape of an orchestration.
This means I can do all the data manipulation using code I am familiar with and faster at, rather than learning the BizTalk ways.
At this point I am not concerned with if it's a good idea or not.
Are there any purely technical reasons I should not do this?
You would have to make sure that whatever external methods you are calling are multi-threading capable and can handle high throughput.
If you don't achieve the above, you will either get some very strange issues (caused by cross-thread contamination) or you will create a bottleneck in BizTalk which reduces message throughput. A hedged sketch of the kind of helper that is safe to call from an Expression shape follows.
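All names in this sketch are mine: a pure, stateless static method with no shared state, so concurrent orchestration instances cannot contaminate each other. Static methods also sidestep the requirement that objects held in orchestration variables be serializable for dehydration.

using System;

namespace MyCompany.BizTalk.Helpers
{
    public static class ReferenceUtilities
    {
        // Pure function: the result depends only on the input, which keeps it
        // safe under high-throughput, multi-threaded orchestration hosts.
        public static string Normalize(string raw)
        {
            if (string.IsNullOrEmpty(raw))
                throw new ArgumentException("Reference must not be empty.", "raw");
            return raw.Trim().ToUpperInvariant();
        }
    }
}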
You also need to make sure that errors are handled, retried, and propagated back correctly to the calling orchestration on failure. I came across one solution where the developer, for some reason, had decided to call a web service using an external class. Every so often this web service would throw an error, but the class would just pass the error message back to the orchestration as if it were a valid response message. This would cause a failure later on in the orchestration when it tried to use the message and it did not match the expected message. When I was allocated budget, I replaced this class with a properly configured send port, which also automatically retried the message when it encountered the web service error, after which it processed successfully.
Technically, you are adding deployment and maintenance complexity with the external assemblies; for example, any change in a contract will require changes in both the assembly and the orchestration.
And you're losing all the advantages of the BizTalk mapping engine for data transformation, which is in general an easy part to learn.
I'll say this: it's very important for future maintainability to develop BizTalk apps the "BizTalk way".
For example, if you did message manipulation in an external class that should have been done in a Map or with XPath, I would fail that in Code Review and you'd have to refactor.
The reason is that whoever takes this over from you should expect a BizTalk app. I've seen situations such as you describe, and they do make it harder to upgrade, enhance, and support new business requirements, because now the developer has to accommodate both the BizTalk and the external processes.
Technically, using .NET classes does not break anything in BizTalk. Lots of BizTalk components are based on the .NET Framework, such as adapters, pipeline components, etc. In my view, use the best of both BizTalk and .NET depending on the scenario: e.g. if you need to map an inbound XML to an outbound XML, use BizTalk maps, as they are a lot easier and quicker to implement; in that scenario using a .NET class is more tedious than using a map. I don't think there is a big learning curve in using BizTalk features such as maps, orchestrations, and pipeline components.
ASP.NET is known to exhibit what is called "thread agility". In short, it means that multiple threads may be employed to fulfill a single request, although not more than one thread at a time. This is an optimization that means a thread waiting for asynchronous I/O may be returned to the pool and used to service other requests.
However, ASP.NET does not migrate all thread-related data when moving a request. Microsoft either forgot to do so, or thought that using thread-local storage (made easy by the ThreadStatic attribute) was something only the people coding ASP.NET themselves should do.
Based on quick googling, it seems to me that the only way to avoid the issue is to rely on HttpContext instead. The context is indeed migrated if ASP.NET decides to switch threads mid-request, so this overcomes the problem. But it creates a brand new headache instead: It ties your application logic to HttpContext, and therefore to a web context. That's not acceptable in all situations (in fact, I'd say it's unacceptable in most). Besides, since HttpContext is sealed and has internal constructors, you cannot mock or stub it, and therefore your logic also becomes untestable.
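For what it's worth, the HttpContext-based approach usually boils down to something like this sketch (the key name is illustrative), with exactly the coupling and testability drawbacks described above:

using System.Web;

public static class RequestScoped
{
    private const string Key = "RequestScoped.Current"; // illustrative key

    public static object Current
    {
        get
        {
            var context = HttpContext.Current;
            return context == null ? null : context.Items[Key];
        }
        set
        {
            // Items is per-request and migrates with the request when
            // ASP.NET switches threads, unlike [ThreadStatic] fields.
            HttpContext.Current.Items[Key] = value;
        }
    }
}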
According to this (old) blog post, CallContext does NOT work, which is pretty infuriating given that a call context is conceptually precisely a logical thread!
Is there a simple way to reliably implement "per-LOGICAL-thread" isolation that will work in asp.net contexts as well as other contexts?
If not, does anyone know of a lightweight third-party framework that solves the problem? Does StructureMap behave correctly when ASP.NET migrates threads?
I would like a general answer, but in case anyone wonders, the specific use case I'm looking at is the use of Entity Framework in a SharePoint context. We're unfortunately stuck with SP 2010 and EF 3.5 for a while. EF basically requires that data be saved using the same context it was originally read from - or else you have to keep track of changes yourself. I would like to introduce a "current model" concept. The first time the model is called upon while processing an HTTP request it should be instantiated, and that same model instance should then be used for the duration of the request. But the code relying on "Model.Current" should also work if executed in the context of a timer job. I'm fine with the timer job code explicitly disposing of the model when done with it (a task I'd like to give to a handler for HttpApplication.EndRequest in the SharePoint web context).
There may be reasons not to do this, and that's interesting too, but I would really appreciate learning of a way to achieve "logical thread isolation" in an ASP.NET context, as it'd be remarkably useful.
There is a nice post related to the problem: Implicit Async Context ("AsyncLocal").
If I understand it correctly, the logical call context, i.e. CallContext.LogicalGetData and CallContext.LogicalSetData, makes it possible to flow immutable data across threads correctly, provided you are in the world past .NET 4.5. The immutability limitation is a nuisance, but it's still the way to go.
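A minimal sketch of that approach, assuming .NET 4.5+ and an immutable value (the class and key names are illustrative):

using System.Runtime.Remoting.Messaging;

public static class AmbientModel
{
    private const string Key = "AmbientModel.Current";

    // Values set here flow to whichever thread continues the logical
    // operation, unlike [ThreadStatic] storage.
    public static void Set(object model)
    {
        CallContext.LogicalSetData(Key, model);
    }

    public static object Get()
    {
        return CallContext.LogicalGetData(Key);
    }
}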
I have had a read of what's new in .NET 4.6, and one of the things is ASP.NET 5, which I am quite excited about.
One of the new things is the new modular HTTP request pipeline; however, there is no more info on how exactly it is going to change.
The only reference in the article is:
ASP.NET 5 introduces a new HTTP request pipeline that is lean and fast. This pipeline is modular so you can add only the components that you need. By reducing the overhead in the pipeline, your app will experience better throughput. The new pipeline also supports OWIN.
What are the major differences between the ASP.NET 4.5 and ASP.NET 5 HTTP pipelines? How will modularity be controlled?
The biggest difference in my opinion is the modularity of the new request pipeline. In the past, the application lifecycle followed a relatively strict path that you could hook into via classes implementing IHttpModule. This would allow you to affect the request, but only at certain points along the way by subscribing to the different events that occur (e.g. BeginRequest, AuthenticateRequest, etc.).
The full descriptions of these can be found on MSDN: IIS 5 & 6 or IIS 7, and a walkthrough of creating such a module can be found here.
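For contrast, a classic-pipeline module can only react at those fixed stages; a minimal sketch:

using System.Web;

public class LoggingModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // Hook one of the predefined lifecycle events; a module cannot
        // change the shape of the pipeline, only observe or tweak requests.
        application.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            System.Diagnostics.Debug.WriteLine("Begin: " + app.Request.RawUrl);
        };
    }

    public void Dispose() { }
}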
In the new ASP.NET 5 world, the request pipeline is decoupled from System.Web and IIS. Instead of a pre-defined path, it uses the concept of middleware. If you are familiar with OWIN, the idea is nearly identical: middleware components are registered, and the request passes through them in the order they are registered.
Each middleware component is provided a RequestDelegate (the next middleware component in the pipeline) and the current HttpContext for each request. On each request, the component is invoked and then has the opportunity to pass the request along to the next one in the chain, if applicable. For example, an authentication component might opt not to pass the request along to the next component if authentication fails. Using this system, you can handle a request any way you choose, and the pipeline can be as lightweight or as feature-rich as you need it to be.
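A hedged sketch of what such a component looks like (beta-era ASP.NET 5 naming, which shifted between releases; the header name is illustrative):

using System.Threading.Tasks;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;

public class ApiKeyMiddleware
{
    private readonly RequestDelegate _next;

    public ApiKeyMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // An auth-style component can short-circuit the pipeline by
        // simply not calling the next component in the chain.
        if (!context.Request.Headers.ContainsKey("X-Api-Key"))
        {
            context.Response.StatusCode = 401;
            return;
        }
        await _next(context);
    }
}

It would be registered in Startup.Configure (e.g. app.UseMiddleware<ApiKeyMiddleware>()), and the order of registration is the order the request flows through the pipeline.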
This example is a little bit dated now (e.g. IBuilder has been renamed to IApplicationBuilder), but it is still a great overview of how building and registering these components looks.
I've just read this interesting article regarding simultaneously calling multiple methods on a WCF service from Silverlight:
http://weblogs.asp.net/olakarlsson/archive/2010/05/20/simultaneously-calling-multiple-methods-on-a-wcf-service-from-silverlight.aspx
The article states: "It turns out that the issue is founded in a mix of Silverlight, Asp.Net and WCF, basically if you’re doing multiple calls to a single WCF web-service and you have Asp.Net session state enabled, the calls will be executed sequentially by the service, hence any long running calls will block subsequent ones."
I am assuming that the blocking is only an issue if you are making multiple calls to the same service, and that two simultaneous calls to two different methods on two different services should not result in one blocking the other?
The suggested solution to the problem in SL3 involves using the following syntax in the Application_Startup method:
WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
The session state will then have to be maintained on WCF calls by setting up a cookie container and sharing it across all of your proxies (see http://forums.silverlight.net/forums/p/174322/393032.aspx).
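In Silverlight 4 the cookie sharing looks roughly like this sketch; MyServiceClient is an illustrative generated proxy, and the binding property name is from the Silverlight 4 API as I recall it:

// One container shared by the whole application so the ASP.Net session
// cookie follows every call made over the client HTTP stack.
var cookies = new CookieContainer();

var binding = new BasicHttpBinding { EnableHttpCookieContainer = true };
var address = new EndpointAddress("http://example.com/MyService.svc");

var proxy = new MyServiceClient(binding, address);
proxy.CookieContainer = cookies; // assign the same container to every proxy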
Is this still the recommended solution in Silverlight 4? Has anyone used an alternative approach?
In .NET 4, you can do this in Application_BeginRequest:
// Opt WCF service requests out of session state so they are not serialized
if (Context.Request.Path.EndsWith("xxx.svc"))
    Context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
If you are making a call into an ASP.Net application, and you are using session cookies, then all the calls into the application are serialized (apart from ones where the page explicitly opts out of session state).
Normally this isn't a big issue, because a client browser typically hits an ASP.Net page plus a bunch of resources (images, js, css, etc.), and the latter aren't mapped to ASP.Net, so IIS serves them up natively. But if you try to hit two ASP.Net pages at the same time (e.g. in a frameset) you will see them load one after the other.
Now I don't know that this happens with WCF, but based on what you say, if you see that behaviour for one service I would expect to see that for all of them, because the session is per-user, not per-service.
In ASP.Net you can "opt out" of session state on a page-by-page basis. If that's possible for a hosted WCF service, and viable for your scenario (to make the services stateless), that would alleviate the issue. Or move one or more services to a different ASP.Net application (different session).
Bear in mind that you can see other issues here to do with the instancing and reentrancy models of the service. Your problem as described above is a per-user concurrency issue, but there are others. For example, if you set the service up as a singleton (InstanceContextMode.Single) and non-reentrant (ConcurrencyMode.Single) then only one request will ever be processed at a time across all users.
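For reference, that worst-case configuration looks like this (the service names are illustrative; the attributes are the real WCF ones):

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class ReportService : IReportService
{
    // one instance, one request at a time, across all users
}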
Update: Doing some doco reading:
WCF services aren't enrolled into ASP.Net sessions unless you ask for it (using <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> in web.config)
WCF services can opt in on a per-service basis, using the [AspNetCompatibilityRequirements] attribute on the service implementation
There doesn't seem to be any way of opting in to ASP.Net compatibility without also opting into session state.
There's a good blog post about this on Wenlong Dong's site
So from what I can see, you should be able to use AspNetCompatibilityRequirementsMode.NotAllowed to opt individual services out of ASP.Net integration completely. Alternatively, leave it off by default and only opt in the ones that need access to the ASP.Net session (bearing in mind that unless you really need to share the same session with ASP.Net, just using WCF's session services is probably a better bet).
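A sketch of the per-service opt-out (the attribute and enum are the real WCF ones; the service name is illustrative):

[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.NotAllowed)]
public class StatelessService : IStatelessService
{
    // runs outside ASP.Net compatibility: no HttpContext, no session,
    // and calls are not serialized on the ASP.Net session lock
}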
I'm currently working with web services that return objects such as a list of files, e.g. a File array.
I wanted to know whether it's best practice to bind this type of object directly to my front-end code (for example a repeater/listview) or whether to first parse it into my own list of "file" classes, e.g. customFiles[].
If the web service changes, it will break my front-end code; however, if I create my own CustomFile class, then I would only need to change my code in one place to fix the issue. But it just seems like a lot of extra work to recreate the same classes from a web service, so I wanted to know what the best practice is for this type of work.
There is a delicate balancing act in properly encapsulating implementation details. Too little encapsulation is a maintenance nightmare as small changes in any area break the application. Too many layers is a different kind of maintenance headache altogether.
In this particular case I would create a small layer in your application to encapsulate the web service calls. This will ease your maintenance in both the application and the service as they will be loosely coupled.
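A hedged sketch of such a layer (all names illustrative): the generated proxy and its File type stay behind a gateway that returns an application-owned class.

using System.Collections.Generic;
using System.Linq;

public class CustomFile
{
    public string Name { get; set; }
    public long Size { get; set; }
}

public class FileServiceGateway
{
    public IList<CustomFile> GetFiles()
    {
        // FileServiceClient stands in for the generated web service proxy
        using (var client = new FileServiceClient())
        {
            return client.GetFiles()
                         .Select(f => new CustomFile { Name = f.Name, Size = f.Size })
                         .ToList();
        }
    }
}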
It sounds like you have already answered your own question. Best practice is to create your own custom class for the reasons you point out, but it is significant extra work.
If the webservice isn't likely to change then just use the existing classes, but if you need to cater for change then create your own.
Returning a class is fine as long as your client knows how to deserialize it. If it's truly a web service, where you don't have control over both ends of the conversation, it's more common to start with schemas for the XML request and response streams. That decouples the client from the web service a bit more and makes any client that can send XML via HTTP and consume an XML response fair game.