I'm trying to fix a problem in a classic ASP application, but I'm inexperienced with it and haven't been able to find much information.
The app instantiates a COM object for data retrieval that is not thread-safe, so the following instructions were added.
Set comObject = CreateObject("comServer.comObject")
returnValue = comObject.DoWork(.......)
...
Set comObject = Nothing
However, when two HTTP requests are processed at the same time, the later one seems to overwrite the first, giving the first requester an error. It looks as if the comObject variable is shared between the requests.
How can I instantiate the object so that every separate request in IIS gets its own instance of the comObject?
Without knowing what the object does or how it does it, it's impossible to give specific advice. A general description will have to do:
The object is broken/buggy. It is the object's responsibility to handle the problem.
A COM object is supposed to handle all threading issues internally, or defer to COM STA apartments if it cannot or doesn't want to (for those aspects an STA can handle). This goes deep into the design of the object.
Regardless of the COM apartment choice, a DoWork(...) method whose semantics prevent separate COM objects on separate threads from handling simultaneous calls is a seriously problematic design at best. A proper design would either include mechanisms to handle the conflict explicitly, or hide the conflict from the calling code and deal with it internally.
Depending on the details of what DoWork() does, there might be ways to fix the object in such a way that the calls can succeed in parallel, or block each other so the calls are effectively serialized, or to cause the second call to throw a "You already called me" error. Again, which approach is more appropriate depends heavily on what the method does.
If you can't modify this broken component, your best option is to write a COM wrapper that serializes access to the real object; a rough sketch follows below.
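For example, a managed wrapper registered for COM interop could take a process-wide lock around the call. This is only a minimal sketch under assumptions: the class name SerializedComObject is made up, the ProgID "comServer.comObject" comes from the question, and the real DoWork argument list has to be filled in.

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
[ClassInterface(ClassInterfaceType.AutoDual)]
public class SerializedComObject
{
    // One gate shared by every wrapper instance, so concurrent requests
    // take turns instead of hitting the non-thread-safe component at once.
    private static readonly object Gate = new object();

    public object DoWork(object input) // match the real DoWork signature here
    {
        lock (Gate)
        {
            // Create the real object per call; the ProgID is the one from the question.
            dynamic real = Activator.CreateInstance(Type.GetTypeFromProgID("comServer.comObject"));
            try
            {
                return real.DoWork(input);
            }
            finally
            {
                Marshal.ReleaseComObject(real);
            }
        }
    }
}

After registering the assembly for COM interop (regasm), the ASP page would create the wrapper instead of the real ProgID; the lock makes concurrent calls effectively serialized, which is the "block each other" option described earlier.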
In any case, there is nothing reasonable you can do from the client (ASP VBScript) side.
Related
I'm developing an app with VS2013, using EF 6.02 and Web API 2. I'm using the ASP.NET SPA template and creating a RESTful API against an Entity Framework data source backed by SQL Server. (In development, this resides on the local SQL Server instance.)
I've got two API methods so far (one that just reads data, one that writes data), and I'm testing them by calling them from JavaScript. When I only call a single method in my script, either one works perfectly. But if I call both in script (without waiting for either's callback to fire), I get bad results and different exceptions in the debugger. Some exceptions state that the save can't be completed because there are pending transactions. Another exception said something about a conflict with other threads. And sometimes the read operation fails with a null pointer exception when trying to read a result set.
"New transaction is not allowed because there are other threads running in the session."
This makes me question if I'm correctly getting a new DBContext per request. My code for this looks like:
static Startup()
{
    context = new Data.SqlServer.AppDbContext();
    ...
}
and then whenever instantiating a unit of work, I access Startup.context.
I've tried to implement the unit-of-work pattern, but every request ends up sharing a single UOW object, which has a single DbContext object.
My question: Do I have additional responsibility to ensure that web requests "play nicely" with each other? I hope this is a problem that others have already dealt with. Perhaps the errors I'm seeing are legitimate, in the sense that if one user's data is being touched it is temporarily in an invalid state, and if other requests come in at that exact moment they will indeed fail (and I should code anticipating those failures). I also suspect that even if each request has its own DbContext, they still share the same underlying SQL data source, so perhaps that's causing issues.
I can try to put together a testcase, but I get differing behavior depending on where I put breakpoints and how long I spend on them, reaffirming to me that this is timing related.
Thanks for any help or suggestions...
-Ben
Your problem is where you are creating your context. A static constructor runs once, when the entire application starts, so every request will use the same context. That is per-application setup, not per-request setup. As for why you are getting the errors: Entity Framework is NOT thread-safe, and since IIS spawns many threads to handle concurrent requests, your single context is being used across multiple threads.
As for a solution, you can look into
-Dependency Injection frameworks (such as Ninject or Unity)
-place a using statement in your UnitOfWork classes
using(var context = new Data.SqlServer.AppDbContext()){//do stuff}
-Or, I have seen people create a class that gets the context for the current request and stores it in HttpContext.Items (using a unique key so you can retrieve it easily from another class), so that the same context is reused within a single request. Something like this:
public AppDbContext GetDbContext()
{
    var httpContext = HttpContext.Current;
    if (httpContext == null) return new AppDbContext();
    const string contextTypeKey = "AppDbContext";
    if (httpContext.Items[contextTypeKey] == null)
    {
        httpContext.Items.Add(contextTypeKey, new AppDbContext());
    }
    return httpContext.Items[contextTypeKey] as AppDbContext;
}
To use the above method, simply call: var context = GetDbContext();
Note
We use all of the above approaches, but this note applies specifically to the third one. It seems to work well, with two caveats. First, do not wrap the context in a using statement, because then it will not be available to other classes for the rest of the request (you will have disposed it). Second, make sure you have a handler for Application_EndRequest that actually disposes of it, as sketched below. We saw these contexts hanging around in memory after the request had ended, causing a huge spike in memory usage.
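For reference, that cleanup might look roughly like this in Global.asax.cs; this is only a sketch and assumes the context is stored under the same "AppDbContext" key used by GetDbContext() above.

protected void Application_EndRequest(object sender, EventArgs e)
{
    // Dispose the per-request context so it doesn't outlive the request.
    var context = HttpContext.Current.Items["AppDbContext"] as AppDbContext;
    if (context != null)
    {
        context.Dispose();
        HttpContext.Current.Items.Remove("AppDbContext");
    }
}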
I'm doing unit testing on a class library that uses NHibernate for persistence. NHibernate is using a Sqlite in-memory database for testing purposes. Under normal circumstances, it's easy to get StructureMap to kick out a session for me.
However, because I'm using the in-memory database to improve testing speed, I need a single session to stay available for the duration of a test (creating a new session blows the database away). And there is another wrinkle. The case that is currently burning me is testing a custom NHibernate-based ASP.NET membership provider. Providers are apparently created once per AppDomain, so I shouldn't inject the session into the provider, for obvious reasons.
Is there a way in structuremap to tell it to get rid of an instance of a particular type while still maintaining the bits that tell it how to instantiate that type? Really, if I could get away with it, I would just make it act like the HttpScoped object lifetime, but apparently I can only do that within the context of an Http request. Is there a straightforward way to manually control the lifetime of an object coming out of structuremap?
I apologize for the length of this and the possibility that it is a dumb question. I'm solo on this project, so I don't really have anyone to bounce ideas off of.
You could wrap the session in your own ISession implementation which delegates to a real session whose lifetime you control. Then register your own ISession as an instance.
I ended up making two constructors for my provider, along with a private field of type Func<ISession>. By default, its value is set to my standard code for creating a session using StructureMap's ObjectFactory.
The overloaded constructor accepts a parameter of type Func<ISession>. That way I can inject a strategy for creating a session when needed, but otherwise don't have to go through any extra effort (see the sketch below). In my test, I create the session in the NUnit SetUp method and destroy it in TearDown. I don't love this idea, but I don't currently hate it enough to rip it out....yet.
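Roughly what that looks like, with the caveat that the class and member names here are made up and the real provider derives from MembershipProvider with many more members:

using System;
using NHibernate;
using StructureMap;

public class NHibernateMembershipProvider // derives from MembershipProvider in the real code
{
    private readonly Func<ISession> _getSession;

    // Production path: resolve a session through StructureMap as usual.
    public NHibernateMembershipProvider()
        : this(() => ObjectFactory.GetInstance<ISession>())
    {
    }

    // Test path: the fixture passes in a Func that always returns the single
    // in-memory SQLite session created in [SetUp] and disposed in [TearDown].
    public NHibernateMembershipProvider(Func<ISession> getSession)
    {
        _getSession = getSession;
    }

    public bool ValidateUser(string username, string password)
    {
        var session = _getSession();
        // ...query the membership tables through the session here...
        return session != null; // placeholder only
    }
}

In the test this becomes new NHibernateMembershipProvider(() => testSession), where testSession is the session the fixture owns.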
This got rid of the error I was experiencing with the tables. However, it appears that NHibernate for some reason cannot write to an in-memory SQLite database under the conditions I created. I'm now testing whether I can write to one in the file system. It isn't ideal, but it will be a good long while (I hope) before the performance of writing to disk really starts hurting.
We have a Flex application that connects to a proxy server which handles authentication. If the authentication has timed out, the proxy server returns a JSON-formatted error string. What I would like to do is inspect every URLRequest response, check whether there's an error message, display it in the Flex client, and then redirect back to the login screen.
So I'm wondering whether it's possible to create an event listener for all URLRequests in a global fashion, without having to search through the project and add a handler to each URLRequest. Any ideas if this is possible?
Unless you're only using one service, there is no way to set a global URLRequest handler. If I were you, I'd think more about architecting your application properly by using a delegate and always checking the result through a particular service which is used throughout the app.
J_A_X has some good suggestions, but I'd take it a bit farther. Let me make some assumptions based on the limited information you've provided.
The services are scattered all over your application, which means they're actually embedded in multiple Views.
If your services can all be handled by the same handler, you notionally have one service, copied many times.
Despite what you see in the Adobe examples showing their new service-generation code, it's incredibly bad practice to call services directly from Views, in part because of the very problem you are seeing: you can wind up with lots of copies of the same service code littered all over your application.
Depending on how tightly interwoven your application is (believe me, I've inherited some pretty nasty stuff, so I know this might be easier said than done), you may find that the easiest thing is to remove all of those various services and replace them by having all your Views dispatch a bubbling event that gets caught at the top level. At the top level, you respond to that event by calling one instance of your service, which is again handled in one place.
You may or may not choose to wrap that single service in a delegate, but once you have your application archtected in a way where the service is decoupled from your Views, you can make that choice at any time.
Would you be able to extend the class and add an event listener in the object's constructor? I don't like this approach but it could work.
You would just have to search/replace the whole project.
I have an ASP.NET site, and I've been refactoring code to move some long-running processes (on the order of an hour) out of the actual HTTP request by creating a BackgroundWorker and sending the work off to it. This ran fine in cut-down tests, but when I applied the logic to the real code I found problems accessing Session variables from the code running in the BackgroundWorker. It seems that the HttpContext object that was passed has a null Session, and if I ask for HttpContext.Current I get null back.
I'm assuming this is because the worker runs on a different thread, and that Session and HttpContext.Current both rely on being on the request thread. Is there any way to get access to the Session from the background worker, or am I stuck with pulling all the variables I need out of Session, putting them in a usable data structure, and then putting them back into Session (if appropriate) afterwards? That would complicate the refactor massively, so I'd rather not.
Thanks for any thoughts you might have. I'm open to other suggestions on how I might do this other than BackgroundWorker processes (which were suggested to me in another question).
I'm not sure of all of your requirements, but you may be able to get away with using the Application Cache instead of the Session if you're not looking for the long process to be tied to an individual user's request.
If so, I would try swapping out your use of Session to:
HttpRuntime.Cache.Insert("CacheKeyName", valueToStore);  // valueToStore is whatever you need to share (Cache has no Set method)
var value = HttpRuntime.Cache.Get("CacheKeyName");
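To connect that to the background work itself, one approach (only a sketch; the LongRunningJob class, key format, and userName parameter are made up) is to copy what you need out of Session before starting the worker, and have the worker publish its result to the application cache, which does not depend on HttpContext:

using System;
using System.ComponentModel;
using System.Web;

public static class LongRunningJob
{
    public static void Start(string userName) // userName copied out of Session before starting
    {
        var worker = new BackgroundWorker();
        worker.DoWork += (sender, e) =>
        {
            // No HttpContext/Session here; use only the values captured above.
            var result = DoTheHourLongWork(userName);

            // The application cache is safe to touch from this thread.
            HttpRuntime.Cache.Insert("JobResult_" + userName, result);
        };
        worker.RunWorkerAsync();
    }

    private static object DoTheHourLongWork(string userName)
    {
        // placeholder for the real long-running process
        return userName;
    }
}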
Here's an MSDN link that sheds some light on this.
The text in particular is:
If an asynchronous action method calls a service that exposes methods by using the BeginMethod/EndMethod pattern, the callback method (that is, the method that is passed as the asynchronous callback parameter to the Begin method) might execute on a thread that is not under the control of ASP.NET. In that case, HttpContext.Current will be null, and the application might experience race conditions when it accesses members of the AsyncManager class such as Parameters. To make sure that you have access to the HttpContext.Current instance and to avoid the race condition, you can restore HttpContext.Current by calling Sync() from the callback method.
I need to have some object hanging around between two events I'm interested in: PreRequestHandlerExecute (where I create an instance of my object and want to save it) and PostRequestHandlerExecute (where I want to get to the object). After the second event the object is not needed for my purposes and should be discarded either by storage or my explicit action. So the ideal context where my object should be stored is per request (with guaranteed no sharing issues when different threads are serving requests... or processes/servers :) )
Take into account that the actual implementation is being done from an HttpModule and is supposed to be a pluggable solution for already-written web apps (so providing state through static/instance variables in Global.asax doesn't look good; I would have to modify Global.asax in every web application).
Cache seems too broad for this use. I tried to see whether httpContext.Application (of type HttpApplicationState) would work for me, but I cannot work out whether it is scoped exactly per HttpApplication instance or not (AFAIK you can have several HttpApplication instances running on different threads, serving several requests simultaneously, and in that case storage shared between threads will not work correctly; otherwise I would use it, because one HttpApplication instance serves exactly one request at a time). Something could be done with storing state on the HttpModule instance if I knew for sure that it is bound exactly 1-to-1 with every running HttpApplication instance (but again I need proof that an HttpApplication instance is 1-to-1 with my HttpModule's instance). Any valuable and reputable links on these topics are much appreciated...
It would be great to find something particularly well suited to the per-request situation, because otherwise I may end up with something ugly... probably either some 'broader'-scoped storage with hacked-up per-request keys, OR a thread-local value, thereby committing to the theory that IIS/ASP.NET will never serve the first event from one thread and the second event from another, and so on.
Try the HttpContext.Current.Items collection. It is per-request.
As Fahad mentioned, HttpContext.Current.Items is the way to go. Be aware that it is per-request, and that if multiple threads end up serving the request (which sometimes happens; different modules can run on different threads), HttpContext.Current.Items is still shared between them. There is some more info out there which you might find helpful.
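To make the per-request idea concrete, a module along these lines (only a sketch; the module name, item key, and MyRequestState type are made up) creates the object in PreRequestHandlerExecute and picks it up again in PostRequestHandlerExecute:

using System;
using System.Web;

public class MyRequestState
{
    // whatever needs to survive between the two events
}

public class PerRequestStateModule : IHttpModule
{
    private const string ItemKey = "PerRequestStateModule.State";

    public void Init(HttpApplication app)
    {
        app.PreRequestHandlerExecute += (sender, e) =>
        {
            // HttpContext.Items is scoped to this single request, so nothing
            // leaks between concurrently executing requests.
            HttpContext.Current.Items[ItemKey] = new MyRequestState();
        };

        app.PostRequestHandlerExecute += (sender, e) =>
        {
            var state = (MyRequestState)HttpContext.Current.Items[ItemKey];
            // ...use the object created before the handler ran...
            HttpContext.Current.Items.Remove(ItemKey);
        };
    }

    public void Dispose() { }
}

Because the module stores everything in HttpContext.Items rather than in fields on the module itself, it does not matter how many HttpApplication/HttpModule instances IIS keeps in its pool.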