I'm working on a new website written in VB.Net using ASP.NET MVC2, and it needs to call "legacy" VB6 code for various complex bits of business logic. The VB6 is a framework consisting of many DLLs and is very stateful; we are pretty much emulating how the framework is used in our client application, i.e. the application runs (lots of state setup), a user logs on (even more state), and then loads a file (even more state).
I've been provided with a "web service interface framework" to get this up and running for use in the web app; this "web framework" hides the legacy code behind a thin layer running under IIS, the idea being that the thread pooling provided by IIS will reduce memory use, etc. I can't help but believe that the guy who provided this has missed the point: since each instance is so stateful, there is no way a thread pool can work, because once a user logs on using one particular object from the pool, no other object will be capable of servicing that client (since it won't have the state)! Also, adding a web service interface and the associated SOAP marshalling is a huge overhead compared to calling the objects directly.
The only ways I can think of doing this are either a single legacy interface instance that is used by all clients and blocks each call until it completes, or a thread per client, with each legacy interface object being created in a new thread and living for the life of the client.
Neither of these is ideal, but with the amount of code in question and the prolonged migration programme to .NET (2+ years and still stateful) I can't think of an alternative. We run the original client app in a Citrix environment for some customers, so I expect it could also run OK with a thread per client given a beefy enough server, and the overheads of the framework itself should be lower than when the client app is involved.
Any ideas??
I suggest that you take a look at the Visual WebGui framework. I am an employee of the company and therefore may not sound objective, but I believe Visual WebGui has solved some of the major issues with scaling stateful applications and turning a single-user environment into a multi-user one. Worth a look.
Here's an option, but it won't be pretty.
It sounds like you need to associate a long-lived object (your stateful interface to the backend tier) with individual users.
You could store this object in Application state and associate it with the user's Session state via a key. You'd need to provide a wrapper to keep track of them all. When the session dies you can capture the event and destroy the backend object.
Application state is a key/value store, just like Session. You can access it through HttpContext.Application.
The big downfall is that the objects you put in there stick around until you destroy them, so your wrapper and session-teardown code need to be spot on. Other than that, this might be a quick way to get up and running.
Like I said, it won't be optimal, but it'll probably work.
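A minimal sketch of such a wrapper (LegacyEngine is a hypothetical stand-in for your stateful VB6 interop object; Destroy is what you'd call from Session_End in Global.asax, which only fires with InProc session state):

public static class LegacySessionMap
{
    private const string Key = "LegacyObjects";

    // Fetch (or lazily create) the legacy object for the current user's session.
    public static LegacyEngine GetOrCreate(HttpContext ctx)
    {
        var app = ctx.Application;
        app.Lock(); // HttpApplicationState's built-in writer lock
        try
        {
            var map = app[Key] as Dictionary<string, LegacyEngine>;
            if (map == null)
            {
                map = new Dictionary<string, LegacyEngine>();
                app[Key] = map;
            }
            LegacyEngine engine;
            if (!map.TryGetValue(ctx.Session.SessionID, out engine))
            {
                engine = new LegacyEngine(); // expensive stateful setup happens here
                map[ctx.Session.SessionID] = engine;
            }
            return engine;
        }
        finally { app.UnLock(); }
    }

    // Call from Session_End to tear down the backend object with the session.
    public static void Destroy(string sessionId, HttpApplicationState app)
    {
        app.Lock();
        try
        {
            var map = app[Key] as Dictionary<string, LegacyEngine>;
            LegacyEngine engine;
            if (map != null && map.TryGetValue(sessionId, out engine))
            {
                engine.Dispose(); // release the VB6/COM state
                map.Remove(sessionId);
            }
        }
        finally { app.UnLock(); }
    }
}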
More info on implications:
http://msdn.microsoft.com/en-us/library/bf9xhdz4(VS.71).aspx
EDIT:
You could also make this work in a web farm environment. Store the information needed to recreate your stateful legacy object in Session state, which can be shared between the machines using the built-in SQL provider. If a user bounces to a server where the object doesn't exist, your Application state wrapper can just recreate it from the Session state info.
This just leaves how to clean up the stateful object on servers where it isn't needed. In your retrieval wrapper, update a hashtable or something with the access time each time a given stateful object is accessed. Have a periodic cleanup routine in the wrapper destroy the stateful objects that haven't been accessed for a little more than the session timeout value of your web app.
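A rough sketch of that bookkeeping (names are illustrative; the sweep could run from a System.Threading.Timer started in Application_Start, and System.Linq is assumed to be imported):

public static class LegacyObjectJanitor
{
    private static readonly Dictionary<string, DateTime> _lastAccess =
        new Dictionary<string, DateTime>();
    private static readonly object _sync = new object();

    // Call from the retrieval wrapper on every access to a stateful object.
    public static void Touch(string sessionId)
    {
        lock (_sync) { _lastAccess[sessionId] = DateTime.UtcNow; }
    }

    // Destroy objects idle for a little longer than the session timeout.
    public static void Sweep(TimeSpan sessionTimeout, Action<string> destroy)
    {
        List<string> stale;
        lock (_sync)
        {
            var cutoff = DateTime.UtcNow - sessionTimeout - TimeSpan.FromMinutes(1);
            stale = _lastAccess.Where(kv => kv.Value < cutoff)
                               .Select(kv => kv.Key).ToList();
            foreach (var id in stale) _lastAccess.Remove(id);
        }
        foreach (var id in stale) destroy(id); // tear down outside the lock
    }
}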
Related
Is it a bad idea to implement my own session state provider that conditionally switches, based on key, between the Redis session provider and the InProc session provider?
I am working in a very large legacy ASP.NET application that currently uses the InProc session provider. We are migrating to Redis as a session state provider so that sessions persist across deploys; however, the application is chock full of session abuses (e.g. way-too-large objects, non-serializable objects, and I saw a thread in there for some reason?).
We plan to slowly correct these abuses, but until they are all corrected we cannot really move to Redis. I am hoping we can start migrating serializable-safe keys into Redis while the abuses remain in memory until we address them.
Does anyone have any advice on this? Or perhaps alternative suggestions for migrating to out of process from in process?
Thanks!
In ASP.NET Web Forms and MVC, using Redis for Session State is just a couple of lines of modification in Web.config, plus adding SerializableAttribute to the classes you store. There are no side effects to applying it to a class.
Based on my experience migrating to Azure a few years ago, Session State is not worth migrating slowly.
Caching is a different story. It requires code changes, so we ended up implementing two classes, MemoryCacheManager and RedisCacheManager, registering one at run-time in the IoC container and injecting ICacheManager into dependent classes.
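The shape of that abstraction was roughly the following (a sketch; the interface is ours rather than a framework type, and the Redis implementation is only outlined):

public interface ICacheManager
{
    T Get<T>(string key);
    void Set(string key, object value, TimeSpan expiry);
    void Remove(string key);
}

// In-process implementation backed by System.Runtime.Caching.MemoryCache (.NET 4).
public class MemoryCacheManager : ICacheManager
{
    private readonly MemoryCache _cache = MemoryCache.Default;

    public T Get<T>(string key) { return (T)_cache.Get(key); }

    public void Set(string key, object value, TimeSpan expiry)
    {
        _cache.Set(key, value, DateTimeOffset.UtcNow.Add(expiry));
    }

    public void Remove(string key) { _cache.Remove(key); }
}

// RedisCacheManager implements the same interface, serializing values and
// talking to the server via a Redis client; the IoC container then decides
// which implementation gets injected, e.g.:
//   container.Register<ICacheManager, RedisCacheManager>(); // pseudo-IoC call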
Source for the session state: https://github.com/Microsoft/referencesource/blob/master/System.Web/State/
Docs: https://learn.microsoft.com/en-us/dotnet/api/system.web.sessionstate?view=netframework-4.7.2
I'd start by checking out the reference source so you can search the codebase. One interface jumps out as potentially interesting: IPartialSessionState ("When implemented in a type, returns a list of zero or more session keys that indicate to a session-state provider which session-state items have to be retrieved."). Source is here:
https://learn.microsoft.com/en-us/dotnet/api/system.web.sessionstate.ipartialsessionstate?view=netframework-4.7.2
I stumbled on https://www.wiktorzychla.com/2007/06/wrapped-inprocsessionstatestore.html
via ASPNET : Switch between Session State Providers ?.
This technique could theoretically be used with the Redis provider as well. You'd have to either maintain a list of keys suitable for storing in Redis, or try to serialize each value, catch failures, cache the verdict of which types can be serialized, and adaptively fall back to the InProc behavior, as sketched below. You should be able to use HttpContext.Current.Items to flow information between events in the request-processing pipeline.
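For the serialize/catch/cache idea, something along these lines could decide, per type, whether a value is safe to send to Redis (a sketch, not production code; note that one instance's verdict may not hold for every object graph of that type):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class RedisSafetyProbe
{
    // Cache the verdict per type so the serialization test is paid only once.
    private static readonly ConcurrentDictionary<Type, bool> _verdicts =
        new ConcurrentDictionary<Type, bool>();

    public static bool CanGoToRedis(object value)
    {
        if (value == null) return true;
        return _verdicts.GetOrAdd(value.GetType(), t =>
        {
            try
            {
                using (var ms = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(ms, value);
                    return true; // this key can live in Redis
                }
            }
            catch (Exception)
            {
                return false; // fall back to InProc behavior for this type
            }
        });
    }
}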
The SessionStateModule (the module responsible for retrieving session state, locking, saving, unlocking, etc.) treats InProc as special in a few places; search its code for InProc. Essentially you're trying to plug in a magical provider that is Custom and yet still has all of the InProc semantics applied by the one and only SessionStateModule. You won't be able to (and probably won't want to) modify that module, but you may be able to hook up another one adjacent to it that hooks into related events in the request pipeline and does whatever needs to be done that is either InProc- or Custom-specific. You'll probably run into internal/private methods for which you'd need reflection. I'm not sure how the licensing works on the reference source (MS-PL, I think), but another option would be to copy and paste the code from SessionStateModule into your own module, make adjustments as needed, unregister the original, and register your replacement.
I think you're going to be stuck dealing with a lot of reflection code to get this to work.
I'm wondering whether it is a good approach in an ASP.NET project to set a field which "holds" a connection to the DB as a static field (Entity Framework):
public class DBConnector
{
public static AdServiceDB db;
....
}
That means there will be only one object for the entire application to communicate with the DB. I'm also wondering whether that object will pick up data changes from the DB tables, or whether it shouldn't be static and I should create a connection dynamically. What do you think?
With connection pooling in .NET, creating a new connection for each request is generally acceptable. I'd measure the cost of creating a new one each time, and if it isn't a bottleneck, avoid the static approach. I have tried it before, and while I haven't run into any issues, it doesn't seem to help much.
A singleton connection to a database that is used across multiple web page requests from multiple users presents a large risk of cross-contamination of personal information across users. It doesn't matter what the performance impact is; this is a huge security risk.
If you don't have users or personal information, perhaps this doesn't apply to your project right now, but always keep it in mind. Databases and the information they contain tend to evolve in the direction of more specifics and more details over time.
This is why you should not use a singleton design pattern with your database connection.
Hope it helps
Is using a singleton for the connection a good idea in an ASP.NET website?
Bad idea. Besides the potential mistakes you could make by not closing connections properly and so forth, accessing a static object makes it very difficult to unit test your code. I'd suggest using a class that implements an interface, and then using dependency injection to get an instance of that class wherever you need it. If you determine that you want it to be a singleton, that can be expressed in your DI bindings, not as a foundational point of your architecture (see the sketch below).
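For example (a sketch; a Ninject-style binding is shown in the comment, but any container works):

public interface IAdServiceContextFactory
{
    AdServiceDB Create();
}

public class AdServiceContextFactory : IAdServiceContextFactory
{
    public AdServiceDB Create() { return new AdServiceDB(); }
}

// Composition root: lifetime is a binding decision, not an architectural one.
// kernel.Bind<IAdServiceContextFactory>().To<AdServiceContextFactory>().InSingletonScope();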
I would say no.
A database connection should be created when needed to run a query, and cleaned up once that query is done and the results are fetched.
If you use a single static instance to control all access to the DB, you may lose out on the automatic connection pooling that .NET provides (which could hurt performance).
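In practice that means scoping the context to the operation, e.g. (a sketch; the Ads set and IsActive property are hypothetical, and System.Linq is assumed to be imported):

public IList<Ad> GetActiveAds()
{
    // Dispose returns the underlying connection to the ADO.NET pool,
    // so constructing a context per call is cheap.
    using (var db = new AdServiceDB())
    {
        return db.Ads.Where(a => a.IsActive).ToList();
    }
}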
I think the recommendation is to "refresh often."
Since none of the answers has been marked as the answer, and I don't believe any have really addressed the question or the issue at hand...
In ASP.NET, you have the Global class, which derives from HttpApplication. The way this works is that IIS will cache instances of your "application" (that is, instances of your Global class). Normally (with default settings in IIS) you can have up to 10 instances of Global, and IIS will pick any one of these instances to satisfy a request.
Further, keep in mind that there can be multiple requests in flight at any given moment, which means multiple instances of your Global class will be in use. These instances could be ones that were previously instantiated and cached, or new instances (depending on the load your IIS server is seeing).
IIS also has the notion of app pools and worker processes. A worker process hosts your application and all the instances of your Global class (as discussed earlier). This translates to an app domain (in .NET terms).
Just to recap before moving on…
Multiple instances of your Global class will exist in the worker process for your application (in IIS), each one waiting to be called upon to satisfy a request. IIS will pick any one of these instances. They are effectively threads that have been cached, each with an instance of your Global class; when a request comes in, one of these threads is called upon to handle the request-response cycle. If multiple requests arrive simultaneously, then multiple threads (each with an instance of your Global class) are called upon to satisfy those requests.
Moving on…
Since there is only one instance of a static class per app domain, you'll effectively have one instance of your class shared across all (up to 10) instances of Global. This is a bad idea, because when multiple simultaneous requests hit your server they'll either be blocked (if your class's methods use locks) or threads will be stepping on each other's toes. In other words, this approach is not inherently thread-safe, and if you make it thread-safe using synchronization primitives then you're unnecessarily blocking threads, hurting the performance and scalability of your web application with no gain whatsoever.
The real solution (and I use this in all my ASP.NET apps) is to have an instance of your BLL or DAL (as the case may be) per instance of Global. This ensures the following:
1. Multiple threads are not an issue, since IIS guarantees one request-response cycle per instance of Global at any given moment. So your code is inherently thread-safe.
2. You only have up to 10 instances of your BLL/DAL up and running at any given moment, ensuring that you're not constantly creating and disposing instances of (typically) large objects to satisfy each request, which on busy sites is huge.
3. You get really good performance due to #2 above.
You do have to ensure that your BLL/DAL is truly stateless, or that you reset any state at the start of each request-response cycle. You can use the BeginRequest event in Global to do that if you need to, as in the sketch below.
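In code, that looks roughly like this (BusinessLayer and its Reset method are hypothetical stand-ins for your BLL):

public class Global : System.Web.HttpApplication
{
    // Instance field, NOT static: each pooled HttpApplication gets its own
    // copy, and ASP.NET runs at most one request at a time per instance.
    private BusinessLayer _bll;

    public override void Init()
    {
        base.Init();
        _bll = new BusinessLayer();
        BeginRequest += (sender, e) => _bll.Reset(); // clear per-request state
    }
}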
If you go down this route, be sure to read my blog post on this:
Instantiating Business Layers – ASP.NET
I have a lot of singleton implementations in an ASP.NET application and want to move the application to an IIS Web Garden environment for performance reasons.
CMIIW: moving to an IIS Web Garden with n worker processes, there will be one singleton object created in each worker process, which makes it not a single object anymore, since n > 1.
Can I make all those singleton objects singletons again in an IIS Web Garden?
I don't believe you can (unless you can get those IIS workers to use objects in shared memory somehow).
This is a scoping issue. Your singleton instance uses process space as its scope, and, as you've said, your implementation now spans multiple processes. By definition, on most operating systems, a singleton will be tied to a certain process space, since it's tied to a single class instance or object.
Do you really need a singleton? That's a very important question to ask before using the pattern. As Wikipedia says, some consider it an anti-pattern (or code smell, etc.).
Examples of alternate designs that may work include:
1. Have multiple objects synchronize against a central store or with each other.
2. Use object serialization, if applicable.
3. Use a Windows Service and some form of IPC, e.g. System.Runtime.Remoting.Channels.Ipc.
I like option 3 for large websites. A companion Windows Service is very helpful in general for large websites: lots of things like sending mail, batch jobs, etc. should already be decoupled from the front-end worker process. You can host the singleton server object in that process and use client objects in your IIS worker processes, as sketched below.
If your singleton class works with multiple objects that share state, or just share initial state, then options 1 and 2 should work, respectively.
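For option 3, the .NET Remoting IPC channel can make the service-hosted object look like a local singleton. A minimal sketch (ChatState is a hypothetical shared-state class; it must derive from MarshalByRefObject):

// Server side, inside the Windows Service:
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

ChannelServices.RegisterChannel(new IpcChannel("chatstate"), false);
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(ChatState), "state", WellKnownObjectMode.Singleton);

// Client side, in each IIS worker process:
var state = (ChatState)Activator.GetObject(
    typeof(ChatState), "ipc://chatstate/state");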
Edit
From your comments it sounds like the first option, in the form of a distributed cache, should work for you.
There are lots of distributed cache implementations out there.
Microsoft AppFabric (formerly called Velocity) is their very recent move into this space.
Memcached ASP.Net Provider
NCache (MSDN article): a custom ASP.NET cache provider with OutProc support. There should be other custom cache providers out there.
Roll your own distributed cache using Windows Services and IPC (option 3).
PS: Since you're specifically looking into chat, I'd definitely recommend researching Comet (see "Comet implementation for ASP.NET?", WebSync, etc.).
Maybe the question is wrong, but here is what I want to achieve; maybe there is another way to do it.
I have an ASP.NET application running on .NET 3.5. There is a client list and a few other List-based objects that are shared among all users of the application, i.e. when a client logs in, his userID and a few other properties are saved in Lists in Application state. The problem is that this application is heavy and its application pool needs to be restarted once a day or so, so all the information saved in these List objects is lost, while the clients' personal data, which is saved in Out-of-Proc mode on an external server, survives.
Is there any way to work around this? A shared session? Something like that?
PLEASE NO MSSQL SOLUTIONS...
Cheers, pros !!!!
Have you looked at caching the lists of data?
This SO article has some good details.
You should only use Application state as a cache for data persisted elsewhere. You would then use Application_Start, or some lazy-loading wrapper class, to retrieve the persisted data into the Application object.
If you are storing volatile data in the Application object that is not persisted elsewhere, then you are in trouble. Hopefully you have abstracted access to the Application object behind some wrapper object, so that all your code goes through the wrapper rather than the Application object directly. Then you just need to ensure modifications are saved elsewhere so that they can be recovered on restart.
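Such a wrapper could look like this (a sketch; ClientInfo and the persistence call are hypothetical):

public static class ClientListStore
{
    private const string Key = "OnlineClients";

    public static List<ClientInfo> GetClients(HttpApplicationState app)
    {
        var clients = app[Key] as List<ClientInfo>;
        if (clients == null)
        {
            app.Lock(); // HttpApplicationState's writer lock
            try
            {
                clients = app[Key] as List<ClientInfo>;
                if (clients == null)
                {
                    clients = LoadClientsFromStore(); // rebuild from persistent storage
                    app[Key] = clients;
                }
            }
            finally { app.UnLock(); }
        }
        return clients;
    }

    private static List<ClientInfo> LoadClientsFromStore()
    {
        // e.g. read from the out-of-proc store the personal data already lives in
        return new List<ClientInfo>();
    }
}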
To be frank, the Application state object is really an aid for porting ASP-Classic sites. Since you should really just treat Application state as a cache, there is an overlap in functionality between it and the ASP.NET Cache object.
Recently, a book on threading for Windows applications (Concurrent Programming on Windows by Joe Duffy) was released. This book, focused on WinForms, is 1,000 pages.
What gotchas are there in ASP.NET threading? I'm sure there are plenty to be aware of when implementing threading in ASP.NET. What should I watch out for?
Thanks
Since each HTTP request received by IIS is processed separately on its own thread anyway, the only issues you should have arise if you kick off some long-running process from within the scope of a single HTTP request. In that case, I would put such code into a separate, referenced, dependent assembly, coded like a middle-tier component with no dependence or coupling to the ASP.NET model at all, and handle whatever concurrency issues arise within that assembly separately, without worrying about the ASP.NET model at all.
Jeff Richter over at Wintellect has a library called PowerThreading. It is very useful if you are developing applications on .NET: Power Threading Library
Check for his presentations online at various events.
Usually you are encouraged to use the thread pool in .NET because of the many benefits of having things managed on your behalf... but NOT in ASP.NET.
Since ASP.NET is already multi-threaded, it uses the thread pool to serve requests that are mapped to the ASP.NET ISAPI filter, and since the thread pool is fixed in size, by using it yourself you are taking away threads that are set aside to handle requests.
In small, low-traffic websites this is not an issue, but in larger, high-traffic websites you end up competing for, and consuming, threads that the ASP.NET process relies on.
If you want to use threading, it is fine to do something like:
Thread thread = new Thread(threadStarter);
thread.IsBackground = true;
thread.Start();
but with a warning: be sure that IsBackground is set to true, because if it isn't, the thread lives in the foreground and will likely prevent the IIS worker process from recycling or restarting.
First, are you talking about asynchronous ASP.NET, or about using the ThreadPool/spinning up your own threads?
If you aren't talking about asynchronous ASP.NET, the main question to answer is: what work would you be doing in the other threads, and would the work be specific to a request/response cycle, or is it more about processing global tasks in the background?
EDIT
If you need to handle concurrent operations (a better term than "multi-threaded", IMO) for a given request/response cycle, then use the asynchronous features of ASP.NET. These provide an abstraction over IIS's support for concurrency, allowing the server to process other requests while the current request is waiting for work to complete.
For background processing of global tasks, I would not use ASP.NET at all. You should assume that IIS will recycle your AppPool at a random point in time, and you should not assume that IIS will run your AppPool on any sort of schedule. Any important background processing should be done outside of IIS, either as a scheduled task or a Windows Service. The approach I usually take is a Windows Service plus a shared work queue where the web site can post work items. The queue can be a database table, a reliable message queue (MSMQ, etc.), files on the file system, and so on.
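As an illustration, posting to a database-table queue from the site can be as simple as this (a sketch; the WorkItems table and its columns are hypothetical):

using System.Data.SqlClient;

public static void EnqueueWorkItem(string connectionString, string type, string payload)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO WorkItems (Type, Payload, CreatedUtc) VALUES (@t, @p, GETUTCDATE())",
        conn))
    {
        cmd.Parameters.AddWithValue("@t", type);
        cmd.Parameters.AddWithValue("@p", payload);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    // The Windows Service polls WorkItems on its own schedule and processes
    // rows independently of IIS recycles.
}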
The immediate thing that comes to mind is: why would you "implement threading" in ASP.NET?
You do need to be conscious at all times that ASP.NET is multi-threaded, since many requests can be processed simultaneously, each on its own thread. So, for example, use of static fields needs to take threading into account.
However, it's rare that you would want to spin up a new thread in code yourself.
As far as the usual WinForms issues with threading in the UI are concerned, these are not present in ASP.NET. There is no window-based message pump to worry about.
It is possible to create asynchronous pages in ASP.NET. These perform all steps up to a certain point, and those steps can include asynchronously fetching data, for instance. When all the asynchronous tasks have completed, the remainder of the page lifecycle executes. In the meantime, no worker thread is tied up waiting for database I/O to complete.
In this model, the extra work executes while the request, the page instance, and all the controls still exist. You have to be careful when starting your own threads: by the time such a thread executes, it's possible that the request, page instance, and controls will have been disposed.
Also, as usual, be certain that multiple threads will actually improve performance. Often, additional threads will make things worse.
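Registering such work in Web Forms looks like this (requires Async="true" in the @Page directive; _repository, _data, and the Begin/End methods are hypothetical APM-style members of your page and data layer):

protected void Page_Load(object sender, EventArgs e)
{
    RegisterAsyncTask(new PageAsyncTask(
        (s, args, cb, state) => _repository.BeginGetData(cb, state), // start the I/O
        ar => { _data = _repository.EndGetData(ar); },               // resume when done
        ar => { /* handle timeout */ },
        null));
    // The worker thread goes back to the pool while the I/O is in flight;
    // the rest of the page lifecycle runs after EndGetData completes.
}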
The gotchas are pretty much the same as in any multithreaded application.
The classes involved in processing a request (the Page, the controls, HttpContext.Current, ...) are specific to that request, so they don't need any special handling.
The same goes for any classes you instantiate as local variables or fields within those classes, and for access to Session.
But, as usual, you need to synchronize access to shared resources such as:
Static (C#) / Shared (VB.NET) references.
Singletons
External resources such as the file system
... etc...
I've seen threading bugs all too often in ASP.NET apps, e.g. a singleton being used by multiple concurrent requests without synchronization, resulting in user A seeing user B's data.
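The classic failure mode and one possible fix look like this (a sketch; Report is a hypothetical type, and better still, avoid shared mutable state entirely):

public static class ReportCache
{
    // BROKEN: one shared field means concurrent requests overwrite each
    // other, and user A can be handed user B's report:
    //   public static Report Current;

    // Safer: key the shared data by user and guard the map.
    private static readonly Dictionary<int, Report> _byUser = new Dictionary<int, Report>();
    private static readonly object _sync = new object();

    public static Report GetFor(int userId)
    {
        lock (_sync)
        {
            Report report;
            if (!_byUser.TryGetValue(userId, out report))
            {
                report = new Report();
                _byUser[userId] = report;
            }
            return report;
        }
    }
}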