How to force NHibernate to recognize db changes not made through NHibernate - asp.net

I am implementing NHibernate into an existing web application. However, we have some other processes that do bulk inserting and updating on the database. How can I make NHibernate aware that changes are occurring on the backend db that were not initiated through NHibernate?
Most of the info that I have read about NHibernate use in ASP.NET has mentioned storing the Session object in the HttpContext or CallContext. This would then store the session object for the duration of the application lifecycle. This is what I have implemented, because I was afraid of the cost of initializing NHibernate on each request. Isn't there a significant performance hit to initializing the Session object on each request?
Also, would it make more sense to store the SessionFactory in the HttpContext or CallContext so that the mappings don't have to be regenerated on each request?

You shouldn't. NHibernate sessions are there to help you work in an ACID environment, which means that one transaction is not aware of any concurrent transactions. You should be using short sessions that perform small sets of actions, not holding sessions open for long periods of time. If you do need to work with domain objects over a long period, detach them from one session and later re-attach them to another.
Once you open a new session, any changes done to the database before the session was opened will be made available through NHibernate.
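For illustration, here is a minimal sketch of that detach/re-attach pattern (Customer, customerId and the sessionFactory wiring are placeholders, not from the question):

using NHibernate;

// Load in one short session, then detach.
Customer customer;
using (ISession session = sessionFactory.OpenSession())
{
    customer = session.Get<Customer>(customerId);
    session.Evict(customer); // detach from this session
}

// ... the object is held and edited outside any session ...

// Re-attach to a later, separate short session.
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    session.Update(customer); // re-attach and schedule the update
    tx.Commit();
}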

You should not store the Session over multiple requests. Bad, bad idea.
There is little to no overhead with recreating it on every call. It should use database connection pooling - which is where the bulk of the overhead would be.

By default NHibernate will not cache anything between sessions. If your sessions are short lived (per request), you shouldn't have much to worry about.
If you are using second level caching or query caching though, you may need to flush the cache manually. SessionFactory.Evict or SessionFactory.EvictQueries should help there. Restarting the app should also do it, but that's probably not the answer you're looking for.
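For example, after a bulk process has touched the data outside NHibernate, something like the following clears the relevant caches (the Customer type and the region name are placeholders):

// sessionFactory is the application's ISessionFactory instance.
sessionFactory.Evict(typeof(Customer));          // evict cached Customer entities
sessionFactory.EvictQueries();                   // evict the default query cache
sessionFactory.EvictQueries("customer-queries"); // or a specific named cache region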
In an ASP.NET app, the general usage I've seen is to create one SessionFactory for the app and create a new Session for each request.
The SessionFactory takes a while to initialize, is thread safe, and only needs to be initialized once.
The Sessions are not thread safe and are pretty quick to create.
Anything stored in the HttpContext will only be alive for the length of the request. Storing the session in the context is normal and should give you the desired result. The SessionFactory is usually stored in a static variable and will last as long as the app.
See the NHibernateHelper class here for an example.
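The shape of that helper is roughly the following (a sketch, not necessarily identical to the linked class):

using NHibernate;
using NHibernate.Cfg;

public static class NHibernateHelper
{
    // Built once per app; expensive to create, but thread safe.
    private static readonly ISessionFactory SessionFactory =
        new Configuration().Configure().BuildSessionFactory();

    // Called once per request; cheap to create, but not thread safe.
    public static ISession OpenSession()
    {
        return SessionFactory.OpenSession();
    }
}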

Related

How can I slowly migrate to using Redis as a Session State Provider from in process?

Is it a bad idea to implement my own session state provider that conditionally switches based on key between the redis session provider and the inproc session provider?
I am working in a very large legacy ASP.NET application that currently uses the InProc session provider. We are migrating to Redis as a session state provider so that sessions persist across deploys; however, the application is chock full of session abuses (e.g. way-too-large objects, non-serializable objects, I saw a thread in there for some reason?).
We plan to slowly correct these abuses, but until they are all corrected we cannot really move to Redis. I am hoping we can slowly start migrating serializable-safe keys into Redis while the abuses remain in memory until we address them.
Does anyone have any advice on this? Or perhaps alternative suggestions for migrating to out of process from in process?
Thanks!
In ASP.NET Web Forms and MVC, using Redis for Session State is just a couple of lines of modification in Web.config, plus adding SerializableAttribute to classes. There are no side effects to applying it to a class.
Based on my experience migrating to Azure a few years ago, Session State is not worth migrating slowly.
Caching is a different story. It requires code changes, so we ended up implementing two classes - MemoryCacheManager and RedisCacheManager - registering one at run-time in the IoC container, and then injecting ICacheManager into dependent classes.
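A sketch of what that might look like; the interface shape and the registration call are illustrative, not the poster's actual code:

using System;

public interface ICacheManager
{
    T Get<T>(string key);
    void Set<T>(string key, T value, TimeSpan expiry);
    void Remove(string key);
}

// MemoryCacheManager would wrap an in-process cache (e.g.
// System.Runtime.Caching.MemoryCache); RedisCacheManager would wrap a
// Redis client. At startup, register one or the other in the IoC
// container (hypothetical container API):
//
//   container.Register<ICacheManager, RedisCacheManager>();
//
// and constructor-inject ICacheManager into dependent classes.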
Source for the session state: https://github.com/Microsoft/referencesource/blob/master/System.Web/State/
Docs: https://learn.microsoft.com/en-us/dotnet/api/system.web.sessionstate?view=netframework-4.7.2
I'd start by checking out the reference source so you can search the codebase. One interface jumps out as potentially interesting: IPartialSessionState ("When implemented in a type, returns a list of zero or more session keys that indicate to a session-state provider which session-state items have to be retrieved."). Source is here:
https://learn.microsoft.com/en-us/dotnet/api/system.web.sessionstate.ipartialsessionstate?view=netframework-4.7.2
I stumbled on https://www.wiktorzychla.com/2007/06/wrapped-inprocsessionstatestore.html
via "ASPNET : Switch between Session State Providers?".
This technique could theoretically be used with the Redis provider as well. You'd have to either maintain a list of keys suitable for storing in Redis, or try to serialize each value, catch failures, and cache which types can be serialized, adaptively falling back to the InProc behavior. You should be able to use HttpContext.Current.Items to flow information between events in the request processing pipeline.
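A rough sketch of that probe-and-cache idea, assuming a hypothetical SessionValueRouter helper that the wrapping provider would consult per value:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class SessionValueRouter
{
    // Cache the probe result per type so serialization is attempted once.
    private static readonly ConcurrentDictionary<Type, bool> Probed =
        new ConcurrentDictionary<Type, bool>();

    public static bool CanGoToRedis(object value)
    {
        if (value == null) return true;
        return Probed.GetOrAdd(value.GetType(), _ => ProbeSerializable(value));
    }

    private static bool ProbeSerializable(object value)
    {
        if (!value.GetType().IsSerializable) return false;
        try
        {
            using (var ms = new MemoryStream())
            {
                new BinaryFormatter().Serialize(ms, value);
                return true;
            }
        }
        catch (Exception)
        {
            return false; // adaptively fall back to InProc for this type
        }
    }
}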
The SessionStateModule (the module responsible for retrieving session, locking, saving, unlocking, etc.) seems to treat InProc as special in a few places. Search its code for InProc. Essentially you're trying to plug in a magical provider that is Custom and yet still has all of the InProc semantics applied by the one and only SessionStateModule. You won't be able to/probably won't want to modify that module, but you may be able to hook up another one adjacent to it that hooks into related events in the request pipeline and does whatever needs to be done that is either In-Proc or Custom-specific. You'll probably run into internal/private methods for which you'd need to use reflection. Not sure how the licensing works on the reference source (MS-PL I think), but another option would be to copy & paste the code from SessionStateModule into your own, make adjustments as needed, unregister the original and register your replacement.
I think you're going to be stuck dealing with a lot of reflection code to get this to work.

NHibernate, Sqlite, missing tables and IOC fun

I'm doing unit testing on a class library that uses NHibernate for persistence. NHibernate is using a Sqlite in-memory database for testing purposes. Under normal circumstances, it's easy to get StructureMap to kick out a session for me.
However, because I'm using the in-memory database to improve testing speed, I need a single session available for the duration of a test (creating a new one blows the database away). And there is another wrinkle. The case that is currently burning me is testing a custom NHibernate-based ASP.NET membership provider. These are apparently created once per AppDomain, so I shouldn't inject the session into it, for obvious reasons.
Is there a way in StructureMap to tell it to get rid of an instance of a particular type while still maintaining the bits that tell it how to instantiate that type? Really, if I could get away with it, I would just make it act like the HttpScoped object lifetime, but apparently I can only do that within the context of an HTTP request. Is there a straightforward way to manually control the lifetime of an object coming out of StructureMap?
I apologize for the length of this and the possibility that it is a dumb question. I'm solo on this project, so I don't really have anyone to bounce ideas off of.
You could wrap the session in your own ISession implementation which delegates to a real session whose lifetime you control. Then register your own ISession as an instance.
I ended up making two constructors for my provider, along with a private variable of type Func<ISession>. By default, its value was set to my standard code for creating a session using StructureMap's ObjectFactory.
The overloaded constructor accepted a parameter of type Func<ISession>. That way, I can inject a strategy for creating a session if needed, but otherwise don't have to go through any extended effort. In the case of my tests, I created the session in the NUnit SetUp method and destroyed it in the TearDown. I don't love this idea, but I don't currently hate it enough to rip it out... yet.
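In code, the arrangement looks roughly like this (names are illustrative; the real class derives from MembershipProvider):

using System;
using NHibernate;
using StructureMap;

public class NHibernateMembershipProvider // : MembershipProvider in the real code
{
    private readonly Func<ISession> _getSession;

    // Default: resolve the session through StructureMap, as usual.
    public NHibernateMembershipProvider()
        : this(() => ObjectFactory.GetInstance<ISession>())
    {
    }

    // Test overload: inject a delegate returning the single long-lived
    // in-memory SQLite session created in NUnit's SetUp.
    public NHibernateMembershipProvider(Func<ISession> getSession)
    {
        _getSession = getSession;
    }
}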
This got rid of the error I was experiencing with regard to the tables. However, it appears that NHibernate for some reason cannot write to an in-memory SQLite database under the conditions I created. I'm now testing whether I can write to one in the file system. It isn't ideal, but it will be a good long while (I hope) before the performance of writing to disk really starts hurting.

Implement second level cache in ASP.Net

Is there any way to use caching in ASP.NET other than the SQL Server second-level cache? As this is my first time working with caching, I'd like an example of whatever approach you suggest. I have found that NHibernate implements this, but we are using .netTiers as an application framework.
The Session cache seems to be the appropriate caching mechanism here. The Session cache is a fault-tolerant cache of objects.
Inserting an object
Session["Username"] = "Matt";
Reading an object
string username = (string)Session["Username"];
Removing an object
Session.Remove("Username");
I say fault-tolerant because if the value with the key you specify doesn't exist in the Session cache, it will not throw an exception; it will return null. You need to consider that when implementing your code.
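For example:

// Session returns null for a missing key rather than throwing.
object cached = Session["Username"];
if (cached == null)
{
    // Key absent or expired: repopulate or fall back to a default.
}
string username = cached as string;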
One thing to note, if you are using Sql Server or State Server, the objects you can put in the cache need to be serializable.
Memcached is also a very good way to go, as it is very flexible. It is a windows service that runs on any number of machines and your app can talk to the instances to store and retrieve from the cache. Good Article Here

ASP.NET A static object to hold connection with a DB. Is it a good idea?

I'm wondering if it is a good approach in the ASP.NET project if I set a field which "holds" a connection to a DB as a static field (Entity Framework)
public class DBConnector
{
    public static AdServiceDB db;
    ....
}
That means there would be only one object for the entire application to communicate with the DB. I'm also wondering whether that object would pick up data changes from the DB tables, or whether it shouldn't be static and I should create a connection dynamically. What do you think?
With connection pooling in .NET, generally creating a new connection for each request is acceptable. I'd evaluate the performance of creating a new one each time, and if it isn't a bottleneck, then avoid using the static approach. I have tried it before, and while I haven't run into any issues, it doesn't seem to help much.
A singleton connection to a database that is used across multiple web page requests from multiple users presents a large risk of cross-contamination of personal information across users. It doesn't matter what the performance impact is, this is a huge security risk.
If you don't have users or personal information, perhaps this doesn't apply to your project right now, but always keep it in mind. Databases and the information they contain tend to evolve in the direction of more specifics and more details over time.
This is why you should not use a singleton design pattern with your database connection.
Hope it helps.
Is using a singleton for the connection a good idea in an ASP.NET website?
Bad idea. Besides the potential mistakes you could make by not closing connections properly and so forth, accessing a static object makes it very difficult to unit test your code. I'd suggest using a class that implements an interface, and then use dependency injection to get an instance of that class wherever you need it. If you determine that you want it to be a singleton, that can be determined in your DI bindings, not as a foundational point of your architecture.
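A minimal sketch of that arrangement, using a hypothetical factory interface around the Entity Framework context from the question (CampaignService is likewise made up):

public interface IAdServiceDbFactory
{
    AdServiceDB Create(); // a fresh context per unit of work
}

public class AdServiceDbFactory : IAdServiceDbFactory
{
    public AdServiceDB Create()
    {
        return new AdServiceDB(); // cheap, thanks to connection pooling
    }
}

// Consumers depend on the interface, not on a static field:
public class CampaignService
{
    private readonly IAdServiceDbFactory _dbFactory;

    public CampaignService(IAdServiceDbFactory dbFactory)
    {
        _dbFactory = dbFactory;
    }

    public void DoWork()
    {
        using (AdServiceDB db = _dbFactory.Create())
        {
            // query, save changes, dispose promptly
        }
    }
}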
I would say no.
A database connection should be created when needed to run a query and cleaned up after that query is done and the results are fetched.
If you use a single static instance to control all access to the DB, you may lose out on the automatic Connection Pooling that .NET provides (which could impact performance).
I think the recommendation is to "refresh often."
Since none of the answers have been marked as accepted and I don't believe any have really addressed the question or the issue thereof...
In ASP.NET, you have Global, or HttpApplication. The way this works is that IIS will cache instances of your "application" (that is, an instance of your Global class). Normally (with default settings in IIS) you can have up to 10 instances of Global, and IIS will pick any one of these instances to satisfy a request.
Further, keep in mind that there could be multiple requests at any given moment in time, which means multiple instances of your Global class will be used. These instances could be ones that were previously instantiated and cached, or new instances (depending on the load your IIS server is seeing).
IIS also has a notion of App Pools and worker processes. A Worker process will host your application and all the instances of your Global classes (as discussed earlier). So this translates to an App Domain (in .NET terms).
Just to re-cap before moving on…
Multiple instances of your Global class will exist in the Worker process for your application (in IIS). Each one waiting to be called upon by IIS to satisfy a request. IIS will pick any one of these instances. They are effectively threads that have been cached by IIS and each thread has an instance of your Global class. When a request comes in, one of these threads is called upon to handle the request-response cycle. If multiple requests arrive simultaneously, then multiple threads (each contains an instance of your Global class) will be called upon to satisfy each of those requests.
Moving on…
Since there will be only one instance of a static class per App Domain, you'll effectively have one instance of your class shared across all (up to 10) instances of Global. This is a bad idea, because when multiple simultaneous requests hit your server they'll either be blocked (if your class's methods use locks) or threads will be stepping on each other's toes. In other words, this approach is not inherently thread-safe, and if you make it thread-safe using thread synchronization primitives then you're unnecessarily blocking threads, negatively impacting the performance and scalability of your web application, with no gain whatsoever.
The real solution (and I use this in all my ASP.NET apps) is to have an instance of your BLL or DAL (as the case may be) per instance of Global. This will ensure the following:
1. Multiple threads are not an issue, since IIS guarantees one request-response per instance of Global at any given moment in time. So your code is inherently thread-safe.
2. You only have up to 10 instances of your BLL/DAL up and running at any given moment in time, ensuring that you're not constantly creating and disposing instances of (typically) large objects to satisfy each request, which on busy sites is huge.
3. You get really good performance due to #2 above.
You do have to ensure that your BLL/DAL is truly stateless, or that you reset any state at the start of each request-response cycle. You can use the BeginRequest event in Global to do that if you need to, as in the sketch below.
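A minimal sketch of that arrangement, assuming a hypothetical stateless MyBusinessLayer class:

using System.Web;

public class Global : HttpApplication
{
    // Instance field, not static: one per pooled HttpApplication instance,
    // so no two simultaneous requests ever share it.
    private MyBusinessLayer _businessLayer;

    public override void Init()
    {
        base.Init();
        _businessLayer = new MyBusinessLayer();
        BeginRequest += (sender, e) =>
        {
            _businessLayer.Reset(); // clear any per-request state
        };
    }
}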
If you go down this route, be sure to read my blog post on this
Instantiating Business Layers – ASP.NET

Mixing VB6 "legacy code" and a web application

I'm working on a new website, written in VB.NET using ASP.NET MVC 2, and there is a need to call "legacy" VB6 code for various complex bits of business logic. The VB6 code is a framework consisting of many DLLs and is very stateful; we are pretty much emulating how the framework is used in our client application, i.e. the application runs (lots of state setup), a user logs on (even more state) and then loads a file (even more state).
I've been provided with a "web service interface framework" to get this up and running for use in the web app; this "web framework" hides the legacy code behind a thin layer running under IIS. The idea is that thread pooling provided by IIS will reduce memory use, etc. I can't help but believe that the guy who provided this has missed the point: since each instance is so stateful, there is no way a thread pool can work, because once a user logs on using one particular object from the pool, no other object will be capable of servicing that client (since it won't have the state)! Also, adding a web service interface and the associated SOAP marshalling is a huge overhead compared to calling the objects directly.
The only way I can think of doing this is either a single legacy interface instance which is used by all clients and blocked by each call until it completes, or a thread per client with each legacy interface object being created in a new thread and living for the life of the client.
Neither of these is ideal, but with the amount of code in question and the prolonged migration programme to .NET (2+ years and still stateful), I can't think of an alternative. We run the original client app in a Citrix environment for some customers, so I expect it could also run OK with a thread per client, given a beefy enough server and that the overheads of the framework itself should be lower than when the client app is involved.
Any ideas??
I suggest that you take a look at this framework: Visual WebGui. I am an employee with this company and therefore may not sound objective, but I believe Visual WebGui has solved some of the major issues with scaling stateful applications and turning a single-user environment into a multi-user one. Worth a look.
Here's an option but it won't be pretty.
It sounds like you need to associate a long lived object (the stateful object to your backend tier) with individual users.
You could store this object in Application state and associate it with the user's Session state via a key. You'd need to provide a wrapper to keep track of them all. When the session dies you could capture the event and destroy the backend object.
Application state is a key/value store, just like Session. You can access it through HttpContext.Application.
The big downfall to this is that the objects you put in there stick around until you destroy them, so your wrapper and session-destroying code need to be spot on. Other than that, this might be a quick way to get up and running.
Like I said, it won't be optimal, but it'll probably work.
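A rough sketch of such a wrapper (LegacyBackend stands in for the stateful VB6 interop object; the key scheme is made up):

using System.Web;

public static class LegacyBackendStore
{
    private static string KeyFor(string sessionId)
    {
        return "LegacyBackend:" + sessionId;
    }

    public static LegacyBackend GetOrCreate(HttpContext context)
    {
        HttpApplicationState app = context.Application;
        string key = KeyFor(context.Session.SessionID);
        app.Lock();
        try
        {
            var backend = app[key] as LegacyBackend;
            if (backend == null)
            {
                backend = new LegacyBackend(); // expensive, stateful setup
                app[key] = backend;
            }
            return backend;
        }
        finally
        {
            app.UnLock();
        }
    }

    // Call from Session_End in Global.asax to tear the object down.
    public static void Destroy(HttpApplicationState app, string sessionId)
    {
        string key = KeyFor(sessionId);
        var backend = app[key] as LegacyBackend;
        if (backend != null)
        {
            // backend.Dispose(); // release the VB6/COM state here
            app.Remove(key);
        }
    }
}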
More info on implications:
http://msdn.microsoft.com/en-us/library/bf9xhdz4(VS.71).aspx
EDIT:
You could also make this work in a web farm environment. Store the information needed to recreate your stateful legacy object in Session state, which can be shared between the machines using the built-in SQL provider. If a user bounces to a server where the object doesn't exist, your Application state wrapper can just recreate it from the Session state info.
This just leaves how to clean up the stateful object on servers where it isn't needed. In your retrieval wrapper, update a hashtable or similar with the access time each time a given stateful object is accessed. Have a periodic cleanup routine in the wrapper destroy the stateful objects that haven't been accessed for a little more than the session timeout value of your web app, as sketched below.
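A sketch of that cleanup (timings and the hookup to the hypothetical LegacyBackendStore above are illustrative):

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Web;

public static class LegacyBackendJanitor
{
    private static readonly ConcurrentDictionary<string, DateTime> LastAccess =
        new ConcurrentDictionary<string, DateTime>();

    // Call from the retrieval wrapper on every access.
    public static void Touch(string sessionId)
    {
        LastAccess[sessionId] = DateTime.UtcNow;
    }

    // Run periodically; idleLimit should be a little longer than the
    // session timeout of the web app.
    public static void Sweep(HttpApplicationState app, TimeSpan idleLimit)
    {
        DateTime cutoff = DateTime.UtcNow - idleLimit;
        foreach (var stale in LastAccess.Where(kv => kv.Value < cutoff).ToList())
        {
            DateTime ignored;
            if (LastAccess.TryRemove(stale.Key, out ignored))
            {
                LegacyBackendStore.Destroy(app, stale.Key);
            }
        }
    }
}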
