Sessions randomly clear on Win2008 ASP.NET website

I couldn't find anything about this online, so I thought I'd ask here. Do any of you have issues with sessions just randomly clearing in a Windows 2008 Server environment? This problem is completely random and very unpredictable. I have no code that clears sessions except on logout, and I'm not quite sure what could be causing it (well, I have ideas...)
My host, who I've been with for many years (and never had a problem with), is telling me that Windows 2003 is better at managing session variables and that I will likely be rid of this session-clearing issue if I move to a 2003 Server environment. Thing is, I'm already set up and running on IIS 7 with the URL Rewrite module and I'd rather not move or reconfigure URL rewriting. Tech support says the App Pool I am running on is configured properly. My session timeout is set to 60 minutes in Web.config, and my host tells me that the session timeout is set to 60 minutes for my domain.
I could optionally go with an Azure AppFabric Cache for sessions, but I'd rather not pay an extra $50 a month--it's a pretty small, low-income site. I'm currently using a SQL Azure database, but from what I hear, database sessions are not ideal on SQL Azure.
Thoughts?

Are you modifying any files in the web site?
Changes to the folder or file structure of the web site often trigger an app pool recycle, which resets sessions. The workaround is to use a durable session store, such as the SQL Server session state provider.
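For reference, a minimal sketch of what that looks like in web.config (the connection string and timeout here are placeholders, and the session database has to be created first, e.g. with aspnet_regsql.exe -ssadd):

    <system.web>
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=YOUR_DB_SERVER;Integrated Security=SSPI;"
                    timeout="60" />
    </system.web>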

The most likely answer is that your app pool is recycling on you for one reason or another, which will dump your in-process session every time. The proximate causes can be lots of things, especially if app pools are shared. An easy way to see whether your app pool is getting dumped is to take advantage of ASP.NET health monitoring, which can be configured to email you when these events occur.
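As a rough sketch of that last point (the addresses and interval are placeholders, and SMTP has to be configured under system.net/mailSettings), health monitoring can route application lifetime events, which include app pool shutdowns and recycles, to email:

    <system.web>
      <healthMonitoring enabled="true">
        <providers>
          <add name="EmailProvider"
               type="System.Web.Management.SimpleMailWebEventProvider"
               from="app@example.com"
               to="you@example.com"
               buffer="false" />
        </providers>
        <rules>
          <add name="Lifetime events to email"
               eventName="Application Lifetime Events"
               provider="EmailProvider"
               minInterval="00:01:00" />
        </rules>
      </healthMonitoring>
    </system.web>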

Related

ASP.NET Session limit best practice

We're running a PaaS ASP.NET application in an Azure App Service with 3 instances, managing session data out-of-proc in a SQL Server database.
The application is live and we've noticed a large amount of session data for some users following certain paths, e.g. some users have session data upwards of 500k (for a simple site visit with no login, the average session is around the 750-3000 mark, which is what I'd expect).
500k sounds excessive, but I was wondering what is normal in large enterprise applications these days and what the cons are of holding so much data in session.
My initial thoughts would be:
No effect on Web App CPU (possibly a decrease, in fact), because we're not constantly doing queries,
No effect on Web App memory, because we're running out-of-proc,
Large spikes in DTU on the SQL Server session database when garbage collection runs,
The application may be a bit slower because it takes longer to read and write session data between requests,
May not be ideal for users with poor internet connections,
A possible increase in memory leaks if objects aren't scoped correctly.
Does my reasoning make sense or have I missed something?
Any thoughts and advice would be appreciated,
Many thanks.
I totally agree with your reasoning behind using out-of-proc session management in Azure App Service instances. Using in-proc sessions in the cloud is a strict no: the reason to host in the cloud is high availability, which comes from having a distributed environment.
From your question, I assume that speed is a concern for you, as it is for most web applications. To address this, you might consider using Azure Redis Cache.
Here is the documentation for configuring session management with the Azure Redis Cache session state provider:
https://learn.microsoft.com/en-us/azure/redis-cache/cache-aspnet-session-state-provider
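For example, with the Microsoft.Web.RedisSessionStateProvider NuGet package installed, the web.config change looks roughly like this (the host name and access key are placeholders for your own cache):

    <system.web>
      <sessionState mode="Custom" customProvider="RedisSessionProvider">
        <providers>
          <add name="RedisSessionProvider"
               type="Microsoft.Web.Redis.RedisSessionStateProvider"
               host="yourcache.redis.cache.windows.net"
               accessKey="YOUR_ACCESS_KEY"
               ssl="true" />
        </providers>
      </sessionState>
    </system.web>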

Multiple Azure Web App Instances - Inconsistent DB Queries / Data

I have an Azure Web App with autoscaling configured with a minimum of 2 instances. Database is SQL Azure.
A user will make changes to the data, e.g. edit a product's price. The change makes it to the database and I can see it in SSMS. However, after the user refreshes the page, the data may or may not be updated.
My current theory is that it's something to do with having multiple instances of the web app, because if I turn off autoscale and run just 1 instance, the issue goes away.
I haven't configured any sort of caching in Azure at all.
It sounds like what is happening is that the data may or may not appear because it is stored in memory on the worker server (at least temporarily). When you have multiple worker servers, a different one may serve the request, in which case that server would not have the value in memory. The solution is to make sure that your application's code re-fetches the value from the database in every case.
Azure Web Apps has some built-in protection against this, called the ARR affinity cookie. Essentially each request carries a cookie which keeps sessions "sticky", i.e. if a worker server is serving requests for a certain user, that server should continue to receive that user's subsequent requests as well. This is the default behavior, but you may have disabled it. See: https://azure.microsoft.com/en-us/blog/disabling-arrs-instance-affinity-in-windows-azure-web-sites/
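One common way affinity gets turned off (described in that post) is the application sending a custom response header; if you find something like the following in your web.config, that would explain the non-sticky behavior:

    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <add name="Arr-Disable-Session-Affinity" value="true" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>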

Application restart state management

Our ASP.NET web application restarts randomly and kicks users off while they are filling in a big batch process form.
- the users have to log in again and fill everything in afresh
- so, keeping in mind that the application/session restarts randomly, which is the most appropriate technology to use for state management: session state (with MS SQL Server) or the ASP.NET cache?
You need to configure something other than InProc for your ASP.NET session state provider. What you use depends on how durable the session data needs to be. The session state service is fine, but if it goes down it will still affect your application the way InProc does today. Using a database is the most durable method, although it costs some performance.
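As a rough sketch, switching providers is a web.config change (the host and timeout below are placeholders; the ASP.NET State Service must be running on the target machine, and mode="SQLServer" with a sqlConnectionString is the database equivalent):

    <system.web>
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=YOUR_STATE_SERVER:42424"
                    timeout="60" />
    </system.web>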
You'll have to try and test which one is most appropriate for your needs.
You might also want to figure out why your app is crashing and fix it. I'm assuming it's not a scheduled recycle, since you said it was random. Recycling is really a Band-Aid for a badly behaving application.

system.web.caching - At what level is the cache maintained?

I am looking at implementing caching in a .NET web app. Basically... I want to cache some data that is pulled in on every page but never changes in the database.
Is my Cache Element unique to each:
Session?
App Pool?
Server?
If it is per session, this could get out of hand if thousands of people are hitting my site and each cache is ~5k.
If it is per App Pool, and I had several instances of one site running (say, with a different DB backend, all on one server), then I'd need individual App Pools for each instance.
Any help would be appreciated... I think this information is probably out there; I just don't have the right Google combination to pull it up.
By default it is stored in memory on the server. This means that it will be shared among all users of the web site. It also means that if you are running your site in a web farm, you will have to use an out-of-process cache storage to ensure that all nodes of the farm share the same cache. Here's an article on MSDN which discusses this.
"One instance of this class is created per application domain, and it remains valid as long as the application domain remains active" - MSDN

How to deploy website to production with minimal impact to users

I'm trying to find the best server architecture solution for deploying monthly updates to an ASP.NET external public-facing website. What I'm looking for are ways to release a new version of a website with minimal impact to users. Besides deploying the standard way (i.e. stop IIS, copy the new website over the existing website, start IIS), what are some "better" solutions for deployment out there? It would be nice if users kept their sessions and didn't have to see a "Website under maintenance" message during the update.
My server configuration
We have 2 IIS web servers (2003) and are trying to figure out the best way to utilize them for deployments. My first thought was to update the non-active web server with the latest release, then gracefully point the web traffic to that server with minimal impact to users (best case, the user doesn't lose his session). How would you go about "repointing" the web traffic from server 1 to server 2? Changing firewall NAT? Changing DNS records? Some other way? We need to be able to test the live site immediately after we release the new changes (duh).
BTW, we are using NAnt and CruiseControl to automate the builds, and a custom web service to deploy the build to production. So it's all automated with the click of a button.
Could a better solution be achieved using a 3rd server? If so how?
The way we do it:
We have a NetScaler load balancer.
Take one web server out of the load balancer, do all deployments, do an iisreset, and then put it back in the load balancer.
Do the same thing for server 2.
Finally, invalidate the load balancer cache.
Well, there are a couple of things here:
First, consider using a load balancing solution. Windows Server 2003 ships with Windows Load Balancing (WLBS), though it's not the greatest product. It is, though, free. With that, you can point all traffic to one server, update the other, and then do the opposite.
Secondly, you may want to look at how you're working with sessions. HTTP is stateless, which means that as long as you can reconstruct a user's session on any page hit, you should be fine. One ideal step towards this is using ASP.NET Forms Authentication - the cookie it writes isn't tied to an ASP.NET session. Of course, this approach carries some risk - there is a chance users will get an error screen if they hit something JUST AS you're copying files, and then there will be a delay while the app pool refreshes.
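For instance, a forms authentication setup like this (the login path and timeout are placeholders) keeps the login ticket in a cookie rather than in server memory, so users stay authenticated across an app pool recycle; on a two-server farm you would also want a shared machineKey so both servers can decrypt the ticket:

    <system.web>
      <authentication mode="Forms">
        <forms loginUrl="~/Login.aspx" timeout="60" slidingExpiration="true" />
      </authentication>
    </system.web>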
Overall, your better option is load balancing. Even with it, though, consider trying the second approach as well - sessions that can regenerate work well if users fail to stick to one of the servers in the pool.
Just wanted to add this for completeness. At my previous job, we achieved seamless deployments by using the following setup:
A load balancer would point to the production ASP.NET web servers (two in your case, but we had three), and the web servers had their session state set up to pull from a third server dedicated to hosting out-of-proc ASP.NET session state.
To deploy a site, we'd pull one of the servers out of the load balancer, update the files, fire it back up, and place it back into the load balancer pool. Repeat for the rest of the webservers.
Because each web server got its session data from the one central server, taking a web server out did not log out the users on that server.
If we had code changes that were incompatible with the existing session data, we'd wait till a scheduled maintenance window to deploy. Otherwise, users with that session data would get errors till they logged out.
Additionally, since this setup relies on that central session server being up, if you wanted to increase reliability you could change from the out-of-proc state server to SQL-based session servers. You would need several servers replicating the same session database and point the web servers at them. More complicated, but it would reduce site downtime.
