AppFabric caching: failure exceptions = GetAndLock requests for session state - ASP.NET

I'm using the session provider in an ASP.NET app with a 3-host AppFabric cluster.
The cluster is version 3 and is running on Windows Server 2008.
My sessions cache has secondaries set to 1 and min secondaries set to 0.
When I look at the cache statistics I notice a very large (disproportionate) number in the miss count category; in fact it almost equals the request count. Given that, I decided to look at the performance counters to figure out why the session provider doesn't seem to hold the objects correctly, or why it keeps missing.
What I found was that the GetAndLock requests/sec counter is identical to the failure exceptions/sec counter. It is also running constantly, which isn't normal considering there is only so much traffic being generated by our staff. The object count isn't large, but the rejection rate is clearly much higher than the number of objects that should be coming out of the cache. I'm not writing or modifying much in the session; for the most part it doesn't change, yet I'm clearly getting a significantly larger number of requests than my users could create.
Any help is welcome.
PS.
Ideally I'd love to know what these failure exceptions actually say, but there seems to be no way to capture them.
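For what it's worth, one way to see what those failure exceptions contain is to make the same GetAndLock call the session provider makes from a small test harness and catch the DataCacheException yourself. This is a rough sketch, assuming the Microsoft.ApplicationServer.Caching client with the cache client section already in app.config/web.config; the cache and key names are made up. If I remember right, GetAndLock on a key that does not exist throws rather than returning null, which would make every miss show up as a failure exception.

    using System;
    using Microsoft.ApplicationServer.Caching;

    class GetAndLockProbe
    {
        static void Main()
        {
            // DataCacheFactory reads the dataCacheClient section from app.config/web.config
            var factory = new DataCacheFactory();
            DataCache cache = factory.GetCache("SessionCache"); // hypothetical name - use your session cache

            DataCacheLockHandle lockHandle;
            try
            {
                // Same style of call the session provider issues for each request
                object item = cache.GetAndLock("some-session-key", TimeSpan.FromSeconds(30), out lockHandle);
                cache.Unlock("some-session-key", lockHandle);
            }
            catch (DataCacheException ex)
            {
                // ErrorCode/SubStatus are what the failure exceptions counter is counting
                Console.WriteLine("{0} / {1}: {2}", ex.ErrorCode, ex.SubStatus, ex.Message);
            }
        }
    }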

Related

Can multiple requests affect users in a single IIS instance?

I'm having a problem with my application. It's an ASP.NET application set up on IIS 10.
Let's say one system page is accessible by 20 users. The page works perfectly (no logical errors in the code): every action works and delivers the values the users expect.
The problem is that whenever someone requests, say, the same method as another user at the same time (with different values), the application randomly throws an error to one of these users. We've checked the error logs and all of them are "index out of range" errors, which we had never seen on our QA server.
I then thought to test that exact scenario (submitting different values as another user at the same time) and saw it happen for the first time on the QA server. We've since managed to reproduce the error multiple times.
While we don't rule out the possibility that this could be another issue, has anyone else experienced something like this?
The question is: can IIS handle the same request multiple times at the same time, within the same instance, without any trouble? Does it run on multiple threads or something like that?
Thanks for taking the time to answer this; let me know if you need any more info.
Sticking to your question:
Yes, IIS can handle that very easily (and efficiently as well).
As for your application's problem, without the code I can't point to the cause, but you may want to consider a few points:
Is it happening for just one method or for all of them? If it's just one, that method is probably using state that can also be touched by another user.
You may be using an array or list that is null or empty for another user. For example, one user has a first name followed by a last name, but another user didn't fill in the last name, and you are using that last-name property.
You may be holding onto HttpContext and trying to reuse it across different users' requests.
You may be using types that are not thread safe.
These are the likely cases, but without code we can't assume; the sketch below shows what the shared-state case looks like.
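As a minimal, hypothetical sketch of that kind of bug (the controller and field names are made up), here is shared state that is perfectly fine for one user and only misbehaves when two requests hit it at once:

    using System.Collections.Generic;
    using System.Web.Mvc; // illustrative; the same applies to WebForms or Web API

    public class OrdersController : Controller
    {
        // BUG: a static field is shared by every request in the worker process,
        // so two users calling this action at the same time mutate the same list
        private static List<string> items = new List<string>();

        public ActionResult Add(string value)
        {
            items.Clear();            // user A clears the list...
            items.Add(value);         // ...while user B may be adding at the same instant...
            return Content(items[0]); // ...so this index (or the list's internals) can throw an out-of-range exception
        }
    }

Each request looks correct in isolation; the error only appears under concurrency, which matches the behaviour you are describing.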
About your problem: for multiple requests from different users, IIS will pick up a thread from the application pool for each request. Multiple requests from the same user will run one at a time and affect only that user's instance, unless the instance or resource is shared and your code does not perform any locking.
IIS, like most web servers, uses threads to process requests, so multiple requests will execute in parallel unless you place a lock. A web server usually has a minimum and a maximum number of worker threads, which are adjusted according to the CPU and memory of the current hardware. If resources are exhausted, new requests are queued until resources become available.
So what you probably need to do is modify the application code to take multithreading and synchronization into consideration, for example as sketched below.
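As a sketch of what "taking synchronization into consideration" can look like (names are made up), either guard the shared state with a lock or switch to a thread-safe type:

    using System.Collections.Concurrent;

    public static class SharedState
    {
        // Option 1: protect shared mutable state with a lock
        private static readonly object sync = new object();
        private static int count;

        public static int Increment()
        {
            lock (sync)
            {
                return ++count;
            }
        }

        // Option 2: use a thread-safe collection instead of a plain List/Dictionary
        public static readonly ConcurrentDictionary<string, int> PerUser =
            new ConcurrentDictionary<string, int>();
    }

For example, SharedState.PerUser.AddOrUpdate(userName, 1, (k, v) => v + 1) is safe to call from concurrent requests, whereas the same update on a plain Dictionary is not.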

How to prevent proxy timeouts with SQL Server Reporting Services

We have a system running Windows Server 2008 R2 x64 and SQL Server 2008 R2 x64 with SSRS installed/configured. This is a shared reporting server used by a large number of people, with some fairly large, inefficient databases (400-500 GB of data or so), and these users generate ad-hoc reports based on a reporting model that sits on top of the aforementioned databases. Note that the users log on and are identified via NTLM when running reports.
Most reports are quick, but if you run a report covering 1 or 2 years' worth of data, it can take a while to return (5 minutes or so). This is fine for most users; however, some of the users are stuck behind a proxy, which has a connection timeout set at 2 minutes. As SSRS 2008 R2 does not seem to send back a keep-alive signal (confirmed via Wireshark), when running one of these long reports the proxy thinks the connection has died, so it just gives up and kills the connection. This gives the user a 401 or 503 error and obviously cancels the report (the incorrect error is a known bug in SSRS which Microsoft refuses to fix).
We're getting a lot of flak from the users about this, even though it's not really our issue... so I am looking for a creative solution.
So far I have come up with:
1) Discovering some as-yet-unknown setting for SSRS that can make it keep the connection alive.
2) Installing our own proxy between the users and our reports server, one which WILL send a keep-alive back (not sure this will work and it's a bit hacky, just thinking creatively!).
3) Rewriting our reports databases to be more efficient (yes, this is the best solution, but also incredibly expensive).
4) Asking the experts :) :)
We have a call booked in with Microsoft Support to see if they can help - but can any experts on Stack help out? I appreciate that this may be a better question for server fault (and I may post it there) but it's a development question too really :)
Thanks!
A few things:
A. For SSRS overall, at the service level:
I personally use a keep-alive service, as I believe the default recycle interval is 12 hours for the SSRS server. I use a tool someone turned me onto called 'VisualCron' that can run many task processes automatically, but you can also just make the call from a WCF service or similar. Basically, I know the first report a user runs for the day is generally slow; usually you need to hit http://(servername)/ReportServer to keep it alive.
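If you don't want another tool, a tiny scheduled-task ping does the same job. A rough sketch, assuming Windows/NTLM auth as in the question (replace the server name with your own and run it from Task Scheduler on whatever interval you like):

    using System;
    using System.Net;

    class ReportServerKeepAlive
    {
        static void Main()
        {
            // Hypothetical URL - substitute your report server name
            var request = (HttpWebRequest)WebRequest.Create("http://servername/ReportServer");
            request.UseDefaultCredentials = true; // NTLM, as described in the question
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Keep-alive ping returned {0}", response.StatusCode);
            }
        }
    }

Note this keeps the SSRS service warm; it does not by itself stop the user's proxy from timing out a single long-running report request.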
B. For caching report-level items:
If this does not help, I would suggest caching datasets where possible. Some people need data that is up to the moment, but for a lot of people that is not the case. You can create a shared dataset in SSRS and then cache it on a schedule. So if you have domain-like tables that only need to be updated once in a blue moon, put them there; same with data that arrives nightly or in batches. If you are a transactional shop that needs up-to-the-moment data this may not help, but for batch-based businesses it can help tremendously.
You can also cache the reports themselves as a continuation of this. Under the 'Manage' dropdown for a report on the /Reports landing page you can set the data to refresh on a specific schedule. You can also set up a snapshot, which is an extension of this: it executes with some default parameters on a schedule and is a copy of the report as it was when it ran.
You mention ASP.NET, so I am not certain how much of this will work if you are doing all of this through a site you are setting up internally as a pass-through. But you could also email or save files on a schedule through SSRS's subscription service.
C. Change how you store your data for reporting.
You can create a report warehouse of selected item-level values from your queries: a small database holding just a few recent years of data and only certain fields from certain tables. Then index it to death and report off of that. In my experience this method flies in terms of performance, but it does take extra overhead to set up. Generally most companies will whine about this, but it often takes a single day to set up; then you create one SQL Server Agent job (set up through SSMS) or an SSIS package that refreshes it nightly, and you don't worry about it. I like this method because I know my data is not being reported off of production and is isolated.

Cross-server In-memory data (as variable) per user or global (for all users)

My question is about aggregated data that needs fast access across several servers on Amazon EC2. In an ASP.NET application I would normally store that data in an Application["somevar"] variable so it can be accessed quickly (in memory) by all users.
The problem starts when I want that aggregated data to be gathered and have the same value on all servers. If I deploy two servers, the user might be sending data to a different server on each request (the servers sit behind a load balancer or Elastic Beanstalk), and if, for example, I count the number of times the user asked for the page, each server's Application variable will have a different value.
For example:
Server 1:
Application["counter1"] = 120
Server 2:
Application["counter1"] = 130
What I want is a variable that is the same on all servers. The reason I want the data in an Application-like variable is that I want it in memory for fast access; later I might write that data to the database.
What I want to know is how I can achieve this. I thought about using Amazon ElastiCache: even if I have 10 servers under the load balancer, I can access the ElastiCache value via the API, and it doesn't matter from which server I access the memcached variable, it will get/set the same value, so I can achieve my goal of keeping a cross-server global variable.
I wanted to know if this is good practice and whether there is a better way to implement such a feature.
I am developing my application in ASP.NET C# with MySQL. Also take into consideration that some of the aggregated data should be written to the database; I buffer it to prevent a lot of writes at the same time, and only write the data out after it reaches, for example, 20 updates.
Just to clear up a few things. First, let's make sure we understand how to use ElastiCache. The ElastiCache API doesn't give us any CRUD operations on the cache cluster; the API from Amazon is strictly for managing the servers and configuration. You will need to use a memcached library for .NET to connect to the cluster. Using a cache like memcached is a good solution for your first problem: it will easily and quickly store simple application variables in a distributed environment. Using a cache is generally good practice, even with smaller applications.
I'm not sure how many users you have or how many you expect, but one thing I've learned over my years of programming is that over-optimization is usually a bad idea. Over-optimization is when you start to optimize your code before it's really necessary. Take your proposed optimization, for example. We know that making 1 write to the database is quicker than making 20 writes, generally speaking of course. However, unless your database is the bottleneck in your application, implementing such a feature introduces a significant amount of complexity for no immediate benefit. If a memcached cluster server crashes, which it will, then the data waiting to be written to the database is lost. And if you really do have a lot of users, then you have to start thinking about concurrency and locks on the memcached items.
Without knowing more about your application I can't make any real recommendations, except to say: make sure your optimizations are actually required before you spend time increasing the complexity of your application for nothing.
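As a minimal sketch of what that looks like, assuming the Enyim memcached client (one common .NET memcached library) with the ElastiCache node endpoints configured in the enyim section of web.config; the key names here are just illustrative:

    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    public static class CounterCache
    {
        // MemcachedClient reads the ElastiCache node endpoints from web.config
        private static readonly MemcachedClient Client = new MemcachedClient();

        public static ulong CountHit()
        {
            // Every server behind the load balancer increments the same value
            return Client.Increment("counter1", 1, 1);
        }

        public static void Example()
        {
            // Plain get/set works for non-numeric values too
            Client.Store(StoreMode.Set, "somevar", "value visible to all servers");
            string somevar = Client.Get<string>("somevar");
        }
    }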

Classic ASP 'Requests Executing' never greater than 1

We have a complex app that serves AJAX JSON streams (using ADO to grab the data) from brief ASP servlets. Any given session can fire off 10-20 of these requests simultaneously. We hit a significant performance problem much earlier than we expected as load built (the server is a dual-Xeon, RAID 5, 4 GB, etc.). Sleuthing around in perfmon, we noticed that the 'Requests Executing' figure is perpetually stuck at 1; it never gets any higher. Research indicates that numbers of 20-50 are not uncommon. Requests Queued hovers around 10-20 and Wait Time climbs as well.
We have fiddled with ASPProcessorThreadMax, setting it to 40 from the default of 25, with no effect. The server seems to be able to work only a single request at a time, which, needless to say, won't do. I can't find anything that describes this particular problem. Any help is greatly appreciated.
The ASP Session object is constrained to a single-threaded apartment (STA). As a result, requests to ASP scripts for the same session can only be processed sequentially.
An additional reason why you might only ever see 1 executing ASP script, even across multiple sessions, is when debugging has been enabled for ASP. This causes ASP processing to ignore ASPProcessorThreadMax and behave as if it were set to 1.
To eliminate the problem, ensure debugging is not enabled and turn off "Enable Session State". If you are using the Session object in your code you will need to find an alternative, like DB-backed state.
However, how many active concurrent sessions are you expecting in live production? Perhaps the overall user experience will not truly be impacted by the serialisation of requests per session.

SQL session timeout recommendation

One of my applications uses SQL session state, and the timeout is currently set to 20 minutes. My question is: since this is stored in the database and not in server memory, I should be able to increase the timeout without any significant performance issues, right?
I don't really understand the importance of the timeout for the database session state scenario, since the database should easily be able to handle a lot of sessions.
I think the timeout's relevance is more for public-facing websites where you could potentially get a lot of hits and fill up your database fairly quickly. That being said, infinite isn't exactly what you want either...
I was looking for confirmation of your opinion, too: that if hard-drive space is cheap, I should be able to have 8-hour sessions in SqlSessionState without noticeable performance issues (beyond what 20-minute SQL Server sessions already cause), given a medium-sized, office-level intranet application.
Just try to keep in mind that the advice about session timeouts deals with how many users you can handle at once, and how likely it is that users will start some work, get interrupted for a long time, and need to continue.
And finally, if you are storing authentication tokens or roles in session, you may want to expire those more often, to check that the user is still a valid user and still has those roles.
The length of a session should be determined by the functionality (e.g. online banking would tend toward a shorter timeout, while a site like SO allows a longer period to type up an entry), not by the implementation mechanism.
Using out-of-process mode allows the session context to survive IIS recycles, and uses less of the memory consumed directly by IIS itself. But that has no bearing on whether a session should last 8 hours or 5 minutes.
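For reference, the timeout itself is just configuration. A minimal sketch of bumping it to the 8-hour figure discussed above (the value is in minutes; setting it in Global.asax and setting timeout="480" on the sessionState element in web.config are equivalent here):

    // Global.asax.cs (System and System.Web are already referenced by the standard template)
    protected void Session_Start(object sender, EventArgs e)
    {
        // 480 minutes = 8 hours; applies to the SQL-backed session as well
        Session.Timeout = 480;
    }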
