I need some information from you. I have set session.Timeout = 540 in my application. Does that affect my application's performance over time? When the number of users increases, it gets very slow; the response time is more than 2 minutes even for a button click. The site is hosted on a server in an application pool, and I don't know much about application pools. If the session timeout is the problem, I will remove it. Please suggest how I can support more users.
Job numbers, CustomerID, and tasks come from one database. When the user clicks the Start button, the data is saved to another database. I need this to be faster for more users.
I think you have a page (or pages) that does some work that takes a long time, or that, because of some reason or a bug, stays open for longer than usual.
That page keeps the session locked and holds the other pages back from responding, because the session lock serializes all the pages.
Together with the increased timeout, this page ends up locking everything, and that is where your response time of nearly 2 minutes comes from.
The solution is to locate the page with the long-running work and fix it, or make it faster by optimizing the process; or, if that page really must run for a long time, disable the session for that one page (one way to do that is sketched below).
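For a Web Forms page you can do that by setting EnableSessionState="False" (or "ReadOnly") in the @ Page directive of the slow page. Another option, sketched here rather than taken from the question, is to move the long-running work into a handler that simply never asks for session state, so it never takes the session lock. The handler name is made up for illustration:

```csharp
using System.Web;

// Hypothetical long-running endpoint. Because it does NOT implement
// IRequiresSessionState (or IReadOnlySessionState), ASP.NET never takes the
// exclusive session lock for it, so the user's other pages keep responding.
public class LongRunningReportHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // ... do the slow work here (report generation, export, etc.) ...
        context.Response.ContentType = "text/plain";
        context.Response.Write("done");
    }
}
```

You would register such a handler in web.config or expose it as an .ashx file.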
Related:
Web app blocked while processing another web app on sharing same session
What perfmon counters are useful for identifying ASP.NET bottlenecks?
Replacing ASP.Net's session entirely
Trying to make Web Method Asynchronous
Does ASP.NET Web Forms prevent a double click submission?
About server
On the other hand, if your server suffers from hardware limits or a bad setup, here is another answer with points you need to check to make it faster.
Find out where the time is spent
Add a Stopwatch to the method behind the button click that you said takes "more than 2 minutes", so you can find which statement spends the most time.
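A rough sketch of that, assuming C# code-behind; the two method names are placeholders for whatever the click handler really does:

```csharp
using System;
using System.Diagnostics;

public partial class StartPage : System.Web.UI.Page
{
    protected void btnStart_Click(object sender, EventArgs e)
    {
        Stopwatch sw = Stopwatch.StartNew();

        LoadJobData();                 // placeholder: query that loads jobs/tasks
        long afterLoad = sw.ElapsedMilliseconds;

        SaveActivity();                // placeholder: insert into the second database
        long afterSave = sw.ElapsedMilliseconds;

        // Write the timings somewhere you can read them (page trace, log file, ...).
        Trace.Write("timing", "load: " + afterLoad + " ms, save: " + (afterSave - afterLoad) + " ms");
    }

    private void LoadJobData() { /* ... */ }
    private void SaveActivity() { /* ... */ }
}
```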
If it is a DB query that costs the time, check your SQL statement.
Are you using "SELECT COUNT(*)" instead of "SELECT COUNT(Id)"? The * form tends to be slower. Also, avoid "SELECT * FROM ....".
Use cache.
There are many ways to cache, both in ASPX pages and in your business layer.
The OutputCache directive is the easiest way.
Also, cache page content (for example, a blog post) the first time a user visits it.
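Page output caching can be as small as adding <%@ OutputCache Duration="60" VaryByParam="None" %> at the top of the .aspx file. For caching in the business layer, here is a hedged sketch; the key format and the loader method are invented for illustration:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class BlogPostCache
{
    // Returns the rendered post from the ASP.NET cache, loading it from the
    // database only on the first request (and again after the entry expires).
    public static string GetPostHtml(int postId)
    {
        string key = "post:" + postId;
        string html = HttpRuntime.Cache[key] as string;
        if (html == null)
        {
            html = LoadPostFromDatabase(postId);   // hypothetical DB call
            HttpRuntime.Cache.Insert(key, html, null,
                DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
        }
        return html;
    }

    private static string LoadPostFromDatabase(int postId)
    {
        // ... a real implementation would query the database here ...
        return "<p>post " + postId + "</p>";
    }
}
```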
Did you use memory paging?
Be careful when paging a GridView or another list control. If you just set DataSource = xxx and call DataBind(), even with a PagedDataSource, you are most likely paging in memory, which costs a lot of performance. Use stored procedures to do the paging instead (a sketch follows below).
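A sketch of the data-access side; the stored procedure name and parameters are hypothetical, the point is that only one page of rows ever leaves SQL Server:

```csharp
using System.Data;
using System.Data.SqlClient;

public static class JobRepository
{
    public static DataTable GetJobsPage(string connectionString, int pageIndex, int pageSize)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("dbo.GetJobsPage", conn))
        {
            // The procedure is expected to page with ROW_NUMBER() (or OFFSET/FETCH
            // on newer SQL Server) so that only @PageSize rows come back per call.
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@PageIndex", pageIndex);
            cmd.Parameters.AddWithValue("@PageSize", pageSize);

            DataTable page = new DataTable();
            new SqlDataAdapter(cmd).Fill(page);
            return page;
        }
    }
}
```

Bind that DataTable to the GridView and drive the pager yourself, instead of letting the control page the full result set in memory.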
Check your server environment
Where did you deploy the website? Many hosts limit the bandwidth, the IIS connection count, and the CPU time available to your account.
If you have Remote Desktop access to your server, you can watch CPU and memory usage to see whether they are high when many users come to your site. If the site is slow and neither CPU nor memory usage is high, it may be a network bandwidth problem.
Here are some simple steps to narrow down the issue -
1) Get HttpWatch (there's a free Basic version) and check what's really taking time from an end-user perspective. Look at the number of requests, the number of resources downloaded, and the payload. If there is nothing to worry about there, move on to the next step.
2) If it's not the client, then it's usually the processing time on the server. Jump to the DB first, since this is easier to eliminate quickly. Look at how many DB calls are made (run the profiler in staging or dev) and see if there are any long-running queries, missing indexes or stale statistics, and note the I/O. If all is well, move on.
3) Check your app code. You could use the Visual Studio built-in profiler or professional tools such as ANTS. If the code is fine, then it's your network or the external calls that you make; check your network bandwidth. If you still cannot narrow it down, check your environment/hardware.
The best way to get to it is to apply load. You could use a simple tool such as ab.exe (which comes as part of the Apache web server) to make concurrent hits on your server, and run the app and DB profilers in the background to get to the issue.
Hope this helps!
An old ASP.NET Web Forms application, developed in Visual Studio 2010, suddenly does not run anymore, and this message appears in the log file:
Session state has created a session id,
but cannot save it because the response was already flushed by the application.
No new deployment has been made, and no code modifications have taken place.
So far I have not found any solution to this.
What should I check?
Note that the source code is no longer available, so it would be very difficult to change the code and proceed with a new deployment.
Thanks in advance.
Luis
This would suggest that someone might be hitting the site and jumping directly to some URL (and thus code) that, say, does a response redirect to another page or some such.
Remember, when code-behind runs and, say, redirects to another page, in most cases the running code for the current page is terminated, and that is normal behavior.
However, the idea that you are going to debug code and debug a web site when you don't have the code to debug? Gee, I don't see how that's going to work at all. As noted, if this just started, then it sounds like incoming requests are going to pages that don't expect to be hit "first": pages that expect to be called ONLY from other pages in the site, after some session() and other important values have been set up BEFORE such pages are hit.
It is also not clear whether the site is using SQL-based sessions or just in-memory sessions. In-memory can be (and is) faster, but it is not particularly reliable. Now, if you deployed to a new web server or new hosting, session errors can often start to appear, and this is due to the massive difference between cloud-based hosting and older hosting solutions that run on a single server.
Cloud computing is real utility computing, so when you host a web site on such systems, in-memory session() cannot be used anymore, since multiple servers can and will be used to "dish out" web pages. Since more than one server might be used, in-memory session() obviously can't work: a few pages might be served by one server and a few more by another, and session kept in memory is limited to ONE server, since multiple servers don't and can't share their memory with each other.
So this suggests that you want to be sure SQL Server based sessions are being used here. For any kind of server farm, or any system that load-balances across more than one server, you HAVE to use SQL Server based sessions, since in-memory sessions can't work in that kind of environment.
The error could also be due to excessive server load: the session database is often "locked" for a short period of time and can thus become a bottleneck. So for, say, years you might not see an issue, but as load and use of the web site increase, this can become noticeable where in the past it was not. I suppose the database used for storing sessions could be checked or looked at, since, as you note, you don't have the ability to test and debug the code. I would REALLY, really work towards solving this lack of source code for the web site, since without it you have no real means to manage, maintain, and fix issues for that web site.
But abrupt terminations of web pages? As noted, this could be an error triggered in code, so the page never finished what it was supposed to do. And, as noted, perhaps a page that expects some session() values but does not have them, as explained above, could be the problem. It is not clear whether your errors also show which URL this was occurring for.
While nothing seems to have changed, something obviously did.
Ultimately, you need to get that source code, or deal with the people or vendor that supplies the code for that site. If you don't have a vendor and you don't have source code, you are pretty much attempting to work on a car whose hood you can't even open to check what's going on underneath.
So, one suggestion here? Someone is hitting a page that expected some value(s) in session to exist. Often the simple solution is to shove ANY simple dummy value into session so that a session REALLY does get created; then, when the page attempts to save the session(), there is one to save!
In other words, this error often occurs when an attempt is made to save the session, but no session exists. For such pages, as noted, a tiny code change such as session("zoozoo") = "my useless text" will fix this error. So it sounds like session is being lost.
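As a sketch of that idea in C# (the key and value are as meaningless as in the line above; it could equally go in the Page_Load of the entry pages, but putting it in Session_Start touches every session as soon as it is created):

```csharp
using System;
using System.Web;

public class Global : HttpApplication
{
    // Touch the session as soon as it starts, so a session really exists and can
    // be saved later even if a page flushes the response or bails out early.
    protected void Session_Start(object sender, EventArgs e)
    {
        Session["zoozoo"] = "my useless text";
    }
}
```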
As noted, an error on a web page can also trigger an app-pool restart. If the app pool restarts, then session is lost (for in-memory session). With session lost, any page that terminates early AND has also used session() will trigger this error.
So this sounds like the app pool is being restarted and session is being lost (you can google the many reasons why an app pool restarts). Critical to this issue, however, is whether you are using SQL-based sessions or in-memory (server) sessions. It sounds like some code is triggering an error; with that error triggered, the app pool restarts, and with the app pool restarted, in-memory session is blown away. And now, with no session at all, attempts to save the session trigger the exact error message you see. (This is why shoving a dummy value into the session can fix some pages: you can't save a "nothing" session, and if you try, you get that exact error message.)
But, as noted, you can't make these simple code changes anyway, right?
But first, on this issue: are you using memory-based sessions or not? That feature can be set up and configured in IIS, without changes to the code base. So one quick fix might be to turn on SQL Server based sessions. It will cost some web site performance (around 10%), but the increased reliability is more than worth the performance hit.
Another area to look at? Are AJAX calls being made to a page, again without any previous session having been created? Once again, we are down to a change in end-user behavior, possibly users hitting a page first before having logged in or done other things, and again one would see this error crop up.
I have a dedicated server running both IIS 7.5 and SQL Server 2010. Server CPU load is often near 100%. SQL Server does not take too much of it, but the w3wp process is taking a significant amount of CPU (often 70+%).
I'd like to find out, what is causing this pressure:
* Too many requests of static files (a CDN could be added)
* Too many ajax requests (I am thinking about comet/web sockets anyways)
* Single asp.net pages consuming too much processing power (should be easy to optimize)
Where would you start looking to find out where to start optimizing?
The easiest possible way is to profile the app in production. Not sure if that is possible in your case. Some options:
Look at the logs and check the duration of the requests. Long requests are likely to put load on the system.
Remote debug w3wp with Visual Studio and pause the debugger 10 times to see where it stops most often. That is the hot spot.
Use XPerf or PerfView to capture (managed) stacks. This has almost no impact on production performance
A good starting point would be to fire up the development tools (F12 in IE / Chrome) and look at the timings under the network tab. That will show you a waterfall-style diagram for how the page has loaded and should help you identify any particularly slow-loading static files which might be sensibly moved off to a cdn, any unnecessary requests being made, how much time is being spent getting the actual page itself, etc.
After that, profile the application with a performance profiler. A good profiler like ANTS Performance Profiler will let you look at things like execution time / hit counts for different methods, as well as what database queries are being run and how long they’re taking. A new version of ANTS (currently in EAP) will also group that activity by http request so you can see if specific pages need optimisation or are being hit too many times.
You'd also do well to check that caching is working as you intend it so that users aren’t unnecessarily re-requesting pages.
There's also a nice article on ASP.NET performance which you might want to read at http://aspalliance.com/1533_ASPNET_Performance_Tips.7.
Disclaimer: I work for Red Gate which makes ANTS.
I found an easy way to see what's going on on the server.
Nevertheless, the professional way is probably to go and use a profiling tool.
What did I do?
In the IIS console you can get a list of all current worker processes, and if you choose one you can see which requests it is currently handling. That way I was able to see that the process was handling 100 requests in parallel, 70 of which traced back to the same AJAX call.
The immediate solution was to reduce the frequency of that call (from every 10 to every 30 seconds). The next step will be to further optimize the call on the server side since I do have other ajax calls with the same frequency (every 10 seconds) which nearly never showed up in the active requests list since they were so fast.
Probably the easiest way to figure it out would be to install New Relic on the server. The trial lasts 30 days I think so it should give you enough time to get to the bottom of this. It'll show you long-running SQL queries, .NET methods, as well as just about everything else you can think of. It makes it very easy to identify bottlenecks.
By the way, I suggested New Relic because it sounds like your problem is in a production environment. New Relic isn't an incredibly detailed profiler. It gathers enough information to be helpful, but not so much as to slow down the server. That makes it well suited to this purpose.
If, however, you could reproduce the problem in a development environment you might try something like the free Eqatec profiler.
I am just getting into the more intricate parts of web development. This may not be the best place to ask, but: when is it best to add load balancing to a web project? I understand that good or bad design affects how many users you can get to visit a site without it REALLY affecting performance. However, I am planning to code a new project that could potentially have a lot of users, and I wondered whether I should be thinking about load balancing right off the bat. Opinions welcome; thanks in advance!
I should note also that the project will most likely be ASP.NET (Web Forms or MVC, not yet decided) with a backend of MongoDB or PostgreSQL (again, still deciding).
Load balancing can also be a form of high availability. What if your web server goes down? It can take a long time to replace it.
Generally, by the time you need to think about throughput, you are already rich because you have an enormous number of users.
Stack Overflow serves about 10 million unique users a month with a few servers (6 or so). Think about how many requests per day you would have if you were constantly generating 10 HTTP responses per second for 8 busy hours: 10 * 3600 * 8 = 288,000 page impressions per day. You won't have that many users any time soon.
And if you do, you optimize your app to 20 requests per second per CPU core, which means you get 80 requests per second on a commodity four-core server. That is a lot.
Adding a load balancer later is usually easy. Load balancers can tag each user with a cookie so they get pinned to one particular target server. Your app will not notice the difference. Usually.
Is this for an e-commerce site? If so, then the real question to ask is "for every hour that the site is down, how much money are you losing?" If that number is substantial, then I would make load balancing a priority.
One of the more important architecture decisions that I have seen affect this is the use of session variables. You need to be able to provide a seamless experience if your user ends up on different servers during their visit. Session variables won't transfer from server to server, so I would avoid using them.
I support a solution like this at work. We run four (used to be eight) .NET e-commerce websites on three Windows 2k8 servers (backed by two primary/secondary SQL Server 2008 databases), taking somewhere around 1300 (combined) orders per day. Each site is load-balanced, and kept "in the farm" by a keep-alive. The nice thing about this, is that we can take one server down for maintenance without the users really noticing anything. When we bring it back, we re-enable our replication service and our changes get pushed out to the other two servers fairly quickly.
So yes, I would recommend giving a solution like that some thought.
The parameters here that can affect one another and slow down performance are:
Bandwidth
Processing
Synchronization
These have to do with how many users you have, together with the media you want to serve.
If you have to serve a lot of video/files, you need many servers to deliver them. Let's say you do not; the next things to check are the users and the processing.
In my experience, what slows down the processing is the locking of the session. So one big step to speed up processing is to implement fully custom session handling, so that your pages do not lock one another and you can handle many users without issue.
Now, for the next step, let's say you have a database that keeps all the data; to gain from load balancing across many computers, the trick is to keep a local cache of what you are going to show.
So the first idea is to avoid excessive locking that makes users wait for one another, and the second idea is to have a local cache on each computer, built dynamically from the main database data.
ref:
Web app blocked while processing another web app on sharing same session
Replacing ASP.Net's session entirely
call aspx page to return an image randomly slow
Always online
One more parameter: you can build a solution that handles the "one server for all, and all for one" :) style, where you actually use more servers for backup. So if one server goes down for any reason (e.g. for an update and restart), the rest can still work and serve.
As you said, it depends whether and when load balancing should be introduced. It depends on performance and on how many users you want to serve. Load balancing also improves the reliability of your app; it will not stop when one system comes crashing down. If you can see your project growing really big and serving lots of users, I would suggest designing your application so that it can be upgraded to load balancing later, so do not do anything non-standard. Try to steer away from home-made solutions and always follow good practice. If later on you really need load balancing, it should not require changing your app.
UPDATE
You may need to think ahead, but not at the cost of complicating your application too much. Do not get paranoid and prepare everything to work lightning fast 'just in case'. For example, do not worry about sessions: session management can easily be moved to SQL Server at any time, and that is the way to go with load balancing. Caching will also help if you hit bottlenecks in the future, but you do not need to implement it straight away; good design (stable interfaces), separation, and decoupling will allow the cache to be added later on. So again: stick to good practices, do not close doors, but also do not open all of them straight away.
You may find this article interesting.
Users of our legacy web application, written in ASP.NET 2.0, experience a slowdown after using the application for some time (2 hours or so).
For example, clicking a button that performs some calculation and updates an AJAX UpdatePanel takes several seconds, while it is almost instant when they have just logged in.
If they log off and log back on to the web application, it returns to normal speed, every time. The slowdown is per user, not global: it can be fast for one user and slow for the user next to him.
I've looked at the IIS server and there is plenty of free memory in it and the CPU is almost always below 10%.
There are only a dozen users using this web application at a time, so scalability shouldn't be an issue.
The only thing I can think of that would be affected by logging off is session state.
I've already looked at ViewState too, and it is "only" 18 KB when the application is at its slowest, which shouldn't be the bottleneck, especially since the users are on the same gigabit LAN and not remote. It is not any smaller when the application runs fast.
Looking at session state in the code, it isn't used much and no large objects are stored.
According to my tests, it didn't grow bigger than 10 items and 500 bytes after a few minutes.
Even if there were a leak, the server should be able to cope with it considering the amount of memory it has (4 GB, only 300 MB of which is in use).
I've cleared temporary files and it didn't seem to make any significant difference.
Do you think these slowdowns could come from something else than Session State, keeping in mind logging off and back on gets the speed back to normal?
If not, is there any tool you would suggest for profiling session state, or a setting in IIS I should try to change?
What does the browser's memory use look like on the client machines when they experience the slowdown? It could be memory leaks in your JavaScript if you are using lots of AJAX and the browser's memory use continuously climbs.
Have you checked your database? (I assume you're consuming some data with an ajax app)
If you don't close the connection in the Finally block, you can wind up with zombie database connections that slow the app down. I don't know if this is the root cause, but probably worth looking at.
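For example, a using block (which compiles down to try/finally) guarantees the connection goes back to the pool even when the query throws; the query itself is just a stand-in:

```csharp
using System.Data.SqlClient;

public static class TaskQueries
{
    public static int GetPendingCount(string connectionString)
    {
        // 'using' closes and disposes the connection even if ExecuteScalar throws,
        // so no zombie connections are left holding pool slots.
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(Id) FROM dbo.Tasks", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```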
If this is an AJAX-heavy page and the user is staying on the same page (i.e. lots of postbacks to itself), then it's possible that it's just ViewState growing too large, and the slowdown you're experiencing is due to page size.
I would try to recreate the experience locally and monitor the page size/traffic with something like Firebug/Fiddler. If it is a ViewState/page size issue, you could look at moving ViewState to the server side (a sketch follows below).
See this for more info on it:
http://professionalaspnet.com/archive/2006/12/09/Move-the-ViewState-to-Session-and-eliminate-page-bloat.aspx
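One way to do that (a sketch of the approach the linked article describes, not code taken from it) is to override PageStatePersister in a common base page, so ViewState is kept in session state on the server and only a small key travels to the browser:

```csharp
using System.Web.UI;

// Pages that inherit from this base class store their ViewState in session
// state on the server instead of serializing the whole blob into the page.
public class SessionViewStatePage : Page
{
    protected override PageStatePersister PageStatePersister
    {
        get { return new SessionPageStatePersister(this); }
    }
}
```

Keep in mind this trades page size for server memory (or session-store size), so it is worth measuring both before and after.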
I'm running an ASP.NET app in which I have added an insert/update query to the [global] Page_Load. So, each time the user hits any page on the site, it updates the database with their activity (session ID, time, page they hit). I haven't implemented it yet, but this was the only suggestion given to me as to how to keep track of how many people are currently on my site.
Is this going to kill my database and/or IIS in the long run? We figure that the site averages between 30,000 and 50,000 users at one time. I can't have my site constantly locking up over a database hit with every single page hit for every single user. I'm concerned that's what will happen, however this is the first time I have attempted a solution like this so I may just be overly paranoid.
Do it Async.
Create a DLL that handles the update, and in the page load do a fire-and-forget call with the parameters.
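A minimal sketch of that; the class and the insert routine are invented for illustration, and note that work queued this way can be lost if the app pool recycles (a point another answer below makes):

```csharp
using System;
using System.Threading;

public static class ActivityLogger
{
    // Fire-and-forget: queue the insert on a thread-pool thread so the page
    // never waits on the logging database.
    public static void LogPageHit(string sessionId, string url)
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            try
            {
                SaveHit(sessionId, url, DateTime.UtcNow);   // hypothetical DB insert
            }
            catch
            {
                // Swallow (or log): a lost activity row must never break a page.
            }
        });
    }

    private static void SaveHit(string sessionId, string url, DateTime when)
    {
        // ... open a connection and INSERT the row here ...
    }
}
```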
Insert-Based designs have less locking than Update-Based designs.
So if a user logged in and then logged out, in an insert-based design you would have multiple rows, each with a SessionID, one per activity; whereas in an update-based design you would have SessionId, LoginTime, and LogoutTime columns, and you would update the LogoutTime based on the SessionId.
I have seen many more locking and contention problems caused by update activity than by insert activity.
Activities such as counting and linking logins to logouts take more complex queries and a little more in resources.
It goes without saying that your queries, especially the ones that run on every page, should be as fast as possible so that the site doesn't appear slow to users.
To keep track of how many users are currently on your site, you could use performance counters. What you describe, though, sounds more like full-fledged logging of every page hit.
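For example, the ASP.NET counters can be read like this (the category, counter, and instance names below are the usual ones, but they can differ per machine and framework version, so treat this as a sketch):

```csharp
using System;
using System.Diagnostics;

class ShowActiveSessions
{
    static void Main()
    {
        // Reads the ASP.NET "Sessions Active" counter summed across applications.
        using (PerformanceCounter sessions = new PerformanceCounter(
            "ASP.NET Applications", "Sessions Active", "__Total__", true))
        {
            Console.WriteLine("Active sessions: " + sessions.NextValue());
        }
    }
}
```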
Let's say you really have 50k users connected at any one time.
As long as you don't have contention between the updates (trying to lock the same record), a database can handle a very high number of inserts and updates. You need to do some capacity planning to ensure the load can be carried. 50k users visiting a page every minute gives you 50k inserts and 50k updates per minute, roughly 850 inserts and 850 updates per second, which have to commit (flush the log). Does your DB I/O subsystem support that kind of write pressure, in addition to responding to all the requests (reads)?
Also, 50k users doing 1 page hit per minute adds up to 72 million hits per day, i.e. 72 million logging inserts. At that rate you need to carefully plan the size of the database and consider what kind of analysis you'll do on the collected data, since ad-hoc querying of 2 billion rows (one month of data) will get you nowhere fast (actually... quite slowly).
Doing it async can give you some relief over very short spikes, but not in the long run. If your DB system cannot handle the load, then doing async calls will just create a backlog queue in the application process (in the ASP.NET app pool), and this will grow until it runs out of memory, at which point the ever-vigilant IIS will 'recycle' the app pool, thus losing all pending async updates.
I think updating the database in the session-begin and session-end events will do the job; that will reduce the number of statements dramatically.
I think it makes no difference whether you track hits or session begin/end. With hits you'll also need additional logic to subtract inactive users.
EDIT: Session_End is not always fired. I would suggest calling an update statement/stored procedure in the session-begin event (in addition to the other insert statement) that cleans up invalid sessions.
I don't think calling this "fix routine" in every page-load event is necessary, because you can't count the "current number of visitors" exactly anyway.
I would keep this in Application state instead, if possible. On Application_Start, create some data structure saved to Application state that you can update from anywhere in your application: session start, page load, wherever. Keep it out of the database. It sounds like you are just using it to track "currently online" info anyway.
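A rough sketch of that in Global.asax; keep in mind Session_End only fires for in-process session state and only after the timeout elapses, so the number is approximate:

```csharp
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        Application["OnlineUsers"] = 0;
    }

    protected void Session_Start(object sender, EventArgs e)
    {
        Application.Lock();
        Application["OnlineUsers"] = (int)Application["OnlineUsers"] + 1;
        Application.UnLock();
    }

    protected void Session_End(object sender, EventArgs e)
    {
        // Only raised for InProc session state, after the session times out.
        Application.Lock();
        Application["OnlineUsers"] = (int)Application["OnlineUsers"] - 1;
        Application.UnLock();
    }
}
```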
If you have multiple instances of your app, or if there is a requirement to maintain historical info beyond the IIS logs, this won't work obviously. Go with chris' fire-and-forget solution in that case.
What's wrong with IIS Logs?
2009-05-01 12:30:31 207.219.27.35 GET /assocadmin/ibb-reg.asp - usernameremoved 544.566.570.575 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.0;+SLCC1;+.NET+CLR+2.0.50727;+Media+Center+PC+5.0;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30618) 200 0 0 40058
EDIT: I'd like to close this answer, but I want the comments to stay. Consider this answer withdrawn.
How about adding a small object to the session?
Something like LoggedInUserFlag:IDisposable
In the constructor, increment your counter however you decide to implement it.
Then in the Dispose method, decrement the counter.
This way, regardless of how the session is ended, the counter will always be (eventually) decremented.
see:
http://weblogs.asp.net/cnagel/archive/2005/01/23/359037.aspx
for info on using IDisposable.
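A hedged sketch of what such a class might look like (this is my reading of the idea, not code from the linked post); the finalizer is there so the count still drops when an expired session's items are simply collected without Dispose ever being called:

```csharp
using System;
using System.Threading;

public class LoggedInUserFlag : IDisposable
{
    private static int _count;
    private bool _done;

    // Current number of live flags, i.e. an approximation of logged-in users.
    public static int CurrentCount
    {
        get { return _count; }
    }

    public LoggedInUserFlag()
    {
        Interlocked.Increment(ref _count);
    }

    public void Dispose()
    {
        Decrement();
        GC.SuppressFinalize(this);
    }

    ~LoggedInUserFlag()
    {
        // Covers the case where nobody calls Dispose: the counter is still
        // decremented when the garbage collector finalizes the object.
        Decrement();
    }

    private void Decrement()
    {
        if (!_done)
        {
            _done = true;
            Interlocked.Decrement(ref _count);
        }
    }
}
```

In Session_Start you would then do something like Session["flag"] = new LoggedInUserFlag();.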
I am not an ASP guy at all, but rather than logging all that other info, what about just inserting their IP address?
If their IP address is already in there, update a last_seen timestamp, and on each refresh just delete any row older than 10 minutes?
This is how I would take a shot at it. It is much more space-efficient, but I am not sure about doing that much checking and deleting on such a high-profile site.
As a direct answer to your question, yes, running a database query in-line with every request is a bad idea:
Synchronous requests will tie up a thread, which will reduce your scalability (fewer simultaneous activities)
DB inserts (or updates) are writes to the DB, which will put a load on your log volume
DB accesses shouldn't be required in a single server / single AppPool scenario
I answered your question about how to count users in the other thread:
Best way to keep track of current online users
If you are operating in a multi-server / load-balanced environment, then DB accesses may in fact be required. In that case:
Queue them to a background thread so the foreground request thread doesn't have to wait (see the sketch after this list)
Use Resource Governor in SQL 2008 to reduce contention with other DB accesses
Collect several updates / inserts together into a single batch, in a single transaction, to minimize log disk I/O pressure
Return the current count with each DB access, to minimize round-trips
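A rough sketch of the first and third points combined (background queueing plus batching); the names are invented, and a production version would also flush on application shutdown:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class HitLogger
{
    private readonly List<string> _pending = new List<string>();
    private readonly object _gate = new object();
    private readonly Timer _flushTimer;

    public HitLogger()
    {
        // Flush queued hits every 5 seconds on a background thread, so request
        // threads never wait on the logging database.
        _flushTimer = new Timer(delegate { Flush(); }, null,
            TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
    }

    public void Enqueue(string sessionId, string url)
    {
        lock (_gate)
        {
            _pending.Add(sessionId + "\t" + url + "\t" + DateTime.UtcNow.ToString("o"));
        }
    }

    private void Flush()
    {
        List<string> batch;
        lock (_gate)
        {
            if (_pending.Count == 0) return;
            batch = new List<string>(_pending);
            _pending.Clear();
        }
        SaveBatch(batch);
    }

    private void SaveBatch(List<string> batch)
    {
        // Hypothetical: write the whole batch in one transaction (e.g. one multi-row
        // INSERT or a table-valued parameter) to minimize log-disk I/O pressure.
    }
}
```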
In case it's of any interest, I cover sync/async threading issues and the techniques above in detail in my book, along with code examples: Ultra-Fast ASP.NET.