Web application locks up and CPU usage reaches 100% - asp.net

We have a web application running with around 100 users logged in, and all clients are connected to the server using WebSync. I have a requirement to keep the session alive at all times, so I regenerate the session whenever it is about to expire.
But after 3 or 4 days, the CPU reaches 100% and the application locks up, and we then need to restart the server to get it working again.
Thanks in advance for any solutions.

Why don't you just extend the session duration to be extremely long instead of regenerating it?
Have you run a profiler against the server when it reaches 100% CPU? This should tell you which methods/classes are being run, and on how many different threads. With that information you can figure out why your application is running those methods/classes across what I'm guessing is a lot of threads.

We've got plenty of clients using WebSync with tens of thousands of concurrent connections (and our On-Demand cluster sits at multiple thousands of users nonstop every day as well), so if you're seeing a CPU lockup, more than likely you've got a threading issue in your code, probably in one of your events (assuming it's related to the WebSync code at all).
Don't forget that the WebSync events are all static, so if you're using shared resources, you'll need to manage them accordingly (i.e., you have to count on the fact that they're multi-threaded). All the WebSync methods themselves are thread-safe, but if you've got shared state in your own events, you'll need to manage that yourself.
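To illustrate the kind of hazard meant here, below is a minimal C# sketch of guarding shared state touched from a static event handler. The PresenceTracker class and OnClientPublish method are hypothetical stand-ins, not part of the WebSync API; the point is only that anything shared across these handlers must be safe for concurrent access.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical example - not the WebSync API itself. Static event
// handlers can fire on many threads at once, so shared state must
// be thread-safe.
public static class PresenceTracker
{
    // ConcurrentDictionary handles the synchronization for us; a plain
    // Dictionary here could corrupt itself under concurrent writes.
    private static readonly ConcurrentDictionary<string, DateTime> LastSeen =
        new ConcurrentDictionary<string, DateTime>();

    // Imagine this being called from a (static) publish event handler.
    public static void OnClientPublish(string clientId)
    {
        LastSeen[clientId] = DateTime.UtcNow;
    }

    // Avoid holding locks around slow work in handlers - that is exactly
    // the kind of contention that pins the CPU and stalls requests.
    public static int CountActive(TimeSpan window)
    {
        var cutoff = DateTime.UtcNow - window;
        int count = 0;
        foreach (var pair in LastSeen)
        {
            if (pair.Value >= cutoff) count++;
        }
        return count;
    }
}
```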
Feel free to chat with us directly though, as @Anton suggested!

As we have seen before, a Windows Update running in the background may also be the reason for 100% CPU.

Related

CaliEventHandlerDelegateProxy leaking

We have a problem on our website: seemingly at random (every day or so, up to once every 7-10 days), the website becomes unresponsive.
We have two web servers on Azure, and we use Redis.
I've managed to run dotMemory and catch it when it crashes, and what I observe under "Event handlers leak" is that two items increase in count into the thousands before the website stops working. Those two items are CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy. Once the site crashes, we get lots of Redis exceptions saying it can't connect to the Redis server. According to the Azure Portal, our Redis server load never goes above 10% at peak times, and we're following all best practices.
I've spent a long time going through our website ensuring that there are no obvious memory leaks, and have patched a few cases that went under the radar. Anecdotally, these fixes seem to have improved the website's stability a little. Things we've checked:
All IDisposable objects are now wrapped in using blocks (we were already fairly strict about this, but we did find a few objects not disposed properly)
Event handlers are unsubscribed - there are very few in our code base (see the sketch after this list)
We use WebUserControls pretty heavily. Each one had the current master page passed in as a parameter; we've removed that dependency, as we thought it might prevent the GC from collecting the page.
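For the event-handler point above, here is a minimal sketch of the subscribe/unsubscribe pairing that keeps a long-lived publisher from pinning a page or control in memory. PriceFeed and OrdersControl are hypothetical names:

```csharp
using System;
using System.Web.UI;

// Hypothetical long-lived static publisher.
public static class PriceFeed
{
    public static event EventHandler PriceChanged;

    public static void Raise()
    {
        var handler = PriceChanged;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

// Hypothetical control: if Page_Unload didn't unsubscribe, PriceFeed
// would keep every past instance of this control (and its page) alive.
public partial class OrdersControl : UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        PriceFeed.PriceChanged += OnPriceChanged;
    }

    protected void Page_Unload(object sender, EventArgs e)
    {
        PriceFeed.PriceChanged -= OnPriceChanged;
    }

    private void OnPriceChanged(object sender, EventArgs e)
    {
        // refresh prices...
    }
}
```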
Our latest issue is that the web server runs fine until we run dotMemory and attach it to the w3wp.exe process, which causes the CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy counts to increase rapidly until the site crashes! So the crash is reproducible just by running dotMemory. (Screenshot of the dotMemory session omitted.)
I'm at a loss now. I believe I've exhausted all possibilities of memory leaks in our code base, and our "solution" is to have the app pools recycle every few hours to be on the safe side.
We've even tried upgrading Redis to the Premium tier, and upgraded all the drives on the web servers to SSDs, neither of which appears to have helped.
Can anyone shed any light on what might be causing these issues?
All IDisposable objects are now wrapped in using blocks (we were already fairly strict about this, but we did find a few objects not disposed properly)
We can't say a lot about the crash without more information about it, but I have some speculations.
I see 10,000 (!) undisposed objects being handled by the finalization queue. Let's start with them: find all of them and add the missing Dispose calls in your app.
I would also recommend checking how many system handles your application uses. There is an OS limit on the number of handles, and once it is exceeded, no more file handles, network sockets, etc. can be created. I recommend this especially given the number of undisposed objects.
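As a starting point for that check, the current process's handle count is exposed directly by .NET; a number that only climbs under steady load usually means files, sockets, or wait handles are not being disposed. A minimal sketch:

```csharp
using System.Diagnostics;

public static class HandleMonitor
{
    // Returns the OS handle count of the current (w3wp) process.
    // Log this periodically: a count that only ever grows points at
    // undisposed files, sockets, or wait handles.
    public static int CurrentHandleCount()
    {
        using (var self = Process.GetCurrentProcess())
        {
            return self.HandleCount;
        }
    }
}
```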
Also, if you see timeouts accessing Redis, get a performance profiler and find out why. I recommend JetBrains dotTrace in Timeline mode to profile your app; it will show thread sleeping, thread contention, and much more information that will help you find the root of the problem. You can use the command-line tool to collect the profile data, so that you don't have to install the GUI application on the server.
it causes the CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy event leaks to increase rapidly
dotMemory doesn't change your application code and doesn't allocate any managed objects in the profiled process. The Microsoft Profiling API injects a DLL (written in C++) into the profiled process; this is the part of dotMemory named the Profiling Core, which plays the role of the "server" (standalone dotMemory, written in C#, is the client). The Profiling Core does some work on the gathered data before sending it to the client side; this requires some memory, which is allocated, of course, in the address space of the profiled process, but it doesn't affect managed memory.
Memory profiling may affect the performance of your application. For example, the Profiling API disables concurrent GC while the application is being profiled, and collecting memory allocation data can significantly slow your application down.
Why do you think that CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy instances are allocated only under dotMemory profiling? Could you please describe how you determined this?
Event handlers are unsubscribed - there are very few in our code base
When dotMemory reports an event handler as a leak, it means there is only one reference to it - from the event source - and there is no longer any way to unsubscribe from that event. Check all of these leaks, find the ones in your code, and look at how they happened. In any case, only 110.3 KB is retained by these objects, so what makes you conclude that your site crashed because of them?
I'm at a loss now, I believe I've exhausted all possibilities of memory leaks in our code base
Take several snapshots over a period when memory consumption is growing, open a full comparison of some of these snapshots, look at all the survived objects that should not have survived, and find out why they survived. This is the only way to prove that your app doesn't have a memory leak; reading the code doesn't prove it, sorry.
I hope that if you perform all the activities I've recommended (performance profiling; full snapshot and snapshot-comparison investigation, not only the inspections view; checking why there is such a huge number of undisposed objects), you will find and fix the root problem.

ASP.NET website runs slow when the number of users increases

I need some information from you. I have set Session.Timeout = 540 in the application. Does that affect application performance over time? When the number of users increases, it gets very slow - response time of more than 2 minutes, even for a button click. It is hosted on a server in an application pool, and I don't know much about application pools. If the session timeout is the problem, I will remove it. Please suggest a way to support more users.
Job numbers, customer IDs, and tasks come from one database. When the user clicks the Start button, the data is saved to another database. I need this to be faster for more users.
I think you have one or more pages that do work that takes time, or that (for some reason, or because of a bug) stay open longer than usual.
Such a page keeps the session locked, and it holds the remaining pages back from responding, because the session lock serializes all the pages.
Now, combined with the increased timeout, this page locks everything, and that is where your response time of nearly 2 minutes comes from.
The solution is to locate the page with the long-running work and fix it, or make it faster by optimizing the process; or, if that page really must run for a long time, disable the session for that one page.
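In ASP.NET Web Forms this is a one-line change in the page directive. A page declared with read-only session state takes a non-exclusive lock and no longer blocks other requests in the same session; EnableSessionState="False" removes session access entirely. Report.aspx is a hypothetical page name:

```aspx
<%-- Hypothetical long-running page: read-only session access means it
     no longer holds the exclusive session lock that stalls other pages. --%>
<%@ Page Language="C#" EnableSessionState="ReadOnly"
    CodeBehind="Report.aspx.cs" Inherits="MyApp.Report" %>
```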
Related:
Web app blocked while processing another web app on sharing same session
What perfmon counters are useful for identifying ASP.NET bottlenecks?
Replacing ASP.Net's session entirely
Trying to make Web Method Asynchronous
Does ASP.NET Web Forms prevent a double click submission?
About server
On the other hand, if your server suffers from hardware issues or a bad setup, here is another answer with points you need to check to make it faster.
Find out where the time is spent
Add a Stopwatch to the method behind the button click that you said takes "more than 2 minutes", so you can find which statement spends the most time (see the sketch below).
If it is a query on the DB that costs the time, check your SQL statement.
are you using "SELECT Count(*)" instead of "SELECT Count(Id)"? the * is always slower. also, don't try "SELECT * FROM...."
Use cache.
There are many ways to cache, both in ASPX pages and in your business layer.
The OutputCache directive is the easiest way.
Also, cache a page (for example a blog post) the first time a user visits it.
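For instance, the OutputCache directive on a page that rarely changes (a hypothetical blog-post page keyed by its id parameter):

```aspx
<%-- Cache the rendered page for 60 seconds, one cached copy per post id. --%>
<%@ OutputCache Duration="60" VaryByParam="id" %>
```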
Are you doing in-memory paging?
Be careful when paging a GridView or other list control. If you just set DataSource = xxx and call DataBind(), even with PagedDataSource, this is likely in-memory paging: all rows are fetched and most are thrown away, which costs a lot of performance. Do the paging in the database instead, e.g. with stored procedures, as sketched below.
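A sketch of database-side paging with ROW_NUMBER (SQL Server 2005+); the Jobs table and its columns are hypothetical, and the same query can live in a stored procedure as suggested above. Only one page of rows ever leaves the database:

```csharp
using System.Data;
using System.Data.SqlClient;

public static class JobRepository
{
    private const string PagedSql = @"
        SELECT JobNumber, CustomerID, Task
        FROM (SELECT JobNumber, CustomerID, Task,
                     ROW_NUMBER() OVER (ORDER BY JobNumber) AS RowNum
              FROM Jobs) AS Numbered
        WHERE RowNum BETWEEN @First AND @Last;";

    // Fetches exactly one page, instead of binding the whole table and
    // letting PagedDataSource throw most of it away in memory.
    public static DataTable GetJobsPage(string connStr, int pageIndex, int pageSize)
    {
        var table = new DataTable();
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(PagedSql, conn))
        {
            cmd.Parameters.AddWithValue("@First", pageIndex * pageSize + 1);
            cmd.Parameters.AddWithValue("@Last", (pageIndex + 1) * pageSize);
            conn.Open();
            table.Load(cmd.ExecuteReader());
        }
        return table;
    }
}
```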
Check your server environment
Where did you deploy the website? Many ISPs limit bandwidth, the IIS connection count, and the CPU time allotted to your account.
If you have RDP access to your server, you can watch CPU and memory usage to see if they are high when many users come to your site. If the site is slow and neither CPU nor memory usage is high, it may be a network bandwidth problem.
Here are some simple steps to narrow down the issue -
1) Get HttpWatch (there's a free Basic edition) and check what's really taking time from an end-user perspective. Look at the number of requests, the number of resources downloaded, and the payload sizes. If there is nothing to worry about there, move on to the next step.
2) If it's not the client, then it's usually the processing time on the server. Jump onto the DB first, since it's the easiest to eliminate quickly. Look at how many DB calls are made (run the profiler in staging or dev) and see if there are any long-running queries or missing indexes or statistics, and note the IO. If all is well, move on.
3) Check your app code. You could use the Visual Studio built-in profiler or a professional tool such as ANTS. If the code is fine, then it's your network or the external calls that you make; check your network bandwidth. If you still cannot narrow it down, check your environment/hardware.
The best way to get to it is to apply load - you could use a simple tool such as ab.exe (which ships with the Apache web server) to fire concurrent hits at your server while running the app and DB profilers in the background to get to the issue.
Hope this helps!

When to use load balancing?

I am just getting into the more intricate parts of web development, so this may not be the best place to ask. When is it best to add load balancing to a web project? I understand that good or bad design affects how many users a site can handle before performance REALLY suffers. However, I am planning a new project that could potentially have a lot of users, and I wondered if I should be thinking about load balancing from the start. Opinions welcome; thanks in advance!
I should note also that the project will most likely be ASP.NET (WebForms or MVC, not yet decided) with a backend of MongoDB or PostgreSQL (again, still deciding).
Load balancing can also be a form of high availability. What if your web server goes down? It can take a long time to replace it.
Generally, by the time you need to think about throughput you are already rich, because you have an enormous number of users.
Stack Overflow serves about 10 million unique users a month with a few servers (6 or so). Think about how many requests per day you would get if you constantly generated 10 HTTP responses per second for 8 busy hours: 10 * 3600 * 8 = 288,000 page impressions per day. You won't have that many users any time soon.
And if you do, you optimize your app to 20 requests per second per CPU core, which gets you 80 requests per second on a commodity quad-core server. That is a lot.
Adding a load balancer later is usually easy. Load balancers can tag each user with a cookie so they get pinned to one particular target; your app will not notice the difference. Usually.
Is this for an e-commerce site? If so, then the real question to ask is "for every hour that the site is down, how much money are you losing?" If that number is substantial, then I would make load balancing a priority.
One of the more important architecture decisions that I have seen affect this is the use of session variables. You need to be able to provide a seamless experience if your user ends up on different servers during their visit. In-process session variables won't transfer from server to server, so I would avoid relying on them.
I support a solution like this at work. We run four (used to be eight) .NET e-commerce websites on three Windows 2k8 servers (backed by two primary/secondary SQL Server 2008 databases), taking somewhere around 1300 (combined) orders per day. Each site is load-balanced, and kept "in the farm" by a keep-alive. The nice thing about this, is that we can take one server down for maintenance without the users really noticing anything. When we bring it back, we re-enable our replication service and our changes get pushed out to the other two servers fairly quickly.
So yes, I would recommend giving a solution like that some thought.
The parameters here that may affect one another and slow down performance are:
Bandwidth
Processing
Synchronization
These have to do with how many users you have, together with the media you want to serve.
So if you have to serve a lot of video/files, you need many servers to deliver them. Let's say that is not your case; the next things to check are the users and the processing.
From my experience, what slows down processing is the locking of the session. So one big step to speed up processing is to implement fully custom session handling, so that your pages no longer lock one another and you can handle many users without issue.
Now, for the next step, let's say that you have a database that keeps all the data. To gain from a load balancer and many machines, the trick is to keep a local cache on each machine of what you are going to show.
So the first idea is to avoid excessive locking that makes users wait for one another, and the second idea is to have a local cache on each machine that is populated dynamically from the main database.
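A minimal sketch of such a per-server cache using ASP.NET's built-in HttpRuntime.Cache; the cache key and loader delegate are hypothetical. Each web server keeps its own copy, so most requests never touch the main database:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class LocalCache
{
    // Returns the cached value if present; otherwise loads it once and
    // caches it on this server for five minutes.
    public static object GetOrLoad(string key, Func<object> loadFromDatabase)
    {
        var cached = HttpRuntime.Cache[key];
        if (cached == null)
        {
            cached = loadFromDatabase();
            HttpRuntime.Cache.Insert(key, cached, null,
                DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
        }
        return cached;
    }
}
```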
References:
Web app blocked while processing another web app on sharing same session
Replacing ASP.Net's session entirely
call aspx page to return an image randomly slow
Always online
One more consideration is that you can build a solution that handles the "one server for all, and all for one" :) style, where you can use the extra servers for redundancy. If one server goes down for any reason (e.g. for an update and restart), the rest can still work and serve.
As you said, it depends if/when load balancing should be introduced. It depends on the performance you need and how many users you want to serve. LB also improves the reliability of your app - it will not stop when one machine crashes. If you can see your project growing really big and serving lots of users, I would suggest designing your application so that it can be upgraded to LB later, i.e. do not do anything non-standard. Try to steer away from home-made solutions and always follow good practice. If you really do need LB later on, it should not require changing your app.
UPDATE
You may need to think ahead, but not at the cost of complicating your application too much. Do not go paranoid and prepare everything to work lightning fast 'just in case'. For example, do not worry about sessions - session management can easily be moved to SQL Server at any time, and this is the way to go with LB (a sketch of the config change follows). Caching will also help if you hit bottlenecks in the future, but you do not need to implement it straight away - good design (stable interfaces), separation, and decoupling will allow the cache to be added later. So again: stick to good practices, do not close doors, but also do not open all of them straight away.
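For reference, the move to SQL Server session state is just a web.config switch, assuming the session database has been prepared with the aspnet_regsql.exe tool (the server name here is a placeholder):

```xml
<!-- web.config: session state shared by all servers behind the LB.
     Prepare the database first, e.g.:
     aspnet_regsql.exe -S MYSQLSERVER -E -ssadd -sstype p -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=MYSQLSERVER;Integrated Security=True"
                timeout="20" />
</system.web>
```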
You may find this article interesting.

ASP.NET application getting slow over time

Users of a legacy web application written in ASP.NET 2.0 are experiencing slowdowns after using the application for some time (2 hours or so).
For example, clicking a button that performs a calculation and updates an AJAX UpdatePanel takes several seconds, while it is almost instant when they have just logged in.
If they log off and back on to the web application, it returns to normal speed again, every time. The slowdown is per user, not global: it can be fast for one user and slow for the user sitting next to them.
I've looked at the IIS server and there is plenty of free memory in it and the CPU is almost always below 10%.
There are only a dozen users using this web application at time so scalability shouldn't be an issue.
The only thing I can think of that would be affected by logging off is session state.
I've already looked at ViewState too, and it is "only" 18 KB at its slowest, which shouldn't be the bottleneck, especially since the users are on the same gigabit LAN and not remote. It is no smaller when the application runs fast.
Looking at session state in the code, it isn't used much and no large objects are stored.
According to my tests, it didn't grow bigger than 10 items and 500 bytes after a few minutes.
Even if there were a leak, the server should be able to cope with it considering the amount of memory it has (4 GB, with only 300 MB in use).
I've cleared temporary files and it didn't seem to make any significant difference.
Do you think these slowdowns could come from something other than session state, keeping in mind that logging off and back on restores the speed?
If not, is there any tool you would suggest for profiling session state, or a setting in IIS I should try changing?
What does the browser's memory use look like on the client machines when they experience the slowdown? It could be memory leaks in your JavaScript: if you are using a lot of AJAX, the browser's memory use may climb continuously.
Have you checked your database? (I assume you're consuming some data with an AJAX app.)
If you don't close the connection in a finally block, you can wind up with zombie database connections that slow the app down. I don't know if this is the root cause, but it's probably worth looking at.
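What that looks like in practice, with a hypothetical Orders table: a using block is equivalent to closing the connection in a finally block, so the connection goes back to the pool even when the query throws:

```csharp
using System.Data.SqlClient;

public static class OrderQueries
{
    public static int CountOrders(string connStr)
    {
        // 'using' disposes (and therefore closes) the connection on every
        // code path - no zombie connections left holding pool slots.
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT COUNT(Id) FROM Orders", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```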
If this is an AJAX-heavy page and the user stays on the same page (i.e. lots of postbacks to itself), then it's possible that ViewState is just growing too large, and the slowdown you're experiencing is due to page size.
I would try to recreate the experience locally and monitor the page size/traffic with something like Firebug or Fiddler. If it is a ViewState/page-size issue, you could look at moving ViewState to the server side.
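Moving ViewState server-side in Web Forms can be done per page by overriding PageStatePersister; a minimal sketch (the page class name is hypothetical):

```csharp
using System.Web.UI;

public partial class HeavyAjaxPage : Page   // hypothetical page
{
    // Store ViewState in session instead of serializing it into the
    // page, so only a small token travels over the wire per postback.
    protected override PageStatePersister PageStatePersister
    {
        get { return new SessionPageStatePersister(this); }
    }
}
```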
See this for more info on it:
http://professionalaspnet.com/archive/2006/12/09/Move-the-ViewState-to-Session-and-eliminate-page-bloat.aspx

ASP.NET: How to reduce chance of getting error 503

I have an asp.net app hosted at a (for now, unnamed) hosting company. This app uses a small MS SQL database.
During the past couple of months, my complete website (whether the app pages or the plain static HTML pages) has twice been inaccessible due to error 503. They just shut down the website without notifying me. However, I monitor the website every few hours and called them when I found the problem; they then restarted the application pool and everything worked again.
They told me that my website/app reached their "Worker Process Limit" of "100MB" and that's the reason for their shutting it down.
This seems to be the only hosting company that advertises its worker process limit with its hosting plans, so I don't have a basis for comparison with other companies' plans.
So, my questions are:
1. Could someone explain what the worker process limit is in a Windows server environment? And is 100 MB considered small or adequate?
2. How can I handle or avoid reaching this limit within my app? Is this possible, or is the number of visitors just too high? I average only a few hundred visitors a day.
Thanks.
From my understanding, the worker process limit is based on how much memory your application utilizes. Naturally, this can be a function of how many users visit your web application (and, more importantly, how many visit concurrently), but it extends a bit beyond that.
I believe what this is saying is that your application is memory hungry. Is that a bad thing? Not normally, but when you only have 100 MB at your disposal, it is.
I would say your first step would be to run the web application locally and look at its actual resource utilization. I know this is a very open-ended answer, but you'll have to do the resource tuning to slim down your memory consumption. There's a chance this is just a bug (say, objects that are not getting disposed of properly).