I have a few ASP.NET applications running on my IIS 6 server. After I published a new one, it started freezing after a few hours; when I restart the app pool it works again.
After restarting the app pool I can see the memory footprint of w3wp.exe climbing steadily by roughly 400-500 KB every second, and it doesn't seem to stop.
How can I find out what is causing the memory leak?
Tess Ferrandez's blog is the place for learning how to track these down; start with her .NET Debugging Demos series.
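A rough sketch of the workflow those labs teach, in case it helps (this assumes .NET 2.0 on IIS 6, where the runtime DLL is mscorwks, and that the Debugging Tools for Windows are installed; the dump folder is a placeholder). First capture a couple of full memory dumps of the leaking worker process a few minutes apart:

    adplus -hang -pn w3wp.exe -o c:\dumps

Then open a dump in WinDbg, load SOS, and see what is filling the managed heap:

    .loadby sos mscorwks
    !dumpheap -stat
    !dumpheap -mt <method-table of the suspect type>
    !gcroot <address of one of those objects>

Comparing the !dumpheap -stat output between the two dumps usually shows which types are growing, and !gcroot tells you what is keeping them alive.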
I've been meaning to join here for ages; a recent problem has finally motivated me. I hope to contribute and not just sponge, but for now hopefully someone can assist with this issue.
We have two Windows Server 2003 web servers (which we're looking to upgrade) running IIS 6, hosting multiple ASP.NET applications on one web site and load-balanced via Microsoft NLB.
We have a persistent issue with one application: the white screen of death. The application just hangs, with no errors.
Things I have tried / observed:
Taken servers out of NLB
Checked the HTTPERR logs for errors at the time of the issue; some bad requests and connection-abandoned-by-app-pool entries were present
IIS logs show requests coming in
Checked the database for locking PIDs and performance problems; everything OK
Checked network connections and links to other APIs; all good
Checked other websites on the server; they are working OK
Checked the application event logs; no errors found at the exact time of the issue
Split the applications into separate app pools and identified the problem application
An IIS reset, shutting down all IIS processes and services, gets it working again
Next, when the application hung, recycling that specific app pool got it working again for a short period (see the command sketch just after this list)
'SOME' parts of the same application will work, even though the main function hangs and gives the white screen of death
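For reference, when it does hang you can map worker processes to app pools and recycle just the problem pool from the command line on Server 2003 / IIS 6; I believe the bundled iisapp.vbs script does both (the pool name below is a placeholder):

    cscript %systemroot%\system32\iisapp.vbs
    cscript %systemroot%\system32\iisapp.vbs /a "ProblemAppPool" /r

The first call lists each running w3wp.exe PID with the app pool it serves; the second recycles only the worker process(es) for the named pool, so the other sites are left alone.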
Other things to note:
Server has 4 GB RAM (terrible, I know), a 4 GB page file, and about 1.7 GB in use in total
CPU usage always <12%
Resources not 'really' a major factor
Only one w3wp.exe for this app pool (it has been running like this for years)
After setting the app pool to recycle hourly and tweaking some app pool settings, the issue seemed to go away for at least six months. Now it's back for whatever reason; no updates, no changes to the application or infrastructure.
I've upped the number of worker processes for the app pool to two, but the issue still appears after a period of time.
At random intervals I observed THREE worker processes rather than the TWO that are configured; this could be part of the recycle process (one starting up while another shuts down) and I simply caught it at that moment.
Killing the W3WP with the highest RAM usage reinstates the application instantly once a new process starts up for that app pool.
When the site is failing, RAM usage is typically around 210 MB on one of the application's W3WP processes.
I have now replaced the hourly recycle with a rule that recycles at a more purposeful 200 MB. Why this would be a limiting factor, I do not know.
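For what it's worth, the 200 MB rule maps to the IIS 6 metabase roughly like this, in case anyone wants to script it rather than click through the UI (the pool name is a placeholder and the values are in KB; adsutil.vbs lives in the AdminScripts folder):

    cscript %systemdrive%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/MyAppPool/PeriodicRestartPrivateMemory 204800
    cscript %systemdrive%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/MyAppPool/PeriodicRestartTime 0

The first line recycles the pool once its private bytes pass about 200 MB; the second disables the old hourly time-based recycle.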
I'm struggling to get to the bottom of it, but it doesn't appear to be network, hardware, NLB, Database, or any other core factors. It is specific to a worker process for this app pool hanging / crashing and not processing requests for a certain part of the site.
I am planning an upgrade to 2008 in the next few months (our version of ESX does not support 2012, and that upgrade is not in scope). However, I'd really like to work out what the issue is rather than using Web Deploy to deploy a dodgy installation of a legacy application.
Any advice or help appreciated.
I deployed my application on an IIS server. It is used by users from all regions and runs 24x7. The app pool's memory keeps accumulating and is now taking 20 GB. Can anyone please suggest why this is happening and how it can be solved?
Memory Consumption Preview
It seems like you might have a memory leak (a bug) in your application. In any case, you can configure the app pool to recycle once it reaches a defined memory limit.
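For example, assuming IIS 7 or later (the pool name and limit below are placeholders, and the value is in KB), the private-memory recycle threshold can be set from the command line:

    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.privateMemory:2097152

That caps the pool at roughly 2 GB before it recycles. Bear in mind recycling only masks the leak; a memory profiler or a dump analysis will show what is actually accumulating.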
We have an ASP.NET website, written in C#/VB, built on Reporting Services. Some of the reports are local (.rdlc) and some sit on a reporting server (.rdl).
The problem we are running into is that about every two weeks the server starts reporting out-of-memory (OOM) errors and the IIS worker process runs away with memory. The quick fix seems to be restarting IIS, but that requires manual intervention and the problem is usually first reported by users.
It seems like a memory leak somewhere, but most of the reports are really simple data pulls and all connections are closed; at this point we don't really know how to debug it further.
Any ideas?
You can run the IIS Debug Diagnostic Tool (DebugDiag) to see what the offending handle is: http://support.microsoft.com/kb/919789
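As an alternative sketch (assuming Sysinternals ProcDump is available; the threshold, PID and path below are placeholders), you can have a full dump captured automatically once w3wp.exe crosses a memory threshold and then analyse it offline in WinDbg or DebugDiag:

    procdump -ma -m 1500 <PID of w3wp.exe> c:\dumps\w3wp_highmem.dmp

The -m switch triggers the dump when the process's memory commit exceeds the given number of megabytes, and -ma writes a full dump. With report-heavy apps it is worth checking whether large datasets or report objects held in cache or session are what is accumulating.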
We have a decent-sized web app that was on .NET 2.0. We upgraded it to .NET 4.0, and now page loads occasionally go slow, I would say one time out of ten or so; a refresh and it'll load near-instantly again, but I'm not sure what could cause these hang-ups after upgrading to 4.0.
Are there any common problems after upgrading that could cause this?
Thanks.
There could be some other reason, but the first that comes to mind is an application pool shutdown (idle timeout), which causes a small delay on the next page load. Have you verified whether the random slowness coincides with your application pool shutting down?
To verify whether it is related to the application pool, set the app pool's idle timeout to 0 so it never shuts down, and then check whether page loads still occasionally take longer.
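Assuming the site runs on IIS 7 or later (on IIS 6 the same setting is on the app pool's Performance tab), the idle timeout can be set from the command line; the pool name here is a placeholder:

    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00

If the slow loads stop after this change, the cause was the idle shutdown rather than the .NET 4.0 upgrade itself.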
I developed a project in ASP.NET MVC 3; my hosting uses IIS 7 (Windows Web Server 2008 R2), and the first request after the website sits idle (for about 1-2 hours) is very slow.
I use a VPS with 512 MB of RAM. Could this be related to having too little RAM?
Can anyone help me with possible causes of such behaviour?
After a certain amount of inactivity IIS unloads the AppDomain, and the first request afterwards has to load the application again, which is slower. You can configure this idle period in the application pool settings in IIS, but there can also be other reasons the application unloads, for example when a configured CPU or memory usage threshold is reached. Those thresholds are also configurable in IIS.
This is not specific to ASP.NET MVC; it is true of all ASP.NET applications in general.
We also had this problem with Ruby and Passenger, which takes the app out of memory after a while, but I found a nice application that fixed the issue for us without changing anything in the server configuration. The app is called wekkars, and you can find it here: http://www.wekkars.com
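If you would rather not depend on an external service, a minimal keep-alive sketch does the same job; the URL is a placeholder, and the assumption is that something (for example Task Scheduler) runs it every 10-15 minutes, i.e. more often than the idle timeout:

    using System;
    using System.Net;

    class KeepAlive
    {
        static void Main()
        {
            // A simple GET keeps the worker process and AppDomain loaded,
            // so the next real visitor does not pay the startup cost.
            using (var client = new WebClient())
            {
                client.DownloadString("http://www.example.com/");
            }
            Console.WriteLine("Pinged at " + DateTime.Now);
        }
    }

The other half of the fix is making sure the idle timeout and the scheduled recycle on the app pool are not shorter than the ping interval.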