I have a legacy ASP.NET website consisting of over 230 unique .ASPX files. This website has hundreds of thousands of hits per day across many of the different files. It leaks memory, causing periodic process recycles throughout the day (Windows Event ID 5117: A worker process with process id of '%1' serving application pool '%2' has requested a recycle because it reached its private bytes memory limit.)
I've already tested the 30 most frequently accessed pages and fixed memory leaks in several of them, resulting in a significant improvement, and load testing confirms those pages no longer leak. But that still leaves over 200 unchecked pages, and with that many files remaining I wonder whether there's a more organized or clever approach than testing them one by one.
For instance, is there instrumentation that could be added to the Application_BeginRequest or Application_EndRequest event handlers in Global.asax? If so, what specifically should be monitored? Example code and/or discussion would be most helpful.
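For reference, a minimal sketch of what such instrumentation could look like (assumptions: GC.GetTotalMemory tracks managed memory only, and per-request deltas are noisy under concurrent load, so this ranks pages for closer inspection rather than measuring them precisely):

```csharp
// Global.asax.cs -- illustrative sketch, not production code.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Snapshot managed memory at the start of the request.
        HttpContext.Current.Items["MemBefore"] = GC.GetTotalMemory(false);
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        HttpContext ctx = HttpContext.Current;
        object before = ctx.Items["MemBefore"];
        if (before != null)
        {
            long delta = GC.GetTotalMemory(false) - (long)before;
            // Log path + delta; aggregating this offline makes the
            // consistently "expensive" pages float to the top.
            System.Diagnostics.Trace.WriteLine(ctx.Request.Path + "\t" + delta);
        }
    }
}
```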
The best tool you can use to get organized and plug your biggest leaks first is WinDbg.
It comes with the Windows SDK. Here's a reference:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff551063(v=vs.85).aspx
It can be a little tough at first to get used to the commands, but here's what you'll want to do.
1. Install WinDbg on a machine that is running the site
2. Load test all the pages and get the memory usage way up
3. (optional) Force a garbage collection with GC.Collect()
4. Attach WinDbg to w3wp.exe
5. Load your symbols (your .pdb files, plus the CLR's SOS debugging extension)
6. Dump the heap (!dumpheap -stat)
This will show you a list of types, sorted by the number of objects of each type in memory. When you have a leak, you start to build up tons of the same object.
You then need to dig deeper to get the size of these objects:
1. The first number in the row is the method table (MT) address; copy this number
2. Dump the objects for that method table (!dumpheap -mt #######)
3. Choose a single object's address from the first column and copy it
4. Get the size of that object (!objsize #######)
(number of objects) × (size of a single object) = size of the leak
Find the classes that are taking up the most space and plug them first.
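Put end to end, the session looks roughly like this (the commands are standard WinDbg/SOS; the angle-bracket addresses are placeholders you copy from the previous command's output):

```
$$ load SOS for .NET 4 (on .NET 2.0/3.5 use: .loadby sos mscorwks)
.loadby sos clr
$$ per-type object counts and total sizes, sorted
!dumpheap -stat
$$ list every object belonging to one method table
!dumpheap -mt <MT address>
$$ size of a single instance, including everything it keeps alive
!objsize <object address>
```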
The CLR Profiler may help too:
http://www.microsoft.com/en-us/download/details.aspx?id=14727
We have a problem on our website: seemingly at random (anywhere from daily to once every 7-10 days), the website becomes unresponsive.
We have two web servers on Azure, and we use Redis.
I've managed to run DotNetMemory and catch it when it crashes, and what I observe is that, under "Event handlers leak", two items increase in count into the thousands before the website stops working. Those two items are CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy. Once the site crashes, we get lots of Redis exceptions saying it can't connect to the Redis server. According to the Azure Portal, our Redis server load never goes above 10% at peak times, and we're following all best practices.
I've spent a long time going through our website ensuring that there are no obvious memory leaks, and have patched a few cases that went under the radar. Anecdotally, these fixes seem to have improved the website's stability a little. Things we've checked:
All IDisposable objects are now wrapped in using blocks (we were strict about this before, but we did find a few objects not disposed properly)
Event handlers are unsubscribed - there are very few in our code base (see the sketch after this list)
We use WebUserControls pretty heavily. Each one had the current master page passed in as a parameter. We've removed this dependency, as we thought it might be preventing the GC from collecting the page
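For reference, the pattern that second check guards against, as a minimal sketch (PriceFeed and PriceWatcher are hypothetical names, not from our code base):

```csharp
using System;

// A subscriber that never unsubscribes stays reachable from the event
// source, so the source keeps the whole subscriber graph alive.
public class PriceFeed
{
    public event EventHandler PriceChanged;
}

public class PriceWatcher : IDisposable
{
    private readonly PriceFeed _feed;

    public PriceWatcher(PriceFeed feed)
    {
        _feed = feed;
        _feed.PriceChanged += OnPriceChanged;
    }

    private void OnPriceChanged(object sender, EventArgs e) { /* ... */ }

    public void Dispose()
    {
        // Without this line, _feed pins every PriceWatcher ever created.
        _feed.PriceChanged -= OnPriceChanged;
    }
}
```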
Our latest finding is that when the web server is running fine and we then attach DotNetMemory to the w3wp.exe process, it causes the CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy event leaks to increase rapidly until the site crashes! So the crash is reproducible just by running DotNetMemory.
I'm at a loss now. I believe I've exhausted all possibilities of memory leaks in our code base, and our "solution" is to have the app pools recycle every few hours to be on the safe side.
We've even tried upgrading Redis to the Premium tier, and upgraded all the drives on the web servers to SSDs to see if that would help, which it doesn't appear to.
Can anyone shed any light on what might be causing these issues?
All IDisposable objects are now wrapped in using blocks (we did this strictly before but we did find a few not disposed properly)
We can't say much about the crash without more information about it, but I have some speculations.
I see 10,000 (!) undisposed objects held by the finalization queue. Let's start with them: find all of them and add Dispose calls in your app.
I would also recommend checking how many system handles your application uses. There is an OS limit on the number of handles; once it is exceeded, no more file handles, network sockets, etc. can be created. I recommend this especially given the number of undisposed objects.
Also, if you see timeouts accessing Redis, use a performance profiler to find out why. I recommend JetBrains dotTrace in Timeline mode to get a profile of your app; it shows thread sleeping, thread contention, and much more information that will help you find the root of the problem. You can use the command-line tool to collect the profile data, so you don't have to install the GUI application on the server.
it causes the CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy event leaks to increase rapidly
dotMemory doesn't change your application code and doesn't allocate any managed objects in the profiled process. The Microsoft Profiling API injects a DLL (written in C++) into the profiled process; this part of dotMemory, named the Profiling Core, plays the role of the "server" (the standalone dotMemory application, written in C#, is the client). The Profiling Core does some work on the gathered data before sending it to the client side; this requires some memory, which is of course allocated in the address space of the profiled process, but it doesn't affect managed memory.
Memory profiling may affect the performance of your application. For example, the profiling API disables concurrent GC while the application is being profiled, and collecting memory-allocation data can significantly slow your application down.
Why do you think that CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy are allocated only under dotMemory profiling? Could you please describe how you determined this?
Event handlers are unsubscribed - there are very few in our code base
When dotMemory reports an event handler as a leak, it means there is only one reference to it (from the event source) and there is no longer any way to unsubscribe from that event. Check all these leaks, find the ones that are yours, and look at the code to see how they happen. In any case, only 110.3 KB is retained by these objects, so why did you decide your site crashed because of them?
I'm at a loss now, I believe I've exhausted all possibilities of memory leaks in our code base
Take several snapshots over a period when memory consumption is growing, open a full comparison of some of these snapshots, look at all the objects that survived but should not have, and find out why they survived. This is the only way to prove that your app doesn't have a memory leak; reading the code doesn't prove it, sorry.
I hope that if you perform all the activities I recommend (performance profiling, investigating full snapshots and snapshot comparisons rather than only the inspections view, and checking why there are huge numbers of undisposed objects), you will find and fix the root problem.
I have an UpdatePanel combined with a GridView with sorting and paging.
I go into Task Manager to monitor the memory usage of the worker process (w3wp).
What I do is just click on the sort buttons rapidly.
With each click, the memory of the process increases by about 2 MB.
So I go from 30 MB of memory usage to about 90. Then it stops and remains there; no memory is freed up. I am not using caching or session/application state.
What can be causing this? Is there a setting in IIS to reduce the memory usage?
--
I also used a .NET profiler to examine my app's memory usage: 4 MB. So what is the other 86 used for? Even though the profiler reports 4 MB, Task Manager says 90 MB, which leads me to believe that the rest is unmanaged memory used by IIS in some way.
The .NET GC is non-deterministic. This means it runs whenever it decides it should. You can try calling GC.Collect() explicitly, for example in the Page_Init event, to see if the memory still increases, but you'd better remove it from the real app; otherwise you are just preventing the GC from doing its work efficiently.
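A sketch of that diagnostic (for testing only, as noted above; the double Collect with WaitForPendingFinalizers in between is a common pattern to also reclaim objects that were waiting on finalizers; the page class name is illustrative):

```csharp
using System;

public partial class MyPage : System.Web.UI.Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // Diagnostic only -- remove before shipping.
        GC.Collect();
        GC.WaitForPendingFinalizers(); // let finalizers run...
        GC.Collect();                  // ...then collect what they released
    }
}
```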
The issue is actually with the GridView, not the UpdatePanel. The GridView's records are stored in ViewState, so they're being passed back and forth on every single postback. Also, as you click the sort buttons rapidly, you're generating multiple requests to sort the data. Depending on how your sort is implemented, you could be duplicating the recordset with each click.
There is no setting in IIS to "reduce memory usage" as it simply hosts your ASP.NET application. Your application needs to address its own memory concerns.
Sorting a large amount of data can be a resource-intensive process. I would say your best bet is to disable the sort button after it's been clicked and re-enable it once your data has been sorted.
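On the recordset-duplication point, one hedged sketch of an alternative: sort a single shared DataView rather than re-materializing the data on each click (names like GetCompaniesTable and grid are illustrative, not from the question):

```csharp
using System.Data;
using System.Web;
using System.Web.UI.WebControls;

public partial class ReportPage : System.Web.UI.Page
{
    protected GridView grid; // declared in the .aspx markup in a real page

    protected void Grid_Sorting(object sender, GridViewSortEventArgs e)
    {
        // One table lives in the cache; each sort is just a view over the
        // same rows, so rapid clicks don't duplicate the recordset.
        DataTable table = (DataTable)HttpRuntime.Cache["Report"];
        if (table == null)
        {
            table = GetCompaniesTable(); // placeholder for the real query
            HttpRuntime.Cache["Report"] = table;
        }
        grid.DataSource = new DataView(table) { Sort = e.SortExpression };
        grid.DataBind();
    }

    private DataTable GetCompaniesTable() { return new DataTable(); }
}
```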
I have an ASP.NET app which uses a legacy COM interop library. It works fine until memory reaches somewhere around 500 MB, and then it is no longer able to create new COM objects (I get various exceptions, e.g. "Creating an instance of the COM component with CLSID {FFFF-FFFF-FFFF-FFF-FFFFFF} from the IClassFactory failed due to the following error: 80070008."). It almost looks like it is hitting some kind of memory limit, but what is it? Can it be changed?
Solved! It turns out the object was creating a window handle, and we were hitting the 10K window-handle limit (except that, for some reason, it was happening at 2K instances when inside IIS).
What OS, and is it 32-bit or 64-bit? What are you using to determine memory usage?
When you say you're explicitly releasing the objects, do you mean you're using Marshal.ReleaseComObject()?
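For anyone reading along, a minimal sketch of the explicit-release pattern being asked about (LegacyComClass is a placeholder for the actual interop type, so this only compiles against a real interop assembly):

```csharp
using System.Runtime.InteropServices;

static class ComCaller
{
    static void CallLegacyComponent()
    {
        LegacyComClass com = null; // placeholder interop type
        try
        {
            com = new LegacyComClass();
            com.DoWork();
        }
        finally
        {
            // Release the runtime-callable wrapper deterministically instead
            // of waiting for the finalizer thread to get around to it.
            if (com != null)
                Marshal.ReleaseComObject(com);
        }
    }
}
```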
I'm assuming you have AspCompat="true" in your <%@ Page %> directive... I wouldn't expect it to run at all if you didn't.
Can you give us some details on your COM object: what does it do, and can you post some code where you're calling it, including the COM object signatures? How much memory would you expect a single object to take?
My first suspect, based only on the information I've read so far, is that 500 MB is not truly the total memory in use, and/or that you're having a memory-fragmentation issue. I've seen this occur with IIS processes when less than half of the memory is in use, and the errors tend to be random, depending on which object is being created at the time. By the way, 80070008 is 'not enough storage space'.
Process limits are 2 GB on a 32-bit machine, of course, but even if a process isn't using the full 2 GB, if there's no contiguous block of memory of the size needed when creating an object, you'll get an out-of-memory error on allocation. Lots of concurrent users implies lots of COM objects (and other objects) being allocated and released in a short period of time, which points to fragmentation as a suspect.
Coming up with an attack plan requires more info about the COM object and how it's being used.
Use a command pattern to queue and execute the COM interop calls on an asynchronous thread. This can free up the threads being used by IIS, and allows you to control the number of calls to, and instances of, the COM component.
You might also think about pooling the objects rather than creating a new one every time; a sketch follows.
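A bare-bones pooling sketch (assumptions: .NET 4's ConcurrentBag is available, and LegacyComClass again stands in for the real interop type; a production pool would also cap its size and release pooled instances on shutdown):

```csharp
using System.Collections.Concurrent;

public class ComObjectPool
{
    private readonly ConcurrentBag<LegacyComClass> _pool =
        new ConcurrentBag<LegacyComClass>();

    public LegacyComClass Rent()
    {
        LegacyComClass item;
        // Reuse an idle instance if one exists; otherwise create one.
        return _pool.TryTake(out item) ? item : new LegacyComClass();
    }

    public void Return(LegacyComClass item)
    {
        _pool.Add(item); // hand the instance back for the next caller
    }
}
```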
Let's say that you are using a shared hosting plan and your application stores lots of objects in the application state.
If they start taking up too much memory, does this mean the server will just remove them?
If not, what will happen? What happens when the server has no memory left? Can you still store objects in application or session state?
I am asking because I am planning to develop a big site that will rely on application state, and it is crucial that the objects stored there don't get destroyed.
What I am afraid of is that at a certain point I might have too many objects in application state and they might get removed to free up memory.
There are three different thresholds:
The total size of your app exceeds the maximum process size on your machine (really only applicable with an x86 OS). In that case, you'll start getting out of memory errors at first, generally followed very quickly by a process crash.
Your process, along with everything else running on the machine, no longer fits in physical memory. In that case, the machine will start to page, generally resulting in extremely poor performance.
Your process exceeds the memory limit imposed by IIS on itself, via IIS Manager. In that case, the process will be killed and restarted, as with a regular AppPool recycle.
With the Application object, entries are not automatically removed if you approach any of the above thresholds. With the Cache object, they can be removed, depending on the priority you assign.
As others have said, over-using the Application object isn't generally a good idea, because it's not scalable. If you were ever to add a second load-balanced server, keeping the info in sync from one server to another becomes very challenging, among other things.
What happens when any application takes up too much memory on a computer?
It causes the server to run everything really slowly. Even the other sites that share the computer.
It's not a good idea to store that much in application state. Use your config file and/or the database.
It sounds like you have a memory leak: the process keeps leaking memory until it crashes with an out-of-memory condition and is then automatically restarted by the server.
1.5 GB is about the maximum amount of memory a 32-bit process can allocate before running out of address space.
Some things to look for:
- Do you do your own caching? When are items removed from the cache?
- Is there somewhere that data is added to a collection every once in a while but never removed?
- Do you call Dispose on every object that implements IDisposable?
- Do you access any unmanaged code at all (COM objects, or using DllImport), or allocate unmanaged memory (using the Marshal class, for example)? Anything allocated there is never freed by the garbage collector; you have to free it yourself (see the sketch after this list).
- Do you use 3rd-party libraries or any code from 3rd parties? They can have any of the problems in this list too.
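On the unmanaged-memory point, a tiny sketch of the pairing that has to hold (AllocHGlobal and FreeHGlobal are real Marshal methods; the buffer use is illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

static class NativeBufferExample
{
    static void UseNativeBuffer()
    {
        // The GC never sees this allocation, so it can never free it.
        IntPtr buffer = Marshal.AllocHGlobal(1024);
        try
        {
            // ... hand the buffer to native code ...
        }
        finally
        {
            Marshal.FreeHGlobal(buffer); // without this, 1 KB leaks per call
        }
    }
}
```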
If you use the Cache object instead of the Application object, you can minimize problems of running out of memory. If the memory utilization of the ASP.NET worker process approaches the point at which the process will be bounced automatically (the recycle limit), the memory in Cache will be scavenged. Items that haven't been used for a while are removed first, potentially preventing the process from recycling. If the data is stored in Application, ASP.NET can do nothing to prevent the process from recycling, and all app state will be lost.
However, you do need to have a way of repopulating the Cache object. You could do that by persisting the cached data in a database, as others have proposed.
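A hedged sketch of that approach (CacheItemPriority and this Cache.Insert overload are standard ASP.NET API; LoadFromDatabase is a placeholder for your own persistence code):

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class ReportCache
{
    public static object Get(string key)
    {
        object value = HttpRuntime.Cache[key];
        if (value == null)
        {
            value = LoadFromDatabase(key); // repopulate from durable storage
            HttpRuntime.Cache.Insert(
                key, value,
                null,                          // no file/cache dependency
                DateTime.UtcNow.AddHours(1),   // absolute expiration
                Cache.NoSlidingExpiration,
                CacheItemPriority.Low,         // scavenged early under memory pressure
                null);                         // or a callback to log evictions
        }
        return value;
    }

    private static object LoadFromDatabase(string key)
    {
        return new object(); // placeholder
    }
}
```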
Anything stored in application state should be refreshable, and its current state needs to be saved to files or a database. If nothing else happens, IIS restarts worker processes at least once a day, so nothing in application state will be there forever.
If you do run out of memory, you'll probably get an out-of-memory exception. You can also monitor memory usage, but in a shared hosting environment that may not be enough information to avoid problems. And you may get the worker process recycled as an "involuntary" fix.
When you say that it's crucial that objects stored in application state don't get destroyed, it sounds like you're setting yourself up for trouble.
I think you should use session state instead of application state, and store the sessions in a SQL Server database. Then, once a user's session ends, that memory is released; a config sketch follows.
If you want a more specific answer, please provide more information about your application.
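Moving session state to SQL Server is a web.config change, roughly like this (a sketch: the connection string is a placeholder, and the session database must first be created with the aspnet_regsql.exe tool):

```xml
<configuration>
  <system.web>
    <!-- Sessions live in SQL Server rather than in worker-process memory,
         so they survive app-pool recycles. -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=.;Integrated Security=True"
                  timeout="20" />
  </system.web>
</configuration>
```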
I'm building a business app that will hold somewhere between 50,000 and 150,000 companies. Each company (db row) is represented by 4-5 properties/columns (title, location, ...). The ORM is LINQ to SQL.
I have to do some calculations, and for those I run a lot of queries for specific companies. Right now I go to the db every time I need something, which produces 50-200 queries depending on the calculation's complexity. I tried putting all the companies in the cache; 10,000 rows (companies) take around 5.5 MB of cache. In that scenario, I have only one query.
This application will be on a shared hosting server, so my resources are limited. What will happen if I try to load, say, 100,000 companies (rows, objects)? Or put them in the cache?
Is there a RAM limit that the average hosting company gives an ASP.NET application? Does it depend on having a dedicated application pool (I can put the app in a dedicated pool)?
Options are:
- load the whole table into C# objects. I did some memory profiling; 10,000 objects need 5 MB of RAM
- query the db to get referenced objects when needed.
The task: for a given company A, build the tree of connected companies.
Table and columns:
Company : IdCompany, Title, Address, Contact
CompanyConnection: IdParentCompany, IdChildCompany
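A sketch of the load-once option against the schema above (illustrative only: it assumes the two tables have already been materialized into collections, e.g. via LINQ to SQL):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Company
{
    public int IdCompany;
    public string Title;
    public string Address;
    public string Contact;
    public List<Company> Children = new List<Company>();
}

public static class CompanyTree
{
    // connections holds (IdParentCompany, IdChildCompany) pairs.
    public static Dictionary<int, Company> BuildIndex(
        IEnumerable<Company> companies,
        IEnumerable<KeyValuePair<int, int>> connections)
    {
        Dictionary<int, Company> byId = companies.ToDictionary(c => c.IdCompany);
        foreach (KeyValuePair<int, int> link in connections)
        {
            Company parent, child;
            if (byId.TryGetValue(link.Key, out parent) &&
                byId.TryGetValue(link.Value, out child))
            {
                parent.Children.Add(child);
            }
        }
        return byId; // the tree for company A is byId[idA], with no further queries
    }
}
```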
Your shared host will likely be IIS 7 on Windows Server running as a virtual machine. This machine will behave as any ordinary machine would - it is not 'aware' of being shared or virtualised.
You should expect Windows to begin paging to disk when it is out of physical RAM; out-of-memory errors get thrown only when the page file has filled the disk. Of course, you don't ever want any part of the warm cache paged out to disk.
Windows itself can begin nagging you about being low on memory, but this doesn't carry the same 'urgency': applications can continue to request RAM and it will continue to be given (albeit serviced from the page file).
If your application could crash and leave corrupt state or a partial transaction, you should code defensively and check that memory is available before embarking on an action.
Create the expected number of objects in a loop with pretend data and watch the memory consumption on the box - the Working Set of the worker process is the one to watch. You can do this in Task Manager.
Watch for Page Faults. These are events when a memory operation had to be directed to disk.
Also, very large sets of objects can cause long garbage-collection cycles (>1 second). This can be a big issue in time-sensitive applications like trading and market data.
Hope that helps.
Update: I do a similar caching thing for a mega data-mining application.
Each ORM type has a GetObject method which checks a giant cache, or goes to the db and then updates the cache: Person.GetPerson(id) does "check the people cache, go to the db, add to the people cache".
Now my queries return just the unique keys of the results. Each key is then fetched using the above method. This is slow initially, until the cache builds up, but...
The point is that each query result points to the same instance in memory! This means the RAM footprint is much smaller due to sharing.
The query results are then cached, too. Of course.
Where objects are not immutable, each object write updates its own instance in the giant cache, but also causes all query caches that concern that type of object to invalidate themselves!
Of course, in this application writes are rare, as it's mainly reference data.
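The shape of that identity-map cache, as a rough sketch (Person and LoadPersonFromDb are illustrative stand-ins for the ORM types described above):

```csharp
using System.Collections.Generic;

public class Person
{
    public int Id;
}

public static class PersonCache
{
    private static readonly Dictionary<int, Person> _cache =
        new Dictionary<int, Person>();
    private static readonly object _lock = new object();

    public static Person GetPerson(int id)
    {
        lock (_lock) // the cache is shared across requests
        {
            Person p;
            if (!_cache.TryGetValue(id, out p))
            {
                p = LoadPersonFromDb(id); // placeholder for the real data access
                _cache[id] = p;
            }
            return p; // every query result shares this single instance
        }
    }

    private static Person LoadPersonFromDb(int id)
    {
        return new Person { Id = id };
    }
}
```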