My huge 32-bit web services LLBLGen-based data access application is running alone on a dedicated 64-bit machine. Its physical memory consumption steadily grows up to approximately 2GB, at which point the process releases almost all of the allocated space (up to 1.5GB) and continues to grow from that point again. There is no observable increase in Page Input values or other page file usage parameters, so it looks like the memory is released rather than being swapped out to the page file. What kind of profile is this? There is nothing to actually prevent the process from grabbing all the memory it can; on the other hand, there are unacceptable HTTP internal errors around the memory release, probably because the clean-up blocks useful work. What would be a good strategy to make the cleanup less obtrusive, assuming the above is acceptable behaviour in the first place?
It sounds like you have a memory leak: the process keeps leaking memory until it crashes with an out-of-memory condition and is then automatically restarted by the server.
1.5GB is about the maximum amount of memory a 32-bit process can allocate before running out of address space.
Some things to look for:
Do you do your own caching? When are items removed from the cache?
Is there somewhere data is added to a collection every once in a while but never removed? (See the sketch after this list.)
Do you call Dispose on every object that implements IDisposable?
Do you access any unmanaged code at all (COM objects or DllImport) or allocate unmanaged memory (using the Marshal class, for example)? Anything allocated there is never freed by the garbage collector; you have to free it yourself.
Do you use 3rd-party libraries or any code from 3rd parties? They can have any of the problems in this list too.
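As a rough illustration of the second point, here is a minimal sketch (the class and member names are made up) of the kind of static collection that quietly grows forever in a long-running web process:

using System.Collections.Generic;

// Hypothetical example of an unbounded "cache".
public static class ReportCache
{
    // A static dictionary lives for the lifetime of the AppDomain.
    private static readonly Dictionary<string, byte[]> _reports =
        new Dictionary<string, byte[]>();

    public static void Add(string key, byte[] report)
    {
        // Entries are added on every request...
        _reports[key] = report;
        // ...but nothing ever removes them, so memory keeps climbing
        // until the worker process recycles or hits an OOM condition.
    }
}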
Is it possible you are not disposing of various disposable objects (particularly DB-related ones)? This would leave them around, potentially tying up large amounts of unmanaged resources until the GC runs and their finalizers are called.
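A minimal sketch of the deterministic disposal being suggested (the connection string and query are placeholders):

using System.Data.SqlClient;

public static class OrderQueries
{
    public static int CountOrders(string connectionString)
    {
        // using blocks guarantee Dispose is called even if an exception is thrown,
        // so the connection, command and any server-side resources are released
        // promptly instead of waiting for the finalizer.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}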
It would be worth running perfmon against your process and looking to see if there is steady growth in some critical resource such as handles, or, if your DB provider exposes performance counters, in connections or open result sets.
I agree with the first part of edg's answer, but where he says:
"By setting objects to null when they
are dead you can encourage the GC to
reuse the memory consumed by those
objects, this limiting the growing
consumption of memory."
is incorrect. You never need to set an object to null since the GC will eventually collect your object after it goes out of scope.
This was discussed in this answer on SO: Setting Objects to Null/Nothing after use in .NET
Don't use ArrayLists (they box value types, creating extra garbage for the collector); use generic lists instead.
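A minimal sketch of the difference (purely illustrative):

using System.Collections;
using System.Collections.Generic;

public static class ListComparison
{
    public static void Run()
    {
        // ArrayList stores object references, so every int added is boxed
        // onto the heap: 1000 extra allocations for the GC to track.
        var oldStyle = new ArrayList();
        for (int i = 0; i < 1000; i++)
            oldStyle.Add(i);

        // List<int> stores the values directly in its internal array: no boxing.
        var newStyle = new List<int>();
        for (int i = 0; i < 1000; i++)
            newStyle.Add(i);
    }
}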
Another common error is to have debug="true" in web.config; this consumes a lot of memory, so change the option to "false".
Another thing to do is use CLR Profiler to trace the problem.
Good Luck,
Pedro
The Garbage Collector doesn't automatically free memory when it releases objects, it holds on to that memory to help minimise the expense of future mallocs.
When a low memory condition is triggered that memory will be returned to the OS and you will see more available memory when looking through task manager. This will normally happen about the 2GB mark, or 3GB if you use the relevant switch.
<contentious>
By setting objects to null when they are dead you can encourage the GC to reuse the memory consumed by those objects, thus limiting the growing consumption of memory.
But which objects should you set to null? Big objects, large collections, frequently created objects.
</contentious>
EDIT: There is evidence to support the value of setting objects to null. See this for detail. Of course there is no need to set objects to null; the point is whether it helps memory management in any way.
EDIT: We need a recent benchmark if such a thing exists rather than continuing to opine.
Ensure that you aren't putting up a debug build of your project. There's a feature* where, in a debug build, if you instantiate any object that contains the definition for an event, the object holds onto a small piece of memory indefinitely, even if you never raise the event. Over time, these small pieces of memory eat away at your memory pool, until the web process eventually restarts and it all starts again.
*I call this a feature (and not a bug) because it's been around since the beginning of .Net 2 (not present in .Net 1.1), and there's been no patch to fix it. The memory leak must be due to some feature needed when debugging.
We were having similar situations occur and altered all our database connections to use a try/catch/finally approach.
Try was used to execute code, catch for error collection, and finally to close all variables and database connections.
internal BECollection<ReportEntity> GetSomeReport()
{
    Database db = DatabaseFactory.CreateDatabase();
    BECollection<ReportEntity> _ind = new BECollection<ReportEntity>();
    System.Data.Common.DbCommand dbc = db.GetStoredProcCommand("storedprocedure");
    SqlDataReader reader = null;
    try
    {
        reader = (SqlDataReader)db.ExecuteReader(dbc);
        while (reader.Read())
        {
            // populate an entity from the reader and add it to _ind
        }
    }
    catch (Exception ex)
    {
        Logging.LogMe(ex.Message, "Error on SomeLayer/SomeReport", 1, 1);
        return null;
    }
    finally
    {
        // Always release the reader and the connection, even on failure,
        // instead of waiting for the garbage collector to finalize them.
        if (reader != null)
            reader.Dispose();
        if (dbc.Connection != null)
            dbc.Connection.Close();
    }
    return _ind;
}
My first guess would be a memory leak. My second guess would be that it is normal behavior: the GC won't kick in until there is significant memory pressure. The only way to be sure is to use a combination of a profiler and tools like PerfMon. Some sites:
http://blogs.msdn.com/ricom/archive/2004/12/10/279612.aspx
http://support.microsoft.com/kb/318263
Tess's excellent lab series
In addition I would make sure you aren't running in Debug mode (as already mentioned).
As far as the HTTP errors - assuming you are running in server GC mode, it tries to do everything it can to not block requests. It would be interesting to find out what those HTTP errors are - that's not normal behavior from what I've seen in the past, and might point to some more of the root of your issue.
We have a problem on our website, seemingly at random (every day or so, up to once every 7-10 days) the website will become unresponsive.
We have two web servers on Azure, and we use Redis.
I've managed to run DotNetMemory and catch it when it crashes, and what I observe is that under "Event handlers leak" two items seem to increase in count into the thousands before the website stops working. Those two items are CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy. Once the site crashes, we get lots of Redis exceptions saying it can't connect to the Redis server. According to the Azure Portal, our Redis server load never goes above 10% at peak times, and we're following all best practices.
I've spent a long time going through our website ensuring that there are no obvious memory leaks, and have patched a few cases that went under the radar. Anecdotally, these seem to have improved the website's stability a little. Things we've checked:
All IDisposable objects are now wrapped in using blocks (we did this strictly before but we did find a few not disposed properly)
Event handlers are unsubscribed - there are very few in our code base
We use WebUserControls pretty heavily. Each one had the current master page passed in as a parameter. We've removed this dependency, as we thought it could perhaps prevent the GC from collecting the page
Our latest issue is that the web server runs fine, but when we run DotNetMemory and attach it to the w3wp.exe process, it causes the CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy event leaks to increase rapidly until the site crashes! So the crash is reproducible just by running DotNetMemory. Here is a screenshot of what we saw:
I'm at a loss now, I believe I've exhausted all possibilities of memory leaks in our code base, and our "solution" is to have the app pools recycle every several hours to be on the safe side.
We've even tried upgrading Redis to the Premium tier, and upgraded all drives on the web servers to SSDs, to see if it helps, which it doesn't appear to.
Can anyone shed any light on what might be causing these issues?
All IDisposable objects are now wrapped in using blocks (we did this strictly before but we did find a few not disposed properly)
We can't say a lot about the crash without more information about it, but I have some speculations.
I see 10,000 (!) undisposed objects being handled by the finalization queue. Let's start with them: find all of them and add Dispose calls in your app.
I would also recommend checking how many system handles are used by your application. There is an OS limit on the number of handles, and if it is exceeded no more file handles, network sockets, etc. can be created. I recommend this especially given the number of undisposed objects.
Also, if you have timeouts accessing Redis, get a performance profiler and find out why. I recommend JetBrains dotTrace in TIMELINE mode to get a profile of your app; it will show thread sleeping, thread contention and much more information that will help you find the root of the problem. You can use the command line tool to obtain the profile data, so you don't have to install the GUI application on the server.
it causes the CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy event leaks to increase rapidly
dotMemory doesn't change your application code and doesn't allocate any managed objects in the profiled process. The Microsoft Profiling API injects a DLL (written in C++) into the profiled process; it is the part of dotMemory called the Profiling Core, which plays the role of the "server" (the standalone dotMemory application, written in C#, is the client). The Profiling Core does some work on the gathered data before sending it to the client side; this requires some memory, which is allocated, of course, in the address space of the profiled process, but it doesn't affect managed memory.
Memory profiling may affect the performance of your application. For example, the profiling API disables concurrent GC while the application is being profiled, and collecting memory allocation data can significantly slow your application down.
Why do you think that CaliEventHandlerDelegateProxy and ArglessEventHandlerProxy are allocated only under dotMemory profiling? Could you please describe how you determined this?
Event handlers are unsubscribed - there are very few in our code base
When dotMemory reports an event handler as a leak, it means there is only one reference to it (from the event source) and there is no way to unsubscribe from the event. Check all of these leaks, find yours, and look at the code to see how it happened. In any case, only 110.3 KB is retained by these objects; why did you decide your site crashed because of them?
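For context, a minimal sketch (all type names here are hypothetical, not taken from the site in question) of the pattern such a report points at: a long-lived publisher's event keeps every subscriber alive until the handler is removed:

using System;

public class CacheEvents
{
    // The publisher stores a reference to every subscribed handler,
    // and through each handler to the subscriber object itself.
    public event EventHandler ItemEvicted;

    public void RaiseItemEvicted()
    {
        var handler = ItemEvicted;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

public class PageComponent : IDisposable
{
    private readonly CacheEvents _events;

    public PageComponent(CacheEvents events)
    {
        _events = events;
        _events.ItemEvicted += OnItemEvicted;   // subscribe
    }

    private void OnItemEvicted(object sender, EventArgs e)
    {
        // react to the notification
    }

    public void Dispose()
    {
        // Without this, a long-lived publisher keeps this component
        // (and everything it references) reachable indefinitely.
        _events.ItemEvicted -= OnItemEvicted;
    }
}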
I'm at a loss now, I believe I've exhausted all possibilities of memory leaks in our code base
Take several snapshots over a period when memory consumption is growing, open a full comparison of some of those snapshots, and look at all the survived objects which should not have survived and find out why they survived. This is the only way to prove that your app doesn't have a memory leak; looking at the code doesn't prove it, sorry.
Hopefully, if you perform all the activities I recommend (performance profiling, investigating full snapshots and snapshot comparisons rather than only the inspections view, and checking why there is such a huge number of undisposed objects), you will find and fix the root problem.
I have an asp.net app which uses legacy COM interop library. Works fine until memory reaches somewhere around 500Mb and then it is no longer able to create new COM objects (get various exceptions, e.g. Creating an instance of the COM component with CLSID {FFFF-FFFF-FFFF-FFF-FFFFFF} from the IClassFactory failed due to the following error: 80070008.). It almost looks like it is hitting some kind of memory limit, but what is it? Can it be changed?
Solved! Turns out the object was creating a Window handle and we were hitting the 10K Window handles limit (except it was happening at 2K instances for some reason when inside IIS)
What OS, and is it 32-bit or 64-bit? What are you using to determine memory usage?
When you say you're explicitly releasing the objects, do you mean you're using Marshal.ReleaseComObject()?
I'm assuming you have AspCompat="true" in your <%@ Page %> directive... I wouldn't expect it to run at all if you didn't.
Can you give us some details on your COM object; what does it do, and can you post some code where you're calling it, including COM object signatures? How much memory would you expect a single object to take?
My first suspect, based only on the information that I've read so far, is that 500Mb is not truly the total memory in use, and/or that you're having a memory fragmentation issue. I've seen this occur with IIS processes when less than half of the memory is in use, and the errors tend to be random, depending on what object is being created at the time. BTW, 80070008 is 'not enough storage space'.
Process limits are 2GB on a 32-bit machine, of course, but even if a process isn't using the full 2GB, if there's not a contiguous block of memory of the size needed when creating an object, you'll get an out-of-memory error when you try to allocate. Lots of concurrent users implies lots of COM objects (and other objects) being allocated and released in a short period of time... which points to fragmentation as a suspect.
Coming up with an attack plan requires more info about the COM object and how it's being used.
Use a command pattern to queue and execute the COM interop calls on an asynchronous thread. This can free up the threads being used by IIS and lets you control the number of calls/instances of the COM app.
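A minimal sketch of that idea, assuming .NET 4 or later (the COM call itself is a placeholder): one dedicated worker thread drains a queue of commands, so IIS request threads never block inside the COM component and you control how many instances exist:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public sealed class ComCommandQueue
{
    private readonly BlockingCollection<Action> _commands = new BlockingCollection<Action>();

    public ComCommandQueue()
    {
        // A single long-running thread owns all COM work.
        var worker = new Thread(() =>
        {
            foreach (var command in _commands.GetConsumingEnumerable())
                command();   // the COM interop call executes here
        });
        worker.IsBackground = true;
        worker.Start();
    }

    // Called from request threads; returns a task the caller can await.
    public Task<T> Enqueue<T>(Func<T> comCall)
    {
        var completion = new TaskCompletionSource<T>();
        _commands.Add(() =>
        {
            try { completion.SetResult(comCall()); }
            catch (Exception ex) { completion.SetException(ex); }
        });
        return completion.Task;
    }
}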
You might also consider object pooling rather than creating a new object every time.
Let's say that you are using a shared hosting plan and your application stores lots of objects in the application state.
If they start taking too much memory does this mean that the server will just remove them?
If not what will happen then? What happens when the server has no memory left? Can you still store objects into the application or session state?
I am asking this because I am planning on developing a big site that will rely on the application state, and it will be crucial that the objects stored there don't get destroyed.
What I am afraid of is that at a certain point I might have too many objects in the application state and they might get removed to free up memory.
There are three different thresholds:
The total size of your app exceeds the maximum process size on your machine (really only applicable with an x86 OS). In that case, you'll start getting out of memory errors at first, generally followed very quickly by a process crash.
Your process, along with everything else running on the machine, no longer fits in physical memory. In that case, the machine will start to page, generally resulting in extremely poor performance.
Your process exceeds the memory limit imposed by IIS on itself, via IIS Manager. In that case, the process will be killed and restarted, as with a regular AppPool recycle.
With the Application object, entries are not automatically removed if you approach any of the above thresholds. With the Cache object, they can be removed, depending on the priority you assign.
As others have said, over-using the Application object isn't generally a good idea, because it's not scalable. If you were ever to add a second load-balanced server, keeping the info in sync from one server to another becomes very challenging, among other things.
What happens when any application takes up too much memory on a computer?
It causes the server to run everything really slowly. Even the other sites that share the computer.
It's not a good idea to store that much in application state. Use your config file and/or the database.
If you use the Cache object instead of the Application object, you can minimize problems of running out of memory. If the memory utilization of the ASP.Net worker process approaches the point at which the process will be bounced automatically (the recycle limit), the memory in Cache will be scavenged. Items that haven't been used for a while are removed first, potentially preventing the process from recycling. If the data is stored in Application, ASP.Net can do nothing to prevent the process from recycling, and all app state will be lost.
However, you do need to have a way of repopulating the Cache object. You could do that by persisting the cached data in a database, as others have proposed.
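As a hedged illustration of that pattern (the key name and the load method are placeholders), a get-or-repopulate helper using the ASP.NET Cache:

using System;
using System.Web;
using System.Web.Caching;

public static class CustomerListCache
{
    private const string Key = "CustomerList";

    public static string[] Get()
    {
        // Try the cache first; the entry may have been scavenged under memory pressure.
        var cached = (string[])HttpRuntime.Cache[Key];
        if (cached != null)
            return cached;

        // Repopulate from the persistent store and insert it with a priority,
        // so ASP.NET prefers to evict lower-priority items before this one.
        string[] fresh = LoadCustomerListFromDatabase();
        HttpRuntime.Cache.Insert(
            Key,
            fresh,
            null,                             // no cache dependency
            DateTime.UtcNow.AddMinutes(30),   // absolute expiration
            Cache.NoSlidingExpiration,
            CacheItemPriority.Default,
            null);                            // no removed callback
        return fresh;
    }

    private static string[] LoadCustomerListFromDatabase()
    {
        // Placeholder for the real database query.
        return new[] { "Contoso", "Fabrikam" };
    }
}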
Here's a short article with a good code example for handling Cache.
And here's a video of how to use Cache.
Anything stored in application state should be refreshable, and needs to be saved in current status in files or database. If nothing else happens, IIS restarts worker processes at least once a day, so nothing in application state will be there forever.
If you do run out of memory, you'll probably get an out of memory exception. You can also monitor memory usage, but in a shared host environment, that may not be enough information to avoid problems. And you may get the worker process recycled as an "involuntary" fix.
When you say that it's crucial that objects stored in application state don't get destroyed, it sounds like you're setting yourself up for trouble.
I think you should use session state instead of application state, and store the session in a SQL Server database. That way, once your application's user ends their session, the memory will be released.
If you want a more specific answer, then please provide more information about your application.
Deadlocks are hard to find and very uncomfortable to remove.
How can I find error sources for deadlocks in my code? Are there any "deadlock patterns"?
In my special case, it deals with databases, but this question is open for every deadlock.
Update: This recent MSDN article, Tools And Techniques to Identify Concurrency Issues, might also be of interest
Stephen Toub in the MSDN article Deadlock monitor states the following four conditions necessary for deadlocks to occur:
A limited number of a particular resource. In the case of a monitor in C# (what you use when you employ the lock keyword), this limited number is one, since a monitor is a mutual-exclusion lock (meaning only one thread can own a monitor at a time).
The ability to hold one resource and request another. In C#, this is akin to locking on one object and then locking on another before releasing the first lock, for example:
lock(a)
{
    ...
    lock(b)
    {
        ...
    }
}
No preemption capability. In C#, this means that one thread can't force another thread to release a lock.
A circular wait condition. This means that there is a cycle of threads, each of which is waiting for the next to release a resource before it can continue.
He goes on to explain that the way to avoid deadlocks is to avoid (or thwart) condition four.
Joe Duffy discusses several techniques for avoiding and detecting deadlocks, including one known as lock leveling. In lock leveling, locks are assigned numerical values, and threads must only acquire locks that have higher numbers than locks they have already acquired. This prevents the possibility of a cycle. It's also frequently difficult to do well in a typical software application today, and a failure to follow lock leveling on every lock acquisition invites deadlock.
The classic deadlock scenario is A is holding lock X and wants to acquire lock Y, while B is holding lock Y and wants to acquire lock X. Since neither can complete what they are trying to do both will end up waiting forever (unless timeouts are used).
In this case a deadlock can be avoided if A and B acquire the locks in the same order.
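A minimal sketch of that rule in C# (the names are illustrative): both workers take the locks in the same order, so the circular wait can never form:

public class TwoResourceWorker
{
    private readonly object _lockX = new object();
    private readonly object _lockY = new object();

    // Both methods acquire _lockX before _lockY. If either took them in the
    // opposite order, two threads could each hold one lock and wait forever
    // for the other.
    public void WorkerA()
    {
        lock (_lockX)
        {
            lock (_lockY)
            {
                // ... work that needs both resources ...
            }
        }
    }

    public void WorkerB()
    {
        lock (_lockX)
        {
            lock (_lockY)
            {
                // ... other work that needs both resources ...
            }
        }
    }
}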
No deadlock patterns to my knowledge (and that's after 12 years of writing heavily multithreaded trading applications), but the TimedLock class has been of great help in finding deadlocks that exist in code without massive rework.
http://www.randomtree.org/eric/techblog/archives/2004/10/multithreading_is_hard.html
Basically, (in .NET/C#) you search/replace all your "lock(xxx)" statements with "using (TimedLock.Lock(xxx))".
If a deadlock is ever detected (lock unable to be obtained within the specified timeout, defaults to 10 seconds), then an exception is thrown. My local version also immediately logs the stacktrace. Walk up the stacktrace (preferably debug build with line numbers) and you'll immediately see what locks were held at the point of failure, and which one it was attempting to get.
In dotnet 1.1, in a deadlock situation as described, as luck would have it all the threads which were locked would throw the exception at the same time. So you'd get 2+ stacktraces, and all the information necessary to fix the problem. (2.0+ may have changed the threading model internally enough to not be this lucky, I'm not sure)
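For reference, a minimal sketch of what such a TimedLock can look like (an assumption based on the description above, not the code from the linked article): Monitor.TryEnter with a timeout, wrapped in a disposable struct so it drops into a using statement:

using System;
using System.Threading;

public struct TimedLock : IDisposable
{
    private readonly object _target;

    private TimedLock(object target)
    {
        _target = target;
    }

    // Replaces "lock (o) { ... }" with "using (TimedLock.Lock(o)) { ... }".
    public static TimedLock Lock(object o)
    {
        return Lock(o, TimeSpan.FromSeconds(10));
    }

    public static TimedLock Lock(object o, TimeSpan timeout)
    {
        if (!Monitor.TryEnter(o, timeout))
        {
            // A probable deadlock: this is the place to log the stack trace.
            throw new TimeoutException("Failed to acquire lock within " + timeout);
        }
        return new TimedLock(o);
    }

    public void Dispose()
    {
        Monitor.Exit(_target);
    }
}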
Making sure all transactions affect tables in the same order is the key to avoiding the most common of deadlocks.
For example:
Transaction A:
    UPDATE Table A SET Foo = 'Bar'
    UPDATE Table B SET Bar = 'Foo'
Transaction B:
    UPDATE Table B SET Bar = 'Foo'
    UPDATE Table A SET Foo = 'Bar'
This is extremely likely to result in a deadlock, as Transaction A gets a lock on Table A and Transaction B gets a lock on Table B, so neither of them can get a lock for its second command until the other has finished.
Most other forms of deadlock are generally caused by high-intensity use and SQL Server deadlocking internally while allocating resources.
Yes - deadlocks occur when processes try to acquire resources in random order. If all your processes try to acquire the same resources in the same order, the possibilities for deadlocks are greatly reduced, if not eliminated.
Of course, this is not always easy to arrange...
The most common (according to my unscientific observations) DB deadlock scenario is very simple:
Two processes read something (a DB record for example), both acquire a shared lock on the associated resource (usually a DB page),
Both try to make an update, trying to upgrade their locks to exclusive ones - voila, deadlock.
This can be avoided by specifying the "FOR UPDATE" clause (or similar, depending on your particular RDBMS) if the read is to be followed by an update. This way the process gets the exclusive lock from the start, making the above scenario impossible.
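A hedged sketch of that idea against SQL Server (the table, column names and connection string are made up; SQL Server uses the UPDLOCK table hint where other engines use FOR UPDATE): the read takes an update lock up front, so two concurrent read-then-update transactions queue behind each other instead of deadlocking:

using System.Data.SqlClient;

public static class AccountRepository
{
    public static void AdjustBalance(string connectionString, int accountId, decimal delta)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                // UPDLOCK takes an update lock on the row at read time, so a second
                // transaction doing the same read blocks here instead of both holding
                // shared locks and deadlocking when they try to upgrade.
                decimal balance;
                using (var read = new SqlCommand(
                    "SELECT Balance FROM Accounts WITH (UPDLOCK) WHERE Id = @id",
                    connection, transaction))
                {
                    read.Parameters.AddWithValue("@id", accountId);
                    balance = (decimal)read.ExecuteScalar();
                }

                using (var write = new SqlCommand(
                    "UPDATE Accounts SET Balance = @balance WHERE Id = @id",
                    connection, transaction))
                {
                    write.Parameters.AddWithValue("@balance", balance + delta);
                    write.Parameters.AddWithValue("@id", accountId);
                    write.ExecuteNonQuery();
                }

                transaction.Commit();
            }
        }
    }
}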
I recommend reading this article by Herb Sutter. It explains the reasons behind deadlocking issues and puts forward a framework to tackle the problem.
The typical scenario is mismatched update plans (tables not always updated in the same order). However, it is not unusual to have deadlocks under high processing volume.
I tend to accept deadlocks as a fact of life; they will happen one day or another, so I have my DAL prepared to handle and retry a deadlocked operation.
A deadlock is a condition that occurs when two processes are each waiting for the other to complete before proceeding. The result is that both processes hang.
It most commonly arises in multitasking and client/server environments.
Deadlocks occur mainly when multiple dependent locks exist and one thread and another thread try to acquire the mutexes in reverse order. You should pay attention to how you use mutexes to avoid deadlocks.
Be sure to complete the operation and release the locks in a disciplined way: if you acquire multiple locks in the order A, B, C, release them in the reverse order, C, B, A.
In my last project I faced a problem with deadlocks in a SQL Server database. The problem in finding the cause was that my software and a third-party application were using the same database and working on the same tables. It was very hard to find out what was causing the deadlocks. I ended up writing a SQL query to find out which processes and which SQL statements were causing the deadlocks. You can find that statement here: Deadlocks on SQL-Server
To avoid deadlock there is an algorithm called the Banker's algorithm.
This one also provides helpful information on avoiding deadlock.
I have one website on my server, and my IIS Worker Process is using 4GB RAM consistently. What should I be checking?
c:\windows\system32\inetsrv\w3wp.exe
I would check the CLR Tuning Section in the document Gulzar mentioned.
As the other posters pointed out, any object that implements IDisposable should have Dispose() called on it when you're finished with it, preferably via the using construct.
Fire up perfmon.exe and add these counters:
Process\Private Bytes
.NET CLR Memory\# Bytes in all Heaps
Process\Working Set
.NET CLR Memory\Large Object Heap size
An increase in Private Bytes while the # Bytes in all Heaps counter remains the same indicates unmanaged memory consumption.
An increase in both counters indicates managed memory consumption.
Check the section on troubleshooting memory bottlenecks in Tuning .NET Application Performance.
Create a mini-dump of the w3wp process and use WinDbg to see what objects are in memory. This is what the IIS support team at Microsoft does whenever they get questions like this.
If you have access to the source code, you may want to check that any objects that implement IDisposable are being referenced inside using statements or being properly disposed of when you are done with them.
Using is a C# construct, but the basic idea is that you are freeing up resources when you are done.
Another thing to check on is large objects getting put in the "in process" session state or cache.
More details would definitely help. How many applications are running inside the application pool? Are there ASP.NET applications in the pool?
If you're running ASP.NET, take a good look at what you're storing in the session and cache variables. Use PerfMon to check how many Generation 0, 1 and 2 collections are occurring. Be wary of storing UI elements in the session state or cache since this will prevent the entire page instance and all of the page instance's children from being collected as well. Finally, check to see if you're doing lots of string concatenation. This can cause lots of object instantiations since .NET strings are immutable. Look into using StringBuilder instead.
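As a rough illustration of that last point (the loop is arbitrary): each += on a string allocates a brand-new string, while a StringBuilder appends into one reusable buffer:

using System.Text;

public static class ConcatenationExample
{
    public static string Join(string[] values)
    {
        // Wasteful: every iteration creates a new string object, so n values
        // produce on the order of n intermediate strings for the GC to collect.
        string slow = string.Empty;
        foreach (var value in values)
            slow += value + ",";

        // Better: StringBuilder grows a single internal buffer.
        var fast = new StringBuilder();
        foreach (var value in values)
            fast.Append(value).Append(',');
        return fast.ToString();
    }
}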
As other people noted, a common cause of this problem is resource leaking. There is also a known issue with Windows Server 2003 and IIS 6; see KB916984.