I've been profiling my ASP.NET application with ANTS Memory Profiler 6 and have seen indications of memory leaks. However, I don't know whether the growth I'm seeing is expected (for instance, System.String grows a lot with each snapshot. Should it?).
I don't understand the whole memory process, so I don't know if I am interpreting the results correctly or not. How do I interpret the results of the ANTS Memory Profiler?
I have more or less been able to answer my own question while solving my memory issue. Although String may be at the top of the list most of the time, I shouldn't see the instance count just keep growing and growing. It turns out that in my application an object I thought was being freed actually wasn't, and it held a reference to some XML files, which were of course held in strings.
My test was to go to the home page of the web site -> click to another page -> back to the home page. Doing this should mean no new references get created (the instance count delta should remain 0, i.e. no growth).
Hope this can help someone else.
I spent a few days searching for the source of a memory leak in my software and finally found it.
Steps to reproduce:
I create a GUI application, add an image to the .qrc resource file, create a form in Qt Designer, add a QPushButton there, and in its styleSheet property write:
#closeButton{ image: url(:/system/images/White/Close.png); }
(the button is named "closeButton")
Without the style sheet the program works fine; with the style sheet, I get a memory leak.
How can I avoid the memory leak in this case?
Objects that survive until process termination aren't necessarily memory leaks, and the tool won't be able to tell you which ones are leaks and which aren't. Memory leaks are usually allocations that are made multiple times from the same program location and never get freed, and even then that isn't always the case.

Leak detection requires a purpose-made test harness that repeats a series of operations which are supposed not to leave behind memory allocated at any given program location more than once. If you then notice that, as the number of operations increases, the number of memory blocks left behind increases too, you likely have a real leak. Ideally, the test harness should take snapshots of allocated memory blocks after each "operation cycle" and flag the program locations that consistently leave stuff behind. The library should be able to capture a stack trace giving the program location where the allocation was made; otherwise it's useless in practice.
I'm very suspicious of code that deallocates all memory before process termination: usually it's wasted time that prolongs system shutdown and makes for bad UX. When the user hits the "Exit" button, make sure that data is safe (e.g. close SQLite files, save open documents, maybe just as "work in progress" that will be brought back the next time the application is used), and then call exit(0).
In general, leak detection takes a bit more than just using a library that gives you a list of memory blocks allocated at exit. The library is a tool that you, a thinking, reasoning human developer, must apply to the problem :) Just as a hammer isn't useful if you bang it all over the place (unless you have lots of nails to drive!), a "leak detector" library isn't useful all by itself.
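As a minimal sketch of such an "operation cycle" harness (in C#, assuming a managed app; the runOneCycle action and the cycle count are placeholders, and a native app would query its allocator for snapshots instead of the GC):

using System;

static class LeakHarness
{
    // Repeats one operation and compares heap snapshots. A real leak
    // grows roughly linearly with the cycle count; a one-time cache
    // shows up as a constant offset after the warm-up instead.
    public static void Run(Action runOneCycle, int cycles)
    {
        runOneCycle();                        // warm-up fills one-time caches
        long baseline = StableHeapSize();

        for (int i = 0; i < cycles; i++)
            runOneCycle();

        long delta = StableHeapSize() - baseline;
        Console.WriteLine("delta = {0} bytes over {1} cycles", delta, cycles);
    }

    static long StableHeapSize()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();        // let finalizers release state
        GC.Collect();                         // collect what finalization freed
        return GC.GetTotalMemory(true);
    }
}

A stack-trace-capturing allocator, as described above, would replace the single delta with a per-call-site report, which is what actually points you at the leaking code.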
I am profiling an application (using VS 2010) that is behaving badly in production. One of the recommendations given by VS 2010 is:
Relatively high rate of Gen 1 garbage collections is occurring. If, by design, most of your program's data structures are allocated and persisted for a long time, this is not ordinarily a problem. However, if this behavior is unintended, your app may be pinning objects. If you are not certain, you can gather .NET memory allocation data and object lifetime information to understand the pattern of memory allocation your application uses.
Searching on Google gives the following link: http://msdn.microsoft.com/en-us/library/ee815714.aspx. Are there some obvious things I can do to reduce this issue? I seem to be lost here.
Double-click the message in the Errors List window to navigate to the Marks View of the profiling data. Find the .NET CLR Memory# of Gen 0 Collections and .NET CLR Memory# of Gen 1 Collections columns. Determine if there are specific phases of program execution where garbage collection is occurring more frequently. Compare these values to the % Time in GC column to see if the pattern of managed memory allocations is causing excessive memory management overhead.

To understand the application's pattern of managed memory usage, profile it again running a .NET Memory allocation profile and request Object Lifetime measurements.

For information about how to improve garbage collection performance, see Garbage Collector Basics and Performance Hints on the Microsoft Web site. For information about the overhead of automatic garbage collection, see Large Object Heap Uncovered.
The relevant line there is:
To understand the application's pattern of managed memory usage, profile it again running a .NET Memory allocation profile and request Object Lifetime measurements.
You need to understand how many objects your application allocates and when, and how long they stay alive. You're probably allocating hundreds (or thousands!) of tiny objects inside a loop somewhere without really thinking about the consequences of reclaiming that memory when the references fall out of scope.
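As a hypothetical illustration of that pattern (the DataTable and the string log are made up), compare a loop that allocates per iteration with one that reuses a buffer:

using System.Collections.Generic;
using System.Data;
using System.Text;

static class LoopAllocDemo
{
    // Allocates a new StringBuilder (plus intermediate strings) on every
    // iteration, feeding Gen 0/Gen 1 collections in a hot loop.
    static List<string> BuildLogAllocating(DataTable table)
    {
        List<string> log = new List<string>();
        foreach (DataRow row in table.Rows)
        {
            StringBuilder sb = new StringBuilder();
            sb.Append(row["Name"]).Append(": ").Append(row["Value"]);
            log.Add(sb.ToString());
        }
        return log;
    }

    // Hoists the builder out of the loop and reuses one buffer instead.
    static List<string> BuildLogReusing(DataTable table)
    {
        List<string> log = new List<string>();
        StringBuilder shared = new StringBuilder();
        foreach (DataRow row in table.Rows)
        {
            shared.Length = 0;   // reset contents without reallocating
            shared.Append(row["Name"]).Append(": ").Append(row["Value"]);
            log.Add(shared.ToString());
        }
        return log;
    }
}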
http://msdn.microsoft.com/en-us/library/ms973837.aspx states:
Now that we have a basic model for how things are working, let's consider some things that could go wrong that would make it slow. That will give us a good idea what sorts of things we should try to avoid to get the best performance out of the collector.
Too Many Allocations
This is really the most basic thing that can go wrong. Allocating new memory with the garbage collector is really quite fast. As you can see in Figure 2 above, all that typically needs to happen is for the allocation pointer to get moved to create space for your new object on the "allocated" side—it doesn't get much faster than that. However, sooner or later a garbage collect has to happen and, all things being equal, it's better for that to happen later than sooner. So you want to make sure when you're creating new objects that it's really necessary and appropriate to do so, even though creating just one is fast.
This may sound like obvious advice, but actually it's remarkably easy to forget that one little line of code you write could trigger a lot of allocations. For example, suppose you're writing a comparison function of some kind, and suppose that your objects have a keywords field and that you want your comparison to be case insensitive on the keywords in the order given. Now in this case you can't just compare the entire keywords string, because the first keyword might be very short. It would be tempting to use String.Split to break the keyword string into pieces and then compare each piece in order using the normal case-insensitive compare. Sounds great, right?
Well, as it turns out, doing it like that isn't such a good idea. You see, String.Split is going to create an array of strings, which means one new string object for every keyword originally in your keywords string plus one more object for the array. Yikes! If we're doing this in the context of a sort, that's a lot of comparisons and your two-line comparison function is now creating a very large number of temporary objects. Suddenly the garbage collector is going to be working very hard on your behalf, and even with the cleverest collection scheme there is just a lot of trash to clean up. Better to write a comparison function that doesn't require the allocations at all.
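A rough C# sketch of the two approaches the quote contrasts; the comma delimiter and the prefix-then-length tie-break are assumptions:

using System;

static class KeywordCompare
{
    // Allocation-heavy: Split creates an array plus one string per keyword
    // on every comparison, which multiplies quickly inside a sort.
    static int CompareWithSplit(string a, string b)
    {
        string[] ka = a.Split(',');
        string[] kb = b.Split(',');
        int n = Math.Min(ka.Length, kb.Length);
        for (int i = 0; i < n; i++)
        {
            int c = string.Compare(ka[i], kb[i], StringComparison.OrdinalIgnoreCase);
            if (c != 0) return c;
        }
        return ka.Length - kb.Length;
    }

    // Allocation-free: walk both strings keyword by keyword and compare
    // the segments in place with the index-based Compare overload.
    static int CompareInPlace(string a, string b)
    {
        int ia = 0, ib = 0;
        while (ia < a.Length && ib < b.Length)
        {
            int ea = a.IndexOf(',', ia); if (ea < 0) ea = a.Length;
            int eb = b.IndexOf(',', ib); if (eb < 0) eb = b.Length;
            int la = ea - ia, lb = eb - ib;
            int c = string.Compare(a, ia, b, ib, Math.Min(la, lb),
                                   StringComparison.OrdinalIgnoreCase);
            if (c != 0) return c;
            if (la != lb) return la - lb;      // shorter keyword sorts first
            ia = ea + 1; ib = eb + 1;          // skip past the comma
        }
        bool aDone = ia >= a.Length, bDone = ib >= b.Length;
        return aDone == bDone ? 0 : (aDone ? -1 : 1);
    }
}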
We have a Flex app that will typically run for long periods of time (could be days or weeks). When I came in this morning I noticed that the app had stopped running and a white exclamation point in a gray circle was in the center of the app. I found a post about it on the Adobe forums, but no one seems to know exactly what the symbol means, so I thought I'd reach out to the SO community.
Adobe forum post: http://forums.adobe.com/message/3087523
Screen shot of the symbol:
Any ideas?
Here's an answer in the post you linked to from an Adobe employee:
The error you are seeing is the new out of memory notification. It is basically shielding the user when memory usage gets near the system resource cap. The best course of action here (if you own the content) is to check your application for high memory usage and correct the errors. If you don't own the content, it would probably be best to contact the owners and make them aware of the issue you are seeing.
He also says this in a later response:
Developers can use the System.totalMemory property in AS3 to monitor the memory usage that the Flash Player is taking up. This will allow you to see how much memory is used and where the leaks are, and allow you to optimize your content based on this property.
I work for a digital signage company and we have also come across this error. However, it is not only memory-leak related: it can also be caused by the Vector code described on that page. We have also noted that it occurs without any kind of memory spike whatsoever, and sometimes appears randomly. However, we noticed that when we replicated the bug with the Vector error, the player reported it as an out of memory error, which clearly was not the case.
In our internal tests we noted that this bug only occurs with Flash Player 10.1 and up; Flash Player 10 does not seem to have this issue. Further, there seems to be a weak connection between the error occurring and the use of video. I know this might not be much help, but I thought you should know it is not only a memory-leak related issue. I have submitted this bug to Adobe, and hopefully they resolve it soon.
This can occur when using a Vector.<int> that is initialized from an array containing a single negative int. It comes from the way you initialize the Vector class with code such as:
Vector.<int>([-2])
The -2 gets passed to the Vector class as its initial length, the way Array(5) would be. This somehow causes an error (one that is not checked and raised as an exception).
I have also seen the issue recur when passing negative values to the length of a Vector.
A possible explanation is that the Vector tries to allocate its given length immediately. Since the negative value is being forced into a uint, it automatically translates to a very large positive value. This causes the Vector to attempt to allocate too much memory (about 4 GB), hence the immediate crash. If you pass a negative value to the length of an Array, nothing happens, because Array apparently does not allocate its length up front; but you can inspect the value and see that it is a very large positive number.
This explanation is pure conjecture (I did not hear it anywhere), but it is consistent with AS semantics and with the meaning of the exclamation mark.
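The same two's-complement reinterpretation can be demonstrated in C#, purely as an analogue of what the AS3 runtime would plausibly do with a negative length:

using System;

static class UintCoercion
{
    static void Main()
    {
        int negative = -2;
        // Forcing a negative int into a uint reinterprets the bits,
        // yielding a huge positive value, which is how a "length = -2"
        // can turn into a ~4 GB allocation request.
        uint asLength = unchecked((uint)negative);
        Console.WriteLine(asLength);   // 4294967294
    }
}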
That said, I have searched our entire code base for uses of the "length" setter and could not find it used with a Vector. Still, we are experiencing very frequent crashes of this sort; some are caused by actual high memory consumption (probably leaks), but others happen when memory usage is relatively low.
I cannot explain it. Perhaps there are other operations, besides the "length" setter, that can potentially lead to allocation of large amounts of memory?
In our ASP.NET web app we're experiencing quite an extensive memory leak, which I am investigating right now. Using WinDbg I got down to the largest memory eaters in our app, which are (I ran !dumpheap -stat in the WinDbg console to get these):
MethodTable Addr Count Overall size Type
...
000007fee8306e10 212928 25551360 System.Web.UI.LiteralControl
000007feebf44748 705231 96776168 System.Object[]
000007fee838fd18 4394539 140625248 System.Web.Caching.CacheDependency+DepFileInfo
000007fee838e678 4394614 210941472 System.Web.FileMonitorTarget
000007feebf567b0 18259 267524784 System.Collections.Hashtable+bucket[]
00000000024897c0 1863 315249528 Free
000007feebf56cd0 14315 735545880 System.Byte[]
000007feebf4ec90 1293939 1532855608 System.String
For all I know a large number of String objects can be quite normal; still, there's definitely room for improvement. But what really makes me itch is the count of System.Web.FileMonitorTarget objects: we have over 4 million instances on the heap (48 bytes each)! Using two memory dumps and comparing them, I've found that these objects are not being cleaned up by the GC.
What I'm trying to find out is: where are these objects coming from? I've already tried ANTS Memory Profiler to get to the root of the evil but it leads nowhere near any of our own classes. I see the connection with System.Web.Caching.CacheDependency+DepFileInfo and thus the System.Web.Cache but we do not use file dependencies to invalidate our cache entries.
Also, there are 14,315 instances of System.Byte[] accounting for over 700 MB on the heap, which stuns me: the only place where we use Byte[] is our image uploading component, but we have only around 30 image uploads per day.
What might be the source of these Byte arrays and FileMonitorTarget objects? Any hints are very welcome!
Oliver
P.S. Someone asked pretty much the same question here but the only 'answer' there was very general.
There are a couple of things I would look into. You're right that strings are often used in great numbers; still, you have approx. 1.4 GB worth of strings on the heap. Does that sound right? If not, I would look into that. If it is within the expected range, just ignore it.
If you suspect FileMonitorTarget and/or Byte[] to be leaking, dump the instances using !dumpheap -mt XXX where XXX is the listed MethodTable for the types. You may want to use PSSCOR2 instead of SOS, as it makes this task a bit easier (the output from !dumpheap shows a delta column and you can limit the number of instances dumped).
The next thing to do is to start looking into what is keeping specific instances alive. The !gcroot command will tell you what roots a specific instance. Pick an instance at random and inspect the roots. If everything is as expected, move on to the next. If your application is leaking instances of these types, chances are that you will come across an instance that should have been freed. Once you have the roots, you need to figure out what part of the code is holding on to them. A common source is unsubscribed events, but there are other possible reasons why objects are kept alive.
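As a hypothetical C# illustration of the unsubscribed-event case (all type names here are made up):

using System;

// A long-lived publisher keeps every subscriber alive through its
// delegate invocation list.
static class CachePublisher
{
    public static event EventHandler Invalidated;   // static: lives for the AppDomain

    public static void RaiseInvalidated()
    {
        EventHandler h = Invalidated;
        if (h != null) h(null, EventArgs.Empty);
    }
}

class PageWidget
{
    public PageWidget()
    {
        // Subscribing roots this widget (and everything it references)
        // in CachePublisher's invocation list...
        CachePublisher.Invalidated += OnInvalidated;
    }

    void OnInvalidated(object sender, EventArgs e) { /* react to invalidation */ }

    // ...so it must be unsubscribed when the widget is done:
    public void Detach()
    {
        CachePublisher.Invalidated -= OnInvalidated;
    }
}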
Objects of type System.Web.Caching.CacheDependency+DepFileInfo are created automatically by ASP.NET to monitor file changes to your website. So even if you are not specifically using a FileDependency cache expiration, ASP.NET itself does.
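For reference, this is roughly what an explicit file dependency looks like in user code (the page class, cache key, and file path are hypothetical); ASP.NET's dynamic compilation sets up similar file monitors internally for the pages and controls it compiles:

using System;
using System.Web.Caching;
using System.Web.UI;
using System.Xml;

public partial class MenuPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Cache an XML document and let ASP.NET evict the entry
        // whenever the underlying file changes on disk.
        XmlDocument menu = new XmlDocument();
        string path = Server.MapPath("~/App_Data/menu.xml");
        menu.Load(path);
        Cache.Insert("menu-xml", menu, new CacheDependency(path));
    }
}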
If I run a dump field against some of these objects, I get a path to my controls/pages.
0:000> !df -field _filename 0d3f24ec
Name: System.String
MethodTable: 79330b24
EEClass: 790ed65c
Size: 180(0xb4) bytes
GC Generation: 2
(C:\WINDOWS\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll)
String: C:\inetpub\wwwroot\Website\Application\Base\UserControl\Messages.ascx
Fields:
MT Field Offset Type VT Attr Value Name
79332d70 4000096 4 System.Int32 1 instance 82 m_arrayLength
79332d70 4000097 8 System.Int32 1 instance 81 m_stringLength
79331804 4000098 c System.Char 1 instance 44 m_firstChar
79330b24 4000099 10 System.String 0 shared static Empty
>> Domain:Value 000e0ba0:02581198 00109f28:02581198 <<
79331754 400009a 14 System.Char[] 0 shared static WhitespaceChars
>> Domain:Value 000e0ba0:025816f0 00109f28:02586410 <<
You can see this link describing a bit more detail: Understanding ASP.NET Dynamic Compilation
However, your case might still be different. Try running !GCRoot [obj_addr] and see what is holding onto those objects. In my case it is entirely IIS/.NET-related objects.
That said, I still had a problem where millions of these cache objects were created, and I have no idea why. :| (this is the first time it happened to me, but I don't think it appeared or will disappear magically...)
Running into a prickly problem with our web app here (ASP.NET 2.0, Windows Server 2008).
Our memory usage for the website grows and grows, even though I would expect it to remain at a fairly static level. (We have a small amount of data that gets stored in state.)
Wanting to find out what the problem is, I've run System.GC.Collect() a few times, taken a memory dump, and then loaded this memory dump into WinDbg.
When I do a DumpHeap -Stat, I get an inordinately large number of a particular type hanging around in memory.
0000064280580b40 713471 79908752 PaymentOption
So, doing a DumpHeap -MT for this type, I get a stack of object references. Picking a few of these at random, I run !gcroot, and the command comes back reporting that no references are held to them.
To me, this is exactly when the GC should collect these items, but for some reason they have been left outstanding.
Can anybody offer an explanation as to what might be happening?
You could try using sosex.dll in Windbg, which is an extension written to help with .NET debugging. There is a command named !refs which is similar to !gcroot, in that it will show you all the objects referencing an object, plus it will show all the objects that it too is referencing.
In the example on the author's website, !refs is used against an object and the output looks like this:
0:000> !refs 0000000080000db8
Objects referenced by 0000000080000db8 (System.Threading.Mutex):
0000000080000ef0 32 Microsoft.Win32.SafeHandles.SafeWaitHandle
Objects referencing 0000000080000db8 (System.Threading.Mutex):
0000000080000e08 72 System.Threading.Mutex+<>c__DisplayClass3
0000000080000e50 64 System.Runtime.CompilerServices.RuntimeHelpers+CleanupCode
A few things:
GC.Collect won't help you do any debugging. The garbage collector is already being called: if any objects were available for collection it would have happened already.
Idle memory on a server is wasted memory. Are you sure memory is being 'leaked', or is it just that the framework has decided it can keep more things in memory, or keep more memory around, for faster access? In this case I suspect you are leaking memory, but it's something to double-check.
It sounds like something you don't expect is keeping a reference to PaymentOption objects. Perhaps a static collection somewhere (see the sketch after this list)? Or a separate thread?
Does PaymentOption implement a finalizer by any chance? Does it call an STA COM object?
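Regarding the static-collection possibility above, a minimal hypothetical sketch of how such a collection roots objects:

using System.Collections.Generic;

class PaymentOption { /* ... */ }

// Hypothetical: a static list used as an ad-hoc cache. Everything added
// here is GC-rooted for the life of the AppDomain, so instances keep
// accumulating even though no "live" code references them anymore.
static class PaymentCache
{
    public static readonly List<PaymentOption> Recent = new List<PaymentOption>();
}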
I'd be curious to see the output of !finalizequeue to see if the count of objects that are showing up on the heap are roughly the amount of any that might waiting to be finalized. Output should probably look something like this:
generation 0 has 57 finalizable objects (0409b5cc->0409b6b0)
generation 1 has 55 finalizable objects (0409b4f0->0409b5cc)
generation 2 has 0 finalizable objects (0409b4f0->0409b4f0)
Ready for finalization 0 objects (0409b6b0->0409b6b0)
If the number of "Ready for finalization" objects continues to grow, and you're certain garbage collections are occurring (confirm via perfmon counters), then it might be a blocked finalizer thread. You might need to take several snapshots over the lifetime of the process (before a recycle) to confirm. I usually rely on the magic number of three, as long as the site is under some sort of load.
A bug in a finalizer can block the finalizer thread and prevent the objects from ever being collected.
If the PaymentOption object calls a legacy STA COM object, then this article ASP.NET Hang and OutOfMemory exceptions caused by STA components might point in the right direction.
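A minimal C# sketch of the kind of finalizer bug described above (the shared lock is hypothetical):

using System.Threading;

static class SharedState
{
    public static readonly object Gate = new object();
}

class BadCleanup
{
    // All finalizers run on a single CLR finalizer thread, so one
    // finalizer that blocks stalls every finalizable object queued
    // behind it, and "Ready for finalization" keeps growing.
    ~BadCleanup()
    {
        // Hypothetical bug: waiting on a lock that another thread never
        // releases blocks the finalizer thread indefinitely.
        Monitor.Enter(SharedState.Gate);
        try { /* release resources */ }
        finally { Monitor.Exit(SharedState.Gate); }
    }
}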
Not without more info on your application. But we ran into some nasty memory problems a long time ago. Do you use ASP.NET caching? As Raymond Chen likes to say, "poor caching strategy is indistinguishable from a memory leak."
Check out another tool, CLRProfiler.exe: it will help you traverse object reference trees to see where your objects are rooted.
You've heard this before - if you have to GC.Collect, something is wrong.
Is the PaymentOption object created in an asynchronous process, by any chance? I remember something about how, if you don't call EndInvoke, you can get problems like this.
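A hedged sketch of the BeginInvoke/EndInvoke contract being described (LoadPaymentOption and the delegate type are made up; this is the .NET Framework delegate async pattern):

using System;

class PaymentOption { /* ... */ }

class AsyncExample
{
    delegate PaymentOption PaymentLoader();          // hypothetical delegate

    static PaymentOption LoadPaymentOption()
    {
        return new PaymentOption();                  // stand-in for real work
    }

    static void Main()
    {
        PaymentLoader load = LoadPaymentOption;

        // Fire-and-forget misuse: if nothing ever calls EndInvoke, the
        // runtime can keep the async result (and the PaymentOption it
        // references) alive far longer than expected.
        IAsyncResult ar = load.BeginInvoke(null, null);

        // The rule: every BeginInvoke gets a matching EndInvoke, either
        // by blocking like this or inside the AsyncCallback.
        PaymentOption option = load.EndInvoke(ar);
        Console.WriteLine(option != null ? "loaded" : "null");
    }
}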
I've been investigating the same issue myself and was asking why objects that had no references were not being collected.
Objects larger than 85,000 bytes are stored on the Large Object Heap, from which memory is freed up less frequently.
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
A single PaymentOption may not be that big, but are they contained within collections, or are they based on something like a DataSet? You should pick a few instances of the PaymentOption / collection / DataSet and then use the SOS !objsize command to see how big they are.
Unfortunately this doesn't really answer the question. I like to think I can trust the .NET Framework to take care of releasing unused memory whenever it needs to. However, I see a lot of memory being used by the worker process running the app I am looking at, even when memory looks quite tight on the server.
FYI, SOS in .NET 4 supports a few new commands that might be of assistance, namely !gcwhere (locates the generation of an object; the equivalent of sosex's !gcgen) and !findroots (does what it says on the tin; sosex's !refs).
Both are documented in the SOS documentation and mentioned on Tess Ferrandez's blog.