Running into a prickly problem with our web app here (ASP.NET 2.0, Windows Server 2008).
The website's memory usage grows and grows, even though I would expect it to remain at a fairly static level (we only keep a small amount of data in state).
Wanting to find out what the problem is, I've run System.GC.Collect() a few times, taken a memory dump, and then loaded this memory dump into WinDbg.
When I do a !DumpHeap -stat I get an inordinately large number of one particular type hanging around in memory:
0000064280580b40 713471 79908752 PaymentOption
So, doing a !DumpHeap -MT for this type, I get a long list of object addresses. Picking a few of these at random, I run !gcroot, and the command comes back reporting that no references are held to them.
To me, this is exactly when the GC should collect these items, but for some reason they have been left outstanding.
Can anybody offer an explanation as to what might be happening?
You could try using sosex.dll in Windbg, which is an extension written to help with .NET debugging. There is a command named !refs which is similar to !gcroot, in that it will show you all the objects referencing an object, plus it will show all the objects that it too is referencing.
In the example on the author's website, !refs is used against an object and the output looks like this:
0:000> !refs 0000000080000db8
Objects referenced by 0000000080000db8 (System.Threading.Mutex):
0000000080000ef0 32 Microsoft.Win32.SafeHandles.SafeWaitHandle
Objects referencing 0000000080000db8 (System.Threading.Mutex):
0000000080000e08 72 System.Threading.Mutex+<>c__DisplayClass3
0000000080000e50 64 System.Runtime.CompilerServices.RuntimeHelpers+CleanupCode
A few things:
GC.Collect won't help you do any debugging. The garbage collector is already being called: if any objects were available for collection it would have happened already.
Idle memory on a server is wasted memory. Are you sure memory is being 'leaked', or is it just that the framework is deciding it can keep more things in memory, or keep more memory around for faster access? In this case I suspect you are leaking memory, but it's something to double-check.
It sounds like something you don't expect is keeping a reference to the PaymentOption objects. Perhaps a static collection somewhere? Or a separate thread?
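A contrived sketch of the static-collection case (the class and field names here are hypothetical, not from your app):

using System.Collections.Generic;

// A static list that is appended to on every request but never pruned:
// everything it holds stays GC-rooted for the lifetime of the AppDomain.
public static class PaymentAudit
{
    private static readonly List<PaymentOption> _seen = new List<PaymentOption>();

    public static void Record(PaymentOption option)
    {
        _seen.Add(option); // grows without bound; nothing is ever removed
    }
}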
Does PaymentOption implement a finalizer by any chance? Does it call an STA COM object?
I'd be curious to see the output of !finalizequeue, to see whether the count of objects showing up on the heap is roughly the number that might be waiting to be finalized. The output should look something like this:
generation 0 has 57 finalizable objects (0409b5cc->0409b6b0)
generation 1 has 55 finalizable objects (0409b4f0->0409b5cc)
generation 2 has 0 finalizable objects (0409b4f0->0409b4f0)
Ready for finalization 0 objects (0409b6b0->0409b6b0)
If the number of "Ready for finalization" objects continues to grow, and you're certain garbage collections are occurring (confirm via perfmon counters), then it might be a blocked finalizer thread. You might need to take several snapshots over the lifetime of the process (before a recycle) to confirm. I usually rely on the magic number of three snapshots, as long as the site is under some sort of load.
A bug in a finalizer can block the finalizer thread and prevent the objects from ever being collected.
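A contrived sketch of how that can happen (SomeLegacyComObject is a made-up stand-in):

public class PaymentOption
{
    // All finalizers run on a single thread. If this call blocks (for
    // example, marshalling to a busy STA COM object), every finalizable
    // object queued behind it stays in memory too.
    ~PaymentOption()
    {
        SomeLegacyComObject.Release(); // hypothetical call that can hang
    }
}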
If the PaymentOption object calls a legacy STA COM object, then the article "ASP.NET Hang and OutOfMemory exceptions caused by STA components" might point you in the right direction.
Not without more info on your application. But we ran into some nasty memory problems a long time ago. Do you use ASP.NET caching? As Raymond Chen likes to say, "a poor caching strategy is indistinguishable from a memory leak."
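A sketch of the kind of cache usage that behaves exactly like a leak (the key scheme and LoadPaymentOptions are invented for illustration):

// One cache entry per distinct query string, with no expiration: every new
// URL adds an entry that lives until the app domain recycles.
Cache.Insert(
    "options-" + Request.QueryString.ToString(), // unbounded key space
    LoadPaymentOptions(),                        // hypothetical loader
    null,
    Cache.NoAbsoluteExpiration,                  // never expires
    Cache.NoSlidingExpiration);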
Check out another tool, CLRProfiler.exe - it will help you traverse object reference trees to see where your objects are rooted.
You've heard this before - if you have to call GC.Collect, something is wrong.
Is the PaymentOption object created in an asynchronous process, by any chance? I remember that if you call BeginInvoke on a delegate but never call the matching EndInvoke, you can get problems like this.
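A minimal sketch of that pattern (the delegate and the LoadOption method are invented):

delegate PaymentOption LoadOptionDelegate();

// Fire-and-forget BeginInvoke: the runtime keeps the IAsyncResult, and
// whatever it references, alive until EndInvoke is called.
LoadOptionDelegate loader = LoadOption;
IAsyncResult ar = loader.BeginInvoke(null, null);
// ...
PaymentOption option = loader.EndInvoke(ar); // always pair the calls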
I've been investigating the same issue myself and was asking why objects that had no references were not being collected.
Objects larger than 85,000 bytes are stored on the Large Object Heap, from which memory is freed up less frequently.
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
A single PaymentOption may not be that big, but are they contained within collections, or are they based on something like a DataSet? You should pick a few instances of the PaymentOption / collection / DataSet and then use the SOS !objsize command to see how big they are.
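As a quick sanity check using the numbers from the question: 79,908,752 bytes across 713,471 instances works out to 112 bytes per PaymentOption, far below the 85,000-byte large-object threshold, so only a big containing array or DataSet buffer would end up on the Large Object Heap.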
Unfortunately this doesn't really answer the question. I like to think I can trust the .NET Framework to take care of releasing unused memory whenever it needs to. However, I see a lot of memory being used by the worker process running the app I am looking at, even when memory looks quite tight on the server.
FYI, SOS in .NET 4 supports a few new commands that might be of assistance, namely !gcwhere (locates the generation of an object; sosex's !gcgen) and !findroots (does what it says on the tin; sosex's !refs).
Both are covered in the SOS documentation and mentioned on Tess Ferrandez's blog.
The first time I deserialize a 16 KB file, about 3.6 MB of memory is allocated; the second time, only about 50 KB is allocated. I know the runtime caches the reflection info, but how can I release that memory manually?
I want to know how to control the GC used in Unity3D.
Unity uses Automatic Memory Management. In most cases, you don't need to manually collect garbage.
You should call GC.Collect only when you are absolutely sure it's the "right" time. You definitely don't want this process to freeze your game character.
To quote Unity on this topic:
If we know that heap memory has been allocated but is no longer used (for example, if our code has generated garbage when loading assets) and we know that a garbage collection freeze won’t affect the player (for example, while the loading screen is still showing), we can request garbage collection.
You can read more on this Unity Page.
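A minimal sketch of that advice in a Unity script (the loading-flow hook is assumed; only GC.Collect itself is the documented API):

using System;
using UnityEngine;

public class LoadingScreen : MonoBehaviour
{
    // Hypothetical hook, called once the loading screen is fully visible
    // and nothing latency-sensitive is running.
    void OnLoadingScreenShown()
    {
        GC.Collect(); // a freeze here is invisible to the player
    }
}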
I was going through Redis RDB persistence, and I have some doubts regarding one of its disadvantages.
My understanding so far:
We should use RDB persistence when we need to save a snapshot of the dataset currently in memory at some regular interval.
I can understand that this way we can lose some data in case of a server breakdown. But the other disadvantage I can't understand is how fork can be time-consuming when persisting a large dataset using RDB.
Quoting from the documentation:
RDB needs to fork() often in order to persist on disk using a child process. Fork() can be time consuming if the dataset is big, and may result in Redis to stop serving clients for some millisecond or even for one second if the dataset is very big and the CPU performance not great. AOF also needs to fork() but you can tune how often you want to rewrite your logs without any trade-off on durability.
As far as I know, when a parent process forks, it creates a new child process; we can have the child execute some code based on its PID, or give it a new executable to run using the exec() system call.
But what I don't understand is why this becomes a heavy task when the dataset is large.
I think I know the answer, but I'm not sure about it.
Quoting from this link: https://www.bottomupcs.com/fork_and_exec.xhtml
When a process calls fork,
the operating system will create a new process that is exactly the same as the parent process. This means all the state that was talked about previously is copied, including open files, register state and all memory allocations, which includes the program code.
As per the above statement, the whole Redis dataset will be copied to the child.
Am I understanding this right?
Even though standard fork is copy-on-write, the OS must still copy all the page table entries, which can take time if you have small 4 KB pages and a huge dataset; this is what makes the actual fork() call slow.
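To put rough, illustrative numbers on it (a back-of-the-envelope estimate, not a measurement): a 16 GB dataset in 4 KB pages spans about 4 million pages, and at 8 bytes per page table entry that is roughly 32 MB of page tables the kernel must duplicate inside fork(), all while the parent is paused.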
You will also find that a lot of time and memory is required if your dataset is changing in a sparse way, because copy-on-write semantics trigger the actual memory pages to be copied as changes are made to the original. Redis also performs incremental rehashing and maintains expiry and so on, so a more active instance will typically take longer to save to disk.
More reading:
Faster forking of large processes on Linux?
http://kirkwylie.blogspot.co.uk/2008/11/linux-fork-performance-redux-large.html
I am profiling an application (using VS 2010) that is behaving badly in production. One of the recommendations given by VS 2010 is:
Relatively high rate of Gen 1 garbage collections is occurring. If, by design, most of your program's data structures are allocated and persisted for a long time, this is not ordinarily a problem. However, if this behavior is unintended, your app may be pinning objects. If you are not certain, you can gather .NET memory allocation data and object lifetime information to understand the pattern of memory allocation your application uses.
Searching on Google gives the following link: http://msdn.microsoft.com/en-us/library/ee815714.aspx. Are there some obvious things I can do to reduce this issue? I seem to be lost here.
Double-click the message in the Errors List window to navigate to the Marks View of the profiling data. Find the .NET CLR Memory# of Gen 0 Collections and .NET CLR Memory# of Gen 1 Collections columns. Determine if there are specific phases of program execution where garbage collection is occurring more frequently. Compare these values to the % Time in GC column to see if the pattern of managed memory allocations is causing excessive memory management overhead.

To understand the application’s pattern of managed memory usage, profile it again running a .NET Memory allocation profile and request Object Lifetime measurements.

For information about how to improve garbage collection performance, see Garbage Collector Basics and Performance Hints on the Microsoft Web site. For information about the overhead of automatic garbage collection, see Large Object Heap Uncovered.
The relevant line there is:
To understand the application’s pattern of managed memory usage, profile it again running a .NET Memory allocation profile and request Object Lifetime measurements.
You need to understand how many objects are being allocated by your application and when, and how long they are alive for. You're probably allocating hundreds (or thousands!) of tiny objects inside a loop somewhere without really thinking about the consequences of reclaiming that memory when the references fall out of scope.
http://msdn.microsoft.com/en-us/library/ms973837.aspx states:
Now that we have a basic model for how things are working, let's consider some things that could go wrong that would make it slow. That will give us a good idea what sorts of things we should try to avoid to get the best performance out of the collector.

Too Many Allocations

This is really the most basic thing that can go wrong. Allocating new memory with the garbage collector is really quite fast. As you can see in Figure 2 above, all that needs to happen typically is for the allocation pointer to get moved to create space for your new object on the "allocated" side—it doesn't get much faster than that. However, sooner or later a garbage collect has to happen and, all things being equal, it's better for that to happen later than sooner. So you want to make sure when you're creating new objects that it's really necessary and appropriate to do so, even though creating just one is fast.

This may sound like obvious advice, but actually it's remarkably easy to forget that one little line of code you write could trigger a lot of allocations. For example, suppose you're writing a comparison function of some kind, and suppose that your objects have a keywords field and that you want your comparison to be case insensitive on the keywords in the order given. Now in this case you can't just compare the entire keywords string, because the first keyword might be very short. It would be tempting to use String.Split to break the keyword string into pieces and then compare each piece in order using the normal case-insensitive compare. Sounds great, right?

Well, as it turns out doing it like that isn't such a good idea. You see, String.Split is going to create an array of strings, which means one new string object for every keyword originally in your keywords string plus one more object for the array. Yikes! If we're doing this in the context of a sort, that's a lot of comparisons and your two-line comparison function is now creating a very large number of temporary objects. Suddenly the garbage collector is going to be working very hard on your behalf, and even with the cleverest collection scheme there is just a lot of trash to clean up. Better to write a comparison function that doesn't require the allocations at all.
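To make that concrete, here is a sketch of both versions of the comparison (a simplified illustration: the in-place version compares the raw comma-separated keyword strings character by character, which is close enough to keyword-order comparison to show the allocation difference):

using System;

static class KeywordComparers
{
    // Allocation-heavy: each call creates two string arrays plus one new
    // string per keyword. Inside a sort this multiplies fast.
    public static int WithSplit(string a, string b)
    {
        string[] partsA = a.Split(',');
        string[] partsB = b.Split(',');
        int n = Math.Min(partsA.Length, partsB.Length);
        for (int i = 0; i < n; i++)
        {
            int c = string.Compare(partsA[i], partsB[i], StringComparison.OrdinalIgnoreCase);
            if (c != 0) return c;
        }
        return partsA.Length.CompareTo(partsB.Length);
    }

    // Allocation-free: walk the strings in place, creating no temporary objects.
    public static int InPlace(string a, string b)
    {
        int len = Math.Min(a.Length, b.Length);
        for (int i = 0; i < len; i++)
        {
            char ca = char.ToUpperInvariant(a[i]);
            char cb = char.ToUpperInvariant(b[i]);
            if (ca != cb) return ca.CompareTo(cb);
        }
        return a.Length.CompareTo(b.Length);
    }
}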
In our ASP.NET web app we're experiencing quite an extensive memory leak, which I am investigating right now. Using WinDbg I got down to the largest memory eaters in our app (I ran !dumpheap -stat in the WinDbg console to get these):
MethodTable Addr Count Overall size Type
...
000007fee8306e10 212928 25551360 System.Web.UI.LiteralControl
000007feebf44748 705231 96776168 System.Object[]
000007fee838fd18 4394539 140625248 System.Web.Caching.CacheDependency+DepFileInfo
000007fee838e678 4394614 210941472 System.Web.FileMonitorTarget
000007feebf567b0 18259 267524784 System.Collections.Hashtable+bucket[]
00000000024897c0 1863 315249528 Free
000007feebf56cd0 14315 735545880 System.Byte[]
000007feebf4ec90 1293939 1532855608 System.String
For all I know, a large number of String objects can be quite normal; still, there's definitely room for improvement. But what really makes me itch is the count of System.Web.FileMonitorTarget objects: we have over 4 million instances on the heap (at 48 bytes each)! Using two memory dumps and comparing them, I've found out that these objects are not being cleaned up by the GC.
What I'm trying to find out is: where are these objects coming from? I've already tried ANTS Memory Profiler to get to the root of the evil but it leads nowhere near any of our own classes. I see the connection with System.Web.Caching.CacheDependency+DepFileInfo and thus the System.Web.Cache but we do not use file dependencies to invalidate our cache entries.
Also, there are 14315 instances of System.Byte[] accounting for over 700 MB on the heap, which stuns me - the only place where we use Byte[] is our image uploading component, but we have only around 30 image uploads per day.
What might be the source of these Byte arrays and FileMonitorTarget objects? Any hints are very welcome!
Oliver
P.S. Someone asked pretty much the same question here but the only 'answer' there was very general.
There are a couple of things I would look into. You're right that strings are often used in great numbers. Still, you have approx. 1.4 GB worth of strings on the heap. Does that sound right? If not, I would look into that. If it is within the expected range, just ignore it.
If you suspect FileMonitorTarget and/or Byte[] to be leaking, dump the instances using !dumpheap -mt XXX where XXX is the listed MethodTable for the types. You may want to use PSSCOR2 instead of SOS, as it makes this task a bit easier (the output from !dumpheap shows a delta column and you can limit the number of instances dumped).
The next thing to do is to start looking into what is keeping specific instances alive. The !gcroot command will tell you what roots a specific instance. Pick an instance at random and inspect the roots. If everything is as expected, move on to the next. If your application is leaking instances of these types, chances are that you will get an instance that should have been freed. Once you get the roots, you need to figure out what part of the code is holding on to them. A common source is unsubscribed events, but there are other possible reasons why objects are kept alive.
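A typical shape of the unsubscribed-event case (a contrived sketch; the event and type names are hypothetical):

using System;

// A long-lived publisher (static here) whose event is never unsubscribed
// keeps every subscriber reachable through its invocation list.
public static class PriceFeed
{
    public static event EventHandler PricesChanged;
}

public class PaymentOptionView
{
    public PaymentOptionView()
    {
        // Subscribing roots 'this' via the static event. Without a matching
        // "-=" somewhere (e.g. in Dispose), the view can never be collected.
        PriceFeed.PricesChanged += OnPricesChanged;
    }

    private void OnPricesChanged(object sender, EventArgs e) { }
}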
Objects of type System.Web.Caching.CacheDependency+DepFileInfo are created automatically by ASP.NET to monitor file changes to your website. So even if you are not specifically using a FileDependency cache expiration, ASP.NET itself does.
If I run a dump field against some of these objects, I get a path to my controls/pages.
0:000> !df -field _filename 0d3f24ec
Name: System.String
MethodTable: 79330b24
EEClass: 790ed65c
Size: 180(0xb4) bytes
GC Generation: 2
(C:\WINDOWS\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll)
String: C:\inetpub\wwwroot\Website\Application\Base\UserControl\Messages.ascx
Fields:
MT Field Offset Type VT Attr Value Name
79332d70 4000096 4 System.Int32 1 instance 82 m_arrayLength
79332d70 4000097 8 System.Int32 1 instance 81 m_stringLength
79331804 4000098 c System.Char 1 instance 44 m_firstChar
79330b24 4000099 10 System.String 0 shared static Empty
>> Domain:Value 000e0ba0:02581198 00109f28:02581198 <<
79331754 400009a 14 System.Char[] 0 shared static WhitespaceChars
>> Domain:Value 000e0ba0:025816f0 00109f28:02586410 <<
This link describes it in a bit more detail: Understanding ASP.NET Dynamic Compilation
However, your case might still be different. Try running !GCRoot [obj_addr] and see what is holding onto those objects. In my case it is entirely IIS /.NET related objects.
That said, I still had a problem where millions of these cache objects were created, and I have no idea why. :| (this is the first time it happened to me, but I don't think it appeared or will disappear magically...)
We have a large asp.net application that is leaking memory. Perfmon shows that this leak is in managed memory as W3WP private bytes grows at the same rate as bytes in all heaps. I can also see that Gen 2 garbage collections are running but the Gen 2 heap size continues to grow.
I took a memory dump and analysed it in WinDbg, and I can see a very large number of objects of lots of types. Strings are the biggest type, and 20% of the total string size comes from just 51 objects.
Dumping these large strings shows rendered HTML, either from controls or entire pages. Running !gcroot on these shows the root objects being of type System.Text.RegularExpressions.Regex or System.Web.RegularExpressions.GTRegex.
Any ideas of what could be happening or how I can investigate further?
Thanks, Simon
How about using a memory profiler such as dotTrace Memory or ANTS Memory Profiler? Both products are available as time-limited trial versions.
That strings are the most common type on your heap is not strange at all. If you, for example, have 10 HashSets containing 1,000 strings each, the dump will show 10 HashSets on your heap, but 10,000 strings. Many objects contain one or several strings. Thus, the number of strings shown in the dump is the sum of all strings from all objects on the heap, which tends to be a lot.
However, if you have a lot of System.Text.RegularExpressions.Regex objects on your heap, that can very well be the root of your memory problems. Regex in .NET tends to take a lot of resources. Hence, my advice is that you go through your code and try to find any excessive use of regex. Also, make sure that references to Regex objects are not kept alive unnecessarily, so that the garbage collector can deallocate them properly.
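One common fix along those lines (a sketch; the pattern string is invented) is to build each Regex once and reuse it, instead of constructing a new one per request:

using System.Text.RegularExpressions;

static class Validators
{
    // Built once and reused for every request. RegexOptions.Compiled trades
    // a one-time compile cost for faster repeated matching.
    private static readonly Regex EmailPattern =
        new Regex(@"^[^@\s]+@[^@\s]+$", RegexOptions.Compiled);

    public static bool IsEmail(string input)
    {
        return EmailPattern.IsMatch(input);
    }
}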
Good luck!
In theory it should be quite difficult to cause a memory leak in ASP.NET without using unmanaged resources. If everything is single-threaded, then all references to managed resources should be free to be garbage collected when the page life cycle is complete. Are you firing off worker threads to do anything, and are these threads continuing to live beyond the life of the page? Or do you have any long-running processes exposed as web methods that can be fired off asynchronously, take a long time to run, and are being called repeatedly until the memory is full?
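A sketch of the worker-thread case (the page and method names are made up):

using System;
using System.Threading;
using System.Web.UI;

public partial class ReportPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The ThreadStart delegate targets an instance method, so it holds a
        // reference to this Page, and through it the whole control tree,
        // until the thread exits -- long after the request has completed.
        Thread worker = new Thread(new ThreadStart(BuildReportSlowly));
        worker.Start();
    }

    private void BuildReportSlowly()
    {
        Thread.Sleep(TimeSpan.FromMinutes(30)); // hypothetical long-running work
    }
}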