Deserializing a 16 KB file allocates about 3.6 MB of memory the first time, and only about 50 KB the second time. I know the serializer caches the reflection info, but how can I release that memory manually?
I want to know how to control the GC in Unity3D. Help!
Unity uses Automatic Memory Management. In most cases, you don't need to manually collect garbage.
You should call GC.Collect only when you are absolutely sure it's the "right" time. You definitely don't want the resulting pause to freeze your game mid-action.
To quote Unity on this topic:
If we know that heap memory has been allocated but is no longer used (for example, if our code has generated garbage when loading assets) and we know that a garbage collection freeze won’t affect the player (for example, while the loading screen is still showing), we can request garbage collection.
You can read more on this Unity Page.
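For example, a minimal sketch of requesting a collection from behind a loading screen; the LoadingScreen class and method name here are hypothetical, not part of Unity's API:

using System;
using UnityEngine;

// Hypothetical loading-screen component: request a collection while the
// loading screen is still visible, so the pause isn't felt during gameplay.
public class LoadingScreen : MonoBehaviour
{
    // Call this after assets have finished loading, before gameplay resumes.
    public void CollectWhileLoading()
    {
        GC.Collect(); // the freeze happens here, hidden behind the loading screen
    }
}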
I spent a few days searching for the source of a memory leak in my software and finally found it.
The steps:
I create a GUI application, add an image to the .qrc, create a form in Qt Designer, add a QPushButton there, and in its styleSheet property write
#closeButton{ image: url(:/system/images/White/Close.png); }
(the button is named "closeButton")
Without the style sheet the program works fine; with the style sheet, I get a memory leak.
So how to avoid memory leak in this case?
Objects that survive till process termination aren't necessarily memory leaks, and the tool won't be able to tell you which ones are memory leaks and which aren't. Memory leaks are usually only the allocations that are made multiple times from the same program location and never get freed. Even then, it may not always be the case.

Leak detection requires a purpose-made test harness that repeats a series of operations that are supposed not to leave behind memory allocated at any given program location more than once. If you then notice that, with an increasing number of operations, the number of memory blocks left behind increases, you likely have a real leak. Ideally, the test harness should take snapshots of allocated memory blocks after each "operation cycle", and flag the program locations that consistently leave stuff behind.

The library should be able to capture a stack trace to give you the program location where the allocation was made. Otherwise it's useless in practice.
I'm very suspicious of code that deallocates all memory before process termination: usually it's just wasted time that prolongs system shutdown and makes for bad UX. When the user hits the "Exit" button, make sure that data is safe (e.g. close sqlite files, save open documents - maybe just as "work in progress" that will be brought back the next time the application is used), and then call exit(0).
In general, leak detection takes a bit more than just using a library that gives you a list of memory blocks allocated at exit. The library is a tool that you, a thinking, reasoning human developer, must apply to the problem :) Just as a hammer won't be useful by banging it all over the place (unless you've got lots of nails to hammer!), a "leak detector" library won't be useful all by itself.
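To make the repeat-and-snapshot idea concrete, here is a minimal sketch in C# (to match the rest of this page); GC.GetTotalMemory is only a crude stand-in for a real allocation tracker that records program locations, and the runOperation delegate is hypothetical:

using System;

static class LeakHarness
{
    // Run the operation many times and compare heap snapshots taken before and after.
    public static void Check(Action runOperation, int cycles = 100)
    {
        runOperation(); // warm-up: let caches, JIT, lazy singletons, etc. settle
        long before = GC.GetTotalMemory(forceFullCollection: true);

        for (int i = 0; i < cycles; i++)
            runOperation();

        long after = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine($"~{(after - before) / cycles} bytes retained per cycle");
        // Growth that scales with the number of cycles suggests a real leak;
        // a one-off increase after the first run is usually just caching.
    }
}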
I'm using SignalR for a real-time data ticker, with up to 10k rows contained in a single object being sent to the client per second. The memory of the IIS worker process keeps increasing until the ticking finally freezes.
First of all, as you may already have read, SignalR is built on the premise that you will not send large messages. However, in the real world it is a perfectly valid scenario.
So, what's the deal with the memory issue? SignalR has a circular buffer with a default size of 1000 elements, and it stores every message in that buffer per open connection. So if you have 100 open connections and you've sent 1000 messages, that's a total of 100 * 1000 messages held in memory.
Another thing you should take into consideration is the .NET Framework's large object heap and garbage collection. Every object larger than 85 kB goes to the large object heap, and the garbage collector treats objects on the large object heap as second-generation objects. Taking that into account, you can see that once your objects have been dereferenced from SignalR's circular buffer, they won't be garbage collected immediately because of their size.
As @davidfowl said, you really could make your data smaller, but sometimes you won't be able to do that without introducing some pretty complex mechanics on both the client and the server.
Fortunately, there is a way to reduce the default size of SignalR's circular buffer, and you can do it by setting:
GlobalHost.Configuration.DefaultMessageBufferSize = 32;
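For context, a sketch of where that setting typically lives in a SignalR 2.x OWIN startup class; the class name and the surrounding configuration are assumptions, not taken from the question:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Keep only the last 32 messages per connection instead of the default 1000.
        GlobalHost.Configuration.DefaultMessageBufferSize = 32;
        app.MapSignalR();
    }
}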
What are the differences between shared pointers (such as boost::shared_ptr or the new std::shared_ptr) and garbage collection methods (such as those implemented in Java or C#)? The way I understand it, shared pointers keep track of how many variables point to the resource and automatically destroy the resource when the count reaches zero. However, my understanding is that the garbage collector also manages memory resources, but requires additional resources to determine whether an object is still being referred to and doesn't necessarily destroy the resource immediately.
Am I correct in my assumptions, and are there any other differences between using garbage collectors and shared pointers? Also, why would anyone ever use a garbage collector over a shared pointer if they perform similar tasks but with varying performance figures?
The main difference lies, as you noted, in when the resource is released/destroyed.
One advantage where a GC might come in handy is if you have resources that take a long time to be released. For a short program lifetime, it might be nice to leave the resources dangling and have them cleaned up at the end. If resource limits are reached, the GC can act to release some of them. Shared pointers, on the other hand, release their resources as soon as the reference count hits zero. This can be costly for a resource that is frequently acquired and released and is expensive to release.
On the other hand, in some garbage collection implementations, garbage collection requires that the whole program pause its execution while memory is examined, moved around, and freed. There are smarter implementations, but none are perfect.
Shared pointers (this scheme is usually called reference counting) run the risk of reference cycles.
Garbage collection (mark and sweep) does not have this problem.
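As a quick illustration of that difference, here is a small C# check (C# standing in for the GC side of the comparison): two objects that reference each other are still reclaimed by the tracing collector, whereas under pure reference counting the cycle would keep them alive:

using System;

class Node { public Node Other; }

class Program
{
    static void Main()
    {
        var a = new Node();
        var b = new Node();
        a.Other = b;
        b.Other = a;                     // a <-> b reference cycle

        var weak = new WeakReference(a);
        a = null;
        b = null;                        // drop the only external references

        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine(weak.IsAlive); // typically False: the cycle was collected
    }
}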
In a simple garbage-collected system, nobody will hold a direct pointer to any object; instead, code will hold references to table entries which point to objects on the heap. Each object on the heap will store its size (meaning all heap objects will form a singly-linked list) and a back-reference to the object in the object table which holds it (or at least used to).
When either the heap or the object table gets full, the system will set a "delete me" flag on every object in the table. It will then examine every object it knows about (starting from the rooted references) and, if the object's "delete me" flag is set, clear it and add every object that object references to the list of objects to be examined. Once that is done, any object whose "delete me" flag is still set can be deleted.
Once that is done, the system will start at the beginning of the heap, take each object stored there, and see if its object reference still points to it. If so, it will copy that object to the beginning of the heap, or just past the end of the last copied object; otherwise the object will be skipped (and will likely be overwritten when other objects are copied).
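A toy version of that mark step, sketched in C# with a flat object list standing in for the heap and handle table (all names here are made up for illustration):

using System.Collections.Generic;

class HeapObject
{
    public bool DeleteMe;                                        // the "delete me" flag
    public List<HeapObject> References = new List<HeapObject>(); // objects this one points to
}

static class ToyCollector
{
    // roots: the table entries the program can still reach directly.
    public static void Collect(List<HeapObject> heap, IEnumerable<HeapObject> roots)
    {
        // 1. Flag every object as deletable.
        foreach (var obj in heap)
            obj.DeleteMe = true;

        // 2. Walk out from the roots, clearing the flag on everything reachable.
        var pending = new Stack<HeapObject>(roots);
        while (pending.Count > 0)
        {
            var obj = pending.Pop();
            if (!obj.DeleteMe) continue;          // already visited
            obj.DeleteMe = false;
            foreach (var child in obj.References)
                pending.Push(child);
        }

        // 3. Anything still flagged is unreachable and can go
        //    (a real collector would also compact the survivors, as described above).
        heap.RemoveAll(o => o.DeleteMe);
    }
}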
In languages with a garbage collector (GC), the GC keeps track of and cleans up memory that isn’t being used anymore, and we don’t need to think about it. In most languages without a GC, it’s our responsibility to identify when memory is no longer being used and to call code to explicitly free it, just as we did to request it.
more details: HERE
What are the pros and cons of creating a collection instance on the fly versus beforehand, during initialisation, for later use?
I have a whole bundle of threads that each need to output a buffer, which is enqueued on a priority or interval-heap queue. I was wondering whether it would be more efficient in C# to create a circular buffer of type X, of size 2048, beforehand, and just write into each slot and reuse it later, or to let each thread create a buffer on the fly and enqueue it, i.e. create them when necessary and allow for the normal cleanup.
I know that the GC would try to collect the pre-created circular queue. I've had strange debugging problems in the past, looking for objects that no longer exist because the GC had removed them.
Any help or advice would be appreciated.
Bob.
GC won't remove an object you still have a reference to - in other words, if you were able to use your pre-created buffer, it wouldn't be garbage collected - unless you had a WeakReference to it, of course.
Do you know that this will be a performance bottleneck at all? Why not write the simplest code that works first, and measure how well it performs?
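A small sketch of that point, under the assumption of a pre-allocated buffer like the one described in the question:

using System;

class Program
{
    // A strong, rooted reference: the GC will never collect this buffer.
    static readonly byte[] PreCreatedBuffer = new byte[2048];

    static void Main()
    {
        // Only a WeakReference: the GC is free to reclaim the target.
        var weak = new WeakReference(new byte[2048]);

        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine(PreCreatedBuffer.Length); // still there
        Console.WriteLine(weak.IsAlive);            // very likely False after the collect
    }
}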
Running into a prickly problem with our web app here. (ASP.NET 2.0, Windows Server 2008)
Our memory usage for the website grows and grows, even though I would expect it to remain at a fairly static level. (We have a small amount of data that gets stored in state.)
Wanting to find out what the problem is, I've run a System.GC.Collect(); a few times, taken a memory dump and then loaded this memory dump into WinDbg.
When I do a DumpHeap -Stat, I get an inordinately large number of instances of one particular type hanging around in memory.
0000064280580b40 713471 79908752 PaymentOption
So, doing a DumpHeap -MT for this type, I get a stack of object references. Picking a few of these at random, I do a !gcroot, and the command comes back reporting that no references are held to them.
To me, this is exactly when the GC should collect these items, but for some reason they have been left outstanding.
Can anybody offer an explanation as to what might be happening?
You could try using sosex.dll in Windbg, which is an extension written to help with .NET debugging. There is a command named !refs which is similar to !gcroot, in that it will show you all the objects referencing an object, plus it will show all the objects that it too is referencing.
In the example on the author's website, !refs is used against an object and the output looks like this:
0:000> !refs 0000000080000db8
Objects referenced by 0000000080000db8 (System.Threading.Mutex):
0000000080000ef0 32 Microsoft.Win32.SafeHandles.SafeWaitHandle
Objects referencing 0000000080000db8 (System.Threading.Mutex):
0000000080000e08 72 System.Threading.Mutex+<>c__DisplayClass3
0000000080000e50 64 System.Runtime.CompilerServices.RuntimeHelpers+CleanupCode
A few things:
GC.Collect won't help you do any debugging. The garbage collector is already being called: if any objects were available for collection it would have happened already.
Idle memory on a server is wasted memory. Are you sure memory is being 'leaked', or is it just that the framework is deciding it can keep more things in memory, or keep more memory around for faster access? In this case I suspect you are leaking memory, but it's something to double-check.
It sounds like something you don't expect is keeping a reference to PaymentOption objects. Perhaps a static collection somewhere? Or a separate thread?
Does PaymentOption implement a finalizer by any chance? Does it call an STA COM object?
I'd be curious to see the output of !finalizequeue, to see whether the count of objects showing up on the heap is roughly the number that might be waiting to be finalized. The output should look something like this:
generation 0 has 57 finalizable objects (0409b5cc->0409b6b0)
generation 1 has 55 finalizable objects (0409b4f0->0409b5cc)
generation 2 has 0 finalizable objects (0409b4f0->0409b4f0)
Ready for finalization 0 objects (0409b6b0->0409b6b0)
If the number of "Ready for finalization" objects continues to grow, and you're certain garbage collections are occurring (confirm via perfmon counters), then it might be a blocked finalizer thread. You might need to take several snapshots over the lifetime of the process (before a recycle) to confirm. I usually rely on the magic number of three, as long as the site is under some sort of load.
A bug in a finalizer can block the finalizer thread and prevent the objects from ever being collected.
If the PaymentOption object calls a legacy STA COM object, then this article ASP.NET Hang and OutOfMemory exceptions caused by STA components might point in the right direction.
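For illustration, a hypothetical finalizer of the kind that blocks the finalizer thread; once one of these hangs, everything queued behind it stays on the "Ready for finalization" list and is never reclaimed:

using System.Threading;

class BadFinalizer
{
    ~BadFinalizer()
    {
        // e.g. waiting forever on a lock, or on an STA COM call that never gets serviced
        Thread.Sleep(Timeout.Infinite);
    }
}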
Not without more info on your application. But we ran into some nasty memory problems a long time ago. Do you use ASP.NET caching? As Raymond Chen likes to say, "poor caching strategy is indistinguishable from a memory leak."
Check out another tool - CLRProfiler.exe - it will help you traverse object reference trees to see where your objects are rooted.
You've heard this before - if you have to GC.Collect, something is wrong.
Is the PaymentOption object created in an asynchronous process, by any chance? I remember that if you don't call EndInvoke, you can get problems like this.
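For reference, the old APM pattern on .NET Framework pairs each BeginInvoke with an EndInvoke; the Work delegate below is just a made-up example:

using System;

class Example
{
    static int Work(int x) { return x * 2; }

    static void Main()
    {
        Func<int, int> del = Work;
        IAsyncResult ar = del.BeginInvoke(21, null, null);

        int result = del.EndInvoke(ar); // always complete the call to release its resources
        Console.WriteLine(result);      // 42
    }
}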
I've been investigating the same issue myself and was asking why objects that had no references were not being collected.
Objects larger than 85,000 bytes are stored on the Large Object Heap, from which memory is freed up less frequently.
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
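A quick way to see this threshold in action (the array sizes here are arbitrary examples on either side of the roughly 85,000-byte limit):

using System;

class Program
{
    static void Main()
    {
        var small = new byte[80000];
        var large = new byte[90000];

        Console.WriteLine(GC.GetGeneration(small)); // 0: allocated on the normal heap
        Console.WriteLine(GC.GetGeneration(large)); // 2: large object heap, collected with gen 2
    }
}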
A single PaymentOption may not be that big, but are they contained within collections, or are they based on something like a DataSet? You should pick a few instances of the PaymentOption / collection / DataSet and then use the SOS !objsize command to see how big they are.
Unfortunately this doesn't really answer the question. I like to think I can trust the .net framework to take care of releasing unused memory whenever it needs to. However I see a lot of memory being used by the worker process running the app I am looking at, even when memory looks quite tight on the server.
FYI, SOS in .NET 4 supports a few new commands that might be of assistance, namely !gcwhere (locates the generation of an object; sosex's !gcgen) and !findroots (does what it says on the tin; sosex's !refs).
Both are documented on the SOS documentation and mentioned on Tess Ferrandez's blog.