I have an application which manipulates high-resolution images (around 100+ megapixels), and I'm having some memory issues. When the BitmapData object is created, it allocates memory to store the image. The problem is that I already have a ByteArray with this image's pixels (around 400+ MB), so when the BitmapData is created, it allocates memory to store the same data that I already have in the ByteArray.
After its creation, I can set the pixels from the ByteArray to the BitmapData and free the ByteArray. But this memory peak sometimes causes the runtime to throw an exception saying that the system is out of memory.
Is there any way to tell the BitmapData to use my own ByteArray? Or any other solution, so that I don't have to use double the memory I need?
In case anyone needs this, here's what I did:
I get the ByteArray, which contains the image's pixels, from a socket. I read these pixels from the socket in small parts, and instead of waiting for the whole image to be loaded, I put each small part directly into the BitmapData as it arrives. This prevents the application from allocating double the memory it actually needs.
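The Flash specifics aside, the shape of the idea can be sketched in C# terms (stream and imageByteCount are hypothetical stand-ins): allocate the final destination once and read the socket directly into it, so no second full-size copy ever exists.

// Hedged sketch: "stream" is a hypothetical System.IO.Stream wrapping the socket.
static byte[] ReadAllPixels(System.IO.Stream stream, int imageByteCount)
{
    byte[] pixels = new byte[imageByteCount];   // the only full-size allocation
    int offset = 0;
    while (offset < pixels.Length)
    {
        // Read straight into the final buffer; no staging copy is kept.
        int read = stream.Read(pixels, offset, pixels.Length - offset);
        if (read <= 0) throw new System.IO.EndOfStreamException();
        offset += read;
    }
    return pixels;
}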
A 16 KB file allocates about 3.6 MB of memory the first time it is deserialized, but the second time it only allocates about 50 KB. I know the runtime caches the reflection info, but how can I release that memory manually?
I want to know how to control the GC used in Unity3D.
Unity uses Automatic Memory Management. In most cases, you don't need to manually collect garbage.
You should call GC.Collect only when you are absolutely sure it's the "right" time. You definitely don't want this process to freeze your game in the middle of the action.
To quote Unity on this topic:
"If we know that heap memory has been allocated but is no longer used (for example, if our code has generated garbage when loading assets) and we know that a garbage collection freeze won't affect the player (for example, while the loading screen is still showing), we can request garbage collection."
You can read more on this Unity Page.
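As a minimal sketch of that advice (the component and method names here are illustrative; only System.GC and MonoBehaviour are actual API):

using System;
using UnityEngine;

public class LoadingScreen : MonoBehaviour
{
    // Hypothetical hook, called after assets finish loading but while the
    // loading screen is still visible, so the GC pause is hidden from the player.
    void OnAssetsLoaded()
    {
        GC.Collect();
    }
}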
I'm using SignalR for a real-time data ticker, with up to 10k rows contained in a single object being sent to the client per second. The memory of the IIS worker process keeps increasing until the ticking finally freezes.
First of all, as you may already have read, SignalR is built on the premise that you will not send large messages. However, in the real world it is a perfectly valid scenario.
So, what's the deal with the memory issues? SignalR has a circular buffer with a default size of 1000 elements, and it stores every single message in that buffer, per open connection. So basically, if you have 100 open connections and you've sent 1000 messages, a total of 100 * 1000 = 100,000 messages will be held in memory.
Another thing you should take into consideration is the .NET Framework's large object heap and garbage collection. Every object with a size greater than 85 KB goes to the large object heap, and the garbage collector treats objects in the large object heap as generation 2 objects. Taking that into account, you can see that once your objects have been dereferenced from SignalR's circular buffer, they won't be garbage collected immediately, due to their size.
As @davidfowl said, you really could make your data smaller, but sometimes you will not be able to do that without introducing some pretty complex mechanics on both the client and the server.
Fortunately, there is a way to reduce the default size of SignalR's circular buffer, and you can do it by setting:
GlobalHost.Configuration.DefaultMessageBufferSize = 32
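If you're on SignalR 2 with OWIN, one place to apply that setting is the startup class (a sketch; the class layout and the value 32 are illustrative, and the buffer must be configured before SignalR is mapped):

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Shrink the per-connection message buffer before mapping SignalR.
        GlobalHost.Configuration.DefaultMessageBufferSize = 32;
        app.MapSignalR();
    }
}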
Is there a way to set the render target to a GDI bitmap in SlimDX so that as soon as the scene is rendered I can immediately BitBlt the render out of there for processing in another thread and continue rendering?
Is it necessary to render to a texture and then copy the contents out to the bitmap? I would like to be able to do this without any unnecessary copying. I'm going to need every speedup I can get.
Sorry, you do need to render to a RenderTarget, then copy that resource into a Texture2D; then you can map the data and get the pixels into your bitmap.
The memory for RenderTargets is marked for a special kind of use by the graphics card and cannot be read from directly.
The memory for Textures can be marked so that it can be read, but only through the API, as it is still held on the graphics card (there are some exceptions, but DirectX has to go with the lowest common denominator).
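As a rough illustration, the copy-and-map step might look like this (a sketch assuming SlimDX's Direct3D 11 API; device, context, width, height and renderTargetTexture are assumed to already exist, and the exact Map signatures vary between SlimDX releases):

using SlimDX.Direct3D11;
using SlimDX.DXGI;

var desc = new Texture2DDescription
{
    Width = width,
    Height = height,
    MipLevels = 1,
    ArraySize = 1,
    Format = Format.B8G8R8A8_UNorm,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Staging,        // CPU-readable copy target
    BindFlags = BindFlags.None,
    CpuAccessFlags = CpuAccessFlags.Read
};
using (var staging = new Texture2D(device, desc))
{
    // GPU-side copy out of the render target into the staging texture.
    context.CopyResource(renderTargetTexture, staging);

    // Map the staging texture and copy it row by row (honouring RowPitch)
    // into the GDI bitmap, then release the mapping.
    var box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
    // ... read box.Data into the System.Drawing.Bitmap here ...
    context.UnmapSubresource(staging, 0);
}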
If you need the extra speed, reuse the same bitmap, or keep an array of prepared bitmaps ready to fill and rotate through them.
And as ever, measure how much time these things are consuming with a profiler so that you can quantify bottlenecks.
I have an application that is pretty memory hungry. It holds a large amount of data in some big arrays.
I have recently been noticing the occasional OutOfMemoryException. These OutOfMemoryExceptions occur long before my application (ASP.NET) has used up the 800 MB available to it.
I have tracked the issue down to the area of code where the array is resized. The array contains a structure that is 74 bytes in size. (I know that you shouldn't create structs that are bigger than 16 bytes, but this application is a port of a VB6 application.) I have tried changing the struct to a class, and this appears to have fixed the problem for now.
I think the reason that changing to a class solves the problem is that when using a struct and the array is resized, a contiguous segment of memory large enough to store the new array (i.e. (currentArraySize + increaseBySize) * 74 bytes) cannot be found. This leads to the OutOfMemoryException.
This isn't the case with a class, as each element of the array only needs the size of a reference (4 bytes in a 32-bit process) to store a pointer to the object.
Is my thinking correct here?
Your assumptions regarding how arrays are stored are correct. Changing from struct to class will add a bit of overhead to each instance, and you'll lose the advantages of locality since all data must be reached via a reference, but as you have observed it may solve your memory problem for now.
When you resize an array it will create a new one to hold the new data, then copy over the data, and you will have two copies of the same data in memory at the same time. Just as you expected.
When using structs, the array occupies the struct size times the number of elements. When using a class, it only contains the pointers.
The same scenario is also true for List<T>, which grows its backing array over time; thus it's smart to initialize it with the expected number of items to avoid resizing and copying.
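To make the arithmetic concrete, here is a minimal sketch (Reading is a hypothetical stand-in for the 74-byte struct; exact sizes depend on field packing):

using System;
using System.Collections.Generic;

struct Reading { /* imagine fields totalling 74 bytes */ }

class Demo
{
    static void Main()
    {
        // With a struct, the array itself holds every element:
        // 1,000,000 elements * 74 bytes needs one contiguous ~74 MB block.
        var data = new Reading[1000000];

        // Array.Resize allocates the new array first and then copies, so
        // for a moment the old block and the larger new block coexist.
        Array.Resize(ref data, 2000000);

        // Pre-sizing a List<T> avoids repeated grow-and-copy cycles.
        var list = new List<Reading>(2000000);
    }
}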
On 32-bit systems you will hit out-of-memory at around ~800 MB, as you are aware. One solution you can try is to put your structs on disk and read them when needed. Since they are a fixed size, you can easily seek to the correct position in the file.
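A sketch of that fixed-size-record idea (the file name, record size, and the deserialization step are all placeholders):

static byte[] ReadRecord(string path, long index)
{
    const int RecordSize = 74;   // one record = one 74-byte struct
    using (var fs = new System.IO.FileStream(path, System.IO.FileMode.Open, System.IO.FileAccess.Read))
    {
        var buffer = new byte[RecordSize];
        // Fixed-size records make the offset a simple multiplication.
        fs.Seek(index * RecordSize, System.IO.SeekOrigin.Begin);
        fs.Read(buffer, 0, RecordSize);
        return buffer;           // ... turn this back into the struct as needed ...
    }
}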
I have a project on CodePlex for handling large amounts of data. It has an array type that can grow automatically, which might help your scenario if you run into problems with keeping it all in memory again.
The issue you are experiencing might be caused by fragmentation of the Large Object Heap rather than a normal out of memory condition where all memory really is used up.
See http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
The solution might be as simple as growing the array by large fixed increments rather than smaller random increments, so that as arrays are freed up, the blocks of LOH memory can be reused for a new large array.
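For instance, a hypothetical helper along these lines (the increment is illustrative):

// Always grow in fixed one-million-element steps, so freed LOH blocks
// come in a small set of sizes that a new large array can slot back into.
static void GrowInFixedSteps<T>(ref T[] array, int neededLength)
{
    const int Increment = 1000000;
    int newLength = ((neededLength + Increment - 1) / Increment) * Increment;
    if (newLength > array.Length)
        System.Array.Resize(ref array, newLength);
}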
This may also explain the struct -> class issue, as the struct data is stored in the array itself while each class instance is a small object on the small object heap.
The .NET Framework 4.5.1 has the ability to explicitly compact the large object heap (LOH) during garbage collection:
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
See more info in: GCSettings.LargeObjectHeapCompactionMode
And a question about it: Large Object Heap Compaction, when is it good?
I want to create a large compatible DC, draw a large image on it, and then BitBlt part of the image to another DC, in order to achieve high performance. I am using the following code to create the compatible memory DC, but when the rect becomes very large (e.g. 5000x5000), the created CompatibleDC becomes unstable: sometimes it is OK, sometimes it fails. Is there anything wrong with my code?
Input: pInputDC
Output: pOutputMemDC
{
    pOutputMemDC = new CDC();
    VERIFY(pOutputMemDC->CreateCompatibleDC(pInputDC));

    CRect rect(0, 0, nDCWidth, nDCHeight);
    CBitmap bitmap;
    if (bitmap.CreateCompatibleBitmap(pInputDC, rect.Width(), rect.Height()))
    {
        pOutputMemDC->SetViewportOrg(-rect.left, -rect.top);
        m_pOldBitmap = pOutputMemDC->SelectObject(&bitmap);
    }

    CBrush brush;
    VERIFY(brush.CreateSolidBrush(RGB(255, 0, 0)));
    brush.UnrealizeObject();
    pOutputMemDC->FillRect(rect, &brush);
}
Instead of creating a large DC and then blitting a portion of it to another, smaller DC, create a DC the same size as the destination DC, or at least the same size as the blit destination. Then offset all your drawing commands by the (-x,-y) of the subsection you want to copy. If your destination is (100,200)-(400,400) on the source, then create a 300x200 DC and offset everything by (-100,-200).
This has two big advantages: first, the memory required is much smaller; second, GDI will clip your drawing operations to the size of the DC (it always clips anyway). Although clipping takes CPU time, the time saved by not drawing pixels that are never seen more than makes up for it.
Now, if this large DC is something like an image (a JPEG, for example), then you need to look into other methods. One technique used by many image-editing programs is to split the image into tiles and page the tiles to/from memory/hard disk. Each tile is its own DC, and you only keep enough source DCs to fill the target DC. As the view window moves across the large image, unload tiles that have moved out of the target rectangle and load tiles that have become visible.
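The tiling idea, sketched here in C# with System.Drawing for brevity (the same structure applies to raw GDI DCs in C++; the tile loading is a placeholder):

using System.Collections.Generic;
using System.Drawing;

class TiledImage
{
    const int TileSize = 512;
    readonly Dictionary<Point, Bitmap> tiles = new Dictionary<Point, Bitmap>();

    // Draw only the tiles that intersect the visible rectangle; tiles that
    // scrolled out of view can be disposed and re-loaded from disk later.
    public void Draw(Graphics target, Rectangle visible)
    {
        for (int ty = visible.Top / TileSize; ty * TileSize < visible.Bottom; ty++)
            for (int tx = visible.Left / TileSize; tx * TileSize < visible.Right; tx++)
            {
                Bitmap tile = GetTile(new Point(tx, ty));
                // Place each tile at its offset within the viewport.
                target.DrawImage(tile, tx * TileSize - visible.Left,
                                       ty * TileSize - visible.Top);
            }
    }

    Bitmap GetTile(Point key)
    {
        Bitmap tile;
        if (!tiles.TryGetValue(key, out tile))
        {
            tile = new Bitmap(TileSize, TileSize); // placeholder: load from disk here
            tiles[key] = tile;
        }
        return tile;
    }
}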
Each 5000x5000-pixel image needs about 100 MB of RAM (5000 * 5000 pixels * 4 bytes per pixel). Depending on how much RAM your PC has, this might already be the problem.
If you have 1 GB of RAM or more, then that's probably not the issue. In that case, you must have a memory leak. Where do you free the allocated bitmap? I see that you unrealize the brush, but what about the bitmap?
Note that increasing your swap won't help since that will kill your performance.
Make sure you are selecting all the original GDI objects back into the DCs.
The problem may be that your bitmap is still selected into pOutputMemDC when the DC is destroyed, so one or both of them can't be deleted properly. That is where the memory problems can begin.