Flex consuming huge memory for large data - apache-flex

When a Flex ArrayCollection holds a large amount of data, for example 200,000 newly referenced objects, the memory of the Flex client browser shoots up by about 20 MB. This excess 20 MB is independent of the variables defined in the object. A detailed example is illustrated below.
var list:ArrayCollection = new ArrayCollection();
for (var i:int = 0; i < 200000; i++)
{
    var obj:Object = new Object();
    list.addItem(obj);
}
On executing the above code there was a 20 MB increase in the Flex client browser's memory. For a different scenario I tried adding an ActionScript object to the ArrayCollection. The ActionScript object is defined below.
public class Sample
{
    public var id:int;
    public var age:int;

    public function Sample()
    {
    }
}
On adding 200,000 Sample instances to an ArrayCollection there was still a 20 MB memory leak.
var list:ArrayCollection = new ArrayCollection();
for (var i:int = 0; i < 200000; i++)
{
    var obj:Sample = new Sample();
    obj.id = i;
    obj.age = 20;
    list.addItem(obj);
}
I even tried adding the Sample objects to a Flex ArrayList and a plain Array, but the problem persists. Can someone explain where this excess memory is consumed by Flex?

Requesting memory from the OS is time consuming, so the Flash player requests large chunks of memory (more than it really needs) in order to minimize the number of those requests.

I have no idea whether OS allocation time is a big deal anymore; we're talking about 1.5-2 GHz CPUs on average, even on mobile. But Benoit is on the right track. Large chunks are reserved at a time mainly to avoid heap fragmentation. If memory were requested only in the exact sizes needed at the moment, interleaved with other I/O requests, system memory would become highly fragmented very quickly. When such a fragment is returned to the OS, the memory manager cannot reallocate it unless it gets a request of the same size or smaller, so the chunk is effectively lost to the visible pool. To avoid this, Flash (and its memory manager) requests 16 MB at a time.
In your case it wouldn't matter whether you created 1 object or 100,000; you'll still start with a minimum of 16 MB of private memory (i.e. what you see in Task Manager).
The Flash player's allocation mechanism is based on Mozilla's MMgc.
You can read about it here: https://developer.mozilla.org/en-US/docs/MMgc
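A quick way to observe this chunked behaviour yourself is to watch the player's own memory counter while allocating in batches. The sketch below is only illustrative: it assumes the standard flash.system.System API, and the batch sizes are arbitrary.
import flash.system.System;

// Allocate objects in batches and log the player-reported memory after each
// batch. totalMemory reflects what the player has reserved, not the exact
// size of the live objects, so it tends to jump in large steps.
var list:Array = [];
for (var batch:int = 0; batch < 20; batch++)
{
    for (var i:int = 0; i < 10000; i++)
    {
        list.push(new Object());
    }
    trace("objects:", list.length, "totalMemory:", System.totalMemory);
}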

Related

Downsides of streaming large JSON or HTML content to a browser in ASP.NET MVC

I am working with a legacy ASP.NET Framework MVC application that is experiencing issues with memory usage such as occasional bursts of OutOfMemory exceptions across a variety of operations. The application is frequently operating with large lists of objects (sometimes 10s to 100s of megabytes), and then serializing them to JSON to return to the client. We are not totally sure what the source of the OutOfMemory exceptions is, but believe a likely candidate is memory fragmentation due to too many large objects going on the Large Object Heap.
We are thinking a quick win is to refactor some of the controller endpoints to serialize their JSON content using a stream writer (as outlined in the JSON.NET Documentation), and to stream the content back to the browser. This won't eliminate the memory load of the data lists prior to serialization, but in theory should cut down the overall amount of data going on to the LOH.
The code is written to send the results in chunks of less than 85 KB (below the LOH threshold):
public async Task<ActionResult> MyControllerMethod()
{
    var data = GetData();

    // Disable response buffering so each flush goes straight to the client.
    Response.BufferOutput = false;
    Response.ContentType = "application/json";

    var serializer = JsonSerializer.Create();

    // Keep the writer's buffer just under the 85,000-byte LOH threshold.
    using (var sw = new StreamWriter(Response.OutputStream, Encoding.UTF8, 84999))
    {
        sw.AutoFlush = false;
        serializer.Serialize(sw, data);
        await sw.FlushAsync();
    }

    return new EmptyResult();
}
I am aware of a few downsides with this approach, but don't consider them showstoppers:
More complex to implement a unit test due to the 'EmptyResult' returned by the controller.
I have read there is a small overhead due to a call to PInvoke whenever data is flushed. (In practice I haven't noticed this).
Cannot post-process the content using e.g. an HttpHandler
Cannot set a Content-Length header, which may be useful for the client in some cases.
What other downsides or potential problems exist with this approach?

Performance of commitWriteTransaction for a big Realm file

Using Realm 1.0.2 on OS X, I have a Realm file that reached ~3.5 GB. Now, writing a batch of new objects takes around 30 s to 1 min on average, which makes things pretty slow.
After profiling, it clearly looks like commitWriteTransaction is taking a big chunk of the time.
Is that performance normal / expected in that case? And if so, what strategies are available to make saving faster?
Realm uses copy-on-write semantics whenever changes are performed in write transactions.
The larger the structure that has to be forked & copied, the longer it will take to perform the operation.
This small unscientific benchmark on my 2.8GHz i7 MacBook Pro
import Foundation
import RealmSwift

class Model: Object {
    dynamic var prop1 = 0.0
    dynamic var prop2 = 0.0
}

// Add 60 million objects with two Double properties in batches of 10 million
autoreleasepool {
    for _ in 0..<6 {
        let start = NSDate()
        let realm = try! Realm()
        try! realm.write {
            for _ in 0..<10_000_000 {
                realm.add(Model())
            }
        }
        print(realm.objects(Model.self).count)
        print("took \(-start.timeIntervalSinceNow)s")
    }
}

// Add one item to that Realm
autoreleasepool {
    let start = NSDate()
    let realm = try! Realm()
    try! realm.write {
        realm.add(Model())
    }
    print(realm.objects(Model.self).count)
    print("took \(-start.timeIntervalSinceNow)s")
}
Logs the following:
10000000
took 25.6072470545769s
20000000
took 23.7239990234375s
30000000
took 24.4556020498276s
40000000
took 23.9790390133858s
50000000
took 24.5923230051994s
60000000
took 24.2157150506973s
60000001
took 0.0106720328330994s
So you can see that adding many objects to the Realm, with no relationships, is quite fast, and the time stays proportional to the number of objects being added.
So it's likely that you're doing more than just adding objects to the Realm; perhaps you're updating existing objects, causing them to be copied?
If you're reading a value from all objects as part of your write transactions, that will also grow proportionally to the number of objects.
Avoiding these things will shorten your write transactions.
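As a rough sketch of that advice (reusing the Model class from the benchmark above; the filter condition is purely illustrative), build the query outside the write block and touch only the objects that actually need changing, so each commit copies a small working set:
import RealmSwift

func touchOnlyWhatChanged() throws {
    let realm = try Realm()
    // Results are lazy: building the query does not read every object.
    let needsUpdate = realm.objects(Model.self).filter("prop1 < 0.5")
    try realm.write {
        for obj in needsUpdate {
            obj.prop2 = 1.0 // only these rows are rewritten on commit
        }
    }
}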

OutOfMemoryException when reading from smart card

I'm using the .NET Framework to develop an application that interacts with a Gemalto smart card (adding and retrieving data).
I've successfully completed the addition part; however, when I try to read the data that I stored on the card I get an OutOfMemoryException in the host application. Can anyone figure out why this happens?
This is the code in the host application that reads from the card:
for (int i = 0; i < 5; i++)
{
    try
    {
        string bookString = service.getBook(i);
    }
    catch (Exception x)
    {
        MessageBox.Show("an error occurred");
    }
}
And in the app that is loaded on the card, I have this method:
public string getBook(int index)
{
    return BookList[index].getBookID() + " , " + BookList[index].getBookDesc();
}
The Gemalto .NET Card contains both persistent memory and volatile memory that are used for data storage. The persistent memory acts as persistent storage for the card - data persists in it even after the card is removed from a smart card reader. Volatile memory is reset when the card loses power and cannot be used for persistent storage.
How do you store your data, and how do you fill the BookList with data? Please clarify.
You have memory limitations, of course, so you cannot store beyond a certain size: this .NET card has 16 KB of volatile memory (RAM) and 70 KB of persistent memory (which also contains assemblies as well as storage).
I tested on a Gemalto .NET card and was able to store 20 KB of data in persistent storage memory; beyond that limit I get the same OutOfMemoryException (because the other 50 KB is filled with files and assemblies).
This card is not designed to store databases, records and so on; it is meant to store critical information like keys and passwords. So don't save more than this size and your code will be fine, or use a text compression algorithm (in the client application) to reduce the size before storing it on the card; but in the end, don't try to store more than this ~XX KB.
Update:
Because of this limitation you cannot store more than 70 KB in persistent storage; you also cannot retrieve more than 16 KB from the card to the client (because this data is first held in a local variable, i.e. volatile memory, before being returned to your client, and you have constraints there too).
So this is the source of your problem: you retrieve more than the volatile memory can hold:
public string getBook(int index)
{
    return bookList[index].getId() + " , " + bookList[index].getName();
}
Before the value is returned, this data sits in a temporary variable, and because you can't hold more than 16 KB there you get the OutOfMemoryException.
The solution is to use this storage directly from the client (you have the reference, so just use it):
public Book getTheBook(int index)
{
    return bookList[index];
}
And from the client you can access the Book functionality (make sure your Book is a struct, because marshalling is supported only for structs and primitive types on the Gemalto .NET card):
Console.WriteLine(service.getTheBook(0).getName());
You are attempting a task that is not typical for smart cards. Note that cards have RAM in the range of a handful of kilobytes, to be divided between the operating system and the I/O buffer. The latter is unlikely to exceed 2 KB (refer to the card manual), and even then you need to use extended-length APDUs, as mentioned in this answer. So the likely cause of your error is that the data length exceeds the amount of RAM available for the I/O buffer. While enlarging the buffer or using extended APDUs will stretch the limit, it is still easy to hit it with a really long description.
I got this exception only when attempting to retrieve a long string (such as 100 words). I've finished the adding part, and that was accomplished by simply sending a string for BookDesc.
public Book[] BookList = new Book[5];
public static int noOfBooks = 0;

public void addBook(string bookDesc)
{
    Book newBook = new Book();
    newBook.setBookDesc(bookDesc);
    newBook.setBookID(noOfBooks);
    BookList[noOfBooks] = newBook;
    noOfBooks++;
}

Memory usage in Flash / Flex / AS3

I'm having some trouble with memory management in a Flash app. Memory usage grows quite a bit, and I've tracked it down to the way I load assets.
I embed several raster images in a class called Embedded, like this:
[Embed(source="/home/gabriel/text_hard.jpg")]
public static var ASSET_text_hard_DOT_jpg : Class;
I then instantiate the assets this way:
var pClass : Class = Embedded[sResource] as Class;
return new pClass() as Bitmap;
At this point, memory usage goes up, which is perfectly normal. However, nulling all the references to the object doesn't free the memory.
Based on this behavior, it looks like the Flash player creates an instance of the class the first time I request it, but never releases it - not by dropping all references, calling System.gc(), doing the double LocalConnection trick, or calling dispose() on the BitmapData objects.
Of course, this is very undesirable: memory usage would grow until everything in the SWFs had been instantiated, regardless of whether I stopped using some assets long ago.
Is my analysis correct? Can anything be done to fix this?
Make sure you run your tests again in the non-debug player. The debug player doesn't always reclaim all the memory properly when releasing assets.
Also, because you're using an embedded rather than a loaded asset, the actual data might never be released. As it's part of your SWF, I'd say you could reasonably expect it to stay in memory for the lifetime of your SWF.
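If being able to release individual assets matters more than bundling them into the SWF, one alternative is loading at runtime with a Loader, which can later be told to drop its decoded data. This is only a sketch, not the poster's code: the URL and handler names are made up, and it assumes the snippet runs inside a document class or some other display object container.
import flash.display.Loader;
import flash.events.Event;
import flash.net.URLRequest;

var loader:Loader = new Loader();
// Weak listener so the handler itself doesn't pin the loader in memory.
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onAssetLoaded, false, 0, true);
loader.load(new URLRequest("assets/text_hard.jpg"));

function onAssetLoaded(e:Event):void
{
    addChild(loader); // show the loaded bitmap
}

function releaseAsset():void
{
    if (loader.parent) loader.parent.removeChild(loader);
    loader.unloadAndStop(); // Flash Player 10+: frees the decoded image data
}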
Why do you want to use this?
var pClass : Class = Embedded[sResource] as Class;
return new pClass() as Bitmap;
Sometimes dynamic resource assignment is buggy and the resources fail to be freed. I had similar problems before with the Flash player and Flex, for example when loading and unloading the same external SWF: the memory kept increasing by the size of the loaded SWF without ever going down, even if I called System.gc() after unloading the SWF.
So my suggestion is to skip this approach and use the first case you have described.
UPDATE 1
<?xml version="1.0" encoding="utf-8"?>
<s:Application
    xmlns:fx = "http://ns.adobe.com/mxml/2009"
    xmlns:s = "library://ns.adobe.com/flex/spark"
    xmlns:mx = "library://ns.adobe.com/flex/mx"
    creationComplete = "creationComplete()">
    <fx:Script>
        <![CDATA[
            import flash.utils.Dictionary;

            [Embed(source="/assets/logo1w.png")]
            private static var asset1:Class;

            [Embed(source="/assets/060110sinkhole.jpg")]
            private static var asset2:Class;

            private static var _dict:Dictionary = new Dictionary();

            private static function initDictionary():void
            {
                _dict["/assets/logo1w.png"] = asset1;
                _dict["/assets/060110sinkhole.jpg"] = asset2;
            }

            public static function getAssetClass(assetPath:String):Class
            {
                // if the asset is already in the dictionary then just return it
                if (_dict[assetPath] != null)
                {
                    return _dict[assetPath] as Class;
                }
                return null;
            }

            private function creationComplete():void
            {
                initDictionary();
                var asset1:Class = getAssetClass("/assets/logo1w.png");
                var asset2:Class = getAssetClass("/assets/060110sinkhole.jpg");
                var asset3:Class = getAssetClass("/assets/logo1w.png");
                var asset4:Class = getAssetClass("/assets/060110sinkhole.jpg");
                var asset5:Class = getAssetClass("/assets/logo1w.png");
            }
        ]]>
    </fx:Script>
</s:Application>
At this point, memory usage goes up, which is perfectly normal. However, nulling all the references to the object doesn't free the memory.
That's also perfectly normal. It's rare for any system to guarantee that the moment you stop referencing an object in code is the same moment the memory for it is returned to the operating system. That's exactly why methods like System.gc() exist: to let you force a clean-up when you need one. The runtime may also implement some sort of pooling, keeping objects and memory around for efficiency (since memory allocation is typically slow). And even if the application does return memory to the operating system, the OS might still count it as assigned to the application for a while, in case the app requests more shortly afterwards.
You only need to worry if that freed memory is not getting reused. For example, you should find that if you create an object, free it, and repeat this process, memory usage should not grow linearly with the number of objects you create, as the memory for previously freed objects gets reallocated into the new ones. If you can confirm that does not happen, edit your question to say so. :)
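A minimal version of that test might look like the following (sizes are arbitrary): allocate a large temporary object, dispose of it, drop the reference, and repeat. If freed memory is being reused, System.totalMemory should level off rather than climb with every iteration.
import flash.display.BitmapData;
import flash.system.System;

for (var i:int = 0; i < 50; i++)
{
    var scratch:BitmapData = new BitmapData(1024, 1024, false, 0x000000);
    scratch.dispose(); // release the pixel buffer explicitly
    scratch = null;    // drop the last reference
    trace("iteration", i, "totalMemory:", System.totalMemory);
}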
It turns out I was keeping references to the objects that didn't unload. Very tricky ones. The GC does work correctly in all the cases I described, which I had suspected might not be the case.
More precisely, loading a SWF, instantiating lots of classes defined in it, adding them to the display list, manipulating them, and then removing all references and unloading the SWF leaves me with almost exactly the same memory I started with.

AS3 Memory Conservation (Loaders/BitmapDatas/Bitmaps/Sprites)

I'm working on reducing the memory requirements of my AS3 app. I understand that once there are no remaining references to an object, it is flagged as being a candidate for garbage collection.
Is it even worth it to try to remove references to Loaders that are no longer actively in use? My first thought is that it is not worth it.
Here's why:
My Sprites need perpetual references to the Bitmaps they display (since the Sprites are always visible in my app). So, the Bitmaps cannot be garbage collected. The Bitmaps rely upon BitmapData objects for their data, so we can't get rid of them. (Up until this point it's all pretty straightforward).
Here's where I'm unsure of what's going on:
Does a BitmapData have a reference to the data loaded by the Loader? In other words, is BitmapData essentially just a wrapper that has a reference to loader.content, or is the data copied from loader.content to BitmapData?
If a reference is maintained, then I don't get anything by garbage collecting my loaders...
Thoughts?
Using AMF a bit with third-party products has led me to believe the Loader class attempts to instantiate a new class of the given content type (in this case it would be a Bitmap class instance). You are probably constructing a new BitmapData object from your Bitmap instance. From that I would assume that the Loader instance references the Bitmap instance, and in your case your code also references the Bitmap instance, unless at some point you are calling BitmapData.clone().
There are also a couple of ways to force GC; see "Force Garbage Collection in AS3?".
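For reference, the "double LocalConnection" hack the original question mentions looks roughly like this. It is unsupported behaviour, but it has historically nudged the (debug) player into a full mark-and-sweep so you can see what is actually collectable:
import flash.net.LocalConnection;

function forceGC():void
{
    try
    {
        new LocalConnection().connect("forceGC");
        new LocalConnection().connect("forceGC");
    }
    catch (e:Error)
    {
        // The second connect always throws; the error is expected and ignored.
    }
}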
You may find it useful to attach some arbitrarily large object to something, then force the GC to see whether that thing is getting cleaned up or is still floating around. If you are using Windows, something like procmon (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) is more helpful than Task Manager for this kind of external inspection.
This is of course a bit of trial and error, but for lack of something like VisualVM (https://visualvm.dev.java.net/) we are kind of stuck in the Flash world.
It's a good question, but to the best of my knowledge, the answer is no -- neither Bitmap nor BitmapData objects possess references to the loaders that load them, so you can safely use them without concern for their preventing your Loaders from being collected.
If you want to make absolutely sure, though, use the clone() method of the BitmapData class:
clone(): Returns a new BitmapData object that is a clone of the original instance with an exact copy of the contained bitmap.
For example:
private function onCreationComplete():void
{
    var urlRequest:URLRequest = new URLRequest("MyPhoto.jpg");
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, loader_complete, false, 0, true);
    loader.load(urlRequest);
}

private function loader_complete(event:Event):void
{
    var img1:Image = new Image();
    img1.source = Bitmap(event.target.content);
    addChild(img1);

    var img2:Image = new Image();
    img2.source = new Bitmap(event.target.content.bitmapData.clone());
    addChild(img2);
}
Here, img1's source is the Bitmap cast explicitly from the content returned by the loader. (If you examine the references in Flex Builder, you'll see they are identical.) But img2's source is a clone: a new bunch of bytes, a new object, a new reference.
Hope that helps explain things. The more likely culprits responsible for keeping objects from being garbage collected, though, are usually event handlers. That's why I set the useWeakReference flag (see above) when setting up my listeners, pretty much exclusively, unless I have good reason not to:
useWeakReference:Boolean (default = false) — Determines whether the reference to the listener is strong or weak. A strong reference (the default) prevents your listener from being garbage-collected. A weak reference does not.
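In practice (the listener and loader names here are placeholders) that means registering weakly and still removing the listener once it has fired, so cleanup never has to depend on the GC:
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onLoaded, false, 0, true);

function onLoaded(e:Event):void
{
    // Unhook the listener explicitly as soon as it has done its job.
    e.target.removeEventListener(Event.COMPLETE, onLoaded);
    // ... use the loaded content here ...
}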
You may set a variable in the complete listener that stores the bitmap, and then destroy the object later:
private var myBitmap:Object;

public function COMPLETEListener(e:Event):void
{
    myBitmap = e.target.loader.content;
}

public function destroy():void
{
    if (myBitmap is Bitmap)
    {
        Bitmap(myBitmap).bitmapData.dispose();
    }
}
This works fine for me; load some big image and see the difference in the task manager.
