OutOfMemoryException when reading from smart card

I'm using the .NET Framework to develop an application that interacts with a Gemalto smart card (adding and retrieving data).
I've successfully finished the addition part; however, when I try to read the data that I stored on the card I get an OutOfMemoryException in the host application. Can anyone figure out why this happens?
This is the code in the host application that reads from the card:
for (int i = 0; i < 5; i++)
{
    try
    {
        string bookString = service.getBook(i);
    }
    catch (Exception x)
    {
        MessageBox.Show("an error occur");
    }
}
and in the app that is loaded on the card, I have this method:
public string getBook(int index)
{
    return BookList[index].getBookID() + " , " + BookList[index].getBookDesc();
}

The Gemalto .NET Card contains both persistent memory and volatile memory that are used for data storage. The persistent memory acts as persistent storage for the card - data persists in it even after the card is removed from a smart card reader. Volatile memory is reset when the card loses power and cannot be used for persistent storage.
How do you store your data, and how do you fill the BookList with data? Please clarify.
You have memory limitations, of course, so you cannot store beyond a certain size: on this .NET card you have 16KB of volatile memory (RAM) and 70KB of persistent memory (which holds assemblies as well as storage).
I tested on a Gemalto .NET card and was able to store about 20KB of data in persistent storage; past that limit I got the same OutOfMemoryException (because the other 50KB is filled with files and assemblies).
This card is not designed to store databases, records and so on; it is meant to store critical information like keys and passwords. So don't save more than this size and your code will be fine, or use a text compression algorithm (in the client application) to reduce the size before storing data on the card, but in the end don't try to store more than this ~XX KB.
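As a minimal sketch of that client-side compression suggestion (assuming the host is a regular .NET application and the compressed bytes are stored on the card as a Base64 string; the class and method names here are made up):
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

static class DescriptionCompressor
{
    // GZip-compress the UTF-8 bytes of a description on the client, then
    // Base64-encode the result so it can be stored as a string on the card.
    public static string Compress(string text)
    {
        byte[] raw = Encoding.UTF8.GetBytes(text);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(raw, 0, raw.Length);
            }
            return Convert.ToBase64String(output.ToArray());
        }
    }
}
Keep in mind that Base64 adds roughly 33% overhead, so this only pays off for longer, repetitive descriptions.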
Update:
Because of this limitation you cannot store more than 70KB in persistent storage, and you also cannot retrieve more than 16KB from the card to the client (the returned data is first held in a local variable, i.e. in volatile memory, before being sent back to your client, and you have constraints there too).
So this is the source of your problem: you retrieve more than the volatile memory can hold:
public string getBook(int index)
{
    return bookList[index].getId() + " , " + bookList[index].getName();
}
Before the value is returned, this data sits in a temporary variable, and because you can't hold more than 16KB there you get the OutOfMemoryException.
The solution is to use this storage directly from the client (you have the reference, so just use it):
public Book getTheBook(int index)
{
    return bookList[index];
}
From the client you can then access the Book functionality (make sure your Book is a struct, because marshalling is supported only for structs and primitive types on the Gemalto .NET card):
Console.WriteLine(service.getTheBook(0).getName());
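For reference, a minimal sketch of what such a marshallable Book type might look like on the card side (the field names and accessors are assumptions that mirror the calls in the snippets above, not the asker's actual class):
public struct Book
{
    private int id;
    private string name;

    public Book(int id, string name)
    {
        this.id = id;
        this.name = name;
    }

    // Simple accessors, mirroring the getId()/getName() calls used above.
    public int getId() { return id; }
    public string getName() { return name; }
}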

You are attempting a task that is not typical for smart cards. Note that cards have RAM in the range of a handful of kilobytes, to be divided between the operating system and the I/O buffer. The latter is unlikely to exceed 2 kByte (refer to the card manual for that), and even then you need to use extended length APDUs as mentioned in this answer. So the likely cause of your error is that the data length exceeds the amount of RAM available for the I/O buffer. While enlarging the buffer or using extended APDUs will stretch the limit, it is still easy to hit it with a really long description.

I get this exception only when attempting to retrieve a long string (such as 100 words). I've finished the adding part, which was accomplished by simply sending a string for BookDesc:
public Book[] BookList = new Book[5];
public static int noOfBooks = 0;

public void addBook(string bookDesc)
{
    Book newBook = new Book();
    newBook.setBookDesc(bookDesc);
    newBook.setBookID(noOfBooks);
    BookList[noOfBooks] = newBook;
    noOfBooks++;
}

Related

Downsides of streaming large JSON or HTML content to a browser in ASP.NET MVC

I am working with a legacy ASP.NET Framework MVC application that is experiencing issues with memory usage such as occasional bursts of OutOfMemory exceptions across a variety of operations. The application is frequently operating with large lists of objects (sometimes 10s to 100s of megabytes), and then serializing them to JSON to return to the client. We are not totally sure what the source of the OutOfMemory exceptions is, but believe a likely candidate is memory fragmentation due to too many large objects going on the Large Object Heap.
We are thinking a quick win is to refactor some of the controller endpoints to serialize their JSON content using a stream writer (as outlined in the JSON.NET Documentation), and to stream the content back to the browser. This won't eliminate the memory load of the data lists prior to serialization, but in theory should cut down the overall amount of data going on to the LOH.
The code is written to send the results in chunks of less than 85KB:
public async Task<ActionResult> MyControllerMethod()
{
    var data = GetData();
    Response.BufferOutput = false;
    Response.ContentType = "application/json";
    var serializer = JsonSerializer.Create();
    using (var sw = new StreamWriter(Response.OutputStream, Encoding.UTF8, 84999))
    {
        sw.AutoFlush = false;
        serializer.Serialize(sw, data);
        await sw.FlushAsync();
    }
    return new EmptyResult();
}
I am aware of a few downsides with this approach, but don't consider them showstoppers:
More complex to implement a unit test due to the 'EmptyResult' returned by the controller.
I have read there is a small overhead due to a call to PInvoke whenever data is flushed. (In practice I haven't noticed this).
Cannot post-process the content using e.g. an HttpHandler
Cannot set a content-length header which may be useful for the client in some cases.
What other downsides or potential problems exist with this approach?

Amazon DynamoDBMapper.delete method does not delete item

I used the AWS DynamoDBMapper Java class to build a repository class to support CRUD operations. In my unit test, I created an item, saved it to the DB, loaded it and then deleted it. Then I queried the DB with the primary key of the deleted item; the query returned an empty list, so everything looked correct. However, when I check the table in the AWS console the deleted item is still there, and another client on a different session can still find this item. What did I do wrong? Is there any other configuration or setup required to ensure the "hard delete" happens as expected? My API looks like this:
public void deleteObject(Object obj) {
    Object objToDelete = load(obj);
    delete(obj);
}

public Object load(Object obj) {
    return MAPPER.load(Object.class, obj.getId(),
        ConsistentReads.CONSISTENT.config());
}

private void save(Object obj) {
    MAPPER.save(obj, SaveBehavior.CLOBBER.config());
}

private void delete(Object obj) {
    MAPPER.delete(obj);
}
Any help/hint/tip is much appreciated.
DynamoDB is eventually consistent by default. Creating -> Reading -> Deleting immediately would not always work.
Eventually Consistent Reads (Default) – the eventual consistency option maximizes your read throughput. However, an eventually consistent read might not reflect the results of a recently completed write. Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data.
Strongly Consistent Reads — in addition to eventual consistency, Amazon DynamoDB also gives you the flexibility and control to request a strongly consistent read if your application, or an element of your application, requires it. A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read.
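The original code uses the Java DynamoDBMapper; purely as an illustration of requesting a strongly consistent read, here is a sketch using the AWS SDK for .NET (the table name and key name are made up):
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

static class ConsistentReadExample
{
    // Read an item with ConsistentRead = true so the result reflects all
    // writes that completed before the read. "Books" and "Id" are assumptions.
    public static async Task<Dictionary<string, AttributeValue>> GetBookAsync(
        IAmazonDynamoDB client, string id)
    {
        var request = new GetItemRequest
        {
            TableName = "Books",
            Key = new Dictionary<string, AttributeValue> { { "Id", new AttributeValue { S = id } } },
            ConsistentRead = true
        };
        var response = await client.GetItemAsync(request);
        return response.Item;
    }
}
Other clients, including the AWS console, still perform their own reads (eventually consistent by default), so they may briefly show an item that has already been deleted.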

Sample Grabber Sink release() issue

I use a Sample Grabber Sink in my Media Session, using most of the code from the MSDN sample.
In the OnProcessSample method I memcpy the data into a media buffer, attach it to an IMFSample and hand that one over to the main process through a shared pointer. The problem is that I get either memory leaks or crashes in ntdll.dll:
ntdll.dll!#RtlpLowFragHeapFree#8() Unknown
SampleGrabberSink:
OnProcessSample(...)
{
    MFCreateMemoryBuffer(dwSampleSize, &tmpBuff);
    tmpBuff->Lock(&data, NULL, NULL);
    memcpy(data, pSampleBuffer, dwSampleSize);
    tmpBuff->Unlock();
    MFCreateSample(&tmpSample);
    tmpSample->AddBuffer(tmpBuff);
    while (!(*Free) && (*pSample) != NULL)
    {
        Sleep(1);
    }
    (*Free) = false;
    (*pSample) = tmpSample;
    (*Free) = true;
    SafeRelease(&tmpBuff);
}
and in the main thread:
ReadSample()
{
    if (pSample == NULL)
        return;
    while (!Free)
        Sleep(1);
    Free = false;
    // process sample into DX surface
    SafeRelease(&pSample);
    Free = true;
}
//hr checks omitted//
With this code I get that ntdll.dll error after playing a few videos.
I also tried pushing samples into a queue so that OnProcessSample doesn't have to wait, but then some memory wasn't freed after the video ended.
(Even now it practically doesn't wait; the session rate is 1 and the main process can read more than 60 fps.)
EDIT: It was a thread synchronization problem. Solved by using a critical section, thanks to Roman R.
It is not easy to see from the code snippet, but I suppose you are burning cycles on a streaming thread (the one your callback is called on) until a global/shared variable is NULL, and then you duplicate a media sample there.
You need to look at the synchronization APIs and serialize access to the shared variables. You don't do that, and eventually you either access freed memory or break the reference count of a COM object.
You need an event set externally when you are ready to accept a new buffer from the callback; the callback then sees the event, enters a critical section (or reader/writer lock), does your *pSample magic there, exits the critical section and sets another event indicating that a buffer is available.
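The handoff pattern itself is language-agnostic; purely as an illustration (sketched in C# for consistency with the rest of this page, not Media Foundation specific, all names made up), the busy-wait flags can be replaced with a lock plus two events:
using System.Threading;

class SampleHandoff<T> where T : class
{
    private readonly object gate = new object();
    private readonly AutoResetEvent slotFree = new AutoResetEvent(true);     // consumer ready for a sample
    private readonly AutoResetEvent sampleReady = new AutoResetEvent(false); // producer published a sample
    private T current;

    // Called from the callback/streaming thread.
    public void Publish(T sample)
    {
        slotFree.WaitOne();              // wait until the consumer took the previous sample
        lock (gate) { current = sample; }
        sampleReady.Set();               // signal that a sample is available
    }

    // Called from the main/processing thread.
    public T Take()
    {
        sampleReady.WaitOne();           // block until the producer published a sample
        T sample;
        lock (gate) { sample = current; current = null; }
        slotFree.Set();                  // allow the producer to publish the next one
        return sample;
    }
}
The same shape works in C++ with a CRITICAL_SECTION plus two Win32 events.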

Flex consuming huge memory for large data

When a Flex ArrayCollection is filled with a large amount of data, for example 200,000 new objects, the memory in the Flex client browser shoots up by 20MB. This excess 20MB is independent of the variables defined in the object. A detailed example is illustrated below.
var list:ArrayCollection = new ArrayCollection();
for (var i:int = 0; i < 200000; i++)
{
    var obj:Object = new Object();
    list.addItem(obj);
}
On executing the above code there was a 20MB increase in the Flex client browser's memory. For a different scenario I tried adding an ActionScript object to the ArrayCollection. The ActionScript object is defined below.
public class Sample
{
    public var id:int;
    public var age:int;

    public function Sample()
    {
    }
}
On adding 200,000 Sample instances to an ArrayCollection there was still a 20MB increase in memory.
var list:ArrayCollection = new ArrayCollection();
for (var i:int = 0; i < 200000; i++)
{
    var obj:Sample = new Sample();
    obj.id = i;
    obj.age = 20;
    list.addItem(obj);
}
I even tried adding the Sample objects to a Flex ArrayList and a plain Array, but the problem still persists. Can someone explain where this excess memory is consumed by Flex?
Requesting memory from the OS is time consuming, so Flash Player requests large chunks of memory (more than it really needs) in order to minimize the number of those requests.
I have no idea whether OS allocation time is a big deal anymore; we're talking about 1.5-2GHz CPUs on average, even on mobile. But Benoit is on the right track. Large chunks are reserved at a time mainly to avoid heap fragmentation. If memory were requested only in the exact sizes needed at the moment, interleaved with other I/O requests, system memory would become highly fragmented very quickly. When such a fragment is returned to the OS, the memory manager cannot reallocate it unless it gets a request of the same size or smaller, effectively making it lost to the visible pool. So to avoid this issue, Flash (and its memory manager) requests 16MB at a time.
In your case, it wouldn't matter if you created 1 object or 100,000. You'll still start with a minimum of 16MB of private memory (i.e. what you see in Task Manager).
The Flash Player allocation mechanism is based on Mozilla's MMgc.
You can read about it here: https://developer.mozilla.org/en-US/docs/MMgc

How to find out size of ASP.NET session, when there are non-serializable objects in it?

I have a feeling that I'm putting rather much data in my ASP.NET session, but I've no idea how much and if I should be concerned. I found a similar question, but that relies on serializing objects and checking their serialized size. In my case the majority of data in the session is in objects from another library which doesn't have its classes marked as "Serializable". (I know that this restricts me to using InProc session state provider, but that's another issue). Does anyone have an idea on how to traverse the object graph and find out its size?
Added: OK, one way would be a manual traversal of the object graph and use of the Marshal.SizeOf() method, but that's a lot of writing to get working. Is there perhaps a simpler way of achieving the same effect? I'm not aiming for byte precision; I'm interested in the order of magnitude (kilobytes, megabytes, tens of megabytes...).
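As a rough illustration of that manual traversal idea, here is a minimal sketch (reflection-based; it ignores object headers, alignment and inherited private fields, so treat the output as an order-of-magnitude estimate only; all names are made up):
using System;
using System.Collections;
using System.Collections.Generic;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

static class RoughSessionSizer
{
    // Very rough estimate: primitives via Marshal.SizeOf, 2 bytes per char for
    // strings, recursion into fields and IEnumerable contents.
    public static long Estimate(object root)
    {
        return Walk(root, new HashSet<object>(new RefComparer()));
    }

    private static long Walk(object obj, HashSet<object> visited)
    {
        if (obj == null) return 0;
        Type type = obj.GetType();
        if (type == typeof(string)) return ((string)obj).Length * 2L;
        if (type.IsPrimitive) return Marshal.SizeOf(type);
        if (!type.IsValueType && !visited.Add(obj)) return 0; // count each reference only once

        long size = 0;
        var enumerable = obj as IEnumerable;
        if (enumerable != null)
        {
            foreach (object item in enumerable) size += Walk(item, visited);
            return size;
        }
        foreach (FieldInfo f in type.GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
            size += Walk(f.GetValue(obj), visited);
        return size;
    }

    private sealed class RefComparer : IEqualityComparer<object>
    {
        public new bool Equals(object x, object y) { return ReferenceEquals(x, y); }
        public int GetHashCode(object obj) { return RuntimeHelpers.GetHashCode(obj); }
    }
}
Calling RoughSessionSizer.Estimate() on each session entry and summing the results gives a ballpark figure for the whole session.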
For testing, you could put together a stub custom session provider, implementing the SessionStateStoreProviderBase abstract class. I would write backing fields that stash everything in the web cache (so that the session data is still managed), and generate a statistic using the Marshal.SizeOf() method when the SetAndReleaseItemExclusive method is called.
public override void SetAndReleaseItemExclusive(HttpContext context, string id, SessionStateStoreData item, object lockId, bool newItem)
{
    double MemSize = 0;
    foreach (object sessObj in item.Items)
    {
        MemSize += Marshal.SizeOf(sessObj);
    }
}
Consult this question for more information on getting field size:
Getting the size of a field in bytes with C#
Can't you generate a heap dump and find the size of the session from that? In Java land I can dump the heap, open it up in MAT, find the session object and find out the size of the subgraph.
You can probably store the session state information in the database and check the size, but I am not sure if there is any tool out there that can let you view and traverse the object graph.
If possible, check your design one more time to see if you can minimize the information in the session.
