How to release the memory after CoreML model prediction - coreml

I'm developing an iOS app with CoreML.
In this app, the class that contains the CoreML model prediction is instantiated many times. Memory usage accumulates with every prediction, which leads to a memory leak and eventually crashes the app.
My question is: how should I release the memory after a prediction?
I tried reassigning the variable that stores the model object after each prediction. This effectively reduces memory usage, but GPU acceleration then stops working.
Is there any solution to this problem?

Related

Why does valgrind produce multiple (almost) similar leak summaries?

I run valgrind version 3.12.0 from the console like this:
valgrind --log-file="valgrind.log" --leak-check=yes ./application -param
The log seems to be polluted while the application is running, which is interesting in itself, because I don't think a memory leak can be detected with 100% certainty while the application is still running. I guess that in some scenarios (maybe threads) this is not true and Valgrind is clever enough to catch those early on?
What really bothers me is that there are multiple "leak summaries" which contain more or less the same information. It seems to me that summaries logged at later stages contain more information.
Below you will find an output of valgrind executed on my Qt application. I used Notepad to list all "definitely" lost entries. You can see that there are tons of leak summaries and I don't know why the contained information is almost the same. Especially the 15 bytes leaked from the constructor of the QApplication is very strange since it is contained in every summary again and again. How does valgrind decide when to create such a summary?
One of the design goals of Valgrind is to not produce false positives (i.e., never incorrectly indicate a problem). On the whole it comes very close to this. Almost certainly you have a leak. I recommend that you do a debug build and look at the source code backreferences to debug the issue.
Leak detection is normally done when the application terminates. There are ways of triggering leak reports earlier:
Using the gdbserver monitor commands, you can trigger a leak_check
Using Valgrind client requests, you can trigger VALGRIND_DO_LEAK_CHECK (there are several similar requests)
Possibly you are using the second of these.
Lastly, 'almost the same' means that they are different. You could reduce the recorded stack depth (the --num-callers option), which would make it more likely that call stacks will be grouped together.
During execution, Valgrind will output non-leak errors as they occur.
On termination, Valgrind outputs:
Unloaded shared libraries (verbose mode)
HEAP SUMMARY - how much memory still in use, allocated and freed
LEAK SUMMARY - details on leaks found
ERROR SUMMARY
Callstacks of errors. For non-leak errors this will duplicate the previous messages, though contexts get aggregated with the occurrence count.
Used suppressions (verbose mode again I think)
ERROR SUMMARY again

EndExecute in an AsyncCodeActivity Workflow does not get invoked if there are too many activities

I am using Workflow to run parallel AsyncCodeActivities using ParallelForEach. If there is a huge number of activities (say 1000 async activities), I get a System.OutOfMemoryException when I run the code.
When I researched this further by debugging my code, I found that my "EndExecute" method is not getting invoked for a series of async code activities, even after they have completed their work. Because of this, memory keeps accumulating without being released, and finally a System.OutOfMemoryException is raised.
I tried to reduce the number of threads to 8 (the number of processors on my machine), but even then I receive the exception.
Kindly help me out; I have been stuck on this issue for more than a week and could not find a solution anywhere else.
If you are using .NET Framework 4.0, I think limiting the maximum thread pool size by calling ThreadPool.SetMaxThreads() with a lower number will help.
You can get the default maximum by calling ThreadPool.GetMaxThreads().
Currently the OutOfMemoryException is thrown because the runtime will keep allocating thread pool threads as long as memory is available.
You might have to adjust this number depending on the memory required for each activity.
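The answer above is about .NET's ThreadPool.SetMaxThreads, but the underlying idea is language-agnostic: cap how many activities run concurrently so queued work waits for a free worker instead of each activity claiming its own thread and memory. A minimal sketch of that idea in Python (the names here are illustrative stand-ins, not the Workflow API):

```python
from concurrent.futures import ThreadPoolExecutor

def run_activity(i):
    # stand-in for one async activity's work
    return i * i

# Cap concurrency at 8 workers instead of letting the pool grow unbounded;
# the 1000 tasks are queued and drained by those 8 workers.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_activity, range(1000)))

print(len(results))
```

All 1000 tasks still complete; they just never occupy more than 8 threads at once, which is the same trade-off SetMaxThreads makes.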

Pre-binding libraries in Xcode 4

I'm developing an app for a client, and one of his devices (a 2nd-gen iPod touch on iOS 4) is having issues starting the application. I've run a few allocation/leak tests and concluded that there isn't anything wrong with my app's code. I noticed that there is an allocation spike at startup, and I concluded that it's because of dyld dynamically linking the libraries on startup. How would I go about pre-binding the application in Xcode 4?
The OS X forums seemed to be extremely uninformative, in that they assume you'd already be able to find it. :/
Any help would be appreciated.
Thanks!
(I also wish I could make a new tag for "prebinding")
According to Apple, you shouldn't need to prebind your iOS applications. If you are getting big allocation spikes, I'm guessing it's due to your app's architecture rather than the underlying OS itself.
The memory allocated by dyld should pale in insignificance compared to even the most basic allocations made by the earliest stages of the runtime. The Objective-C runtime and other system frameworks/libraries allocate a bunch of internal structures that are required for things to work correctly.
For instance, a quick test of an app that does nothing in main but make one call to NSLog(@"FooBar"); and then sleep (i.e. never even spools up UIApplication) performed 373 allocations for a total of 52K living.
To take it a step further, if you actually start up UIKit, like this...
UIApplicationMain(argc, argv, nil, NSStringFromClass([MyAppDelegate class]));
... you'll see ~600K in ~7800 living allocations once the app reaches quiescent state. This is all unavoidable stuff. No amount of prebinding will save you this. I suggest not worrying about it.
If you're seeing orders of magnitude more memory being allocated than that, then, as Nik Reiman said, it's your application. At the end of the day, the memory allocated by the dynamic linker is totally insignificant.

ASP.NET retrieve Average CPU Usage

Last night I did a load test on a site. I found that one of my shared caches is a bottleneck. I'm using a ReaderWriterLockSlim to control the updates of the data. Unfortunately at one point there are ~200 requests trying to update the data at approximately the same time. This also coincided with CPU usage spikes.
The data being updated is in the ASP.NET Cache. What I'd like to do is if the CPU usage is around 75%, I'd like to just skip the cache and hit the database on another machine.
My problem is that I don't know how expensive it is to create a new performance counter to check the CPU usage. Also, I would probably like the average CPU usage over the last 2 or 3 seconds. However, I can't sit there and calculate the CPU time, as that would take longer than it currently takes to update the cache.
Is there an easy way to get the average CPU usage? Are there any drawbacks to this?
I'm also considering totaling the wait count for the lock and then, at a certain threshold, switching over to the database. My concern with this approach is that changing hardware might allow more locks with less strain on the system. Also, finding the right balance for the threshold would be cumbersome, and it doesn't take into account any other load on the machine. But it's a simple approach, and simple is better 99% of the time.
This article from Microsoft covers Tuning .Net Application Performance and highlights which counters to collect and compare to determine CPU and I/O bound applications.
You sound like you want to monitor this during execution and bypass your cache when things get intensive. Would this not just move the intensive processing from the cache calls to your database calls? Surely you have the cache to avoid expensive DB calls.
Are you trying to repopulate an invalidated cache? What is the effect of serving stale data from the cache? You could just lock on the re-populating function and serve stale data to other requests until the process completes.
Based on the above article, we collect the following counter objects during our tests and that gives us all the necessary counters to determine the bottlenecks.
.NET CLR Exceptions
.NET CLR Memory
ASP.NET Applications
ASP.NET
Memory
Paging File
Processor
Thread
The sections in the article for CLR Tuning and ASP.NET Tuning highlight the bottlenecks that can occur and suggest configuration changes to improve performance. We certainly made changes to the thread pool settings to get better performance.
Changing and Retrieving Performance Counter Values might help with accessing the existing Processor counter via code but this isn't something I've tried personally.
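On the "average CPU over the last 2 or 3 seconds" part of the question: a fixed-size sliding window over periodic samples is cheap to maintain, so the sampling cost reduces to one counter read per second. A Python sketch of the window bookkeeping (the sample values here are made up; in .NET they would come from the Processor counter discussed above):

```python
from collections import deque

class CpuAverage:
    """Average of the most recent N samples, e.g. one sample per second."""
    def __init__(self, window=3):
        self.samples = deque(maxlen=window)  # old samples fall off automatically

    def add(self, percent):
        self.samples.append(percent)

    def average(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

avg = CpuAverage(window=3)
for s in (50.0, 80.0, 95.0, 90.0):  # pretend these came from the perf counter
    avg.add(s)

print(avg.average())  # only the last 3 samples are counted
```

The check "skip the cache when average CPU is around 75%" then becomes a single comparison against avg.average().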

Too much physical memory for an asp.net app?

My huge 32-bit, LLBLGen-based web services data access application is running alone on a dedicated 64-bit machine. Its physical memory consumption steadily grows to approximately 2GB, at which point the process releases almost all of the allocated space (up to 1.5GB) and continues to grow again from there. There is no observable increase in Page Input values or other page-file usage parameters, so it looks like the memory is being released rather than swapped out to the page file. What kind of profile is this? There is nothing to actually prevent the process from grabbing all the memory it can; on the other hand, there are unacceptable HTTP internal errors around the memory release (probably the clean-up blocks useful work). What would be a good strategy for making the cleanup less obtrusive, assuming the above is acceptable behaviour in the first place?
It sounds like you have a memory leak: the process keeps leaking memory until it crashes with an out-of-memory condition and is then automatically restarted by the server.
1.5GB is about the maximum amount of memory a 32 bit process can allocate before running out of address space.
Somethings to look for:
Do you do your own caching? When are items removed from the cache?
Is there somewhere that data is added to a collection every once in a while but never removed?
Do you call Dispose on every object that implements IDisposable?
Do you access any non-managed code at all (COM objects or DllImport) or allocate non-managed memory (using the Marshal class, for example)? Anything allocated there is never freed by the garbage collector; you have to free it yourself.
Do you use 3rd-party libraries or any code from 3rd parties? They can have any of the problems in this list too.
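One item from the checklist above, a collection that only ever grows, deserves a concrete illustration: the usual fix is to bound the collection and evict old entries. A minimal Python sketch with invented names (the same idea applies to a home-grown .NET cache):

```python
from collections import OrderedDict

class BoundedCache:
    """Evicts the oldest entry once capacity is reached, so the
    collection can never grow without limit."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def put(self, key, value):
        if key in self.items:
            del self.items[key]              # re-inserting moves the key to the end
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)   # drop the oldest entry
        self.items[key] = value

cache = BoundedCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)   # evicts "a"; the cache never exceeds 2 entries
print(list(cache.items))
```

An unbounded dictionary used the same way would grow by one entry per unique key forever, which is exactly the slow-leak profile described in the question.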
Is it possible you are not disposing of various disposable objects (particularly DB-related ones)? This would leave them around, potentially tying up large amounts of unmanaged resources until the GC runs and their finalizers are called.
It would be worth running perfmon against you process and looking to see if there is a steady growth in some critical resource, like handles, or if your DB provider exposes performance counters then connections or open result sets.
I agree with the first part of edg's answer, but where he says:
"By setting objects to null when they
are dead you can encourage the GC to
reuse the memory consumed by those
objects, thus limiting the growing
consumption of memory."
is incorrect. You never need to set an object to null, since the GC will eventually collect your object once it is no longer reachable.
This was discussed in this answer on SO: Setting Objects to Null/Nothing after use in .NET
Don't use ArrayLists (garbage collection doesn't work well with them); use generic lists instead.
Another common error is having debug="true" in web.config; this consumes a lot of memory. Change the option to "false".
Another thing to do is use CLR Profiler to trace the problem.
Good Luck,
Pedro
The Garbage Collector doesn't automatically free memory when it releases objects, it holds on to that memory to help minimise the expense of future mallocs.
When a low memory condition is triggered that memory will be returned to the OS and you will see more available memory when looking through task manager. This will normally happen about the 2GB mark, or 3GB if you use the relevant switch.
<contentious>
By setting objects to null when they are dead you can encourage the GC to reuse the memory consumed by those objects, thus limiting the growing consumption of memory.
But which objects should you set to null? Big objects, large collections, frequently created objects.
</contentious>
EDIT: There is evidence to support the value of setting objects to null. See this for detail. Of course there is no need to set objects to null; the point is whether it helps memory management in any way.
EDIT: We need a recent benchmark if such a thing exists rather than continuing to opine.
Ensure that you aren't putting up a debug build of your project. There's a feature* whereby, in a debug build, if you instantiate any object that contains the definition for an event, even if you never raise the event, it will hold on to a small piece of memory indefinitely. Over time, these small pieces of memory eat away at your memory pool, until the web process eventually restarts and starts again.
*I call this a feature (and not a bug) because it's been around since the beginning of .Net 2 (not present in .Net 1.1), and there's been no patch to fix it. The memory leak must be due to some feature needed when debugging.
We were having similar situations occur and altered all our database connections to use a try/catch/finally approach.
Try was used to execute code, catch for error collection, and finally to close all variables and database connections.
internal BECollection<ReportEntity> GetSomeReport()
{
    Database db = DatabaseFactory.CreateDatabase();
    BECollection<ReportEntity> _ind = new BECollection<ReportEntity>();
    System.Data.Common.DbCommand dbc = db.GetStoredProcCommand("storedprocedure");
    try
    {
        // the using block guarantees the reader is closed even if an exception occurs
        using (SqlDataReader reader = (SqlDataReader)db.ExecuteReader(dbc))
        {
            while (reader.Read())
            {
                //populate entity
            }
        }
    }
    catch (Exception ex)
    {
        Logging.LogMe(ex.Message.ToString(), "Error on SomeLayer/SomeReport", 1, 1);
        return null;
    }
    finally
    {
        // close the connection, but don't set _ind = null here: the finally
        // block runs before the return statement below, so nulling _ind
        // would make the method always return null
        dbc.Connection.Close();
    }
    return _ind;
}
My first guess would be a memory leak. My second guess would be that it is normal behavior - the GC won't be fired until you have significant memory pressure. The only way to be sure is to use a combination of a profiler and things like PerfMon. Some sites:
http://blogs.msdn.com/ricom/archive/2004/12/10/279612.aspx
http://support.microsoft.com/kb/318263
Tess's excellent lab series
In addition I would make sure you aren't running in Debug mode (as already mentioned).
As far as the HTTP errors go: assuming you are running in server GC mode, it tries to do everything it can not to block requests. It would be interesting to find out what those HTTP errors are; that's not normal behavior from what I've seen in the past, and it might point closer to the root of your issue.
