Should I release the buffers within IMFSample before I release the IMFSample object? - ms-media-foundation

I can't find any information about a requirement to release the buffers within an IMFSample before releasing the IMFSample itself. If I just release the IMFSample, does that automatically release its buffers? I'm writing a video player app and receiving samples from IMFSourceReader::ReadSample. While the code runs I see a small increase in memory usage in VS2017, and I'm not sure yet whether it's a leak. The code I'm using is based on the sample code in this post:
Media Foundation webcam video H264 encode/decode produces artifacts when played back
I found the IMFSample::RemoveAllBuffers method, but the documentation doesn't specify whether it releases the buffers. Maybe it needs to be called before releasing the IMFSample? https://msdn.microsoft.com/en-us/library/windows/desktop/ms703108(v=vs.85).aspx
(I also ran across another related post in my research, but I don't think it applies to my question:)
Should I release the returned IMFSample of an internally allocated MFT output buffer?
Thanks!

Regular COM interface pointer management rules apply: you just release the IMFSample pointer and that's it.
In certain cases your release of a sample pointer does not lead to actual freeing of memory because samples might be pooled: the released sample object is returned to its parent pool and is ready for reuse.
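The pooling behavior described above can be sketched roughly like this. This is a hypothetical Python model of COM-style reference counting with a sample pool, purely for illustration; `Sample`, `SamplePool`, and their methods are all invented names, not actual Media Foundation APIs:

```python
class Sample:
    """Toy model of a pooled, refcounted sample (not a real COM object)."""
    def __init__(self, pool):
        self._pool = pool
        self._refs = 0

    def add_ref(self):
        self._refs += 1

    def release(self):
        self._refs -= 1
        if self._refs == 0:
            # Instead of freeing memory, the sample goes back to its pool,
            # ready for reuse; its attached buffers travel with it.
            self._pool.free.append(self)


class SamplePool:
    def __init__(self, size):
        self.free = [Sample(self) for _ in range(size)]

    def get_sample(self):
        sample = self.free.pop()
        sample._refs = 1  # caller holds one reference
        return sample


pool = SamplePool(2)
s = pool.get_sample()
print(len(pool.free))  # 1: one sample checked out
s.release()            # refcount hits 0: memory is NOT freed,
print(len(pool.free))  # 2: the sample is back in the pool
```

The point of the sketch: from the caller's side, `release()` is all you ever do; whether that frees memory or recycles the object is the pool's business.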

Just to echo what Roman said: if you merely handle an IMFSample, you only need to release that IMFSample. If you call AddBuffer on the IMFSample yourself, you have to remove the buffer again (I agree it's not clear, but if buffers were removed automatically at release time, this would create problems for the component that handles the IMFSample: it would not find the buffers afterwards). That said, this design can cause problems if no one calls RemoveAllBuffers at the right time.
Different components are not supposed to call AddBuffer on the same IMFSample, but it is possible to do so.
This is bad design for memory management. You just have to deal with it.

Last but not least.
According to Microsoft you have to release the buffers (whether before or after releasing the sample doesn't matter).
See: Decode the audio

Related

Methods to discourage reverse engineering of an opencl kernel

I am preparing my opencl accelerated toolkit for release. Currently, I compile my opencl kernels into binaries targeted for a particular card.
Are there other ways of discouraging reverse engineering? Right now, I have many OpenCL binaries in my release folder, one for each kernel. Would it be better to splice these binaries into one single binary, or even add them into the host binary and somehow read them in using a special offset?
OpenCL 2.0 and SPIR-V can be used for this, but they are not available on all platforms yet.
Encrypt the binaries. Keep the keys on a server and have clients request them at time of use. Of course the keys should be encoded too (using a variable value such as the server's time, maybe). Then decode on the client to use as a binary kernel.
I'm no encryption expert, but I would apply multiple algorithms multiple times to make it harder, in case any single one of them could be cracked within a few months (the lifetime of a new version of your GPGPU software, for example). Even a simple homemade algorithm of your own, such as reversing the bit order of all data (the 1st bit goes to the nth position, the nth to the 1st), should look hard to a level-1 hacker.
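As a sketch of that idea, here is a hypothetical Python encoder that XORs a kernel binary with a server-supplied key and then reverses the bit order of each byte. The key, the placeholder binary, and the trivial two-step scheme are illustrative only; a real deployment would use an established cipher:

```python
def reverse_bits(byte):
    # The 1st bit goes to the 8th position, the 8th to the 1st, etc.
    return int('{:08b}'.format(byte)[::-1], 2)

def encode(kernel_binary, key):
    # XOR with a repeating key, then scramble the bit order of each byte.
    return bytes(reverse_bits(b ^ key[i % len(key)])
                 for i, b in enumerate(kernel_binary))

def decode(blob, key):
    # Undo the steps in reverse order on the client, before handing the
    # result to something like clCreateProgramWithBinary.
    return bytes(reverse_bits(b) ^ key[i % len(key)]
                 for i, b in enumerate(blob))

binary = b"__kernel void add(...) { ... }"  # placeholder kernel binary
key = b"\x13\x37\x42"                        # would be fetched from the server
assert decode(encode(binary, key), key) == binary
```

Note that `reverse_bits` is its own inverse, which is why decoding only has to swap the order of the two steps.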
Warning: some profiling tools can capture kernel code at run time, so you could add many (maybe hundreds of) trivial kernels without performance penalty to hide the real one in a crowded timeline, or disable profiling via kernel options, or add a deliberate error (maybe some broken events in queues) and then restart, so the profiler cannot initialize.
Maybe you could obfuscate the final C99 code so it becomes unreadable to humans. If someone can read it anyway, he/she doesn't need to hack it in the first place.
Maybe most effectively: do nothing, just secure the rights to your original algorithm and state that in a txt, so they can look but dare not copy it for money.
If the kernel can be rewritten as an "interpreted" version without a performance penalty, you can send bytecodes from the server to the client. When someone inspects the client with a profiler, he/she sees only the interpreter code, not the real working algorithm, since that depends on the bytecodes from the server arriving as "data" (a buffer). For example, c=a+b becomes if(...) else if(...) else (case ...) and has no meaning without a data feed.
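A minimal sketch of that interpreter idea in Python (the opcodes and the tiny stack machine are invented for illustration; in a real setup the interpreter would be the OpenCL kernel and the bytecode would arrive from the server as a buffer):

```python
# Tiny stack machine: the code below is meaningless on its own; the actual
# algorithm exists only in the bytecode, which is streamed in as "data".
PUSH_A, PUSH_B, ADD, MUL = 0, 1, 2, 3

def run(bytecode, a, b):
    stack = []
    for op in bytecode:
        if op == PUSH_A:
            stack.append(a)
        elif op == PUSH_B:
            stack.append(b)
        elif op == ADD:
            stack.append(stack.pop() + stack.pop())
        elif op == MUL:
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

# "c = a + b" exists only as the data [PUSH_A, PUSH_B, ADD]:
print(run([PUSH_A, PUSH_B, ADD], 3, 4))  # 7
```

Someone profiling the client sees only the dispatch loop; without the server-fed bytecode there is nothing to reverse engineer.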
On top of all this, you could buy time against someone reverse engineering it: pick variable names that trigger his/her "selective perception" so he/she can't focus for a while, and develop a meaner version in the meantime. For example, c=a+b becomes bulletsToDevilsEar=breakDevilsLeg+goodGame.

Delete the large object in ironpython, and instantly release the memory?

I am creating a huge mesh object (some 900 megabytes in size).
Once I am done with analysing it, I would like to somehow delete it from the memory.
I did a bit of search on stackoverflow.com, and I found out that del will only delete the reference to mentioned mesh. Not the mesh object itself.
And that after some time, the mesh object will eventually get garbage collected.
Is gc.collect() the only way to instantly release the memory and thereby remove the mentioned large mesh from memory?
I've found replies here on stackoverflow.com which state that gc.collect() should be avoided (at least when it comes to regular python, not specifically ironpython).
I've also found comments here on stackoverflow.com which claim that in IronPython it is not even guaranteed that the memory will be released, even if nothing else is holding a reference.
Any comments on all these issues?
I am using ironpython 2.7 version.
Thank you for the reply.
In general, managed environments release memory when no reference to an object exists anymore (no path from a root to the object itself). To force the .NET framework to release memory, the garbage collector is your only choice. It is important to know that GC.Collect does not itself free memory; it only searches for objects without references and puts them in a queue of objects to be released. If you want to free memory synchronously, you also need GC.WaitForPendingFinalizers.
One thing to know about large objects in the .NET framework is that they are stored separately, in the Large Object Heap (LOH). From my point of view, it is not bad to free those objects synchronously; you just have to know that this can cause performance issues. That's why, in general, the GC decides on its own when to collect and free memory and when not to.
Because gc.collect is implemented in IronPython as well as in regular Python, you should be able to use it. If you look at the implementation in IronPython, gc.collect does exactly what you want: it calls GC.Collect() and GC.WaitForPendingFinalizers. So in your case, I would use it.
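A minimal sketch of the pattern (the list standing in for the mesh is a made-up placeholder; the same calls run under CPython, and under IronPython `gc.collect()` additionally maps to `GC.Collect()` plus `GC.WaitForPendingFinalizers()` as described above):

```python
import gc

# Hypothetical stand-in for the ~900 MB mesh: a large list of floats.
mesh = [0.0] * 10_000_000

# ... analyse the mesh here ...

del mesh  # drops the only reference; the object is now collectable

# Force a full collection right away instead of waiting for the runtime
# to decide. gc.collect() returns the number of unreachable objects found.
unreachable = gc.collect()
```

Keep in mind the caveat from the other answers: forcing a collection is only worth it for cases exactly like this one, where a single huge allocation must be returned promptly.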
Hope this helps.

OpenCL - Releasing platform object

I was studying OpenCL releasing functions ( clRelease(objectName) ) and it was interesting for me that there was no function to release Platform (more specifically, cl_platform_id) objects.
Does anybody know the reason ?
It's because you create platform objects with a regular malloc rather than a clCreateObjectName()-style function, so you release them with a regular free. I guess it's like that because platforms are host resources.
Note that it is the same for device objects.
EDIT: To clarify a bit, thanks to @chippies' comment: the clGetPlatformIDs() function has two uses. First, to query the number of platforms available on the system. Second, to fill the memory space you allocated with the actual platform(s) you decided to use. You store these platforms in a memory space that you first malloc; therefore, when you are done with these platforms, you free the memory that you malloc-ed.

Implement IP camera

We have a device that has an analog camera, and a card that samples and digitizes the feed. This is all done in DirectX. At this point in time, replacing hardware is not an option, but we need to code such that we can see this video feed in real time regardless of any hardware or underlying operating system changes that occur in the future.
Along this line, we've chosen Qt to implement a GUI to view this camera feed. However, if we move to Linux or another embedded platform in the future and change other hardware (including the physical device where the camera/video sampler lives), we will need to change the camera display software as well, and that's going to be a pain because we need to integrate it into our GUI.
What I proposed was migrating to a more abstract model where data is sent over a socket to the GUI and the video is displayed live after being parsed from the socket stream.
First, is this a good idea or a bad idea?
Secondly, how would you implement such a thing? How do the video samplers usually give usable output? How can I push this output over a socket? Once I am on the receiving end parsing the output, how do I know what to do with the output (as in how to get the output to render)? The only thing I can think of would be to write each sample to a file and then to display the contents of the file every time a new sample arrives. This seems like an inefficient solution to me, if it would work at all.
How do you recommend I handle this? Are there any cross-platform libraries available for such a thing?
Thank you.
Edit: I am willing to accept suggestions of something different from what is listed above.
Have you looked at QVision? It is a Qt based framework for managing video and video processing. You don't need the processing, but I think it will do what you want.
Anything that duplicates the video stream is going to cost you in performance, especially in an embedded space. In most situations for video, I think you're better off trying to use local hardware acceleration to blast the video directly to the screen. With some proper encapsulation, you should be able to use Qt for the GUI surrounding the video, and have a class that is platform specific that you use to control the actual video drawing to the screen (where to draw, and how big, etc.).
Edit:
You may also want to look at the Phonon library. I haven't looked at it much, but it appears to support showing video that may be acquired from a range of different sources.

Force Garbage Collection in AS3?

Is it possible to programmatically force a full garbage collection run in ActionScript 3.0?
Let's say I've created a bunch of Display objects with eventListeners and some of the DO's have been removed, some of the eventListeners have been triggered and removed etc... Is there a way to force garbage collection to run and collect everything that is available to be collected?
Yes, it's possible, but it is generally a bad idea. The GC should have a better idea of when is a good time to run than you do, and except for a very specific case (like you just used 500 MB of memory and need to get it back ASAP), you shouldn't call the GC yourself.
In Flash 10, there is a System.gc() method you can call (but please don't, see above); keep in mind System.gc() only works in the debugging version of Flash Player 10+.
In Flash 9, there is an unsupported way to force it via an odd LocalConnection command, but it may not work in all versions. See this post by Grant Skinner.
There is a new API for telling the GC that it might be a "relatively good moment" to collect.
See the Adobe API docs for
System.pauseForGCIfCollectionImminent
And also this Adobe blog post from shortly after the method was introduced in Player version 11
The method takes an "imminence" argument; basically, you feed in a low number (near 0.0) if you really want the collector to run, even if there has not been much activity (currently measured by bytes-allocated) since the last collection, and you feed in a large number (near 1.0) if you only want the collection pause to happen if we were already near the point where a collection would happen anyway.
The motivation here is for situations in e.g. games where you want to shift the point where GC's happen by a small amount, e.g. do the GC during a change of level in the game, rather than two seconds after the player started exploring the level.
One very important detail: This new API is supported by both the Release and the Debugger Flash Runtimes. This makes it superior to calling System.gc().
For all currently released versions, System.gc() only works in the debug version of the Flash player and ADL (the debug environment for AIR apps). Flash player 10 beta currently does work in all flavors.
I agree with Davr, it's a bad idea to do. The runtime will usually have a better idea than you do.
Plus, the specifics of how the garbage collector works is an implementation detail subject to change between flash player versions. So what works well today has no guarantee to work well in the future.
As others said: do not try to GC manually, there are hacks but it's not safe.
You should try recycling objects when you can - you'll save a lot of memory.
This can be applied for instance to BitmapDatas (clear and reuse), particles (remove from display and reuse).
I have a comment for those saying you should never run the GC manually. I'm used to manual memory management in C++ and I much prefer shared_ptr over GC, but anyway.
There is a specific case where I can't find another solution than doing a GC. Consider: I have a DataCache class that keeps result objects for certain method calls and sends out update events when refreshing/receiving data. The cache is refreshed by clearing all results from it and sending the event, which causes any remaining listeners to re-request their data; listeners that went out of scope should not re-request, which cleans out results that are no longer needed. But apparently, if I can't force all listeners that are still dangling (waiting for GC) to be cleaned up immediately before sending out the "ask for your data again" event, those dangling listeners will request data again unnecessarily. And since I can't call removeEventListener (AS3 doesn't have destructors), I can't see another easy solution than forcing a GC to make sure there are no dangling listeners anymore.
(Edit) On top of that, I cannot use removeEventListener anyway for bindings that were set up in MXML, for example (using my custom DataCacher class, which handles the remote object):
<mx:DataGrid id="mygrid" dataProvider="{DataCacher.instance().result('method').data}" ... />
When the popup window containing this datagrid is closed, you would expect the bindings to be destroyed. Apparently they live on and on. Hmm, shouldn't Flex destroy all bindings (meaning event listeners) of an object when it's marked for GC because the last reference is deleted? That would kind of solve the problem for me.
At least that's what I think; I'm still a beginner in Flex, so any thoughts would be appreciated.
try {
    // Connecting twice to the same name always throws; the attempt is an
    // unsupported trick that forces the Flash player to run a GC pass.
    new LocalConnection().connect('foo');
    new LocalConnection().connect('foo');
} catch (e:*) {
    trace("Forcing Garbage Collection: " + e.toString());
}
If you have to, calling the garbage collector can be useful. You have to be careful how and when you do it, but there is no doubt that there are times when it is necessary.
For example, if you have a modular app, when you change from one view to another, all the deleted objects can represent a large amount of memory that should be made available again as fast as possible; you just need to keep control of the variables and references you are disposing of.
Recycling doesn't really help. I used one Loader that repeatedly loaded the same JPG every 500 ms; Task Manager still reported a non-stop increase in memory.
There is a tried and proven solution here:
http://simplistika.com/as3-garbage-collection/
