AMDh264Encoder returning MF_E_ATTRIBUTENOTFOUND when checking MFSampleExtension_CleanPoint - ms-media-foundation

When receiving an output from IMFTransform::ProcessOutput, calling GetUINT32(MFSampleExtension_CleanPoint) on the sample fails with MF_E_ATTRIBUTENOTFOUND, but only while using the AMDh264Encoder (NV12 in, H264 out). As a result, no keyframes are marked in the final output video, so it is corrupted.
Why does querying the MFSampleExtension_CleanPoint attribute fail with MF_E_ATTRIBUTENOTFOUND only on the AMDh264Encoder?

Video encoder MFTs are supplied by hardware vendors. AMD in particular provides "AMDh264Encoder" for its hardware and ships it with its video drivers.
For this reason, implementations from different vendors have slight differences, and AMD decided not to set this attribute on produced media samples.
You should handle this gracefully and treat the attribute as optional.
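If the attribute is absent, one fallback is to inspect the bitstream yourself. The sketch below is an illustration (the function name is mine, not part of any API); it assumes the encoder emits an H.264 Annex B byte stream, as Media Foundation H.264 encoders do, and scans the sample payload for an IDR slice, which is what a clean point corresponds to:

```cpp
#include <cstddef>
#include <cstdint>

// Scan an H.264 Annex B byte stream for an IDR slice (nal_unit_type == 5),
// which marks a clean point / keyframe. Portable illustration; on Windows
// you would run it over the bytes obtained via IMFMediaBuffer::Lock.
bool ContainsIdrNal(const uint8_t* data, size_t size) {
    for (size_t i = 0; i + 3 < size; ++i) {
        // Annex B start code: 00 00 01 (the 00 00 00 01 form also matches here).
        if (data[i] == 0x00 && data[i + 1] == 0x00 && data[i + 2] == 0x01) {
            uint8_t nalType = data[i + 3] & 0x1F; // low 5 bits of the NAL header
            if (nalType == 5) return true;        // IDR slice: keyframe
            i += 2; // skip past this start code
        }
    }
    return false;
}
```

A sample whose GetUINT32(MFSampleExtension_CleanPoint) returns MF_E_ATTRIBUTENOTFOUND can then still be classified by running this over its contiguous buffer.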

Related

How to detect what underlying VGA Video Mixing Render 7 is using?

We have an issue where specific combinations of filters including VMR7 cause frames to not be rendered correctly. We noticed it only happens with a certain GPU card on some driver versions.
We are trying to implement a workaround (with some overhead) only for that GPU. Is there any way to know the underlying VGA card associated with the VMR7?
I've found the answer to my question.
There is a monitor-related information interface, IVMRMonitorConfig, which can be queried from the VMR7 filter to ask for information about the associated device.
https://msdn.microsoft.com/en-us/library/windows/desktop/dd390488(v=vs.85).aspx
HRESULT IVMRMonitorConfig::GetAvailableMonitors(
  [out] VMRMONITORINFO *pInfo,
  [in]  DWORD          dwMaxInfoArraySize,
  [out] DWORD          *pdwNumDevices
);
I can recognize the specific VGA card by a keyword in the VMRMONITORINFO::szDevice or VMRMONITORINFO::szDescription string.

How do I find ANY beacon using the AltBeacon android reference library?

I'm using the altbeacon android reference library for detecting beacons.
There is an option to configure the parser to detect other non-altbeacon beacons e.g. Estimote (as described here) by adding a new BeaconParser (see this) which works a treat.
However, how do I allow it to detect ALL beacons of any UUID/format (altbeacons, estimotes, roximity etc)? I've tried no parsers, blank parameters and without the "m:2-3=.." parameter. Nothing works.
Thanks
You can configure multiple parsers to be active at the same time so you can detect as many beacon types as you want simultaneously. But there is no magic expression that will detect them all.
Understand that the BeaconParser expression tells the library how to decode the raw bytes of a Bluetooth LE advertisement and convert it into identifiers and data fields. Each time a company comes up with a new beacon transmission format, a new parser format may be needed.
Because of intellectual property restrictions, the library cannot be preconfigured to detect proprietary beacons without permission. This is why you must obtain the community-provided parser expressions for each proprietary type separately.

Endianness and OpenCL Transfers

In OpenCL, transfer from CPU client side to GPU server side is accomplished through clEnqueueReadBuffer(...)/clEnqueueWriteBuffer(...). However, the documentation does not specify whether any endian-related conversions take place in the underlying driver.
I'm developing on x86-64 with an NVIDIA card, both little-endian, so the potential problem doesn't arise for me.
Does conversion happen, or do I need to do it myself?
The transfers do not do any conversions; the runtime does not know the type of your data.
You can probably expect conversions only on kernel arguments.
You can query the device endianness (using clGetDeviceInfo to check CL_DEVICE_ENDIAN_LITTLE), but I am not aware of a way to get transparent conversions.
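If the device's CL_DEVICE_ENDIAN_LITTLE differs from the host's byte order, the conversion is your job before clEnqueueWriteBuffer and after clEnqueueReadBuffer. A minimal, portable sketch of the idea (the clGetDeviceInfo query itself is omitted; the helper names are my own):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Detect host byte order at runtime, without relying on compiler macros.
bool HostIsLittleEndian() {
    uint16_t probe = 1;
    uint8_t first;
    std::memcpy(&first, &probe, 1);
    return first == 1;
}

// Byte-swap every 32-bit element in place; call this on the host buffer
// when host and device byte order disagree.
void SwapU32InPlace(std::vector<uint32_t>& buf) {
    for (auto& v : buf) {
        v = (v >> 24) | ((v >> 8) & 0x0000FF00u) |
            ((v << 8) & 0x00FF0000u) | (v << 24);
    }
}
```

The same swap would have to be applied per element type (16-, 32-, or 64-bit), which is exactly why the runtime cannot do it for you: it sees only untyped bytes.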
This is the point where, IMHO, the specification is not satisfactory.
At first it is clear about pointers: data that a pointer references can be in host or device byte order, one can declare this with a pointer attribute, and the default byte order is that of the device.
So according to this, developers have to take care of the endianness of the data they feed as input to a kernel.
But then "Appendix B - Portability" says that implementations may or may not automatically convert the endianness of kernel arguments, and that developers should consult the vendor's documentation in case host and device byte order differ.
Sorry for being so direct, but that is a mess. The intention of the OpenXX specifications is to make writing cross-platform code possible, but when such significant aspects can vary from implementation to implementation, that is hardly achievable.
The next point is what all this means for OpenCL/OpenGL interoperation.
In OpenGL, data for buffer objects like VBOs has to be in host byte order. So what about the case where such a buffer is shared between OpenCL and OpenGL? Must its data be transformed before and after it is processed by an OpenCL kernel, or not?

Why does GetDeliveryBuffer block with an INTERLEAVE_CAPTURE mode AVI Mux?

I'm trying to use a customized filter to receive video and audio data from a RTSP stream, and deliver samples downstream the graph.
It seems that this filter was modified from the SDK source.cpp sample (CSource), implementing two output pins for audio and video.
When the filter is directly connected to an avi mux filter with INTERLEAVE_NONE mode, it works fine.
However, when the interleave mode of avi mux is set to INTERLEAVE_CAPTURE,
the video output pin hangs in the GetDeliveryBuffer call (in DoBufferProcessingLoop) of this filter after several samples have been sent,
while the audio output pin still works well.
Moreover, when I inserted an infinite pin tee filter into one of the paths between the AVI mux and this source filter,
the graph arbitrarily went into the stop state after some samples had been sent (one to three samples or so).
And when I put a filter that is just an empty transform-in-place filter doing nothing after the infinite tee,
the graph went back to the first case: it never stops, but hangs in GetDeliveryBuffer.
(Here is an image that shows the connections I've mentioned like)
So here are my questions:
1: What could be the reasons that the video output pin hangs in GetDeliveryBuffer?
My guess is that the AVI mux holds these sample buffers and does not release them until there are enough of them for interleaving,
but even when I set the number of video buffers to 30 in DecideBufferSize, it still hangs. If the reason is indeed like that, how do I decide the buffer size of the pin for a downstream AVI muxer?
Creating more than 50 buffers on a video pin is likely not guaranteed to work, because the memory cannot be promised. :(
2: Why does the graph go into the stop state when the infinite pin tee is inserted? And why can a no-op filter overcome it?
Any answer or suggestion is appreciated. Or hope someone just give me some directions. Thanks.
A blocked GetDeliveryBuffer means that the allocator you are requesting a buffer from does not [yet] have anything for you: all media samples are outstanding and have not yet been returned to the allocator.
An obvious workaround is to request more buffers at the pin connection and memory allocator negotiation stage. However, this just postpones the issue, which can reappear later for the very same reason.
A typical issue with the topology in question is threading. A multiplexer filter with two inputs has to match the input streams to produce a joint file. Quite often at runtime it will be holding media samples on one leg while expecting more media samples to arrive on the other leg on another thread. It assumes that the upstream branches providing media samples run independently, so that a lock on one leg does not lock the other. This is why a multiplexer can freely both block IMemInputPin::Receive calls and hold media samples inside. In the topology above it is not clear how exactly the source filter handles threading. The fact that it has two pins makes me suspect it has threading issues and does not take into account that there might be a lock downstream in the multiplexer.
Presumably the source filter is yours and you have its source code. You want to make sure the audio pin sends media samples on a separate thread, for example through an asynchronous queue.
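A minimal sketch of such an asynchronous queue, as a bounded producer/consumer buffer in plain standard C++ (not DirectShow code; the Sample type parameter is a placeholder for whatever your pin delivers). The pin's processing thread pushes, a dedicated delivery thread pops and calls Deliver, so a downstream lock on one leg cannot stall the other:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Bounded thread-safe queue decoupling sample production from delivery.
template <typename Sample>
class DeliveryQueue {
public:
    explicit DeliveryQueue(size_t cap) : cap_(cap) {}

    // Called from the producing thread; blocks only if the queue is full.
    void Push(Sample s) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(std::move(s));
        notEmpty_.notify_one();
    }

    // Called from the dedicated delivery thread; blocks until a sample exists.
    Sample Pop() {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [&] { return !q_.empty(); });
        Sample s = std::move(q_.front());
        q_.pop();
        notFull_.notify_one();
        return s;
    }

private:
    size_t cap_;
    std::queue<Sample> q_;
    std::mutex m_;
    std::condition_variable notEmpty_, notFull_;
};
```

With this in place, a multiplexer holding the video leg blocks only the video delivery thread; the audio thread keeps filling and draining its own queue independently.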

Different approaches on getting captured video frames in DirectShow

I was using a callback mechanism to grab the webcam frames in my media application. It worked, but was slow due to certain additional buffer functions that were performed within the callback itself.
Now I am trying the other way to get frames. That is, call a method and grab the frame (instead of callback). I used a sample in CodeProject which makes use of IVMRWindowlessControl9::GetCurrentImage.
I encountered the following issues.
With a Microsoft webcam, the preview didn't render (only a black screen) on Windows 7, but the same camera rendered the preview on XP.
My doubt here is: do the VMR-specific functionalities depend on camera drivers on different platforms? Otherwise, how could this difference happen?
Wherever the sample application worked, I observed that the biBitCount member of the resulting BITMAPINFOHEADER structure is 32.
Is this a value set by the application, or a driver setting for VMR operations? How is it configured?
Finally, which is the best method to grab the webcam frames? A callback approach? Or a Direct approach?
Thanks in advance,
IVMRWindowlessControl9::GetCurrentImage is intended for occasional snapshots, not for regular image grabbing.
Quote from MSDN:
This method can be called at any time, no matter what state the filter
is in, whether running, stopped or paused. However, frequent calls to
this method will degrade video playback performance.
This method reads back from video memory, which is slow in the first place. It also performs a conversion (slow again) to the RGB color space, because this format is the most suitable for non-streaming apps and gives fewer compatibility issues.
All in all, you can use it for periodic image grabbing, but this is not what you are supposed to do. To capture at streaming rate you need to use a filter in the pipeline, or a Sample Grabber with a callback.