SampleGrabber BufferLen size variation in C# and VB.NET - directshow

Is there any reason why the size of BufferLen in ISampleGrabberCB.BufferCB with the media subtype not set should vary if called from different programming languages?
I have a C# app and a VB.NET app that both run a graph as quickly as possible with the clock turned off and no media subtype set on the SampleGrabber. The code is identical. In the C# app, the size of BufferLen is different every time a sample passes through the grabber (as you'd expect). In the VB.NET app, BufferLen is a constant fixed value.
When running a 1280 x 720 video through the graph, for example, the size of BufferLen in the VB code is always 1,382,400 (which sort of makes sense as the output pin on the video decoder is showing a 12-bit NV12 format). In the C# code, the size of BufferLen varies wildly between low and high values.
Does anyone know why this happens?

The variable buffer length indicates that you're getting compressed video. I can't explain why that might be, though. Is one version running as admin, or as a 64-bit process?

Problem solved. The graphs are not identical, in fact: in the C# app, the AVI splitter is connected directly to the SampleGrabber; in the VB app, an unexpected video decoder is sitting between the two. It turns out that the VB code is rendering the graph before tearing it down and rebuilding it, which means the media subtype has already been set. Thanks to you both for your help.
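For anyone who hits the same thing: a quick way to confirm what the grabber is really being fed is to query its connected media type once the graph is built, and to set a partial media type up front if you want to force uncompressed samples. A minimal C++ sketch (the same ISampleGrabber calls exist in the .NET wrappers; pGrabber is assumed to be your grabber's ISampleGrabber interface):

// Check what the SampleGrabber's input pin actually negotiated.
AM_MEDIA_TYPE mt = {};
if (SUCCEEDED(pGrabber->GetConnectedMediaType(&mt)))
{
    // mt.subtype is what the upstream pin delivers: an uncompressed subtype
    // such as NV12 means a decoder sits in front of the grabber, while a
    // compressed subtype means the splitter connected to it directly.
    if (mt.pbFormat) CoTaskMemFree(mt.pbFormat);   // free the format block
    if (mt.pUnk)     mt.pUnk->Release();
}

// To force uncompressed samples (and therefore a decoder upstream), set a
// partial media type on the grabber before the graph is connected:
AM_MEDIA_TYPE wanted = {};
wanted.majortype = MEDIATYPE_Video;
wanted.subtype   = MEDIASUBTYPE_RGB32;   // or e.g. the NV12 subtype seen in the VB build
pGrabber->SetMediaType(&wanted);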

Related

Design of compression using OpenCL on FPGA, memory allocation

I am trying to implement lossy compression algorithms using OpenCL. I have divided the work into two kernels: one implements the algorithm itself and the other one does the encoding, meaning that I concatenate different-sized bytes into the stream of compressed data. The problem is that I am getting a high initiation interval due to inefficient memory accesses, and the encoding part takes too much time. When I opened the HTML report, I got the following message:
stallable, 13 reads and 25 writes.
Reduce the number of write accesses or fix banking to make this memory system stall-free. Banking may be improved by using compile-time known indexing on lowest array dimension.
Banked on bits 0, 1 into 4 separate banks.
Private memory implemented in on-chip block RAM.
My question is: how can I improve the allocation of the stream? The way I do it is as follows:
unsigned char __attribute__((numbanks(4),bankwidth(8))) out[outsize]; // private on-chip output buffer, split into 4 banks, each 8 bytes wide
but it is inefficient. Is there any technique or way that I can use for better utilization?
The way I do the encoding is that I append bytes while monitoring the index of the last modified bit and byte, so I am XOR-ing the new bits in, and because sometimes I get more than one byte and sometimes less than one byte, I work byte by byte.
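To make that description concrete, here is a plain C++ sketch (host-style code, not the OpenCL kernel and not my original code) of this kind of byte-by-byte bit packing; all names are illustrative:

// Append the low 'nbits' bits of 'code' to an output stream, MSB first,
// while tracking which bit of which byte was written last. The output
// buffer is assumed to be zero-initialized, so OR-ing new bits in is
// equivalent to the XOR-ing described above.
#include <cstdint>
#include <cstddef>

struct BitStream {
    uint8_t *out;       // output buffer (zero-initialized)
    size_t   bytePos;   // index of the byte currently being filled
    unsigned bitPos;    // bits already used in that byte (0..7)
};

static void appendBits(BitStream &bs, uint32_t code, unsigned nbits)
{
    while (nbits > 0) {
        unsigned space = 8 - bs.bitPos;               // free bits in this byte
        unsigned take  = (nbits < space) ? nbits : space;
        // take the most significant 'take' of the remaining 'nbits' bits
        uint8_t chunk = (uint8_t)((code >> (nbits - take)) & ((1u << take) - 1));
        bs.out[bs.bytePos] |= (uint8_t)(chunk << (space - take));
        bs.bitPos += take;
        nbits     -= take;
        if (bs.bitPos == 8) { bs.bitPos = 0; ++bs.bytePos; }
    }
}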

Different approaches on getting captured video frames in DirectShow

I was using a callback mechanism to grab the webcam frames in my media application. It worked, but was slow due to certain additional buffer functions that were performed within the callback itself.
Now I am trying the other way to get frames: calling a method to grab a frame on demand (instead of a callback). I used a CodeProject sample which makes use of IVMRWindowlessControl9::GetCurrentImage.
I encountered the following issues.
With a Microsoft webcam, the preview didn't render (only a black screen) on Windows 7, but the same camera rendered the preview fine on XP.
My doubt here is: do the VMR-specific functionalities depend on the camera drivers on different platforms? Otherwise, how could this difference happen?
Wherever the sample application worked, I observed that the biBitCount member of the resulting BITMAPINFOHEADER structure is 32.
Is this a value set by the application, or a driver setting for VMR operations? How is this configured?
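The grab in that sample boils down to roughly the following (simplified, not the exact CodeProject code; pWC stands for the IVMRWindowlessControl9 obtained from the VMR9 filter):

// GetCurrentImage returns a packed DIB that the caller must free.
BYTE *pDib = NULL;
HRESULT hr = pWC->GetCurrentImage(&pDib);
if (SUCCEEDED(hr) && pDib)
{
    const BITMAPINFOHEADER *bih = (const BITMAPINFOHEADER *)pDib;
    // This is where I see biBitCount == 32; with BI_RGB and 32 bpp there is
    // no palette, so the pixel bits start right after the header.
    const BYTE *pixels = pDib + bih->biSize;
    // ... copy/convert the pixels as needed ...
    CoTaskMemFree(pDib);
}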
Finally, which is the best method to grab webcam frames: a callback approach or a direct approach?
Thanks in advance,

IVMRWindowlessControl9::GetCurrentImage is intended for occasional snapshots, not for regular image grabbing.
Quote from MSDN:
This method can be called at any time, no matter what state the filter is in, whether running, stopped or paused. However, frequent calls to this method will degrade video playback performance.
This method reads back from video memory, which is slow in the first place. It also performs a conversion (slow again) to the RGB color space, because that format is the most suitable for non-streaming apps and causes the fewest compatibility issues.
All in all, you can use it for periodic image grabbing, but this is not what you are supposed to do. To capture at streaming rate you need to use a filter in the pipeline, or a Sample Grabber with a callback.
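For reference, the callback route looks roughly like this in C++ (a minimal sketch against the qedit.h Sample Grabber, not production code):

// Minimal ISampleGrabberCB implementation; the grabber calls BufferCB for
// every sample that flows through it.
class FrameSink : public ISampleGrabberCB
{
public:
    // IUnknown boilerplate (a fixed ref count is fine for a member/stack object)
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) {
            *ppv = static_cast<ISampleGrabberCB *>(this);
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }

    // Called with a pointer to the raw frame data of each sample.
    STDMETHODIMP BufferCB(double sampleTime, BYTE *pBuffer, long bufferLen)
    {
        // Copy pBuffer out quickly and return; heavy work done here stalls
        // the graph, which is what makes a callback feel slow.
        return S_OK;
    }
    STDMETHODIMP SampleCB(double, IMediaSample *) { return E_NOTIMPL; }
};

// Wiring it up (pGrabber is the graph's ISampleGrabber, sink a FrameSink):
//   pGrabber->SetCallback(&sink, 1);   // 1 = use BufferCB, 0 = use SampleCB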

IMediaSample(DirectShow) to IDirect3DSurface9/IMFSample(MediaFoundation)

I am working on a custom video player. I am using a mix of DirectShow/Media Foundation in my architecture. Basically, I'm using DS to grab VOB frames (unsupported by MF). I am able to get a sample from DirectShow but am stuck on passing it to the renderer. In MF, I get an IDirect3DSurface9 (from the IMFSample) and present it on the back buffer using the IDirect3DDevice9.
Using DirectShow, I'm getting an IMediaSample as my data buffer object. I don't know how to convert this and pass it on as an IMFSample. I found others getting bitmap info from the sample and using GDI+ to render, but my video data may not always be RGB. I would like to get an IDirect3DSurface9, or maybe an IMFSample, from the IMediaSample and pass it on for rendering, so that I don't have to bother with color space conversion.
I'm new to this. Please correct me if I'm going wrong.
Thanks

The IMediaSample you get from the upstream decoder in DirectShow is nothing but a wrapper over a memory-backed buffer. There is not, and cannot be, a D3D surface behind it (unless you take care of that on your own and provide a custom allocator, in which case you would not be asking this question in the first place). Hence, you have to memory-copy the data from this buffer into an MF sample buffer.
That brings you to the question of making the buffer formats (media types) match, so that you can copy without conversion. One way (and there may be a few) is to first establish the MF pipeline and find out exactly which pixel format the buffers on the video hardware offer you. Then make sure you have the same pixel format and media type in the DirectShow pipeline, by initializing the grabber accordingly, by inserting color space conversion filters, or via a color space conversion DMO/MFT.
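Once the formats match, the copy itself is straightforward. A sketch of the idea (C++, needs mfapi.h/mfidl.h and mfplat.lib; error handling trimmed, names illustrative):

// Wrap the bytes of a DirectShow IMediaSample into a new Media Foundation
// IMFSample by copying them into a system-memory IMFMediaBuffer.
HRESULT CopyToMFSample(IMediaSample *pDsSample, IMFSample **ppMfSample)
{
    *ppMfSample = NULL;

    BYTE *pSrc = NULL;
    HRESULT hr = pDsSample->GetPointer(&pSrc);
    if (FAILED(hr)) return hr;
    const DWORD cb = (DWORD)pDsSample->GetActualDataLength();

    IMFMediaBuffer *pBuffer = NULL;
    hr = MFCreateMemoryBuffer(cb, &pBuffer);
    if (FAILED(hr)) return hr;

    BYTE *pDst = NULL;
    hr = pBuffer->Lock(&pDst, NULL, NULL);
    if (SUCCEEDED(hr)) {
        memcpy(pDst, pSrc, cb);
        pBuffer->Unlock();
        hr = pBuffer->SetCurrentLength(cb);
    }

    IMFSample *pSample = NULL;
    if (SUCCEEDED(hr)) hr = MFCreateSample(&pSample);
    if (SUCCEEDED(hr)) hr = pSample->AddBuffer(pBuffer);
    // IMFSample::SetSampleTime/SetSampleDuration take 100-ns units, the same
    // units IMediaSample::GetTime reports, so timestamps can be carried over.

    pBuffer->Release();
    if (FAILED(hr) && pSample) { pSample->Release(); pSample = NULL; }
    *ppMfSample = pSample;
    return hr;
}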

High Resolution Capture and Encoding

I'm using two custom push filters to inject audio and video (uncompressed RGB) into a DirectShow graph. I'm making a video capture application, so I'd like to encode the frames as they come in and store them in a file.
Up until now, I've used the ASF Writer to encode the input to a WMV file, but it appears the renderer is too slow to process high resolution input (such as 1920x1200x32). At least, FillBuffer() seems to only be able to process around 6-15 FPS, which obviously isn't fast enough.
I've tried increasing the cBuffers count in DecideBufferSize(), but that only pushes the problem to a later point, of course.
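For reference, this is roughly where that count lives when the pin follows the SDK's CSourceStream/PushSource pattern (illustrative class name and values, not my original code):

// Sketch of a DecideBufferSize override in a CSourceStream-derived push pin.
HRESULT CMyPushPin::DecideBufferSize(IMemAllocator *pAlloc,
                                     ALLOCATOR_PROPERTIES *pRequest)
{
    CAutoLock lock(m_pFilter->pStateLock());

    if (pRequest->cBuffers < 4)
        pRequest->cBuffers = 4;            // a deeper queue only hides, not fixes,
                                           // a downstream filter that is too slow
    pRequest->cbBuffer = 1920 * 1200 * 4;  // one uncompressed RGB32 frame

    ALLOCATOR_PROPERTIES actual;
    HRESULT hr = pAlloc->SetProperties(pRequest, &actual);
    if (FAILED(hr)) return hr;
    return (actual.cbBuffer < pRequest->cbBuffer) ? E_FAIL : S_OK;
}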
What are my options to speed up the process? What's the right way to do live high-res encoding via DirectShow? I eventually want to end up with a WMV video, but maybe that has to be a post-processing step.

You have great answers posted to your question here already: High resolution capture and encoding too slow. The task is too demanding for the CPU in your system, which is simply not fast enough to perform real-time video encoding in the configuration you have set up.

What size of buffer is best for uploading a file to the internet

I'm using the HTTP API provided by MS to upload video to YouTube, and I noticed that the total elapsed time differs with different buffer sizes. What size of buffer is best for uploading a file to the internet? Thanks in advance.

Try it out. It depends on your network speed and other settings. If there were one optimal size, it would have been preconfigured.

The right one?
TCP/IP has a lot of self-tuning functionality built in (although by default window scaling is disabled). If you are seeing different behaviour with different application-level buffers then this is most likely due to anomalies within the application code. If the code is closed-source then you can only ever do black-box testing to find the optimal behaviour. However, at a guess, it sounds like the source reads are delayed until the buffer is empty. Try using rotating buffers with a pre-fetch (sketched after the steps below), e.g.
i) read X bytes into buffer 1
ii) start writing buffer 1 to the output in a separate thread
iii) read X bytes into buffer 2
iv) when the thread created in ii returns, swap the buffers around and repeat steps from ii
C.
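A minimal C++ sketch of those steps; readChunk and sendChunk are placeholders for whatever file-read and HTTP-upload calls the application actually uses, not a real API:

#include <cstddef>
#include <thread>
#include <vector>

size_t readChunk(char *dst, size_t max);          // returns bytes read, 0 at EOF
void   sendChunk(const char *src, size_t len);    // blocks until sent

void uploadDoubleBuffered(size_t chunkSize)
{
    std::vector<char> bufA(chunkSize), bufB(chunkSize);
    char *readBuf = bufA.data(), *sendBuf = bufB.data();

    size_t n = readChunk(readBuf, chunkSize);        // i)  fill buffer 1
    while (n > 0) {
        std::swap(readBuf, sendBuf);                 //     what was just read is sent next
        std::thread sender(sendChunk, sendBuf, n);   // ii) send in a separate thread
        size_t next = readChunk(readBuf, chunkSize); // iii) read the next chunk meanwhile
        sender.join();                               // iv) wait, swap again, repeat
        n = next;
    }
}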
