Create DirectShow filter object in Expression Encoder SDK

I have a DirectShow filter that can be used by Expression Encoder.
Is it possible to create two objects of this filter using the Expression Encoder SDK (two projects)?
Thanks in advance.

The filter is instantiated each time an application needs it, so nothing prevents 2+ instances of your object. The only exception is an application that keeps track of devices in use and does not offer you the same device again (it may assume the device is real rather than virtual and cannot work in two pipelines simultaneously); your workaround then is to register 2+ virtual devices of the same type.
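As a minimal illustration (CLSID_MyVirtualCamera is a placeholder for your filter's real CLSID), nothing at the COM level stops two graphs from each holding their own instance of the same filter:

    #include <dshow.h>

    // Placeholder: replace with the real CLSID your filter is registered under.
    static const GUID CLSID_MyVirtualCamera =
        { 0x0, 0x0, 0x0, { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0 } };

    int main()
    {
        CoInitialize(NULL);

        // Two independent filter graphs, e.g. one per encoding job.
        IGraphBuilder *graph1 = NULL, *graph2 = NULL;
        CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                         IID_IGraphBuilder, (void**)&graph1);
        CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                         IID_IGraphBuilder, (void**)&graph2);

        // Two independent instances of the same filter CLSID.
        IBaseFilter *cam1 = NULL, *cam2 = NULL;
        CoCreateInstance(CLSID_MyVirtualCamera, NULL, CLSCTX_INPROC_SERVER,
                         IID_IBaseFilter, (void**)&cam1);
        CoCreateInstance(CLSID_MyVirtualCamera, NULL, CLSCTX_INPROC_SERVER,
                         IID_IBaseFilter, (void**)&cam2);

        if (graph1 && graph2 && cam1 && cam2) {
            graph1->AddFilter(cam1, L"Virtual camera #1");
            graph2->AddFilter(cam2, L"Virtual camera #2");
            // ... build the rest of each graph and run both ...
        }

        if (cam2)   cam2->Release();
        if (cam1)   cam1->Release();
        if (graph2) graph2->Release();
        if (graph1) graph1->Release();
        CoUninitialize();
        return 0;
    }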

Related

Media Foundation: multi-input MFT and topology connection order

The problem
I'm writing a custom MFT with two inputs and one output (it merges two video streams into one).
My MFT requires media types to be set on its inputs before it can provide an output type.
I've set up my topology by connecting two source nodes (they take different streams from an aggregate media source) to my transform node, and then an EVR to my single output.
When I start the media session, I see that the topology invokes SetInputType on the first input, and it succeeds.
But then it immediately tries to get an output type: first by calling GetOutputCurrentType on my MFT, which returns MF_E_TRANSFORM_TYPE_NOT_SET because it cannot provide one yet, and then by calling GetOutputAvailableType, which I made return MF_E_TRANSFORM_TYPE_NOT_SET as per the documentation ("You must set the input types before setting the output types"; I also tried returning some partial media types, but the result is the same).
And here's the problem: after that, the topology seems to give up on my MFT: it never calls SetInputType on the second input.
The question
How can I force the topology to set all input types before dealing with the output?
Read this: Multiple input
Under Windows 7 it doesn't work...
You can provide a custom media session, like I do in the MFNode project.
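For reference, the constraint that trips up the stock topology loader looks roughly like this inside a two-input MFT; the class and member names below are hypothetical, and validation is trimmed to a sketch:

    #include <mftransform.h>
    #include <mfapi.h>
    #include <mferror.h>
    #include <atlbase.h>

    // Fragment of a hypothetical two-input merging MFT; only the parts
    // relevant to type negotiation are shown.
    class CMergeMFT /* : public IMFTransform, ... */
    {
        CComPtr<IMFMediaType> m_inputType[2];

    public:
        HRESULT SetInputType(DWORD dwInputStreamID, IMFMediaType *pType, DWORD dwFlags)
        {
            if (dwInputStreamID >= 2)
                return MF_E_INVALIDSTREAMNUMBER;
            if (dwFlags & MFT_SET_TYPE_TEST_ONLY)
                return S_OK;                       // type checks omitted in this sketch
            m_inputType[dwInputStreamID] = pType;
            return S_OK;
        }

        HRESULT GetOutputAvailableType(DWORD dwOutputStreamID, DWORD dwTypeIndex,
                                       IMFMediaType **ppType)
        {
            if (dwOutputStreamID != 0)
                return MF_E_INVALIDSTREAMNUMBER;
            if (dwTypeIndex > 0)
                return MF_E_NO_MORE_TYPES;
            // The output type can only be proposed once BOTH inputs are known.
            // The stock topology loader sets input 0, asks for an output type,
            // gets this error and gives up before ever setting input 1.
            if (!m_inputType[0] || !m_inputType[1])
                return MF_E_TRANSFORM_TYPE_NOT_SET;
            // Real code would build the merged type here (copy attributes from
            // the inputs, adjust MF_MT_FRAME_SIZE, etc.).
            return MFCreateMediaType(ppType);
        }
    };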

How do I find ANY beacon using the AltBeacon android reference library?

I'm using the AltBeacon Android reference library for detecting beacons.
There is an option to configure the parser to detect non-AltBeacon beacons, e.g. Estimote (as described here), by adding a new BeaconParser (see this), which works a treat.
However, how do I allow it to detect ALL beacons of any UUID/format (AltBeacons, Estimotes, ROXIMITY, etc.)? I've tried no parsers, blank parameters, and omitting the "m:2-3=.." parameter. Nothing works.
Thanks
You can configure multiple parsers to be active at the same time so you can detect as many beacon types as you want simultaneously. But there is no magic expression that will detect them all.
Understand that the BeaconParser expression tells the library how to decode the raw bytes of a Bluetooth LE advertisement and convert it into identifiers and data fields. Each time a company comes up with a new beacon transmission format, a new parser format may be needed.
Because of intellectual property restrictions, the library cannot be preconfigured to detect proprietary beacons without permission. This is why you must get the community-provided expressions for each proprietary type.

Using OpenCL for multiple devices (multiple GPU)

Hello fellow StackOverflow users,
I have this problem: I have one very big image that I want to work on. My first idea is to divide the big image into a couple of sub-images and then send these sub-images to different GPUs. I don't use the Image object, because I don't work with the RGB values; I only use the brightness value to manipulate the image.
My questions are:
Can I use one context with multiple command queues, one per device? Or should I use a separate context with its own command queue for each device?
Can anyone give me an example or ideas on how to dynamically change the input data (sub-image data) when setting up the kernel arguments for each device? (I only know how to send the same input data to every device.)
For example, if I have more sub-images than GPUs, how can I distribute the sub-images across the GPUs?
Or maybe another, smarter approach?
I'd appreciate any help and ideas.
Thank you very much.
Use one context and many queues; the simple method is one queue per device.
Create one program and one kernel per device (all created from the same program). Then create different buffers (one per device) and set each kernel's argument to its own buffer. Now you have distinct kernel objects, and you can enqueue them in parallel with different arguments.
To distribute the jobs, simply use the event system: check whether a GPU is idle and queue the next job there.
I can provide a more detailed example with code, but as a general sketch this is the way to follow.
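A rough host-side sketch of that layout, assuming a kernel named process_brightness and an arbitrary sub-image size (error handling omitted):

    #include <CL/cl.h>
    #include <stdio.h>

    #define MAX_DEVICES 8

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id devices[MAX_DEVICES];
        cl_uint num_devices = 0;

        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, MAX_DEVICES, devices, &num_devices);

        // One context shared by all GPUs...
        cl_context ctx = clCreateContext(NULL, num_devices, devices, NULL, NULL, NULL);

        // ...and one command queue per device.
        cl_command_queue queues[MAX_DEVICES];
        for (cl_uint i = 0; i < num_devices; ++i)
            queues[i] = clCreateCommandQueue(ctx, devices[i], 0, NULL);

        // One program built for all devices; one kernel object per device so
        // each kernel can hold its own arguments.
        const char *src = "/* your brightness kernel source here */";
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, num_devices, devices, NULL, NULL, NULL);

        size_t tile_bytes = 1024 * 1024;   // size of one sub-image (assumption)
        cl_kernel kernels[MAX_DEVICES];
        cl_mem buffers[MAX_DEVICES];
        for (cl_uint i = 0; i < num_devices; ++i) {
            kernels[i] = clCreateKernel(prog, "process_brightness", NULL);
            buffers[i] = clCreateBuffer(ctx, CL_MEM_READ_WRITE, tile_bytes, NULL, NULL);
            clSetKernelArg(kernels[i], 0, sizeof(cl_mem), &buffers[i]);
        }

        // For each sub-image: write it into the buffer of a free device,
        // enqueue the kernel, read the result back. The events returned by the
        // clEnqueue* calls (poll them with clGetEventInfo) tell you which queue
        // has finished and is free to take the next sub-image.
        // clEnqueueWriteBuffer / clEnqueueNDRangeKernel / clEnqueueReadBuffer ...

        printf("Using %u GPU(s)\n", num_devices);
        return 0;
    }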
The AMD APP SDK has a few samples on multi-GPU handling. You should look at these two:
SimpleMultiDevice: shows how to create multiple command queues on a single context, with some performance results.
BinomialOptionMultiGPU: look at the loadBalancing method. It divides the buffer based on the compute units and max clock frequency of the available GPUs.
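That load-balancing idea amounts to weighting each GPU by its compute units and clock frequency and splitting the work proportionally; a hedged sketch (the SDK sample's exact arithmetic may differ):

    #include <CL/cl.h>

    // Splits image_rows across the devices in proportion to
    // compute_units * max_clock_frequency.
    void split_rows(const cl_device_id *devices, cl_uint num_devices,
                    size_t image_rows, size_t *rows_per_device)
    {
        size_t weights[16];   // enough for this sketch
        size_t total = 0;

        for (cl_uint i = 0; i < num_devices; ++i) {
            cl_uint cu = 0, mhz = 0;
            clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(cu), &cu, NULL);
            clGetDeviceInfo(devices[i], CL_DEVICE_MAX_CLOCK_FREQUENCY,
                            sizeof(mhz), &mhz, NULL);
            weights[i] = (size_t)cu * mhz;
            total += weights[i];
        }

        for (cl_uint i = 0; i < num_devices; ++i)
            rows_per_device[i] = image_rows * weights[i] / total;
        // Integer division may leave a few rows unassigned; hand the remainder
        // to the last device in real code.
    }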

Different approaches on getting captured video frames in DirectShow

I was using a callback mechanism to grab webcam frames in my media application. It worked, but it was slow due to certain additional buffer operations performed within the callback itself.
Now I am trying the other way to get frames: calling a method to grab a frame instead of using a callback. I used a sample on CodeProject that makes use of IVMRWindowlessControl9::GetCurrentImage.
I encountered the following issues.
With a Microsoft webcam, the preview didn't render (only a black screen) on Windows 7, but the same camera rendered the preview on XP.
My doubt here is: do the VMR-specific functionalities depend on the camera drivers on different platforms? Otherwise, how could this difference happen?
Wherever the sample application worked, I observed that the biBitCount member of the resulting BITMAPINFOHEADER structure is 32.
Is this a value set by the application, or a driver setting for VMR operations? How is it configured?
Finally, which is the best method to grab webcam frames: a callback approach or a direct approach?
Thanks in advance,
IVMRWindowlessControl9::GetCurrentImage is intended for occasional snapshots, not for regular image grabbing.
Quote from MSDN:
This method can be called at any time, no matter what state the filter
is in, whether running, stopped or paused. However, frequent calls to
this method will degrade video playback performance.
This method reads back from video memory, which is slow in the first place. It also performs a conversion (slow again) to the RGB color space, because that format is the most suitable for non-streaming apps and gives fewer compatibility issues.
All in all, you can use it for periodic image grabbing, but this is not what you are supposed to do. To capture at streaming rate you need to use a filter in the pipeline, or a Sample Grabber with a callback.
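For completeness, an occasional snapshot via this API looks roughly like the following, assuming you already hold an IVMRWindowlessControl9 pointer obtained from the VMR-9 filter:

    #include <dshow.h>
    #include <vmr9.h>

    // pWC is an IVMRWindowlessControl9 obtained from the VMR-9 filter.
    // GetCurrentImage returns a packed DIB (BITMAPINFOHEADER followed by the
    // pixel data) that the caller must free with CoTaskMemFree.
    HRESULT GrabSnapshot(IVMRWindowlessControl9 *pWC)
    {
        BYTE *pDib = NULL;
        HRESULT hr = pWC->GetCurrentImage(&pDib);
        if (FAILED(hr))
            return hr;

        BITMAPINFOHEADER *pBih = (BITMAPINFOHEADER*)pDib;
        // pBih->biWidth, biHeight and biBitCount describe the returned frame
        // (typically 32 bpp RGB); the pixels follow the header.
        // ... copy the bitmap or write it to a file here ...

        CoTaskMemFree(pDib);
        return S_OK;
    }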

DirectShow - Order of invocation of IAMStreamConfig::SetFormat and ICaptureGraphBuilder2::RenderStream creates issues in some video cameras

I have to configure my video camera's display resolution before capturing and processing the data. Initially I did it as follows:
1. Created all the necessary interfaces.
2. Added the camera and renderer filters.
3. Called RenderStream with the Capture and Preview pin categories.
4. Looped through the AM_MEDIA_TYPE structures and set the parameters.
This worked for a lot of cameras, but a few failed. Then I swapped steps 3 and 4 above, i.e. I set the parameters before calling RenderStream. This time the previously failing cases went through, but a few on-board cameras (e.g. on SONY VAIO laptops) seem to fail.
Now, my questions are
Which is the optimal and correct way to get and set the AM_MEDIA_TYPE parameters and then run the graph?
If different cameras need different orders, then an indication, obtainable through the camera's DirectShow interfaces, of which order is best for a particular camera would also serve my purpose.
Please help me with this at the earliest,
Thanks and regards,
Shiju
IAMStreamConfig::SetFormat needs to be used to set the capture format before the pin is connected and rendered. This way, the downstream chain of filters is built with the proper media types.
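In code, that ordering looks roughly like the sketch below; pBuilder (ICaptureGraphBuilder2), pCam (the capture filter) and the 1280x720 target are assumptions, error handling is omitted, and DeleteMediaType comes from the DirectShow base classes:

    #include <dshow.h>

    // Hypothetical helper: choose the capture format first, render afterwards.
    void ConfigureThenRender(ICaptureGraphBuilder2 *pBuilder, IBaseFilter *pCam)
    {
        IAMStreamConfig *pConfig = NULL;
        pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, pCam,
                                IID_IAMStreamConfig, (void**)&pConfig);

        // 1. Pick and set the capture format while the pin is still unconnected.
        int count = 0, size = 0;
        pConfig->GetNumberOfCapabilities(&count, &size);
        for (int i = 0; i < count; ++i) {
            AM_MEDIA_TYPE *pmt = NULL;
            VIDEO_STREAM_CONFIG_CAPS caps;
            if (FAILED(pConfig->GetStreamCaps(i, &pmt, (BYTE*)&caps)))
                continue;
            if (pmt->formattype == FORMAT_VideoInfo) {
                VIDEOINFOHEADER *vih = (VIDEOINFOHEADER*)pmt->pbFormat;
                if (vih->bmiHeader.biWidth == 1280 && vih->bmiHeader.biHeight == 720) {
                    pConfig->SetFormat(pmt);   // resolution chosen BEFORE rendering
                    DeleteMediaType(pmt);
                    break;
                }
            }
            DeleteMediaType(pmt);
        }
        pConfig->Release();

        // 2. Only now connect/render the preview and capture pins; the
        //    downstream filters get built against the format set above.
        pBuilder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, pCam, NULL, NULL);
        pBuilder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, pCam, NULL, NULL);
    }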

Resources