Any way to distinguish between virtual and physical camera - directshow

Using DirectShow.NET I have developed an application which grabs pictures from a camera and saves them to disk. Everything works fine, but if a virtual camera is registered (installed), e.g. CyberLink YouCam, then DirectShow receives the following frame (see the GraphStudio screenshot).
To avoid this, I want to detect whether each video device found by FilterCategory.VideoInputDevice is a virtual or a physical webcam. Is there any way to distinguish between the two?

Physical cameras are implemented by WDM Video Capture Filter. Virtual cameras mimic those, some better and some worse.
Virtual cameras implemented without a driver typically do not implement some of the interfaces of WDM Video Capture Filter. Those implemented via a driver can be filtered out (blacklisted) by their hardware path. The task is harder with the latter, because WDM Video Capture Filter wraps such a driver and produces a filter very similar to a physical device's, but the hardware path should still reveal its virtual nature.

Use the following piece of code while iterating your filterInfo collection:
if (filterInfo.MonikerString.StartsWith("@device:pnp:\\\\?\\root", StringComparison.OrdinalIgnoreCase))
{
    // root-enumerated software device: virtual camera found
}
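For a fuller sketch, the same hardware-path heuristic can be applied while enumerating devices with DirectShowLib's DsDevice helper. Note this is a heuristic only: well-behaved virtual drivers usually show up under "\\?\root", but nothing stops one from presenting a hardware-like path.

using System;
using DirectShowLib;  // DirectShow.NET (DirectShowLib)

class CameraProbe
{
    static void Main()
    {
        foreach (DsDevice dev in DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice))
        {
            // Guard against a missing device path, just in case.
            string path;
            try { path = dev.DevicePath ?? string.Empty; }
            catch { path = string.Empty; }

            // Driverless virtual cameras register software monikers ("@device:sw:"),
            // root-enumerated virtual drivers carry "\\?\root" in the device path,
            // and real hardware typically shows "\\?\usb" or "\\?\pci" instead.
            bool looksVirtual =
                path.StartsWith("@device:sw:", StringComparison.OrdinalIgnoreCase) ||
                path.IndexOf(@"\\?\root", StringComparison.OrdinalIgnoreCase) >= 0;

            Console.WriteLine("{0} -> {1}", dev.Name, looksVirtual ? "virtual" : "physical");
        }
    }
}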

Related

Integrate custom device into Google Home

My idea is to have individually addressable RGBW LED strips in all my rooms. For the sake of practice and interest, I do not simply want to buy some controller; I want to start this project with some custom self-built infrastructure, consisting of some Arduinos and/or Raspberry Pis. My initial idea was to just set up a simple local server on a Raspberry Pi (which controls the Arduinos connected to the LEDs) and build myself an app to control the lighting. That part is clear to me and should not be a problem, but I thought it might be a plus to integrate my devices directly into Google Home so I do not need any extra app.
I read through the Smart Home Platform documentation, but things are not 100% clear to me. I read about requirements like a public OAuth2 server. I was wondering if it is possible to get this working without setting up any server that has to be publicly reachable, because otherwise I won't waste time on that topic.
If you want to control your room devices from a smartphone and are satisfied with local operation from a few meters away, then you should consider BLE on the phone and the devices.
Obviously, you would need to write your own app, but luckily with BLE you can use publicly available apps such as LightBlue for the dev phase and maybe even for later use (I have not looked into that lately).

Enabling two apps to use a single sound device

I have:
A USB sound card, which ALSA sees as "Device" or "hw:1,0"
Asterisk console configured to use "plughw:1,0"
This works, letting me use the USB Sound for making and receiving voice calls via Asterisk.
I also want to use multimon to decode DTMF tones during the call. If I stop Asterisk I can run "aoss multimon -T DTMF" to decode the tones successfully but in order to do so I had to create an /etc/asoundrc file like so:
pcm.dsp0 { type plug slave.pcm "hw:1,0" }
Starting Asterisk, which grabs "plughw:1,0", means I get an error when trying to run multimon. I believe this is because only one application can access an ALSA device at any one time.
I think I need to split hw:1,0 into two new ALSA devices, which I have been trying to do using ALSA plugins (dmix/multi), but I'm afraid I can't get my head around how to configure them!
p.s. I want to use multimon because I also have other use cases on the same setup for decoding tones other than just DTMF.
As @CL pointed out, you could use dsnoop for analysing the audio through multimon. The following extract is taken from Basic Virtual PCM Devices for Playback/Capture, ALSA | nairobi-embedded:
The dmix[:$CARD...] and dsnoop[:$CARD...] Virtual PCM Devices
A limitation with the hw[:$CARD...] and plughw[$CARD...] virtual PCM devices (on systems with no hardware mixing) is that only one application at a time can access an audio stream. The dmix[:$CARD...] (playback) and dsnoop[:$CARD...] (capture) virtual PCM devices allow mixing or sharing, respectively, of a single stream among several applications without the intervention of a sound server, i.e. these devices use native ALSA library based mechanisms.
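Applied to this question, a minimal (untested) /etc/asound.conf sketch could share card 1 between Asterisk and multimon: dsnoop duplicates the capture stream, dmix mixes playback, and an asym device ties the two together.

pcm.shared {
    type asym
    playback.pcm "dmix:1,0"    # mix playback streams from several apps
    capture.pcm "dsnoop:1,0"   # duplicate the capture stream to several apps
}

pcm.asterisk {
    type plug
    slave.pcm "shared"         # point Asterisk here instead of "plughw:1,0"
}

pcm.dsp0 {
    type plug
    slave.pcm "shared"         # aoss/multimon keeps using dsp0, now shareable
}

Asterisk would then be configured to open "asterisk" rather than "plughw:1,0", while "aoss multimon -T DTMF" keeps working through dsp0, with both receiving copies of the same capture stream.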

Twain driver concurrent requests

Is it possible to use one TWAIN driver to manage concurrent requests to two different multifunction printers?
I mean, if I have two MFPs, can I run two scan requests in parallel using the same TWAIN driver?
It depends on whether your driver supports it.
From the TWAIN Spec page 125:
If an application attempts to connect to a Source that only supports a single connection when the source is already opened, the Source should respond with TWRC_FAILURE and TWCC_MAXCONNECTIONS.
Also from the spec on page 212:
The Source is responsible for managing this, not the Source Manager (the Source Manager does not know in advance how many connections the Source will support).
I tested this with a Fujitsu fi-7260 scanner and got the TWCC_MAXCONNECTIONS error with Twacker.
It could be possible, the reason being that TWAIN just sits between the application and the images fed to it.
Imagine a scenario along the following lines:
1) The user clicks the scan button.
2) You initiate the network-layer calls to start the scan job.
3) Instead of one printer, you start scan jobs on two printers from two threads.
4) Let's say each of those threads populates the raw BMP data into a single shared data structure.
5) Once both threads are complete, iterate over that shared data structure and pass the images to the application via the XFERIMAGE call.
The basic idea is to create an abstraction of the two printers behind the scenes; a rough sketch follows below.
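All helper names in this sketch are hypothetical stand-ins rather than TWAIN API calls; it only illustrates the two-threads-plus-shared-structure arrangement:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ParallelScan
{
    // Hypothetical stand-in for the network-layer call: runs one scan job on
    // the given MFP and returns the raw BMP data of each scanned page.
    static byte[][] ScanFrom(string printerAddress)
    {
        return new byte[0][]; // real implementation goes here
    }

    // Hypothetical hook that hands one image to the application
    // (e.g. from your data source's XFERIMAGE handling).
    static void PassToApplication(byte[] bmp) { }

    static void Main()
    {
        var pages = new ConcurrentQueue<byte[]>(); // shared structure from step 4

        // Step 3: one scan job per printer, each on its own thread.
        Task[] jobs =
        {
            Task.Run(() => { foreach (var p in ScanFrom("192.168.0.10")) pages.Enqueue(p); }),
            Task.Run(() => { foreach (var p in ScanFrom("192.168.0.11")) pages.Enqueue(p); })
        };
        Task.WaitAll(jobs);

        // Step 5: iterate the shared structure and pass images upward.
        while (pages.TryDequeue(out var page))
            PassToApplication(page);
    }
}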
Please let me know if my understanding of your question was incorrect or if you need further clarification.
If you implement it in the described way, it usually works only with two different MFPs, as the majority of TWAIN drivers do not support two identical USB devices at the same time.

How to create a local virtual IP camera that can be accessed from other software

I need to create several local virtual IP cameras for a project I'm making. I have tried several programs, and the closest I have gotten was with Magic Camera, because it would let me create a virtual camera, but it won't let me assign a source to that camera. I need to assign an IP address and a username with a password, so that I can access the IP camera's video and use that virtual camera in a program I'm developing. The problem is that the cameras' brand is not supported by LabVIEW, so I need to use a local virtual camera to use these cameras (3S Vision IP Cameras).
Thanks in advance!
From the National Instruments Support Knowledgebase:
Connecting to an Arbitrary MJPEG IP Camera with IMAQdx Using Third Party Virtual Camera Emulator
http://digital.ni.com/public.nsf/allkb/9446A8C25CC99F7586257A56004D513D
Here are the options for using IP cameras in LabVIEW as of 2019
(in case someone like me still needs this):
Use Vision Acquisition Software 14.5 (February 2015)
(with LabVIEW 2014 SP1 and Vision Development Module 2014-2017 SP1)
Pros:
Official, native support;
Any # of cameras.
Cons:
You lose all the features introduced in newer versions of LabVIEW;
Cameras must support and be configured to stream in MJPEG over HTTP.
Additional info:
It's the last version to support arbitrary IP cameras. Basler and Axis IP cameras were supported until VAS 19.0.
Cameras in the same subnet should be detected automatically. If cameras are in another network, you can try to add them manually as follows:
Go to the %Public%\Documents\National Instruments\NI-IMAQdx\Data\ folder;
Open or create a file IPCameras.ini in a text editor;
If creating the file, place an IPCameras section on the first line:
[IPCameras]
Add a line for each camera in the following format (a made-up example follows these steps):
cameraSerialNumber = IPAddress, MJPEG stream URL, camera brand, camera description
Save your changes and restart NI MAX.
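For instance, a hypothetical entry (serial number, address, and URL all invented here) could look like:

[IPCameras]
12345678 = 192.168.1.64, http://192.168.1.64/mjpg/video.mjpg, 3S Vision, Hallway camera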
Use DirectShow device (webcam) emulator
The NI-IMAQdx driver supports USB 2.0 cameras through the DirectShow interface. By using software that creates such an interface for IP cameras, they can be used as regular USB 2.0 cameras.
There are multiple tools available:
IP Video Source
Pros:
Free;
Any # of cameras.
Cons:
Each camera must be added manually through emulator's settings;
Each camera's resolution must be set manually in emulator's settings;
Cameras must support and be configured to stream in MJPEG over HTTP(S);
32/64-bit versions work independently of each other. NI MAX is a 32-bit application, so it won't show cameras emulated by 64-bit tool. However, they are still detected and can be used in LabVIEW with IMAQdx VIs.
Additional info:
The camera alias displayed in LabVIEW can be changed in the following way:
Go to the %Public%\Documents\National Instruments\NI-IMAQdx\Data\ folder;
Select one of the camX.iid files and open it in a text editor;
Find the InterfaceName attribute and set its value to the desired name. See the Vendor attribute's value for the name you assigned to that camera in the emulator's settings;
Save your changes and rename the file to match the name you set;
Restart LabVIEW.
Moonware Universal Source Filter
Pros:
Supports JPEG/MJPEG/MPEG4/H264 over HTTP/RTSP;
Hardware decoding;
Low latency;
Multiple cameras.
Cons:
32-bit only. 64-bit version is not likely to happen;
Adds a watermark to an image (free version) / Paid: $49 per PC (no watermark);
Each camera must be added manually through emulator's settings.
and more
Use Multimedia for LabVIEW add-on
Pros:
Native interface (LabVIEW API for FFmpeg libraries);
Supports most codecs and protocols;
Any # of cameras;
Full control over data acquisition and processing (down to individual FFmpeg options).
Cons:
Paid: $949 per PC (developer license), $19 per PC (runtime license), 30 days trial;
Lower-level analog of the NI-IMAQdx driver (i.e. more complicated).
Use libVLC to receive images from a camera
(or another similar library)
Pros:
Free;
Supports most codecs and protocols;
Possibility of hardware decoding (depends on library usage);
Any # of cameras.
Cons:
You'll have to interface with the libVLC library directly through the Call Library Function Node;
To receive video frame by frame, you'll have to write a simple C library which libVLC will call to provide each frame (see the example in the linked thread).
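To give an idea of what that direct interfacing involves, here is a rough sketch against libVLC's C API, written as C# P/Invoke for compactness; in LabVIEW the same entry points would sit behind Call Library Function Nodes, and the lock/display callbacks are what the small C shim would implement. The camera URL and resolution are made up, and the signatures are simplified (check vlc/libvlc_media_player.h for your libVLC version).

using System;
using System.Runtime.InteropServices;

class LibVlcFrameSource
{
    // libvlc.dll / libvlc.so must be on the library search path.
    [DllImport("libvlc")] static extern IntPtr libvlc_new(int argc, IntPtr argv);
    [DllImport("libvlc")] static extern IntPtr libvlc_media_new_location(IntPtr vlc, string mrl);
    [DllImport("libvlc")] static extern IntPtr libvlc_media_player_new_from_media(IntPtr media);
    [DllImport("libvlc")] static extern void libvlc_video_set_format(IntPtr mp, string chroma, uint width, uint height, uint pitch);
    [DllImport("libvlc")] static extern void libvlc_video_set_callbacks(IntPtr mp, LockCb l, UnlockCb u, DisplayCb d, IntPtr opaque);
    [DllImport("libvlc")] static extern int libvlc_media_player_play(IntPtr mp);

    [UnmanagedFunctionPointer(CallingConvention.Cdecl)] delegate IntPtr LockCb(IntPtr opaque, ref IntPtr planes);
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)] delegate void UnlockCb(IntPtr opaque, IntPtr picture, ref IntPtr planes);
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)] delegate void DisplayCb(IntPtr opaque, IntPtr picture);

    const uint W = 640, H = 480; // assumed stream resolution
    static readonly IntPtr frame = Marshal.AllocHGlobal((int)(W * H * 4));

    // Keep the delegates in fields so the GC never collects them
    // while native code still holds the function pointers.
    static readonly LockCb OnLock = (IntPtr opaque, ref IntPtr planes) => { planes = frame; return IntPtr.Zero; };
    static readonly UnlockCb OnUnlock = (IntPtr opaque, IntPtr picture, ref IntPtr planes) => { };
    static readonly DisplayCb OnDisplay = (opaque, picture) =>
    {
        // A complete RV32 frame now sits in `frame`; copy it out for processing.
    };

    static void Main()
    {
        IntPtr vlc = libvlc_new(0, IntPtr.Zero);
        // Hypothetical URL: substitute your camera's real stream address.
        IntPtr media = libvlc_media_new_location(vlc, "rtsp://user:pass@192.168.1.64/stream1");
        IntPtr player = libvlc_media_player_new_from_media(media);
        libvlc_video_set_format(player, "RV32", W, H, W * 4); // request raw 32-bit RGB frames
        libvlc_video_set_callbacks(player, OnLock, OnUnlock, OnDisplay, IntPtr.Zero);
        libvlc_media_player_play(player);
        Console.ReadLine(); // keep streaming until Enter is pressed
    }
}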

programmatically stream audio with NetStream

In Flex you can stream microphone audio to an FMS/Red5 server using NetStream.attachAudio, which requires a Microphone object. Is it possible to stream audio through the NetStream from somewhere other than a Microphone? For example, from a file/embedded resource?
The reason I'm asking is that I'd like to be able to run automated tests that don't require using an actual microphone.
Well, it looks like this isn't possible. My workaround is to use Soundflower to route audio file playback (invoked outside of Flash) into a virtual microphone, which Flash then streams to the media server. From Flash's point of view, it's just as if you were manually speaking into the mic.
