Capture another camera inside a capture device source filter? - DirectShow

Is it possible to capture another camera inside a capture device source filter?
Essentially, a virtual camera that displays a "real" camera's stream.

Yes, and in a straightforward way: the virtual source filter manages another graph internally and processes the data from the embedded camera and graph.
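In outline (a rough sketch, not a complete filter): the virtual camera is built on the DirectShow base classes, and its output pin runs an internal graph of the form real camera -> SampleGrabber -> Null Renderer, copying the most recent grabbed frame into each outgoing sample. Class and member names below are illustrative.

#include <streams.h>   // DirectShow base classes (CSource/CSourceStream)
#include <qedit.h>     // ISampleGrabberCB
#include <vector>

// Output pin of the virtual camera. The internal capture graph delivers
// frames through BufferCB; FillBuffer hands the latest one downstream.
class CVirtualCamPin : public CSourceStream, public ISampleGrabberCB
{
public:
    // ISampleGrabberCB: called by the internal graph for every camera frame.
    STDMETHODIMP BufferCB(double /*sampleTime*/, BYTE *pData, long cbData)
    {
        CAutoLock lock(&m_frameLock);
        m_lastFrame.assign(pData, pData + cbData);   // cache the newest frame
        return S_OK;
    }
    STDMETHODIMP SampleCB(double, IMediaSample *) { return E_NOTIMPL; }

    // CSourceStream: fill each outgoing media sample with the cached frame.
    HRESULT FillBuffer(IMediaSample *pSample)
    {
        BYTE *pOut = NULL;
        pSample->GetPointer(&pOut);
        CAutoLock lock(&m_frameLock);
        memcpy(pOut, m_lastFrame.data(), m_lastFrame.size());
        pSample->SetActualDataLength((long)m_lastFrame.size());
        return S_OK;
    }

private:
    CCritSec          m_frameLock;
    std::vector<BYTE> m_lastFrame;
    // Omitted: constructor, IUnknown plumbing for the two base interfaces,
    // GetMediaType/DecideBufferSize overrides, and the code that builds and
    // runs the internal "camera -> SampleGrabber -> Null Renderer" graph.
};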


Is there possibility to add Extra Voice Commands to Voice Guidance which is running from here maps?
For example, besides "Turn right" (from HERE maps), I want something like "Stop after turning right".
NMAAudioManager is the central class used by the HERE SDK for iOS to modify the application AVAudioSession and play audio. It is the interface that the NMANavigationManager uses to play audio feedback such as voice instructions. You can also use NMAAudioManager to change whether hardware keys directly control HERE SDK volume, and to set volume as a factor relative to the user's device volume.
The NMAAudioManager contains a queue of audio output objects. You can add to this queue by calling playOutput: with NMAAudioFileOutput, NMATTSAudioOutput, or your own NMAAudioOutput implementation. You can also use NMAAudioManager methods such as clearQueue, skipCurrentOutput, and stopOutputAndClearQueue to manage audio output in this queue.
Please refer to the link below for a detailed implementation:
developer.here.com/documentation/ios-premium/dev_guide/topics/audio-management.html

Qt QAudioOutput only playing through left ear while in stereo

What I currently have is an attempt to make a signal generator play in stereo. Although the format is accepted, it ends up only playing the audio through the left ear; when I switch to mono, it works fine through both ears. What I want is the ability to control which ear I am listening from: for instance, if I only want it to play on the left ear, all I should hear is the audio on the left ear, but I also want the ability to switch ears or use both. The current method I am using is as follows.
// Format setup (Qt 4 QAudioFormat API).
QAudioFormat format;
format.setFrequency(44100);
format.setChannels(2);
format.setSampleSize(16);
format.setCodec("audio/pcm");
format.setByteOrder(QAudioFormat::LittleEndian);
format.setSampleType(QAudioFormat::SignedInt);

// I am using push mode: start() returns the QIODevice that samples are written into.
audio_outputStream = new QAudioOutput(format, this);
audio_outputDevice = audio_outputStream->start();

// Each number from the signal generator is appended to the byte array as raw bytes...
QByteArray array;
array.append(reinterpret_cast<const char *>(&generatedValue), sizeof(generatedValue));

// ...written from the byte array to the QIODevice...
audio_outputDevice->write(array.data(), array.size());

// ...and removed from the front so the next number can be appended and written.
array.remove(0, sizeof(generatedValue));
As stated before, the above steps do work in mono with one channel, but only play on the left side when using stereo with 2 channels. My goal is to be able to control which channel I am writing to through push mode. I currently do not see why only one of the two channels is used; am I simply not writing to the second channel?
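For reference: with setChannels(2), 16-bit PCM is interleaved per frame, one left sample followed by one right sample, so writing a single sample per frame leaves the other channel unfed. A minimal sketch of writing one stereo frame in push mode (the nextGeneratorValue() helper and the playLeft/playRight flags are hypothetical):

// Interleaved 16-bit stereo: frame = [left sample][right sample].
// Zeroing one side silences that ear; this is how per-channel control works.
qint16 sample = nextGeneratorValue();              // hypothetical generator call
qint16 frame[2];
frame[0] = playLeft  ? sample : 0;                 // left channel
frame[1] = playRight ? sample : 0;                 // right channel
audio_outputDevice->write(reinterpret_cast<const char *>(frame), sizeof(frame));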

DirectShow - Order of invocation of IAMStreamConfig::SetFormat and ICaptureGraphBuilder2::RenderStream creates issues in some video cameras

I have to configure my video camera's display resolution before capturing and processing the data. Initially I did it as follows:
1. Created all necessary interfaces.
2. Added the camera and renderer filters.
3. Did RenderStream with the Capture and Preview pin categories.
4. Looped through the AM_MEDIA_TYPE structures and set the parameters.
This worked for a lot of cameras, but a few failed. Then I changed the order of steps 3 and 4 above; that is, I set the parameters before calling RenderStream. This time the previously failing cases went through, but a few on-board cameras (in SONY VAIO laptops, etc.) seem to fail.
Now, my questions are:
Which is the optimal and correct method of getting and setting the AM_MEDIA_TYPE parameters and running the graph?
If different cameras need different orders, is there a way to tell which order is best for a particular camera by going through its DirectShow interfaces? That would also serve my purpose.
Please help me with this at the earliest.
Thanks and regards,
Shiju
IAMStreamConfig::SetFormat needs to be called to set the capture format before the pin is connected and rendered. This way the downstream chain of filters is built with the proper media types.
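A minimal sketch of that order (error handling elided; pBuilder, pCamera and pRenderer are the already-created ICaptureGraphBuilder2, capture filter and renderer; DeleteMediaType comes from the DirectShow base-class library):

IAMStreamConfig *pConfig = NULL;
HRESULT hr = pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                                     pCamera, IID_IAMStreamConfig, (void **)&pConfig);

AM_MEDIA_TYPE *pmt = NULL;
hr = pConfig->GetFormat(&pmt);   // or enumerate formats via GetStreamCaps
// ... adjust pmt here (e.g. biWidth/biHeight in the VIDEOINFOHEADER) ...
hr = pConfig->SetFormat(pmt);    // the pin is still unconnected at this point
DeleteMediaType(pmt);
pConfig->Release();

// Only now render, so the downstream filters are built with the chosen format.
hr = pBuilder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                            pCamera, NULL, pRenderer);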

Change recording file programmatically in DirectShow

I made a console application, using DirectShow, that records from a live source (currently a webcam, later a TV capture card), adds the current date and time as an overlay, and then saves the audio and video as .asf.
Now I want the output file to change every 60 minutes without stopping the graph. I must not lose a single second of the live stream.
The graph is something like this one:
http://imageshack.us/photo/my-images/543/graphp.jpg/
I took a look at GMFBridge, but I have some compilation problems with their examples.
I am wondering if there is a way to split what exists from the overlay filter and the audio source, connect them to another ASF writer (paused), and then switch them every 60 minutes.
The paused ASF writer's file name must change (pp.asf, pp2.asf, pp4.asf ...). Something like this:
http://imageshack.us/photo/my-images/546/graph1f.jpg/
with pp1 paused. I found some people on the internet who say that the ASF writer deletes the current file if the graph does not go into stop mode.
Well, I have a product (http://www.videophill.com) that does exactly what you described (it's used for broadcast compliance recording purposes), and I found that the only way to do it is this:
create a DirectShow graph that will be used only to capture the audio and video
then, at the end of the graph, insert SampleGrabber filters, both for audio and video
then, use IWMWriter to create and save the WMV file, using samples fetched from the SampleGrabber filters
when the time comes, close one IWMWriter and create another one
That way, you won't lose a single frame when switching the output files.
Of course, there is also the question of queueing and storing the samples (when switching the writers) and properly re-aligning the audio/video timestamps, but from my research that's the only 'normal' way to do it, and I have used it in practice.
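A rough sketch of the writer switch described above (Windows Media Format SDK; error handling elided, and the profile and file-name handling are illustrative):

#include <wmsdk.h>

IWMWriter *StartNewWriter(IWMProfile *pProfile, const WCHAR *szFile)
{
    IWMWriter *pWriter = NULL;
    WMCreateWriter(NULL, &pWriter);       // create a standalone writer
    pWriter->SetProfile(pProfile);        // same A/V profile for every file
    pWriter->SetOutputFilename(szFile);
    pWriter->BeginWriting();
    return pWriter;
}

// At the 60-minute mark: open the next file first, then finish the old one,
// feeding samples buffered during the switch to the new writer with
// re-aligned timestamps (that re-alignment is the tricky part mentioned above).
void SwitchWriter(IWMWriter *&pWriter, IWMProfile *pProfile, const WCHAR *szNext)
{
    IWMWriter *pNext = StartNewWriter(pProfile, szNext);
    pWriter->EndWriting();
    pWriter->Release();
    pWriter = pNext;
}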
The solution is writing a custom DirectShow filter with two input pins, in your case one for the audio stream and the other for the video stream. Inside that filter (it doesn't have to be inside from an architecture point of view, because you could also use callbacks, for example, and do the job somewhere else) you would create the ASF files. While switching files, A/V data would be stored in a cache (e.g. a big enough circular buffer). You can also watch and modify A/V sync in that filter. For writing ASF files I would recommend the Windows Media Format SDK. You can also add output pins if you want to pass the A/V data further, for preview, parallel streaming, etc.
GMFBridge is a viable but complicated solution. A more direct approach, which I have implemented in the past, is querying your ASF writer for the IWMWriterAdvanced2 interface and setting a custom sink: that interface has methods to remove and add sinks on your ASF writer. The sink that is connected automatically will write to the file that you specified. One way to write wherever you want is:
1.) remove all default sinks:
pWriterAdv->RemoveSink(NULL);
2.) register a custom sink:
pWriterAdv->AddSink((IWMWriterSink*)&streamSink);
The custom sink can be a class that implements IWMWriterSink, which requires implementing callback methods that are called, for example, when the ASF header is written (OnHeader(/* [in] */ INSSBuffer *pHeader)) and when a data packet is written (OnDataUnit(/* [in] */ INSSBuffer *pDataUnit)). In your implementation you can then write them wherever you want; for example, offer additional methods on this class where you can specify the file name you want to write to.
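A bare-bones sketch of such a sink (the class name and the file handling are illustrative; IUnknown plumbing and buffer allocation are elided):

class StreamSink : public IWMWriterSink
{
public:
    // Called once with the ASF header; write it to the start of the file.
    STDMETHODIMP OnHeader(INSSBuffer *pHeader)     { return WriteBuffer(pHeader); }

    // Called for every ASF data packet; append it to the current file.
    STDMETHODIMP OnDataUnit(INSSBuffer *pDataUnit) { return WriteBuffer(pDataUnit); }

    STDMETHODIMP OnEndWriting() { return S_OK; }
    STDMETHODIMP IsRealTime(BOOL *pfRealTime) { *pfRealTime = TRUE; return S_OK; }
    STDMETHODIMP AllocateDataUnit(DWORD, INSSBuffer **)
    { return E_NOTIMPL; }   // a full sink would hand out its own INSSBuffer here

private:
    HRESULT WriteBuffer(INSSBuffer *pBuffer)
    {
        BYTE *pData = NULL;
        DWORD cbData = 0;
        HRESULT hr = pBuffer->GetBufferAndLength(&pData, &cbData);
        // ... write pData/cbData to whatever file is currently selected ...
        return hr;
    }
    // IUnknown (QueryInterface/AddRef/Release) omitted for brevity.
};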
Note that this solution does not quite get you where you want to be if you need the header information written in each of the 60-minute files: after the initial header you will only get ASF packet data. A workaround could be to re-write the initial header before the packet data of each file, but this will produce an unindexed (non-seekable) ASF file.

Clipping video in DirectShow

I need to clip a video into smaller videos (of the same format) of the same size. I am using DirectShow. I have been able to extract frames from the video, but I am not sure how to proceed with extracting video segments from the file. Could someone help me with this?
First, I'm not sure about creating smaller clips of the same size; I assume you mean shorter clips of the same dimensions. If you are happy to start at the nearest preceding key frame, then you don't want to decompress and recompress the video. In that case, I would connect the demux filter to a mux and then to a file writer. You should be able to use IMediaSeeking (on the mux, or possibly the demux output pins) to select the right segment.
G
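For illustration, a minimal sketch of the seeking part (error handling elided; for simplicity it seeks via the graph manager, which distributes IMediaSeeking calls to the filters; pGraph is the assembled demux -> mux -> file-writer graph):

IMediaSeeking *pSeeking = NULL;
pGraph->QueryInterface(IID_IMediaSeeking, (void **)&pSeeking);
pSeeking->SetTimeFormat(&TIME_FORMAT_MEDIA_TIME);        // 100 ns units

REFERENCE_TIME rtStart = 10 * 10000000LL;                // clip from t = 10 s ...
REFERENCE_TIME rtStop  = 70 * 10000000LL;                // ... to t = 70 s
pSeeking->SetPositions(&rtStart, AM_SEEKING_AbsolutePositioning,
                       &rtStop,  AM_SEEKING_AbsolutePositioning);
pSeeking->Release();

IMediaControl *pControl = NULL;
pGraph->QueryInterface(IID_IMediaControl, (void **)&pControl);
pControl->Run();   // only the selected segment flows to the file writer
pControl->Release();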
