v4l2 -> QByteArray(?) -> QWebSocket -> internet -> {PC, Android, web} - Qt

As you can guess from the title, I would like to broadcast a webcam stream to different clients. I know that there are many solutions (such as motion), but I already have a working infrastructure based on a Qt server application with a WebSocket as the connection to the outside world.
I have read the source code of other Linux applications like Kopete and motion to find the most efficient approach, but haven't come to a good conclusion. Another goal is to keep the WebSocket stream in a format that can be decoded by e.g. JavaScript in a browser.
The source, a v4l2 device, is already accessed. There are different formats (YUV, MJPEG, ...), but I don't know which (standard) format to choose for streaming. Another requirement is to save the stream to a hard drive and to process the stream (OpenCV?) to detect motion. So the question is: should I transmit a zlib-compressed QByteArray, or use MJPEG, which I don't know how to use? The webcam is a uvcvideo device:
ioctl: VIDIOC_ENUM_FMT
Index       : 0
Type        : Video Capture
Pixel Format: 'MJPG' (compressed)
Name        : MJPEG
Index       : 1
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUV 4:2:2 (YUYV)
To be honest, I am not sure how motion does this in detail; its approach might be the way to go.
Thanks
small
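For illustration, a minimal sketch of the MJPEG pass-through idea in Qt. The v4l2 side (dequeuing one complete JPEG frame into a QByteArray) is assumed and not shown; the class name and port are made up:

    // Minimal sketch: relay complete MJPEG frames from v4l2 to every
    // connected WebSocket client. How the JPEG frame is dequeued from
    // v4l2 is not shown; broadcastFrame() just receives the raw bytes.
    #include <QtWebSockets/QWebSocketServer>
    #include <QtWebSockets/QWebSocket>
    #include <QHostAddress>
    #include <QByteArray>
    #include <QList>

    class CamServer : public QObject {
        Q_OBJECT
    public:
        explicit CamServer(QObject *parent = nullptr)
            : QObject(parent),
              m_server(QStringLiteral("cam"), QWebSocketServer::NonSecureMode, this) {
            m_server.listen(QHostAddress::Any, 8080);   // port is arbitrary
            connect(&m_server, &QWebSocketServer::newConnection, this, [this]() {
                QWebSocket *sock = m_server.nextPendingConnection();
                connect(sock, &QWebSocket::disconnected, this, [this, sock]() {
                    m_clients.removeAll(sock);
                    sock->deleteLater();
                });
                m_clients << sock;
            });
        }
        // Each MJPEG frame is a self-contained JPEG, so it can be sent
        // as one binary message and decoded client-side as an image.
        void broadcastFrame(const QByteArray &jpegFrame) {
            for (QWebSocket *client : m_clients)
                client->sendBinaryMessage(jpegFrame);
        }
    private:
        QWebSocketServer m_server;
        QList<QWebSocket *> m_clients;
    };

On the browser side each binary message is then a complete JPEG, so JavaScript can display it without any codec work, e.g. by pointing an img element at URL.createObjectURL(new Blob([event.data], {type: "image/jpeg"})). This is also why MJPEG is usually the simpler choice here than a zlib-compressed raw YUYV QByteArray: the camera already does the compression, and every client understands JPEG.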

Related

Beep Sound when Decoding DSP TrueSpeech To PCM

I'm trying to decode an array of bytes from DSP TrueSpeech to PCM.
When we convert this array as part of streaming (dividing it into packets), we hear strange "beep" tones after decoding.
We tried decoding the entire WAV file in one piece and we didn't get those beeps.
Currently we are using Alvas.net for it, but we also tried NAudio and got the same results.
My questions:
1) Is anyone familiar with this kind of behavior?
2) Do you have any idea what we can do?
Thanks
Ziv
How are you performing the decode? Often codecs maintain internal state, so it's important that you don't keep closing and re-opening the codec for each block of audio that you receive. In NAudio, that means just one AcmStream/WaveFormatConversionStream that everything you receive is passed through.
Also, make sure it is only compressed audio that is being passed into the codec. Sometimes when you receive audio over the network it is contained within some kind of larger packet that contains timing or encoding metadata (e.g. RTP).
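To illustrate the open-once pattern at the ACM level (the Windows C API that NAudio's AcmStream wraps), a rough sketch; error handling and the TrueSpeech WAVEFORMATEX setup are omitted:

    // Open the ACM conversion stream ONCE for the whole session; the
    // codec keeps internal state between packets, so closing/reopening
    // it per block can produce artifacts like the beeps described.
    #include <windows.h>
    #include <mmreg.h>
    #include <msacm.h>

    static HACMSTREAM g_stream;

    void OpenDecoder(WAVEFORMATEX *srcTrueSpeech, WAVEFORMATEX *dstPcm) {
        acmStreamOpen(&g_stream, NULL, srcTrueSpeech, dstPcm,
                      NULL, 0, 0, ACM_STREAMOPENF_NONREALTIME);
    }

    // Called for every received packet; g_stream persists across calls.
    DWORD DecodePacket(BYTE *src, DWORD srcLen, BYTE *dst, DWORD dstLen) {
        ACMSTREAMHEADER hdr = { sizeof(hdr) };
        hdr.pbSrc = src;  hdr.cbSrcLength = srcLen;
        hdr.pbDst = dst;  hdr.cbDstLength = dstLen;
        acmStreamPrepareHeader(g_stream, &hdr, 0);
        acmStreamConvert(g_stream, &hdr, ACM_STREAMCONVERTF_BLOCKALIGN);
        acmStreamUnprepareHeader(g_stream, &hdr, 0);
        return hdr.cbDstLengthUsed;   // bytes of PCM produced
    }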
Bottom line: we have the packet data (an array of bytes) which we send to be decoded (returned as PCM), and then we write the newly decoded bytes into a new WAV file.
We're definitely going to try your suggestion regarding the stream with NAudio.
Regarding the bytes we're working on, they don't contain any garbage. We wrote a tester that streams the file directly (without the network) and got the same beep results.
Our solution works well with many other codecs (GSM etc.); only with TrueSpeech do we have this problem.
Therefore it seems to be some behavior of the TrueSpeech codec, but we didn't find any documentation about it.
Thanks Again
Ziv

GDCL Mpeg-4 Multiplexor - Filters can't agree on connection

I am attempting to publish some mp4 files with the GDCL Mpeg-4 Multiplexor, but it's not accepting the input from my camera (QuickCam Orbit/Sphere AF).
I see that the subtype is set to MEDIASUBTYPE_NULL.
I can't seem to figure out a set of filters that will successfully link the pins. What do I need to do to adapt from my Capture pin to the multiplexor?
The GDCL Mpeg-4 Multiplexor multiplexes compressed data, and your camera captures raw (uncompressed) video. You need to insert a compressor in between in order to deliver MPEG-4 compatible video into the multiplexer, that is, an MPEG-4 Part 2 or MPEG-4 Part 10 (AKA H.264) video compressor. The multiplexer filter itself does not do any compression/encoding.
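In graph-building terms that means one extra filter between the capture pin and the mux. A rough sketch using ICaptureGraphBuilder2, with error handling omitted; CLSID_SomeH264Encoder is a placeholder for whatever compressor is actually installed, and the GDCL mux CLSID comes from its sources:

    // camera -> H.264 encoder -> GDCL MP4 mux -> file writer
    IBaseFilter *pEnc = NULL;
    CoCreateInstance(CLSID_SomeH264Encoder, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void **)&pEnc);
    pGraph->AddFilter(pEnc, L"H.264 Encoder");   // pGraph: IGraphBuilder
    pGraph->AddFilter(pMux, L"GDCL MP4 Mux");    // pMux created from the GDCL CLSID

    // One call wires capture pin -> compressor -> mux:
    pBuilder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                           pCam, pEnc, pMux);    // pBuilder: ICaptureGraphBuilder2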

Encoding videos for use with Adobe Live Streaming

I have an original video coded at 20Mbps, 1920x1080, 30fps and want to convert it down to 640x480 30fps at a range of (3 different) bitrates for use by Adobe Live Streaming.
Should I use ffmpeg to resize and encode at the 3 bitrates, then use f4fpackager to create the f4m, f4f and f4x files, or just use ffmpeg to reduce the resolution and then f4fpackager to encode the relevant bitrates?
I've had several tries so far, but the encoded videos seem to play at a much higher bitrate than they were encoded at. For example, if I set up the OSMF to play from my webserver, I'd expect my best encoded video to play at 1,500kbps, but it's way above that.
Has anyone had any experience of encoding for use like this?
I'm using the following options with f4fpackager:
--bitrate=1428 --segment-duration 30 --fragment-duration 2
f4fpackager doesn't do any encoding; it does two things:
- fragment the mp4 files (mp4 -> f4f)
- generate a manifest (f4m) file referencing all your fragmented files (f4f)
So the process is:
- transcode your source file into each size/bitrate that you want to provide (e.g. 1920x1080 @ 4 Mbps, 1280x720 @ 2 Mbps, etc.)
- use f4fpackager to convert the mp4 to f4f (this is the fragmentation step)
- use f4fpackager to generate the Manifest.f4m referencing the files that you generated in the previous step
The --bitrate option of f4fpackager should match the value that you used with ffmpeg; this parameter is used to generate the manifest file with the correct bitrate value for each quality.
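Concretely, one rendition could look like this; the ffmpeg flags shown are one reasonable choice, and --input-file is from the f4fpackager docs, so double-check both against your versions:

    # 1. transcode: resize to 640x480, 30 fps, H.264 at 1500 kbps
    ffmpeg -i source.mp4 -s 640x480 -r 30 -c:v libx264 -b:v 1500k -c:a copy out_1500.mp4
    # 2. fragment that rendition into .f4f/.f4x plus a manifest
    f4fpackager --input-file=out_1500.mp4 --bitrate=1500 --segment-duration 30 --fragment-duration 2

Repeat both steps for each quality, then generate the manifest that references all of the renditions.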

GDCL Mpeg-4 Multiplexor Problem

I just created a simple graph:
SourceFilter(*.mp4 file format) ---> GDCL MPEG 4 Mux Filter ---> File writer Filter
It works fine. But when the source is in H.264 file format:
SourceFilter( *.h264 file format) ---> GDCL MPEG 4 Mux Filter---> File writer Filter
It records a file, but the recorded file does not play in VLC, QuickTime, BS Player, or Windows Media Player.
What am I doing wrong? Any ideas on how to record an H.264 video source? Do I need an H.264 mux?
Best Wishes
PS: I just want to record video, by the way... why do I need a mux?
There are two H.264 formats used by DirectShow filters. One is Byte Stream Format, in which each NALU is preceded by a start code 00 00 01. The other is the format used within MP4 files, in which each start code is preceded by a length (the media type or the MP4 file metadata specifies how many bytes are used in the length field). The problem is that some FOURCCs are used for both formats.
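The difference is visible in the first bytes of each access unit; a small heuristic check (assuming you are looking at the start of a NALU):

    // Returns true if the buffer starts with an Annex B start code
    // (Byte Stream Format); in MP4-style data the same bytes would be
    // a big-endian NALU length instead of 00 00 (00) 01.
    bool LooksLikeByteStreamFormat(const unsigned char *p, size_t len)
    {
        if (len >= 3 && p[0] == 0x00 && p[1] == 0x00 && p[2] == 0x01)
            return true;                       // 3-byte start code
        if (len >= 4 && p[0] == 0x00 && p[1] == 0x00 && p[2] == 0x00 && p[3] == 0x01)
            return true;                       // 4-byte start code
        return false;                          // likely length-prefixed
    }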
The MP4 mux sample accepts either BSF or length-preceded data, depending on the subtype given. It does not attempt to work out which it is. Most likely, when you are feeding it the H.264 elementary stream, you are giving the mux a FOURCC or media type that the mux thinks means length-prepended, when you are actually giving it BSF data. Check in TypeHandler::CanSupport.
If you just want to save H.264 video, you can use a Dump filter to write the bits straight to a file. If you are saving BSF, the result is a valid H.264 elementary stream file. If you want support for the majority of players, or if you want seeking support, then you will want to write the elementary stream into a container with an index, such as MP4. In that case, you need a mux, not for the multiplexing, but for the indexing and metadata creation.
G

Interface for Volume intensity in FMLE

FMLE = Flash Live Media Encoder 3.0
I have posted this question on the Adobe forum, but I'm not sure they have people there with programming experience.
I am a developer writing a video capture and an audio capture device. The devices already work in other encoders and are written in DirectShow. I am integrating with FMLE and encountered this problem.
The audio device doesn't have a usable volume bar in FMLE. The FMLE error is "The selected audio device "censored (company secret)" doesn't allow setting volume intensity. Disabling the volume slider control."
My audio device implements these interfaces along with the standard DirectShow filter interfaces:
IBasicAudio
IAMAudioInputMixer
I put tracepoints in QueryInterface and found that FMLE queries for (my comments inline):
{IID_IUnknown}
{IID_IPersistPropertyBag}
{IID_IBaseFilter}
{IID_IAMOpenProgress}
{IID_IAMDeviceRemoval}
{IID_IMediaFilter}
{IID_IAMBufferNegotiation}
{IID_IAMStreamConfig}
{IID_IPin}
{IID_IReferenceClock}
{IID_IMediaSeeking}
{IID_IMediaPosition}
{IID_IVideoWindow} // WTF ?? query video window ?
{IID_IBasicAudio}
{2DD74950-A890-11D1-ABE8-00A0C905F375} // I think this is async stream
What am I missing? Doesn't FMLE use IAMAudioInputMixer?
Does anyone know the exact interface FMLE uses for volume intensity? I assumed it was IBasicAudio, but FMLE doesn't seem to call any methods on it.
Answer provided by Ram Gupta on the Adobe forum:
"FMLE does not query for the CLSID_AudioInputMixerProperties interface.
FMLE enumerates all the pins of the audio source filter (using EnumPins) and then extracts each pin's info using the QueryPinInfo function.
FMLE searches for the audio filter pin whose direction is PINDIR_INPUT (using QueryPinInfo) and then queries for the IAMAudioInputMixer interface to set the volume level.
Could you please check whether the following functions are properly implemented:
--> get_Enable: it should set its parameter value to TRUE.
--> put_MixLevel
--> QueryPinInfo"
This solution did work. My problem was that I had never declared an input pin (since I don't have any DirectShow-related input).
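For reference, a skeleton of what that pin can look like, based on the DirectShow base classes. Only the methods FMLE actually calls are shown; the remaining IAMAudioInputMixer methods still need stub implementations (e.g. returning E_NOTIMPL) for the class to be concrete:

    // A (possibly dummy) input pin exposing IAMAudioInputMixer, which
    // FMLE queries after finding a pin whose direction is PINDIR_INPUT.
    #include <streams.h>    // DirectShow base classes

    class CMixerInputPin : public CBaseInputPin, public IAMAudioInputMixer {
    public:
        DECLARE_IUNKNOWN
        STDMETHODIMP NonDelegatingQueryInterface(REFIID riid, void **ppv) {
            if (riid == IID_IAMAudioInputMixer)
                return GetInterface(static_cast<IAMAudioInputMixer *>(this), ppv);
            return CBaseInputPin::NonDelegatingQueryInterface(riid, ppv);
        }
        // FMLE disables the volume slider unless this reports TRUE
        STDMETHODIMP get_Enable(BOOL *pfEnable) { *pfEnable = TRUE; return S_OK; }
        STDMETHODIMP put_MixLevel(double Level) { m_level = Level; return S_OK; }
        STDMETHODIMP get_MixLevel(double *pLevel) { *pLevel = m_level; return S_OK; }
        // ... remaining IAMAudioInputMixer methods: stubs returning E_NOTIMPL ...
    private:
        double m_level = 1.0;
    };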
