Encrypt the video downlink stream from the DJI Matrice 100

I'm trying to encrypt the video downlink from the DJI M100 drone.
My problem is that I don't understand the structure: what is the frame path? There are the N1, Lightbridge 2, the gimbal camera...
It would be great to get some info on this! :)

Check out this sample.
It shows you how to get raw video frames from the drone and decode them using FFmpeg:
https://github.com/DJI-Mobile-SDK-Tutorials/Android-VideoStreamDecodingSample
Here is the link to the full tutorial:
http://developer.dji.com/mobile-sdk/documentation/sample-code/index.html
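Wherever the frames originate on the aircraft side (gimbal camera, Lightbridge 2 air unit, remote controller), the Mobile SDK hands your app the raw H.264 buffers in a callback on the mobile device, and that callback is the practical place to add encryption. As a sketch of the encryption step only, in C++ with OpenSSL's EVP API and AES-256-CTR (the key/nonce management and the wiring into the Android callback, e.g. via JNI, are my assumptions and not part of the DJI sample):

#include <openssl/evp.h>
#include <cstdint>
#include <vector>

// Encrypt one raw H.264 buffer with AES-256-CTR. CTR keeps the output the same
// size as the input, so downstream buffer handling is unchanged. The caller
// must supply a fresh, unique IV (nonce) per frame, e.g. from a frame counter.
std::vector<uint8_t> encryptFrame(const uint8_t *frame, int len,
                                  const uint8_t key[32], const uint8_t iv[16])
{
    std::vector<uint8_t> out(len);
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int n = 0, total = 0;
    EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), nullptr, key, iv);
    EVP_EncryptUpdate(ctx, out.data(), &n, frame, len);
    total = n;
    EVP_EncryptFinal_ex(ctx, out.data() + total, &n);  // no padding in CTR mode
    total += n;
    out.resize(total);
    EVP_CIPHER_CTX_free(ctx);
    return out;
}

Decryption on the receiving side is the same call sequence with EVP_DecryptInit_ex and the same key/IV. Note that once you encrypt the buffers, the stock decoding path in the sample can no longer parse them until you decrypt first.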

Related

C# AForge VideoFileWriter WriteVideoFrame error

I want to make an app that records video from a webcam.
My logic is to grab each frame as a bitmap and store it to a file using
AForge's VideoFileWriter.WriteVideoFrame function.
I open the file using VideoFileWriter's Open function:
writer.Open(path, VideoWidth, VideoHeight, frameRate, VideoCodec.H264, bitRate);
It is hard to determine the bitRate; when the bitrate is wrong, the whole program dies without any error.
I think the bitrate is related to the video frame width, height, frame rate, and bit count, as well as the codec,
but I'm not sure of the specific formula to calculate it.
I want to compress the video using the H.264 codec.
Can anyone help me find a solution?
Thank you very much.
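There is no exact formula; the value is just a target in bits per second and the encoder accepts a wide range. A common rule of thumb is width × height × frame rate × bits-per-pixel, with roughly 0.1 bpp for typical H.264 content and more for high motion. A small C++ illustration of that heuristic (the heuristic is my assumption, not anything AForge documents; the arithmetic is the same in C#):

#include <cstdio>

// Rule-of-thumb H.264 target bitrate: pixels per second times bits per pixel.
// bitsPerPixel ~ 0.1 for typical content, 0.2-0.3 for high-motion video.
long estimateBitrate(int width, int height, int fps, double bitsPerPixel)
{
    return static_cast<long>(width * height * static_cast<double>(fps) * bitsPerPixel);
}

int main()
{
    // 1280x720 at 30 fps -> about 2.8 Mbit/s
    std::printf("%ld\n", estimateBitrate(1280, 720, 30, 0.1));
    return 0;
}

If the program still dies without an exception at sane values like that, it may also be worth checking that the width and height are even numbers, since H.264 encoders commonly reject odd dimensions.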

v4l2 -> QByteArray(?) -> QWebsocket -> internet -> {PC, Android, web}

As you can guess from the title, I would like to broadcast a webcam stream to different clients. I know that there are many solutions (such as motion), but I already have a working infrastructure based on a Qt server application with a websocket as the connection to the outside world.
I have read the source code of other Linux applications like Kopete and motion to find the most efficient way, but haven't come to a good conclusion. Another goal is to keep the websocket stream in a format which can be decoded by e.g. JavaScript in a browser.
The source, a v4l2 device, is already accessed. There are different formats (YUV, MJPEG, ...), but I don't know which (standard) format to choose when it comes to streaming. Another requirement is to save the stream to a hard drive and to process the stream (OpenCV?) to detect motion. So the question is: should I transmit a zlib-compressed QByteArray, or use MJPEG, which I don't know how to use? The webcam is a uvcvideo device:
ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'MJPG' (compressed)
    Name        : MJPEG
    Index       : 1
    Type        : Video Capture
    Pixel Format: 'YUYV'
    Name        : YUV 4:2:2 (YUYV)
To be honest, I am not sure how motion does this in detail, but that might be the way to go.
Thanks
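Since the camera already delivers 'MJPG', every v4l2 frame is a complete JPEG image, which browsers can decode natively (e.g. build a Blob from each binary message and point an img element at its object URL), so MJPEG over the websocket is the simpler choice and zlib adds little on top of JPEG. A Qt sketch of the fan-out side, assuming your existing capture code can be wrapped in the hypothetical readFrameFromV4l2() below:

#include <QtCore/QCoreApplication>
#include <QtCore/QList>
#include <QtCore/QTimer>
#include <QtNetwork/QHostAddress>
#include <QtWebSockets/QWebSocketServer>
#include <QtWebSockets/QWebSocket>

// Hypothetical stand-in for your existing v4l2 capture: must return one
// complete JPEG image per call (the 'MJPG' pixel format gives you exactly that).
static QByteArray readFrameFromV4l2()
{
    return QByteArray(); // replace with your VIDIOC_DQBUF handling
}

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QWebSocketServer server(QStringLiteral("mjpeg"), QWebSocketServer::NonSecureMode);
    QList<QWebSocket *> clients;
    server.listen(QHostAddress::Any, 8080);

    QObject::connect(&server, &QWebSocketServer::newConnection, [&]() {
        QWebSocket *c = server.nextPendingConnection();
        clients.append(c);
        QObject::connect(c, &QWebSocket::disconnected, [&clients, c]() {
            clients.removeAll(c);
            c->deleteLater();
        });
    });

    // Poll the camera ~30 times per second and fan each JPEG out to every
    // client. Each frame is one self-contained binary message, easy to decode
    // in JavaScript on any of the target platforms.
    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, [&]() {
        const QByteArray jpeg = readFrameFromV4l2();
        for (QWebSocket *c : clients)
            c->sendBinaryMessage(jpeg);
    });
    timer.start(33);

    return app.exec();
}

The same JPEG frames can also be written to disk for archival and handed to OpenCV (cv::imdecode) for motion detection, so one format serves all three requirements.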

Beep Sound when Decoding DSP TrueSpeech To PCM

I'm trying to decode an array of bytes from DSP TrueSpeech to PCM.
When we convert this array as part of streaming (dividing it into packets), we can hear some strange "beep" tones after decoding.
We tried decoding the entire WAV file in one piece and we didn't get those beeps.
Currently we are using Alvas.net for it, but we also tried NAudio and got the same results.
My questions:
1) Is anyone familiar with this kind of behavior?
2) Do you have an idea what we can do?
Thanks
Ziv
How are you performing the decode? Often codecs maintain internal state, so it's important that you don't keep closing and re-opening the codec for each block of audio that you receive. In NAudio, that means just one AcmStream/WaveFormatConversionStream that everything you receive is passed through.
Also, make sure it is only compressed audio that is being passed into the codec. Sometimes when you receive audio over the network it is contained within some kind of larger packet that contains timing or encoding metadata (e.g. RTP).
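For what it's worth, NAudio's AcmStream wraps the Windows ACM API, and the structural point (one conversion stream kept open for the whole session) looks like this natively in C++. The TrueSpeech format constants below are the commonly published values for 8 kHz mono; treat the details as assumptions to verify against your source files:

#include <windows.h>
#include <mmreg.h>
#include <msacm.h>
#include <vector>
#pragma comment(lib, "msacm32.lib")

// One decoder instance for the whole stream: open the ACM conversion stream
// once, push every received packet through it, close it only at end of stream.
class TrueSpeechDecoder {
public:
    TrueSpeechDecoder() {
        TRUESPEECHWAVEFORMAT ts = {};
        ts.wfx.wFormatTag      = WAVE_FORMAT_DSPGROUP_TRUESPEECH; // 0x0022
        ts.wfx.nChannels       = 1;
        ts.wfx.nSamplesPerSec  = 8000;
        ts.wfx.nAvgBytesPerSec = 1067;
        ts.wfx.nBlockAlign     = 32;    // one TrueSpeech block = 32 bytes
        ts.wfx.wBitsPerSample  = 1;
        ts.wfx.cbSize          = 32;
        ts.wRevision           = 1;
        ts.nSamplesPerBlock    = 240;   // 32 bytes decode to 240 samples

        WAVEFORMATEX pcm = {};
        pcm.wFormatTag      = WAVE_FORMAT_PCM;
        pcm.nChannels       = 1;
        pcm.nSamplesPerSec  = 8000;
        pcm.wBitsPerSample  = 16;
        pcm.nBlockAlign     = 2;
        pcm.nAvgBytesPerSec = 16000;

        acmStreamOpen(&m_stream, nullptr, &ts.wfx, &pcm, nullptr, 0, 0, 0);
    }

    // Feed one network packet of TrueSpeech bytes; returns the decoded PCM16.
    std::vector<BYTE> decode(const BYTE *packet, DWORD len) {
        std::vector<BYTE> pcmOut(len * 16);          // 32 -> 480 bytes is 15x
        ACMSTREAMHEADER hdr = {};
        hdr.cbStruct    = sizeof(hdr);
        hdr.pbSrc       = const_cast<BYTE *>(packet);
        hdr.cbSrcLength = len;
        hdr.pbDst       = pcmOut.data();
        hdr.cbDstLength = static_cast<DWORD>(pcmOut.size());
        acmStreamPrepareHeader(m_stream, &hdr, 0);
        // BLOCKALIGN keeps partial blocks buffered inside the codec between
        // calls - exactly the state you lose by reopening the codec per packet.
        acmStreamConvert(m_stream, &hdr, ACM_STREAMCONVERTF_BLOCKALIGN);
        acmStreamUnprepareHeader(m_stream, &hdr, 0);
        pcmOut.resize(hdr.cbDstLengthUsed);
        return pcmOut;
    }

    ~TrueSpeechDecoder() { acmStreamClose(m_stream, 0); }

private:
    HACMSTREAM m_stream = nullptr;
};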
The bottom line is that we have the packet data (an array of bytes) which we send to be decoded (returned as PCM), and then we write the newly decoded bytes into a new WAV file.
We're definitely going to try your suggestion regarding the stream with NAudio.
Regarding the bytes we're working on, they don't contain any garbage. We wrote a tester that streams the file directly (without the network) and got the same beep results.
Our solution works well with many other codecs (GSM, etc.) and only with TrueSpeech do we have this problem.
Therefore it seems to be some behavior of the TrueSpeech codec, but we couldn't find any documentation about it.
Thanks Again
Ziv

How to use streaming audio data from microphone for ASR in Qt

I'm working on a speech recognition project and my program can recognize words from audio files. Now I need to work with the audio stream coming from a microphone. I'm using QAudio to get sound data from the mic, and QAudioInput has a function to start the process. This start(QBuffer*) function writes the data into a QBuffer (a QIODevice backed by a QByteArray). When I'm not dealing with a continuous stream, I can stop recording from the mic anytime I want, copy the whole data from the QBuffer into a QByteArray, and do whatever I want with the data. But with a continuous stream, the QBuffer's size grows over time and reaches 100 MB in 15 minutes.
So I need to use some kind of circular buffer, but I can't figure out how to do that, especially with this start(QBuffer*) function. I also want to avoid cutting the streaming sound at a point where the speech continues.
What is the basic way to handle streaming audio data for speech recognition?
Is it possible to replace the start(QBuffer*) call with something like start(QByteArray*) and make it overwrite that QByteArray to build a circular buffer?
Thanks in advance
Boost offers a circular buffer:
http://www.boost.org/doc/libs/1_37_0/libs/circular_buffer/doc/circular_buffer.html#briefexample
It should meet your needs.
Alain
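A sketch of how that can fit together in Qt 5: run QAudioInput in pull mode (the zero-argument start() returns a QIODevice you read from, so the ever-growing QBuffer never enters the picture) and append each chunk to a boost::circular_buffer that silently drops the oldest samples. The capacity, the audio format, and the silence-detection hook are assumptions to adapt to your recognizer:

#include <QtCore/QCoreApplication>
#include <QtMultimedia/QAudioInput>
#include <boost/circular_buffer.hpp>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QAudioFormat fmt;                  // 16-bit mono PCM at 16 kHz (adjust for your ASR)
    fmt.setSampleRate(16000);
    fmt.setChannelCount(1);
    fmt.setSampleSize(16);
    fmt.setCodec("audio/pcm");
    fmt.setByteOrder(QAudioFormat::LittleEndian);
    fmt.setSampleType(QAudioFormat::SignedInt);

    QAudioInput input(fmt);

    // Pull mode: read from the returned QIODevice instead of handing
    // QAudioInput a QBuffer that grows without bound.
    QIODevice *dev = input.start();

    // Keep only the last ~30 s of audio; the oldest bytes fall off on their own.
    boost::circular_buffer<char> ring(16000 * 2 * 30);

    QObject::connect(dev, &QIODevice::readyRead, [&]() {
        const QByteArray chunk = dev->readAll();
        for (char b : chunk)
            ring.push_back(b);         // overwrites the oldest byte when full
        // Hand the ring contents to the recognizer here, e.g. when a pause in
        // speech is detected, so an utterance is never cut mid-word.
    });

    return app.exec();
}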

How to play IMFMediaSample in media foundation?

I am able to extract samples from a video using the ReadSample method. Now how can I play the data present in those samples? Or how do I play an IMFSample?
An IMFSample is a block of data, such as a video frame or a chunk of an audio sequence. It is too tiny a piece of data to be played on its own. The API addresses more sophisticated playback scenarios, where playback is a session in which one or more streams are streamed in sync.
Be sure to check Getting Started with MFPlay on MSDN to see how playback is set up with Media Foundation.
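A minimal sketch of that MFPlay route in C++: rather than pushing IMFSamples yourself, you hand MFPlay a URL and it builds the whole session (source, decoders, renderers) for you. The file path is a hypothetical example and error handling is elided:

#include <windows.h>
#include <mfplay.h>
#pragma comment(lib, "mfplay.lib")

int wmain()
{
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    IMFPMediaPlayer *player = nullptr;
    HRESULT hr = MFPCreateMediaPlayer(
        L"C:\\media\\clip.wmv",   // hypothetical media file
        TRUE,                     // start playback as soon as the player is ready
        MFP_OPTION_NONE,
        nullptr,                  // no event callback in this sketch
        nullptr,                  // pass an HWND here to display video
        &player);

    if (SUCCEEDED(hr)) {
        // Pump messages while playback runs; a real app would stop on the
        // MFP_EVENT_TYPE_PLAYBACK_ENDED event delivered through the callback.
        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        player->Shutdown();
        player->Release();
    }
    CoUninitialize();
    return 0;
}

If you specifically need to render samples you already extracted with the source reader (e.g. after custom processing), that is a different, lower-level path: you would feed them to the renderers through a media session, or write them out with a sink writer and play the result.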
