How does MPlayer recognize an MJPEG stream? - http

Since MJPEG over HTTP basically consists of the transmission of a series of JPEG images separated by a defined separator, how does MPlayer recognize that it is an MJPEG stream?
Thank you

Have a look at:
MplayerMjpegStreamViewing < Motion < Foswiki
e.g.
mplayer -fps 4 -demuxer lavf http://rpi-6:8080/?action=stream
does the job for me. Suitable for a streaming server running on a Raspberry Pi, started like this:
/usr/local/bin/mjpg_streamer -o output_http.so -w ./www -i input_raspicam.so -x 1920 -y 1440 -fps 3 -hf -vf
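If you want to see how such a stream announces itself, you can dump the HTTP response headers; a quick check using the same example URL as above (mjpg_streamer normally answers with a Content-Type of multipart/x-mixed-replace plus a boundary string that separates the individual JPEG frames):
curl -s -D - -o /dev/null --max-time 2 "http://rpi-6:8080/?action=stream"
Forcing -demuxer lavf, as above, simply tells MPlayer to hand the stream to libavformat, which probes the content itself.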

Related

How to create an audio file from a Pcap file with Tshark?

I want to create audio data from a pcap file with Tshark.
I have already created audio data from a pcap file using Wireshark's RTP analysis function.
The pcap file was captured from a VoIP phone conversation.
This time I want to do the same thing with Tshark.
What command would do that?
I read the Tshark manual to find out how, but couldn't find it.
Do I need any other tools?
On Linux, extracting the RTP packets from a pcap file is possible with tshark together with the shell tools tr and xxd, but you might then need other tools to convert the result to an audio format.
If the pcap contains a single call recording, so that all RTP packets belong to it, try:
tshark -n -r call.pcap -2 -R rtp -T fields -e rtp.payload | tr -d '\n',':' | xxd -r -ps >call.rtp
If the pcap has recordings from many calls, you have to identify the calls and their RTP streams by source/destination IPs or SSRC and build the filter accordingly. For example, if the SSRC is 0x7f029328:
tshark -n -r call.pcap -2 -R rtp -R "rtp.ssrc == 0x7f029328" -T fields -e rtp.payload | tr -d '\n',':' | xxd -r -ps >call.rtp
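If you are not sure which SSRCs are present, tshark's built-in RTP statistics should list the streams in the capture first (a general tshark feature, separate from my notes below):
tshark -n -r call.pcap -q -z rtp,streams
The output shows source/destination addresses, ports, SSRC and payload type for each stream, which you can plug into the filter above.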
Tools like sox or ffmpeg can be used to convert the call.rtp file to WAV format, depending on the codec that was used in the call. If the codec was G711u (PCMU) with an 8000 Hz sample rate:
sox -t ul -r 8000 -c 1 call.rtp call.wav
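If the codec was G711a (PCMA) instead, the same approach should work with sox's raw A-law type (a sketch, assuming the same 8000 Hz mono audio):
sox -t al -r 8000 -c 1 call.rtp call.wav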
The audio formats supported by sox are listed by sox -h. ffmpeg might be needed for codecs such as G729 or G722; an example for G722 with a 16000 Hz sample rate:
ffmpeg -f g722 -i call.rtp -acodec pcm_s16le -ar 16000 -ac 1 call.wav
These guidelines come from brief notes I made in the past when I had similar needs; I hope they are still valid nowadays, or at least point you in the right direction to explore further.

Mixing audio stream into video stream using ffmpeg while retaining original audio from the video stream as background [duplicate]

Can I overlay/downmix two audio mp3 files into one mp3 output file using ffmpeg?
stereo + stereo → stereo
Normal downmix
Use the amix filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amix=inputs=2:duration=longest output.mp3
Or the amerge filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amerge=inputs=2 -ac 2 output.mp3
Downmix each input into specific output channel
Use the amerge and pan filters:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex "amerge=inputs=2,pan=stereo|c0<c0+c1|c1<c2+c3" output.mp3
mono + mono → stereo
Use the join filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex join=inputs=2:channel_layout=stereo output.mp3
Or amerge:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amerge=inputs=2 output.mp3
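If you also need to control which mono input ends up in which output channel, the join filter takes a map option; a minimal sketch with placeholder file names:
ffmpeg -i left.mp3 -i right.mp3 -filter_complex "join=inputs=2:channel_layout=stereo:map=0.0-FL|1.0-FR" output.mp3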
mono + mono → mono
Use the amix filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amix=inputs=2:duration=longest output.mp3
More info and examples
See FFmpeg Wiki: Audio Channels
Check this out:
ffmpeg -y -i ad_sound/whistle.mp3 -i ad_sound/4s.wav -filter_complex "[0:0][1:0] amix=inputs=2:duration=longest" -c:a libmp3lame ad_sound/outputnow.mp3
I think it will help.
The amix filter helps to mix multiple audio inputs into a single output.
If you run the following command:
ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT
This command will mix 3 input audio streams (my own example further below uses two mp3 files) into a single output with the same duration as the first input and a dropout transition time of 3 seconds.
The amix filter accepts the following parameters:
inputs:
The number of inputs. If unspecified, it defaults to 2.
duration:
How to determine the end-of-stream.
longest:
The duration of the longest input. (default)
shortest:
The duration of the shortest input.
first:
The duration of the first input.
dropout_transition:
The transition time, in seconds, for volume renormalization when an input stream ends. The default value is 2 seconds.
For example, I ran the following command on Ubuntu 16.04.1 with FFmpeg version 3.2.1-1:
ffmpeg -i background.mp3 -i bSound.mp3 -filter_complex amix=inputs=2:duration=first:dropout_transition=0 -codec:a libmp3lame -q:a 0 OUTPUT.mp3
-codec:a libmp3lame -q:a 0 is used to set a variable bit rate. Remember that you need the libmp3lame library installed for this. The command will also work without the -codec:a libmp3lame -q:a 0 part.
Reference: https://ffmpeg.org/ffmpeg-filters.html#amix
For merging two audio files with different volumes and different durations, the following command will work:
ffmpeg -y -i audio1.mp3 -i audio2.mp3 -filter_complex "[0:0]volume=0.09[a];[1:0]volume=1.8[b];[a][b]amix=inputs=2:duration=longest" -c:a libmp3lame output.mp3
Here duration can be changed to longest or shortest, and you can also adjust the volume levels according to your needs.
If you're looking to add background music to a voice track, use the following command; in the gaps between speech the music will automatically become louder:
ffmpeg -i bgmusic.mp3 -i audio.mp3 -filter_complex "[1:a]asplit=2[sc][mix];[0:a][sc]sidechaincompress=threshold=0.003:ratio=20[bg]; [bg][mix]amerge[final]" -map [final] final.mp3
Here threshold determines how loud the voice has to be before the music is ducked: the lower the threshold, the more the music is ducked. ratio determines how much the background is compressed: the higher the ratio, the stronger the compression.
If they are of different lengths, you can use apad to add silence to the shorter one.
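A minimal sketch of that approach, assuming two stereo mp3 files where input 0 is the shorter one: apad pads it with silence so that amerge (which otherwise stops at the shortest input) runs for the full length of input 1, and -ac 2 folds the merged four channels back down to stereo:
ffmpeg -i short.mp3 -i long.mp3 -filter_complex "[0:a]apad[a0];[a0][1:a]amerge=inputs=2" -ac 2 output.mp3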
With Bash
set 'amovie=a.mp3 [gg]; amovie=b.mp3 [hh]; [gg][hh] amerge'
ffmpeg -f lavfi -i "$1" -q 0 c.mp3
Example
You can use the following command arguments:
// Build the ffmpeg argument string and run it with MobileFFmpeg (Swift)
let commandValue = "-y -i \(recordedAudioPath) -i \(backgroundAudio) -filter_complex [\(0):a][\(1):a]amerge=inputs=\(2)[a] -map [a] -ac \(2) -shortest -preset ultrafast \(outputPath)"
MobileFFmpeg.execute(commandValue)

DirectShow stream using ffmpeg point to point streaming through TCP protocol

I had set up a point-to-point stream using ffmpeg over UDP and the stream worked, but there was screen tearing, etc. I already tried raising the buffer size, but it did not help. This is a work network, so the UDP protocol is not an option.
here is the full command:
ffmpeg -f dshow -i video="UScreenCapture" -r 30 -vcodec mpeg4 -q 12 -f mpegts udp://192.168.1.220:1234?pkt_size=188?buffer_size=65535
I've tried to make this work with TCP, with no success. Here's what I've got now:
ffmpeg -f dshow -i video="UScreenCapture" -f mpegts tcp://192.168.1.194:5555
This returns an error:
real-time buffer [UScreenCapture] [Video input] too full or near too full (323% of size: 3041280 [rtbufsize parameter])! frame dropped!
This last message repeated xxxx times (it went up to around 1400 and I just turned it off).
I've tried adding the -rtbufsize parameter and raising the buffer size up to 800000000, but it didn't help.
I would appreciate any suggestions on how to solve this.

Combine two ffmpeg commands

Is there a way to combine the following two ffmpeg commands into one?
ffmpeg -i OutputAudioEN.mp4 -acodec aac -strict -2 german.mp4
ffmpeg -i german.mp4 -c copy -f segment -segment_list audio-de.m3u8 -segment_time 10 output%03d.ts
Is it possible to use the output of the first command as the input to the second, without running two separate commands?
Well, here I'm turning my comment into a proper answer. What I'm suggesting is that you can segment the video directly and encode the audio with AAC at the same time. The following command works for me.
ffmpeg -i OutputAudioEN.mp4 -f segment -segment_list audio-de.m3u8 -segment_time 10 -acodec aac -strict -2 output%03d.ts
This way you can segment the video while the audio encoding happens at the same time.
Hope this helps you!

MPEG-TS audio synchronization lost on segmentation

I use FFmpeg (on Debian) to encode video for my iPhone and mediafilesegmenter (on Mac OS X Server) to segment it. This is my encoding command:
ffmpeg -i INPUT -y -acodec libfaac -ar 22000 -ab 40k -vcodec libx264 -b 600k \
-bt 600k -vpre slow -vpre baseline -threads 1 -level 30 -r 10 -s 400x224 \
-map_chapters -1:-1 -f ipod INPUT.mp4
When I play INPUT.mp4, the audio is fine.
But when I use the Apple segmenter (mediafilesegmenter), I get a desync between audio and video.
Is my command line wrong, or is it an Apple segmenter bug? mediastreamvalidator shows me:
WARNING: Media segment contains a video track but does not contain any IDR access unit with a SPS and a PPS
But I don't know if that is what causes the audio desync.
I have the latest mediafilesegmenter, downloaded from connect.apple.com.
