Is there a way to combine the following two ffmpeg commands into one?
ffmpeg -i OutputAudioEN.mp4 -acodec aac -strict -2 german.mp4
ffmpeg -i german.mp4 -c copy -f segment
-segment_list audio-de.m3u8 -segment_time 10 output%03d.ts
Is it possible to use the output of the first command as the input to the second, without running two separate commands?
Well, here I'm turning my comment into a proper answer. What I'm suggesting is that you can segment the video directly and encode the audio to AAC at the same time, according to your needs. The following command works for me.
ffmpeg -i OutputAudioEN.mp4 -f segment -segment_list audio-de.m3u8 -segment_time 10 -acodec aac -strict -2 output%03d.ts
This way you can segment the video while the audio encoding happens at the same time.
Hope this helps you!
I'm streaming, and I successfully output the converted video into an MKV file.
var stream = ...; // incoming stream of type MemoryStream
var process = new System.Diagnostics.Process();
process.StartInfo.FileName = "ffmpeg";
process.StartInfo.Arguments = "-f mp4 -i pipe:0 -c:v libx264 -crf 20 -s 600:400 -f matroska myfile.mkv";
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardInput = true;
process.StartInfo.RedirectStandardOutput = true;
process.Start();
process.StandardInput.BaseStream.Write(stream.ToArray(), 0, (int)stream.Length);
process.BeginOutputReadLine();
process.StandardInput.BaseStream.Close();
process.WaitForExit(1000);
My question is: how can I change the implementation (the command and StandardOutput.BaseStream) so that the result is written to a memory stream rather than to a file, as in the example above?
What you were doing in the just-deleted question was correct:
-f mp4 -i pipe:0 -c:v libx264 -crf 20 -s 600:400 -f matroska pipe:1
lets you write the output to a pipe. But you'll likely experience a deadlock, because FFmpeg does not read the entire content of stream.ToArray() at once; it only reads what it needs to produce the next output data frame.
So you need to run at least two threads in your program: one to write to FFmpeg and another to read from FFmpeg. Unfortunately, I'm not familiar with .NET, so I can't show you exactly how, but hopefully this answer points you in the right direction.
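For what it's worth, here is a rough, untested sketch of that two-thread idea, reusing the stream variable from the question and assuming ffmpeg is on the PATH:
// Rough sketch: one task feeds FFmpeg's stdin, the current thread drains stdout.
var process = new System.Diagnostics.Process();
process.StartInfo.FileName = "ffmpeg";
process.StartInfo.Arguments = "-f mp4 -i pipe:0 -c:v libx264 -crf 20 -s 600:400 -f matroska pipe:1";
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardInput = true;
process.StartInfo.RedirectStandardOutput = true;
process.Start();

// Writer task: push the whole input to stdin, then close it so FFmpeg sees end-of-input.
var writer = System.Threading.Tasks.Task.Run(() =>
{
    stream.Position = 0;
    stream.CopyTo(process.StandardInput.BaseStream);
    process.StandardInput.BaseStream.Close();
});

// Reader (current thread): drain stdout into a MemoryStream.
var output = new System.IO.MemoryStream();
process.StandardOutput.BaseStream.CopyTo(output);

writer.Wait();
process.WaitForExit();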
I need to record video from RaspberryPi, using this Bash script:
#!/bin/sh
/usr/bin/ffmpeg -f video4linux2 -input_format h264 -video_size 320x240 -framerate 15 -i /dev/video0 -vcodec copy -an "/var/ayron/videotrap/videos/pctrace_$(date +"%Y_%m_%d_%H_%M_%S").h264"
This way, I can record the date and time at which recording starts. But I also need to show the date and time on the video during recording. Which filter should I use?
Thanks for your help.
Use the drawtext filter:
/usr/bin/ffmpeg -f video4linux2 -input_format h264 -video_size 320x240 -framerate 15 -i /dev/video0 -an -vf "drawtext=text='%{localtime\:%Y_%m_%d_%H_%M_%S}'" "/var/ayron/videotrap/videos/pctrace_$(date +"%Y_%m_%d_%H_%M_%S").h264"
You can't filter and stream copy the video at the same time so -vcodec copy has been omitted.
If you want to use colons (:) in the time, you'll have to do some ugly escaping, as shown in How to drawtext colon with localtime in ffmpeg -filter_complex?
Can I overlay/downmix two audio mp3 files into one mp3 output file using ffmpeg?
stereo + stereo → stereo
Normal downmix
Use the amix filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amix=inputs=2:duration=longest output.mp3
Or the amerge filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amerge=inputs=2 -ac 2 output.mp3
Downmix each input into specific output channel
Use the amerge and pan filters:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex "amerge=inputs=2,pan=stereo|c0<c0+c1|c1<c2+c3" output.mp3
mono + mono → stereo
Use the join filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex join=inputs=2:channel_layout=stereo output.mp3
Or amerge:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amerge=inputs=2 output.mp3
mono + mono → mono
Use the amix filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amix=inputs=2:duration=longest output.mp3
More info and examples
See FFmpeg Wiki: Audio Channels
Check this out:
ffmpeg -y -i ad_sound/whistle.mp3 -i ad_sound/4s.wav -filter_complex "[0:0][1:0] amix=inputs=2:duration=longest" -c:a libmp3lame ad_sound/outputnow.mp3
I think it will help.
The amix filter helps to mix multiple audio inputs into a single output.
If you run the following command:
ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT
This command will mix 3 input audio streams (in the example further below I used two MP3 files) into a single output with the same duration as the first input and a dropout transition time of 3 seconds.
The amix filter accepts the following parameters:
inputs:
The number of inputs. If unspecified, it defaults to 2.
duration:
How to determine the end-of-stream.
longest:
The duration of the longest input. (default)
shortest:
The duration of the shortest input.
first:
The duration of the first input.
dropout_transition:
The transition time, in seconds, for volume renormalization when an input stream ends. The default value is 2 seconds.
For example, I ran the following command on Ubuntu 16.04.1 with FFmpeg 3.2.1-1:
ffmpeg -i background.mp3 -i bSound.mp3 -filter_complex amix=inputs=2:duration=first:dropout_transition=0 -codec:a libmp3lame -q:a 0 OUTPUT.mp3
-codec:a libmp3lame -q:a 0 was used to set a variable bit rate. Remember that you may need to install the libmp3lame library if it is not already present. But the command will work even without the -codec:a libmp3lame -q:a 0 part.
Reference: https://ffmpeg.org/ffmpeg-filters.html#amix
For merging two audio files with different volumes and different durations, the following command will work:
ffmpeg -y -i audio1.mp3 -i audio2.mp3 -filter_complex "[0:0]volume=0.09[a];[1:0]volume=1.8[b];[a][b]amix=inputs=2:duration=longest" -c:a libmp3lame output.mp3
Here duration can be changed to longest or shortest, and you can also adjust the volume levels according to your needs.
If you're looking to add background music to a voice recording, use the following command; in the gaps between speech the music will automatically become louder:
ffmpeg -i bgmusic.mp3 -i audio.mp3 -filter_complex "[1:a]asplit=2[sc][mix];[0:a][sc]sidechaincompress=threshold=0.003:ratio=20[bg]; [bg][mix]amerge[final]" -map [final] final.mp3
Here threshold decides how loud the voice has to be before the music is ducked: the lower the threshold, the more readily the music gets compressed. ratio gives how much the other audio should be compressed: the higher the ratio, the stronger the compression.
If they are of different lengths, you can use apad to append silence to the shorter one, as in the sketch below.
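For example, a minimal sketch (assuming input0.mp3 is the shorter file; amerge stops when the un-padded input ends):
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex "[0:a]apad[padded];[padded][1:a]amerge=inputs=2" -ac 2 output.mp3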
With Bash
set 'amovie=a.mp3 [gg]; amovie=b.mp3 [hh]; [gg][hh] amerge'
ffmpeg -f lavfi -i "$1" -q 0 c.mp3
Example
You can use the following command arguments:
// Command is here
let commandValue = "-y -i \(recordedAudioPath) -i \(backgroundAudio) -filter_complex [\(0):a][\(1):a]amerge=inputs=\(2)[a] -map [a] -ac \(2) -shortest -preset ultrafast \(outputPath)"
MobileFFmpeg.execute(commandValue)
Since MJPEG over HTTP basically consists of the transmission of a series of JPEG images separated by a defined separator, how does MPlayer recognize that it is an MJPEG stream?
Thank you
Have a look at:
MplayerMjpegStreamViewing < Motion < Foswiki
e.g.
mplayer -fps 4 -demuxer lavf http://rpi-6:8080/?action=stream
does the job for me. It is suitable for a streaming server running on a Raspberry Pi like this:
/usr/local/bin/mjpg_streamer -o output_http.so -w ./www -i input_raspicam.so -x 1920 -y 1440 -fps 3 -hf -vf
What's the simplest way for running a command like this
ffmpeg -i MVI_NNNN.MOV -sameq -ar 22050 MVI_NNNN.mp4
on all .MOV files in a directory? The input filename MVI_NNNN.MOV would be something like MVI_0849.MOV and the output should preserve the file number, so MVI_0849.mp4.
Try a for loop:
for i in *.MOV
do
ffmpeg -i "$i" -sameq -ar 22050 "${i%.MOV}.mp4"
done
${i%.MOV}.mp4 removes the .MOV suffix and appends .mp4.
The double quotation marks ("...") are needed if filenames contain whitespace.
Using GNU parallel, running one ffmpeg instance per CPU core to speed things up:
$ parallel ffmpeg -i {} -sameq -ar 22050 {.}.mp4 ::: *.MOV
See the manual for tweaks.
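For example, to cap the number of simultaneous jobs instead of running one per core (the limit of 4 here is just an illustrative choice):
$ parallel -j 4 ffmpeg -i {} -sameq -ar 22050 {.}.mp4 ::: *.MOV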