I am currently writing an application that reads frames from a camera, modifies them, and saves them into a video file. I'm planning to do it with ffmpeg, but there is very little documentation for it and I can't find a way. Does anyone know how to do this?
It needs to be done on Unix, in C or C++. Can anyone provide some instructions?
Thanks.
EDIT:
Sorry, I didn't write that clearly. I'm looking for developer APIs that let me write frames to a video file. I open the camera stream, grab every single frame, and then want to save them into a video file using ffmpeg's public APIs, so the command-line tool doesn't actually help me. I've seen output_example.c under the ffmpeg source folder, and it's great that I may be able to copy parts of that code directly without changes, but I'm still looking for an easier way.
Also, I'm thinking of porting my app to the iPhone. As far as I know, only ffmpeg has been ported to the iPhone. GStreamer is based on glib, and it's all GNU stuff, so I'm not sure whether I can get it working on the iPhone. ffmpeg is still the best choice for now.
Any comments are appreciated.
This might help get you started - the documentation is available, but newer features tend to be documented in ffmpeg's man pages.
The frames need to be numbered sequentially.
ffmpeg -f image2 -framerate 25 -i frame_%d.jpg -c:v libx264 -crf 22 video.mp4
-f sets the input format (image2, the image-sequence demuxer)
-framerate sets the input frame rate
-i specifies the input file(s); %d matches sequentially numbered files. Add zeros
to specify padding, e.g. %05d for zero-padded five-digit numbers (see the example below).
-c:v selects the video codec (libx264 here)
-crf selects constant-rate-factor rate control, which determines how the x264 stream is
encoded (lower values mean higher quality and larger files)
video.mp4 is the output file
For more info, see the Slideshow guide.
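If your frames use zero-padded names as mentioned above, the same command looks like this; note that the -pix_fmt yuv420p flag is an extra suggestion, not part of the command above, added because many players only handle that pixel format:
ffmpeg -f image2 -framerate 25 -i frame_%05d.jpg -c:v libx264 -crf 22 -pix_fmt yuv420p video.mp4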
If solutions other than ffmpeg are feasible for you, you might want to look at GStreamer. I think it might be just the right thing for your case, and there's quite a bit of documentation out there.
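To give a feel for it, here is a rough, untested sketch of a capture-and-encode pipeline using the gst-launch tool; the element names (v4l2src, x264enc, mp4mux) are assumptions about what your platform and GStreamer build provide, and in an application you would build the same pipeline through the C API instead:
gst-launch-1.0 -e v4l2src ! videoconvert ! x264enc ! h264parse ! mp4mux ! filesink location=output.mp4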
You can do what you require without using a library: on Unix you can pipe raw RGBA data into another program, so you can do:
In your program:
unsigned char myimage[640*480*4];
// read one RGBA frame from the camera into myimage
fwrite(myimage, 1, 640*480*4, stdout);   // fwrite, not fputs; repeat once per frame
And in a script that runs your program:
./myprogram | \
mencoder /dev/stdin -demuxer rawvideo -rawvideo w=640:h=480:fps=30:format=rgba \
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=9000000 \
-oac copy -o output.avi
I believe you can also use ffmpeg this way, or x264. You can also start the encoder from within your program and write to a pipe, making the whole process about as simple as if you were using a library.
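As a minimal, untested sketch of that last idea (the resolution, frame rate, frame count and output settings below are placeholders you would adapt; it assumes ffmpeg is on the PATH and accepts raw RGBA video on stdin):
#include <stdio.h>
#include <string.h>

int main(void)
{
    static unsigned char frame[640*480*4];   /* one RGBA frame */

    /* launch the encoder and get a pipe to its stdin */
    FILE *enc = popen("ffmpeg -y -f rawvideo -pixel_format rgba "
                      "-video_size 640x480 -framerate 30 -i - "
                      "-c:v libx264 -pix_fmt yuv420p -crf 22 output.mp4", "w");
    if (!enc)
        return 1;

    for (int i = 0; i < 300; i++) {          /* 300 frames = 10 seconds at 30 fps */
        /* fill 'frame' with real camera data here; this placeholder is solid gray */
        memset(frame, 128, sizeof frame);
        fwrite(frame, 1, sizeof frame, enc);
    }

    pclose(enc);   /* closing the pipe lets ffmpeg finish writing the file */
    return 0;
}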
While not quite what you want, and not suitable for iPhone development, it does have the advantage that Unix will automatically use a second processor for the encoding.
Related
I noticed that the rsync tool on Linux has a --compress option. Suppose I need to copy millions of pictures from one directory to another on the same computer. Will I benefit from this compression option or not? Is compressing a good strategy for copying small pictures?
Probably not. I don't know what exactly you mean by "small pictures", but I am guessing that they are already compressed in some standard image format, e.g. JPEG or PNG. In that case, --compress will take time, but give no benefit in compression.
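As an illustration (the paths here are placeholders), a plain archive copy without -z is usually the better fit for a local transfer of already-compressed images:
rsync -a --progress /data/pictures/ /backup/pictures/
-z/--compress only pays off when the data is compressible and is travelling over a slow network link; for a local disk-to-disk copy it just burns CPU.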
Using Automator on OSX
Passing selected files/folders to rsync using a Service. It is working and the copies are successful. However, I want two more things to happen, and I have one question about how to reference the "current" user's desktop in the destination.
I want it to open Terminal and visibly show the copy progress using the --progress option that rsync offers. A GUI window would be nice, but I want to keep it simple for now.
I would like a simple "completed" message to be displayed somehow, either in the Terminal window or as a GUI message.
Where First.Last is used in the destination path, how can this be changed to simply use the current user's desktop?
Below is what is working now, but without the three things mentioned above.
for f in "$@"; do
    /usr/bin/rsync --verbose --progress --times "$f" /Users/First.Last/Desktop/copy
    echo "$f COMPLETED"
done
Thanks for any and all suggestions.
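For the third point, one untested approach (a sketch, not something from the thread above) is to rely on the HOME environment variable instead of a hard-coded user name:
for f in "$@"; do
    /usr/bin/rsync --verbose --progress --times "$f" "$HOME/Desktop/copy"
    echo "$f COMPLETED"
done
Whether Automator's shell environment sets HOME to the logged-in user's home directory is something to verify on your system.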
I am working with Network Shell (nsh; BMC software), which I believe is based on zsh 4.3.4. I have written a script that connects to a list of Solaris machines, runs numerous commands, and then creates some local directories and files based on the output of those commands.
I am looking for a way to display the script's progress, since it can take some time depending on the number of servers. I've been told I should use pv or dialog, but when I attempt to run those commands in nsh I get "command not found." It could be a limitation of nsh.
As a simple example, I want to see the progress of the following:
for i in $(cat serverlist.txt)
do
nexec -i $i hostname >> hosts.txt
done
Of course my script is a lot more complex than this, but I cannot seem to get it working correctly since pv and dialog are not available. Also, I know I should be using read -r to read the file line by line, but that doesn't appear to work correctly either.
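For what it's worth, a plain counter is one way to show progress when pv and dialog are unavailable; this is an untested sketch that assumes nsh supports these basic POSIX shell constructs (wc, printf, arithmetic expansion, reading from a redirected file), which may not hold:
total=$(wc -l < serverlist.txt)
count=0
while read -r i; do
    count=$((count + 1))
    printf '[%d/%d] %s\n' "$count" "$total" "$i"
    nexec -i "$i" hostname >> hosts.txt
done < serverlist.txt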
I'm wondering if it's possible with a command-line tool (ffmpeg or other) to trim empty space from the beginning or end of an audio track. If anyone knows anything about this, some advice or info would be amazing. Thanks.
Are you hoping to find a tool that automatically identifies and trims silence from the start of the track? That would be a little more complicated. However, if you know that you want to remove the first, e.g., 7.5 seconds of a PCM WAV file, use FFmpeg like this:
ffmpeg -i input.wav -ss 7.5 output.wav
The same command generally applies for a .MP3 file. However, you have to be careful to avoid generational quality loss by inadvertently decompressing and then re-encoding the MP3 audio data. For this, you ask FFmpeg to copy the codec data rather than transcoding it:
ffmpeg -i input.mp3 -acodec copy -ss 7.5 output.mp3
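If you do want the silence found automatically, newer ffmpeg builds include a silenceremove audio filter; this is only a sketch, and the threshold/duration values are guesses you would tune for your material (note that a filter cannot be combined with -acodec copy, so the MP3 case would be re-encoded):
ffmpeg -i input.wav -af silenceremove=start_periods=1:start_threshold=-50dB:start_duration=0.2 output.wav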
I am trying to make an ffmpeg or x264 provider that will encode videos. Well, I have been looking at some tools and such, and I don't know...
I would need to make my own API. I have done the same for ffmpeg with FLV1, but H.264 seems much different.
Can anyone give me some basics on where and how to start?
Ah, as I didn't get any answers and I did resolve this, here is what I did.
After losing a lot of time, downloading many files, reading unclear documentation, and so on...
The best and most important thing here is the AviSynth tool, which can load any kind of video through DirectShow and do really a lot (using its own script language). You then feed that script to the x264.exe encoder, which creates the video stream; you combine that with the MP3 (also extracted using an AviSynth plugin) and wrap both into an MP4 file with mp4box.exe.
All of these jobs are done by running the processes from .NET and capturing their output.
My list of tools is:
avisynth - best thing for video ever made
ffmpeg - to get images out, but you can use it for other things if you like
x264 - to get H.264 video out of an .avs (AviSynth script)
mp4box - to combine the .264 file with the MP3 into an MP4
soundout - AviSynth plugin to extract the MP3 sound from the AviSynth video
yadif - AviSynth plugin (a deinterlacer) for some of the processing steps
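To make that pipeline concrete, here is a rough, untested sketch; the file names and option values are placeholders, and it assumes your x264.exe build has AviSynth input support and that the DirectShowSource/ConvertToYV12 functions are available:
input.avs (the AviSynth script):
DirectShowSource("input.avi")
ConvertToYV12()
Then, from the command line (or a .NET Process):
x264.exe --crf 22 --output video.264 input.avs
mp4box.exe -add video.264 -add audio.mp3 -new output.mp4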
My choice would be to use Mencoder.
Try to find a binary version that has pre-compiled support for x264 (or compile your own!) in order to use it for H.264 encoding. To see which codecs your Mencoder binary supports, try the command
mencoder -ovc help
If you get x264 somewhere in that list, you are good to go.
After that, you can use Mencoder to transcode any kind of video to H.264. Please check the mencoder manual here to get started:
http://www.mplayerhq.hu/DOCS/HTML/en/menc-feat-x264.html
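Once x264 shows up in that list, a typical transcode might look something like this untested sketch (the input file name and the crf value are placeholders you would tune):
mencoder input.avi -ovc x264 -x264encopts crf=22 -oac mp3lame -o output.avi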