I am trying to make an ffmpeg or x264 provider that will encode videos. I have been looking at some tools, but I am not sure where to begin.
I would need to make my own API. I have done the same for ffmpeg with FLV1, but H.264 seems much different.
Can anyone give me some basics on where and how to start?
Ah, since I didn't get any answers and I resolved this myself, here is what I did.
After losing a lot of time downloading many files and reading unclear documentation...
The best and most important tool here is AviSynth, which can load almost any kind of video through DirectShow and do a great deal with its own script language. You then feed that script to the x264.exe encoder, which produces the raw video stream; combine that with the MP3 audio (also extracted using an AviSynth plugin) and wrap both into an MP4 file with mp4box.exe.
All of these jobs are done by running each tool as a process from .NET and capturing its output.
My list of tools is:
avisynth - the best thing for video ever made
ffmpeg - to extract images, but you can use it for other things if you like
x264 - to produce an H.264 stream from an AVS (AviSynth script)
mp4box - to combine the raw .264 stream with the MP3 into an MP4
soundout - AviSynth plugin to extract the MP3 sound from the AviSynth video
yadif - AviSynth plugin for deinterlacing
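To make that pipeline concrete, here is a minimal sketch (in Python rather than .NET, with illustrative file names and flags; adjust everything to your own setup) of driving x264 and MP4Box as child processes:

```python
import subprocess

def build_pipeline(avs_script, mp3_file, out_mp4):
    """External commands for the AVS -> x264 -> MP4Box pipeline.

    File names and flags here are illustrative, not the author's exact ones.
    """
    raw_video = "video.264"
    return [
        # encode the AviSynth script to a raw H.264 stream
        ["x264", "--crf", "22", "-o", raw_video, avs_script],
        # mux the raw stream and the extracted MP3 into an MP4 container
        ["MP4Box", "-add", raw_video, "-add", mp3_file, out_mp4],
    ]

def run_pipeline(commands):
    # run each tool as a child process, as the .NET version does
    for cmd in commands:
        subprocess.run(cmd, check=True)
```

The point is simply that each step is a separate external process; error handling (`check=True`) stops the pipeline if any tool fails.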
My choice would be to use Mencoder.
Try to find a binary version that has pre-compiled support for x264 (or compile your own!) in order to use it for H.264 encoding. To see which codecs your Mencoder binary supports, try the command
mencoder -ovc help
If you get x264 somewhere in that list, you are good to go.
After that, you can use Mencoder to transcode any kind of video to H.264. Please check the mencoder manual here to get started:
http://www.mplayerhq.hu/DOCS/HTML/en/menc-feat-x264.html
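If you want to automate that check, here is a small Python sketch; note that the exact `-ovc help` listing format varies between mencoder builds, so the parsing here is an assumption:

```python
import subprocess

def mencoder_supports_x264(help_text=None):
    """Check whether an mencoder build lists x264 as an output codec.

    If help_text is None, runs `mencoder -ovc help` and scans its output.
    The listing format varies between builds, so this parse is a guess.
    """
    if help_text is None:
        result = subprocess.run(["mencoder", "-ovc", "help"],
                                capture_output=True, text=True)
        help_text = result.stdout + result.stderr
    # codec names are typically listed one per line, name first
    return any(line.strip().startswith("x264")
               for line in help_text.splitlines())
```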
I'm working on a small project involving QuickTime media file (*.mov) playback: a simple output to a specialized video card I have in my university lab. The only manufacturer-supported way to work with this video card on Windows is through DirectShow filters. But since I have to use QuickTime video files as the stream source, I run into a problem with DirectShow: I can't find any way to demultiplex the source file. There is no problem extracting the audio stream from a QT file, but I cannot find any demultiplexer that can actually split out the video stream.
So far I have tried Haali Splitter, which was recommended for *.mov files by one of my professors, but it's unable to correctly split a QuickTime file into audio and video streams. Are there any other alternatives? Preferably free or open source, since while I'm ready to spend a bit on a QuickTime source or splitter filter, most of what I found is ridiculously expensive.
I also found a filter developed by River Past that can work as a DirectShow source filter. But for some reason, while it works fine with WMP and GraphEdit, it refuses to work at all when I try to use it from my program or even in third-party graph editing tools; it just throws "UNSPECIFIED ERROR", which doesn't make any sense. And GraphEditPlus can't even load this particular filter. So apparently this filter has some kind of mechanism preventing its use with anything but the original Microsoft GraphEdit and WMP.
Also, is there any description of the QuickTime MOV file format? I was thinking about trying to write my own demultiplexer, but I am unable to find any complete documentation describing the format.
Try the MP4 demux filter at http://gdcl.co.uk/mpeg4. It works with many/most MOV files and is open source.
G
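On the format-documentation question: a MOV/MP4 file is a sequence of "atoms" (boxes), each starting with a 4-byte big-endian size (which includes the 8-byte header itself) followed by a 4-byte type code such as ftyp, moov, or mdat; Apple publishes the full QuickTime File Format specification, and the ISO base media file format (MP4) is closely related. A minimal parsing sketch:

```python
import struct

def list_atoms(data):
    """List top-level (type, size) atoms of a QuickTime/MP4 byte string.

    Handles only the plain 32-bit size form; real files may also use
    size == 1 (a 64-bit size follows) or size == 0 (atom runs to EOF).
    """
    atoms = []
    offset = 0
    while offset + 8 <= len(data):
        size, kind = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:
            break  # extended or malformed size, not handled in this sketch
        atoms.append((kind.decode("ascii"), size))
        offset += size
    return atoms
```

A demultiplexer would then descend into moov to read the track tables, but the top-level walk above is the starting point.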
I am looking for a solution to convert SWF into FLV in a batch,
for example using a command line tool or an SDK.
There are solutions, but they are very expensive, like Moyea SWF Video Converter.
Can you please help me find a free or inexpensive solution for converting SWF into FLV?
Thank you very much.
I found a solution: I am using CoolUtils Total Converter.
Inside the program there is a way to run it from the command line.
There's a Perl module for this, but ffmpeg should also be able to do it.
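For the batch part, here is a small sketch of driving ffmpeg over a list of files. One caveat: ffmpeg can only convert SWF files that contain an embedded video stream; it cannot render vector animation. The helper names and default flags are illustrative:

```python
import subprocess

def convert_cmd(swf_path):
    # hypothetical helper: build the ffmpeg command for one file,
    # letting ffmpeg pick default FLV settings from the output name
    flv_path = swf_path.rsplit(".", 1)[0] + ".flv"
    return ["ffmpeg", "-i", swf_path, flv_path]

def batch_convert(swf_paths, run=subprocess.run):
    # convert a whole batch, one child process per file
    for path in swf_paths:
        run(convert_cmd(path), check=True)
```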
I can suggest converting your SWF files using an SWF video converter.
I'm using this one, which can convert SWF files to more than 200 formats, and it's VERY fast.
Is it possible to open incomplete video-files for playback using directshow?
The current solution first downloads the video file (.avi container; the contents can be H.264, MPEG-2, or MPEG-4) and then starts playback. This can of course be a rather lengthy operation.
The downloader fetches the video file in chunks from a database, so in theory it should be possible to open the file during the download.
Is it possible to create a Directshow graph that can start the playback during download even if the file is incomplete when playback starts?
The software is written in C++ on both the server and the client.
Thanks,
At the least, VLC (http://en.wikipedia.org/wiki/VLC_media_player#cite_note-12) will probably do it...
As far as I'm aware, though, you should be able to start the graph as soon as the file exists, as long as playback doesn't catch up with a part of the file that hasn't been written yet.
Or are you looking for some filter that will "wait patiently" before resuming playback?
I have two MP4 video files on a web server. I wanted to play them in a Flash (FLV) player on my ASP.NET page, but I couldn't get them to play. I also tried playing them in QuickTime Player and the same problem occurred, even though I was giving the correct path and there were no spaces in the MP4 file names, etc.
Does the web server need an MP4 player (or codec, etc.) installed?
I also have some WMV files on that server, and I am playing them perfectly using a Silverlight player and a media player object on my website.
So please share your knowledge... thanks in advance...
You need to convert them to FLV first. I use a program called AVS Video Converter; it's not free, but it is a great tool.
I am currently writing an application that reads frames from a camera, modifies them, and saves them into a video file. I'm planning to do it with ffmpeg. There is very little documentation for ffmpeg's libraries, and I can't find a way. Does anyone know how to do it?
I need it done on Unix, in C or C++. Can anyone provide some instructions?
Thanks.
EDIT:
Sorry, I didn't write that clearly. I want developer APIs for writing frames to a video file: I open the camera stream, get every single frame, and then save the frames into a video file using the APIs available in ffmpeg's public libraries. So using the command line tool doesn't actually help me. I have seen output_example.c under the ffmpeg src folder, and it's great that I can copy some parts of that code directly without changes, but I am still looking for an easier way.
Also, I'm thinking of porting my app to the iPhone. As far as I know, only ffmpeg has been ported to the iPhone; GStreamer is based on glib and it's all GNU stuff, so I'm not sure I could get it working there. So ffmpeg is still the best choice for now.
Any comments are appreciated.
This might help get you started - the documentation is available, but newer features tend to be documented in ffmpeg's man pages.
The frames need to be numbered sequentially.
ffmpeg -f image2 -framerate 25 -i frame_%d.jpg -c:v libx264 -crf 22 video.mp4
-f defines the format
-framerate defines the frame rate
-i defines the input file(s); %d matches numbered files, and you can add zeros to specify padding, e.g. %05d for zero-padded five-digit numbers
-c:v selects the video codec (libx264 here)
-crf selects the rate control method, which defines how the x264 stream is encoded
video.mp4 is the output file
For more info, see the Slideshow guide.
If solutions other than ffmpeg are feasible for you, you might want to look at GStreamer. I think it might be just the right thing for your case, and there's quite a lot of documentation out there.
You can do what you require without using a library: on Unix you can pipe RGBA data into another program. So you can do the following.
In your program:
unsigned char myimage[640*480*4];
// read data into myimage, then write the raw frame to stdout
fwrite(myimage, 1, 640*480*4, stdout);
And in a script that runs your program:
./myprogram | \
mencoder /dev/stdin -demuxer rawvideo -rawvideo w=640:h=480:fps=30:format=rgba \
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=9000000 \
-oac copy -o output.avi
I believe you can also use ffmpeg this way, or x264. You can also start the encoder from within your program and write to a pipe, making the whole process as simple as if you were using a library.
While not quite what you want, and not suitable for iPhone development, it does have the advantage that Unix will automatically use a second processor for the encoding.
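The same pipe idea works with ffmpeg reading raw frames from its standard input. Here is a sketch in Python; the rawvideo demuxer options shown are standard ffmpeg flags, but the frame size, rate, and codec are illustrative:

```python
import subprocess

def encoder_cmd(out_file, width=640, height=480, fps=30):
    # standard ffmpeg rawvideo-demuxer flags: read RGBA frames from stdin
    return ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "rgba",
            "-s", f"{width}x{height}", "-r", str(fps),
            "-i", "-", "-c:v", "libx264", out_file]

def open_encoder(out_file, **kw):
    # spawn ffmpeg with a pipe; write width*height*4 bytes per frame to
    # proc.stdin, then close it and wait() so the file is finalized
    return subprocess.Popen(encoder_cmd(out_file, **kw),
                            stdin=subprocess.PIPE)
```

As with the mencoder version, the encoder runs as a separate process, so a second CPU can pick up the encoding work while your program produces frames.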