I noticed that the rsync tool on Linux has a --compress option. Suppose I need to copy millions of pictures from one directory to another on the same computer. Will I benefit from this compression option or not? Is compressing a good strategy for copying small pictures?
Probably not. I don't know exactly what you mean by "small pictures", but I am guessing that they are already compressed in some standard image format, e.g. JPEG or PNG. In that case --compress will burn CPU time for no gain, since already-compressed data won't shrink further. More fundamentally, --compress only compresses data in transit to save network bandwidth, and a local directory-to-directory copy involves no network at all.
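For a local copy, plain archive mode is usually all you need; a minimal sketch (the paths are placeholders):

# No -z: compression only saves network bandwidth, and there is no
# network here; JPEG/PNG payloads would not shrink anyway
rsync -a /path/to/pictures/ /path/to/destination/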
I see that the Logstash 1.4.2 tar install via the curl command below is around 140 MB, and I am wondering if there is a way to get a smaller-footprint download without the extra baggage of Kibana, Elasticsearch, and some of the filters, inputs, and outputs. Is it safe to purge the vendor directory?
The latest version, Logstash 1.5.0, appears to have grown even bigger and is about 160 MB.
I would appreciate any recommendations or input.
curl -s https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz | tar xz
Instead of manually deleting stuff from the Logstash distribution that you don't think you need in order to save a few tens of megabytes, just use a more lightweight shipper and do all processing on a machine that isn't so low on disk space. Some of your choices are logstash-forwarder, Log Courier, and NXLog. These are just a handful of megabytes each (and use far less RAM, since they don't run in the JVM).
Alternatively, NXLog's configuration language is quite rich, so you can probably do the processing you need on your leaf nodes without a separate log-processing machine. NXLog's overhead is quite small.
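If you are still tempted to trim the full distribution instead, it is worth checking where the bulk actually lives before deleting anything; a rough sketch using the same tarball as above:

# Unpack, then see which directories dominate the footprint
curl -s https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz | tar xz
du -sh logstash-1.4.2/* | sort -h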
I'm wondering if it's possible, with a command-line tool (ffmpeg or another), to trim empty space from the beginning or end of an audio track. Any advice or info would be amazing. Thanks.
Are you hoping to find a tool that automatically identifies and trims silence from the start of the track? That would be a little more complicated. However, if you know that you want to remove the first, e.g., 7.5 seconds of a PCM WAV file, use FFmpeg like this:
ffmpeg -i input.wav -ss 7.5 output.wav
The same command generally applies to an MP3 file. However, you have to be careful to avoid generational quality loss from inadvertently decompressing and then re-encoding the MP3 audio data. For this, ask FFmpeg to copy the codec data rather than transcoding it:
ffmpeg -i input.mp3 -acodec copy -ss 7.5 output.mp3
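If you do want automatic trimming, ffmpeg also has a silenceremove audio filter that can strip leading silence. A rough sketch (the threshold is a guess you will need to tune, and the audio gets re-encoded, so this is best done on the WAV rather than the MP3):

ffmpeg -i input.wav -af silenceremove=start_periods=1:start_threshold=-50dB output.wav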
Could having a very large PATH variable noticeably slow down your computer? If so, would it only slow down the computer when using terminal or would it slow down the machine in general?
Practically speaking, is it beneficial to keep a small PATH variable?
It should not noticeably slow down your computer as a whole. Most shells do at least some limited caching (the last time we ran ls, we found it in /usr/bin), on top of the fact that your system will generally have a significant amount of file system metadata cached. If you type a command that you haven't run before and it happens to be in the 200th directory in your PATH, or if your system is under significant virtual memory pressure (in which case everything is going to be slow anyway), you will probably notice some delay in launching the command, but the second time you run it the delay should be less noticeable.

This will be considerably worse if some of your PATH elements are on network file systems, slow devices like CD/DVD media, etc., or if you are on an ancient system that is either just plain slow by today's standards or has very little memory. I would recommend at least periodically reviewing your PATH for directories that no longer exist or are no longer used and pruning them out, but in general a longer PATH is not overly problematic.
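As a starting point for that review, a quick sketch that lists PATH entries that no longer exist (assumes no colons inside directory names):

echo "$PATH" | tr ':' '\n' | while read -r d; do
    [ -d "$d" ] || echo "missing: $d"
done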
If you find that it is a problem, you can create a small directory that contains symbolic links to the binaries you need from other paths, and/or small wrapper scripts that launch the appropriate applications, and include only that directory in your PATH (in addition to the system standard locations) rather than every individual directory that one or two binaries live in...
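A minimal sketch of that idea (all paths here are made up for illustration):

# Collect the few binaries you actually use into one directory
mkdir -p "$HOME/bin"
ln -s /opt/sometool/bin/sometool "$HOME/bin/sometool"
ln -s /usr/local/othertool/bin/othertool "$HOME/bin/othertool"
# ...then keep PATH short
export PATH="$HOME/bin:/usr/local/bin:/usr/bin:/bin"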
I am currently writing an application that reads frames from a camera, modifies them, and saves them into a video file. I'm planning to do it with ffmpeg, but there is hardly any documentation for this and I can't find a way. Does anyone know how to do it?
I need it to be done on Unix, in C or C++. Can anyone provide some instructions?
Thanks.
EDIT:
Sorry, I didn't write that clearly. I want a developer API to write frames to a video file: I open the camera stream, grab every single frame, and then save the frames to a video file using ffmpeg's public APIs. So the command-line tool doesn't actually help me. I have seen output_example.c under the ffmpeg source folder, and it's great that I may be able to copy parts of that code directly without changes, but I am still looking for an easier way.
Also, I'm thinking of porting my app to the iPhone, and as far as I know only ffmpeg has been ported to the iPhone. GStreamer is based on glib and it's all GNU stuff, so I'm not sure I could get it to work on the iPhone. ffmpeg is still the best choice for now.
Any comments are appreciated.
This might help get you started. Documentation is available, though newer features tend to be documented only in ffmpeg's man pages.
The frames need to be numbered sequentially.
ffmpeg -f image2 -framerate 25 -i frame_%d.jpg -c:v libx264 -crf 22 video.mp4
-f defines the format
-framerate defines the frame rate
-i defines the input file(s); %d matches sequentially numbered files, and you can add zeros for padding, e.g. %05d for zero-padded five-digit numbers
-c:v selects the video codec
-crf specifies the rate control method used to define how the x264 stream is encoded
video.mp4 is the output file
For more info, see the Slideshow guide.
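If your source frames aren't numbered sequentially yet, a small bash sketch like this (file names are hypothetical) can renumber them first:

# Renumber *.jpg in glob order as frame_00000.jpg, frame_00001.jpg, ...
n=0
for f in *.jpg; do
    printf -v name 'frame_%05d.jpg' "$n"
    mv -- "$f" "$name"
    n=$((n+1))
done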
If solutions other than ffmpeg are feasible for you, you might want to look at GStreamer. I think it might be just the right thing for your case, and there's quite a bit of documentation out there.
You can do what you require without using a library, since on Unix you can pipe raw RGBA data straight into another program. For example:
In your program:
#include <stdio.h>

char myimage[640*480*4];   /* one 640x480 RGBA frame */
// read data into myimage
fwrite(myimage, 1, 640*480*4, stdout);   /* fwrite, not fputs, for raw bytes */
And in a script that runs your program:
./myprogram | \
mencoder /dev/stdin -demuxer rawvideo -rawvideo w=640:h=480:fps=30:format=rgba \
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=9000000 \
-oac copy -o output.avi
I believe you can also use ffmpeg this way, or x264. You can also start the encoder from within your program and write to a pipe, making the whole process as simple as if you were using a library.
While this is not quite what you want, and it is not suitable for iPhone development, it does have the advantage that the encoder runs in a separate process, so Unix will automatically use a second processor for the encoding.
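For the ffmpeg route mentioned above, the equivalent pipeline might look roughly like this (untested sketch, same 640x480 RGBA frames at 30 fps):

./myprogram | \
ffmpeg -f rawvideo -pixel_format rgba -video_size 640x480 -framerate 30 \
       -i - -c:v mpeg4 -b:v 9000k output.avi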
I am trying to make an ffmpeg or x264 provider that will encode videos. I have been looking at some tools, but I don't know where to begin.
I would need to make my own API. I have done the same with ffmpeg for FLV1, but H.264 seems much different.
Can anyone give me some basics on where and how to start?
As I didn't get any answers and I resolved this myself, here is what I did, after losing a lot of time downloading many files and reading unclear documentation.
The best and most important tool here is AviSynth, which can load any kind of video through DirectShow and can do a great deal with its own scripting language. You then send that script to the x264.exe encoder, which creates the video stream; extract the sound as MP3 (also with an AviSynth plugin) and wrap both into an MP4 file with mp4box.exe.
All these jobs are done by running the processes from .NET and capturing their output.
My list of tools is:
AviSynth - the best thing for video ever made
ffmpeg - to extract images, though you can use it for other things if you like
x264 - to encode H.264 video from an .avs (AviSynth script)
mp4box - to combine the raw .264 stream with the MP3 into an MP4 container
SoundOut - AviSynth plugin to extract MP3 sound from the AviSynth video
yadif - AviSynth plugin for deinterlacing
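To make the pipeline concrete, the encode and mux steps look roughly like this (file names are illustrative, and the x264 build must have AviSynth input support):

x264 --crf 22 -o video.264 input.avs
mp4box -add video.264 -add audio.mp3 -new output.mp4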
My choice would be to use Mencoder.
Try to find a binary version that has pre-compiled support for x264 (or compile your own!) so you can use it for H.264 encoding. To see which codecs your Mencoder binary supports, try this command:
mencoder -ovc help
If you get x264 somewhere in that list, you are good to go.
After that, you can use Mencoder to transcode any kind of video to H.264. Please check the mencoder manual here to get started:
http://www.mplayerhq.hu/DOCS/HTML/en/menc-feat-x264.html
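For example, a basic transcode to H.264 might look like this (the settings are illustrative; see the manual above for proper tuning):

mencoder input.avi -o output.avi -oac copy \
    -ovc x264 -x264encopts bitrate=1500:threads=auto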