FFmpeg taking too much time to encrypt a video file.
I am running the following command: ffmpeg -i "Sample.mp4" -hls_time 10 -hls_list_size 0 -hls_key_info_file enc.keyinfo "sample.m3u8"
Encrypting a 708 MB video file takes around 22 minutes, which is too long.
Is there any way to decrease the time without changing the video quality?
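The AES-128 encryption itself is cheap; the time is almost certainly going into re-encoding, because no codec is specified and FFmpeg transcodes by default. A sketch of the same command with stream copy, assuming the source streams are already HLS-compatible (H.264 video, AAC audio):

```shell
# Copy the existing audio/video bitstreams instead of re-encoding them;
# the AES-128 segment encryption adds negligible CPU cost on top.
ffmpeg -i "Sample.mp4" -c copy \
  -hls_time 10 -hls_list_size 0 \
  -hls_key_info_file enc.keyinfo "sample.m3u8"
```

Quality is untouched because the bitstream is copied, not re-compressed. If the source is not H.264/AAC this will fail, and the next option is a hardware encoder (e.g. -c:v h264_nvenc, if your build and GPU support it).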
I am trying to convert a *.rtpdump file, captured with Wireshark, into a WAV file with SoX.
In Wireshark the original capture plays without any stuttering, but when I convert it to WAV via SoX (on Windows), there is a continuous stuttering sound throughout the clip and the actual voice remains in the background.
I tried u-law encoding, a-law, and others; the best result is with u-law, but even that is barely audible. I tried lowpass, gain, and treble filters, but they do not help, and changing channels, bit rate, and other options makes it worse.
I have tried many things, but the stuttering remains:
sox.exe -t raw -r 8000 -e u-law -c 1 66.rtpdump -t wav d:\out.wav -V
sox.exe -t raw -r 8000 -e a-law -c 1 66.rtpdump -t wav d:\out.wav -V
The first few bytes within each packet were causing the stuttering sound.
I removed these bytes and then concatenated all the packets without them, which produced stutter-free audio.
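The bytes in question are the rtpdump per-packet header plus the RTP header, which SoX was interpreting as raw u-law samples. A minimal Python sketch of the stripping step, assuming the textual "#!rtpplay1.0 ..." line and the 16-byte binary file header have already been skipped, every record is RTP (not RTCP), and no CSRC list or header extension is present (the function name is mine, not from the answer):

```python
import struct

RTPDUMP_PKT_HDR = 8   # per-record header in rtpdump files: length, plen, offset
RTP_HDR = 12          # fixed RTP header (assumes no CSRC list / extension)

def strip_rtp_headers(data: bytes) -> bytes:
    """Concatenate the raw audio payloads from an rtpdump packet stream."""
    out = bytearray()
    pos = 0
    while pos + RTPDUMP_PKT_HDR <= len(data):
        # rtpdump record header: uint16 length (incl. this header),
        # uint16 plen (original RTP packet length), uint32 offset (ms)
        length, _plen, _offset = struct.unpack_from("!HHI", data, pos)
        pkt = data[pos + RTPDUMP_PKT_HDR : pos + length]
        out += pkt[RTP_HDR:]          # drop the 12-byte RTP header
        pos += length
    return bytes(out)
```

The cleaned payload can then be fed to SoX as raw u-law, e.g. sox -t raw -r 8000 -e u-law -c 1 clean.raw out.wav.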
I have a streaming transcoder that converts a high-bandwidth fiber stream to a multicast RTP stream. I want to be able to show this stream to a client in a browser. If I understand correctly, there are 2 issues:
The client most likely does not support multicast on their network
RTP cannot be played in a browser, so this needs to be converted to another format
What I have done so far (using FFMPEG):
Method 1: copy the stream to a .m3u8 without re-encoding, then host it with a web server (Nginx)
ffmpeg -protocol_whitelist file,udp,rtp -i ./stream.sdp -c:v copy -c:a copy -bufsize 50k -flags -global_header -hls_time 1 -f hls -hls_playlist_type event -hls_list_size 3 ./video/stream.m3u8
Method 2: enable HLS on Nginx and convert the stream to RTMP
ffmpeg -protocol_whitelist file,udp,rtp -i ./stream.sdp -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv rtmp://localhost/show/stream
Both of these methods result in a working livestream, but the delay remains around 5 seconds.
Is there any way to make the livestream faster? The multicast livestream has around a 1 second delay at max.
Both of these methods result in a working livestream, but the delay remains around 5 seconds.
That's actually really good for HLS. Yes, there are ways to be faster with things like WebRTC and CTE (chunked transfer encoding). But nothing standard; you would have to develop the player and a chunk of infrastructure yourself.
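Within plain HLS, some latency can still be trimmed by shrinking the segments and the encoder's lookahead. A sketch based on Method 1's setup; the segment length, GOP size, and zerolatency tuning are the knobs, and the values here are illustrative, not tested against this stream:

```shell
# Shorter segments + zerolatency x264 tuning to cut HLS glass-to-glass delay.
ffmpeg -protocol_whitelist file,udp,rtp -i ./stream.sdp \
  -c:v libx264 -preset veryfast -tune zerolatency -g 30 \
  -c:a aac \
  -f hls -hls_time 1 -hls_list_size 3 \
  -hls_flags delete_segments+independent_segments \
  ./video/stream.m3u8
```

Players typically buffer about three segments before starting, so sub-second latency is not realistic with stock HLS; when ~1 second genuinely matters, WebRTC is the usual answer.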
So, I've read all the articles here and unfortunately I can't seem to find the answers I'm looking for. I've gotten close, but certain magic strings elude me.
I'm running HLS live streaming (nginx) on an Ubuntu 17.10 server. In short, I can get the server running one video at a time fine with ffmpeg (with subtitles) using the following:
ffmpeg -re -i "1.mkv" -vcodec libx264 -vprofile baseline -g 30 -b:v 1000k -s 852x480 -acodec aac -strict -2 -b:a 192k -ac 2 -vf subtitles=1.srt -f flv rtmp://localhost:1935/show/stream
Though, I cannot find a way to run a playlist with this method. It seems impossible, and when I try VLC via sout (internally or externally) I receive either buffering problems or the experimental AAC codec error:
[aac @ 0xb162e900] The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it.
Example string that spits that error:
vlc "1.mkv" --sout '#transcode{soverlay,vb=1000,vcodec=h264,width=853,height=480,acodec=mp4a,ab=128,channels=2,samplerate=44100}:std{access=rtmp,mux=ffmpeg{mux=flv},dst=rtmp://localhost:1935/show/stream}'
Every other audio codec doesn't work with FLV. I'm at a loss; I've tried almost every combination I could think of or dig up just to get to this point. The best-functioning option has been ffmpeg: it doesn't buffer video at all and plays smoothly, but it can't play a playlist. VLC, on the other hand, can play a playlist but buffers and has no sound (internally). I've tried aenc=ffmpeg{strict=-2}, batch pipes, etc. Nothing works. Is there any solution? All I want is to run a playlist of 25 videos, all different variations, on a loop to the m3u8 for embedding.
A friend of mine mentioned he used bash scripts to get a seamless, playlist-like viewing feature. Hopefully that points you in the direction you need. I can try digging them up if you want to work together on this, because I too am interested in finding out more about it.
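The bash-script approach presumably amounts to restarting ffmpeg for each file so the RTMP endpoint sees one continuous show. A minimal sketch reusing the asker's encoding flags; the /videos path is hypothetical, and matching .srt files are assumed to sit next to each .mkv:

```shell
#!/bin/sh
# Loop forever over the video directory, pushing each file to the RTMP endpoint.
while true; do
  for f in /videos/*.mkv; do
    ffmpeg -re -i "$f" -vcodec libx264 -vprofile baseline -g 30 \
      -b:v 1000k -s 852x480 -acodec aac -strict -2 -b:a 192k -ac 2 \
      -vf "subtitles=${f%.mkv}.srt" \
      -f flv rtmp://localhost:1935/show/stream
  done
done
```

There is a brief gap between files while ffmpeg restarts; the concat demuxer (-f concat -i playlist.txt) avoids that gap, but it cannot switch subtitle files per playlist entry, which is why the simple loop may be the pragmatic choice here.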
There is an event going on. It runs for 2 hours, and each second it produces a new data point in a series. How can I take this data (x=[], y=[]), turn it into a graph, convert that into a live video stream, and then view it?
Is this even possible?
I have never done this but it seems possible.
First, you create a PNG with the graph using http://www.gnuplot.info.
Gnuplot is a portable command-line driven graphing utility for Linux, OS/2, MS Windows, OSX, VMS, and many other platforms. [...] It was originally created to allow scientists and students to visualize mathematical functions and data interactively, but has grown to support many non-interactive uses such as web scripting.
You can create a script that routinely updates the PNG with new data.
There is also a node.js module to interface with gnuplot.
https://www.npmjs.com/package/plotframes
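As a sketch, a gnuplot script along these lines could be re-run every second to refresh the PNG (the file names are illustrative, and the data file is assumed to hold whitespace-separated x/y columns):

```gnuplot
# refresh.gp - regenerate the graph PNG from the latest data file
set terminal pngcairo size 960,540
set output 'graph_960x540.png'
set xlabel 'time (s)'
set ylabel 'value'
plot 'series.dat' using 1:2 with lines notitle
```

Running gnuplot refresh.gp from a loop or cron-like timer keeps graph_960x540.png current for the FFmpeg step below.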
Then, you could use FFmpeg to create an HLS stream looping an image over and over again.
ffmpeg -loop 1 -r 30000/1001 -i graph_960x540.png -an -s 960x540 -r 30000/1001 -c:v libx264 -crf 10 -maxrate 900k -b:v 900k -profile:v baseline -bufsize 1800k -pix_fmt yuv420p -hls_time 2 -hls_list_size 0 -hls_segment_filename 'png2hls/file%03d.ts' png2hls/index.m3u8
However, is a video stream required to display a graph? Why not display the image and use JavaScript to update the image as new graphs are produced?
Cheers.
We use ffmpeg in one of our applications to slice videos. While it works fine for slicing PAL videos, it does not work for QT videos... Here's the command we use:
ffmpeg.exe -i "input.mp4" -ss startTime -c copy -to stopTime -y "output.mp4"
Throws an error: "[mp4 @ 0515c240] Could not find tag for codec pcm_s16le in stream #1, codec not currently supported in container"
The input videos are created by a solid-state digital video recording system which records video from the following input channels:
6 channels of video input (4 DVI, 2 PAL)
2 channels of audio input (left & right)
2 channels of MIL-STD-1553B bus data
2 channels of RS-422 data
What could be the issue & how can it be resolved?
FFmpeg does not support PCM audio in the MP4 container. Output to MOV instead.
Run
ffmpeg -i in.mp4 -ss startTime -to stopTime -c copy -avoid_negative_ts make_zero out.mov
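If the output really must stay MP4, another option (my suggestion, not part of the answer above) is to keep the video stream copied but transcode only the PCM audio to AAC, which MP4 does support:

```shell
# Copy video untouched; re-encode only the PCM track to AAC so it fits in MP4.
ffmpeg -i in.mp4 -ss startTime -to stopTime -c:v copy -c:a aac out.mp4
```

This costs a fast audio-only re-encode but preserves the MP4 container and leaves the video bits identical.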