FFMPEG - Flaky internet for streaming, fallback stream or image - nginx

I am using NGINX to receive RTMP and output HLS.
rtmp {
    server {
        listen 1935;
        ...
        application rtmp {
            live on;
            ...
            exec ffmpeg -re -i rtmp://127.0.0.1/rtmp/$name -threads 1
                -c:a aac -ac 1 -strict -2 -b:a 64k
                -c:v libx264 -profile:v baseline -g 10 -b:v 300k -s 480x240
                -f flv rtmp://127.0.0.1/hls/$name;
        }
        application hls {
            live on;
            hls on;
            hls_path /tmp/hls;
            ...
        }
    }
}
My stream comes from Flash Media Live Encoder, but my connection is mobile and sometimes flaky: the internet drops for 3-5 seconds every 5 minutes or so, which is enough to disrupt the stream. Is it possible to keep the output running continuously even while FMLE is disconnected?
I am thinking of running FFmpeg on the server box to continuously stream a static image as a fallback while FMLE is disconnected, then combining the two RTMP streams, favoring the one from FMLE when it is available and using the other as fallback. But I am not sure how to do that combination with FFmpeg.
Or is there another hack I can try?
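One possible approach (a sketch only, not a documented recipe): let nginx-rtmp launch a fallback encoder the moment the publisher disconnects, via its exec_publish_done hook, pushing a looping still image plus silent audio into the same hls application. The image path is an assumption, and you would still have to kill this process when FMLE reconnects (for example from an exec_publish hook), since nginx-rtmp does not stop exec_publish_done children for you:

application rtmp {
    live on;
    # When the publisher drops, push a placeholder loop into the hls app.
    # /var/www/fallback.png is a hypothetical path; the encoding settings
    # mirror the main exec line above so players see a consistent stream.
    exec_publish_done /usr/local/bin/ffmpeg -re -loop 1 -i /var/www/fallback.png
        -f lavfi -i anullsrc=r=44100:cl=mono
        -c:v libx264 -tune stillimage -pix_fmt yuv420p -s 480x240 -g 10 -b:v 300k
        -c:a aac -b:a 64k -f flv rtmp://127.0.0.1/hls/$name;
}

Viewers of the HLS output would then keep receiving segments through the dropout, though there will still be a visible jump at each changeover.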

Related

How to limit bitrate for NGINX RTMP server

We have the NGINX RTMP module installed, and while testing it we noticed that the output bitrate was around 7 Mbps irrespective of the input stream's bitrate. As a lot of people watch these streams, I would like to know how to reduce it to about 4 Mbps for this module.
Also, does NGINX's RTMP module support H.265 instead of the standard H.264, which could help bring the bitrate down to about 2 Mbps?
You can transcode the incoming RTMP stream with ffmpeg, using -maxrate together with -b:v to control the maximum bitrate. Here is a simple example (for this example you also need a second application named show). As for H.265: the FLV container RTMP uses has no standard support for it, so stock nginx-rtmp cannot carry H.265.
application live {
    live on;
    exec_push ffmpeg -i rtmp://localhost/$app/$name -async 1 -vsync -1
        -c:a libfdk_aac -b:a 128k
        -c:v libx264 -b:v 2000k -maxrate 3000k -preset superfast -profile:v baseline
        -f flv rtmp://localhost:1935/show/$name_with_maxrate;
}
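One caveat: with libx264, -maxrate is only enforced when a VBV buffer size is also given via -bufsize; without it, ffmpeg ignores the maxrate. A variant of the example with -bufsize added (the 6000k value is an assumption, roughly twice the maxrate) might look like:

exec_push ffmpeg -i rtmp://localhost/$app/$name -async 1 -vsync -1
    -c:a libfdk_aac -b:a 128k
    -c:v libx264 -b:v 2000k -maxrate 3000k -bufsize 6000k
    -preset superfast -profile:v baseline
    -f flv rtmp://localhost:1935/show/$name_with_maxrate;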

NGinx RTMP live stream text overlay and push to multiple

I have been banging my head against this wall for a long time. Hoping you all can get me over.
I have a live stream coming from an IP Camera to my computer.
Nginx publishes to YouTube and to an FFmpeg stream that takes a frame every minute to use for a static webcam image.
Here is the code with the exec_push that I've tried to use, with no success. The YouTube stream and frame capture work fine. I have FFmpeg installed with freetype. This is all on macOS 10.15.4 Catalina with Homebrew FFmpeg --HEAD installed.
Update: I should also say I have tried outputting the overlay using command line FFmpeg and it works great with this command:
/usr/local/bin/ffmpeg -i rtmp://localhost:1935/live/68.1. -vf drawtext="fontfile=/System/Library/Fonts/Supplemental/Arial.ttf:text='Stack Overflow': fontcolor=white: fontsize=24: box=1: boxcolor=black#0.5: boxborderw=5: x=(w-text_w)/2: y=(h-text_h)/2" /Users/user/Desktop/test.mp4
So it seems that the output portion is the part FFmpeg doesn't like in nginx.conf.
My thought is that I should be passing the overlaid FFmpeg stream to the "overlay" app and have the stream published to YouTube and the frame capture taken from there (and also potentially recorded).
Update: When I have tried pointing to a .sh file to run the command, rather than using the direct FFmpeg exec_push, I get:
[alert] 56849#0: kevent() error on 15 filter:-1 flags:4002 (2: No such file or directory)
Thanks so much!
rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        application live {
            live on;
            record off;
            exec_push /usr/local/bin/ffmpeg -i rtmp://localhost:1935/live/68.1. -vf drawtext="fontfile=/System/Library/Fonts/Supplemental/Arial.ttf:textfile=/Users/Shared/overlayescaped.txt: reload=1: fontcolor=white: fontsize=20: box=1: boxcolor=black#1: boxborderw=75: x=70: y=925" -c:v libx264 -maxrate 6000k -bufsize 4000k -c:a aac -b:a 160k -ar 44100 -b:a 128k -f mp4 rtmp://localhost:1935/overlay/test;
            #push rtmp://localhost:1935/overlay;
        }
        application overlay {
            live on;
            record off;
            push rtmp://a.rtmp.youtube.com app=live2 playpath=yourstreamkey;
            exec_push /usr/local/bin/ffmpeg -i rtmp://localhost:1935/overlay/$name -vf fps=1/60 /Users/Shared/stream/netcam.jpg;
        }
    }
}
The answer was:
a) I must invoke the FFmpeg command through a script file for this to work. I'm not entirely sure why, but that is just the way it is.
b) I wasn't able to get logging info from FFmpeg before. It was because I was logging to the wrong spot: I needed to log to /tmp/ because of the unprivileged (nobody) user that Nginx runs as. Makes sense.
c) Once the command was working from a file, I could see the actual errors FFmpeg was throwing and troubleshoot them. They had a lot to do with option placement, spacing, and ensuring the output is an flv container, not an mp4 container.
Here is the Nginx rtmp configuration I ended up with:
rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        application live {
            live on;
            record off;
            meta copy;
            exec /Users/Shared/ffmpegcommand.sh $name;
        }
        application overlay {
            live on;
            record off;
            meta copy;
            push rtmp://a.rtmp.youtube.com app=live2 playpath=stream-key;
            exec_push /usr/local/bin/ffmpeg -i rtmp://localhost:1935/overlay/$name -vf fps=1/60 /Users/Shared/stream/netcam.jpg;
        }
    }
}
And here is the FFmpeg command I am using in the command file for the text overlay (now using -filter_complex, as -vf wasn't the proper option in this case).
/usr/local/bin/ffmpeg -i rtmp://localhost:1935/live/68.1. -filter_complex drawtext="fontfile=/System/Library/Fonts/Supplemental/Verdana.ttf: textfile=/Users/Shared/overlayescaped.txt: reload=1: fontcolor=white: fontsize=17: box=1: boxcolor=black#1: boxborderw=80: x=80: y=935" -c:v libx264 -level 4.1 -maxrate 6000k -bufsize 4000k -c:a copy -f flv rtmp://localhost:1935/overlay/newlive 2>>/tmp/ffmpeg.error
I also modified the audio options so that audio is copied straight from the source, as no re-encoding is needed.
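The wrapper script itself isn't shown above. A minimal sketch of what /Users/Shared/ffmpegcommand.sh could look like, assuming it simply wraps the command above and takes the stream name as its first argument (it needs a shebang line and must be marked executable with chmod +x):

#!/bin/sh
# $1 is the stream name that nginx-rtmp passes via "exec ... $name"
NAME="$1"
/usr/local/bin/ffmpeg -i rtmp://localhost:1935/live/"$NAME" \
    -filter_complex drawtext="fontfile=/System/Library/Fonts/Supplemental/Verdana.ttf: textfile=/Users/Shared/overlayescaped.txt: reload=1: fontcolor=white: fontsize=17: box=1: boxcolor=black#1: boxborderw=80: x=80: y=935" \
    -c:v libx264 -level 4.1 -maxrate 6000k -bufsize 4000k \
    -c:a copy -f flv rtmp://localhost:1935/overlay/newlive 2>>/tmp/ffmpeg.error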
Finally, I created the overlay text file from a text file I already had. The existing overlay had a % symbol for humidity, and a bare % can trip up drawtext's text expansion, so I had to escape that character using sed in a bash script.
# path to the original overlay source file (assumed; the post doesn't show it)
overlayfile='/Users/Shared/overlay.txt'
escovlfiletmp='/Users/Shared/overlayescapedtmp.txt'
escovlfile='/Users/Shared/overlayescaped.txt'
overlaysearch="% B:"
overlayreplace="\\\\\\% B:"
sed -e "s/${overlaysearch}/${overlayreplace}/g" "${overlayfile}" > "${escovlfile}"
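For example, a source line containing "23 % B:" comes out as "23 \% B:" in the escaped copy (the six backslashes in overlayreplace collapse to a single one by the time bash and then sed have processed them), which drawtext can then render as a literal percent sign.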
I've attached a screen cap of the final video stream result. The entire black area is the overlay.
Very happy.
Thank you for all the resources on this website and elsewhere. It took me 4 days and many hours of constant searching, but I managed to piece it all together.

Using ffmpeg to transcode/transmux HLS to RTMP for nginx simulcast not working

I want to take an HLS stream and transcode it to RTMP and simulcast it with the nginx RTMP module.
It's not working, however (I have it placed in the application section of the RTMP module).
exec ffmpeg -i -re http://<HLS>.m3u8 -acodec aac -vcodec libx264 -f flv rtmp://localhost/live/test;
When I try to view my RTMP stream in VLC, it does not load. I have tried several variations of that ffmpeg directive; none have worked. Any advice? If you need to see more of my config file, I can provide it, but this server was previously working perfectly when sending video via a Teradek encoder. This new wrinkle is just not working.
EDIT: Just had a thought. It’d probably help to have the codec information of the incoming HLS stream. Here it is:
Video Codec: H264 - MPEG-4 AVC
Resolution: 640x360
Frame rate: 24
Decoded format: Planar 4:2:0 YUV
Audio Codec: MPEG AAC Audio (mp4a)
Channels: Stereo
Sample rate: 48000 Hz
If you run this in a terminal:
ffmpeg -i -re http://<HLS>.m3u8 -acodec aac -vcodec libx264 -f flv rtmp://localhost/live/test;
are you able to play the stream in VLC?
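One detail worth checking in both commands above: -re is an input option and must appear before -i, so the invocation would normally read:

ffmpeg -re -i http://<HLS>.m3u8 -acodec aac -vcodec libx264 -f flv rtmp://localhost/live/test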

Low Latency DASH Nginx RTMP

I use arut's nginx-rtmp-module (https://github.com/arut/nginx-rtmp-module) on the media server. I streamed to the dash application using FFmpeg, then tested the stream by playing it in VLC.
It waits around 30 seconds to start playing, and it plays from the beginning, not from the current timestamp.
This is my current config in the rtmp block:
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            exec ffmpeg -re -i rtmp://localhost:1935/live/$name
                -c:a libfdk_aac -b:a 32k  -c:v libx264 -b:v 128k -f flv rtmp://localhost:1935/hls/$name_low
                -c:a libfdk_aac -b:a 64k  -c:v libx264 -b:v 256k -f flv rtmp://localhost:1935/hls/$name_mid
                -c:a libfdk_aac -b:a 128k -c:v libx264 -b:v 512k -f flv rtmp://localhost:1935/hls/$name_hi
                -c:a libfdk_aac -b:a 128k -c:v libx264 -b:v 512k -f flv rtmp://localhost:1935/dash/$name_dash;
        }
        application hls {
            live on;
            hls on;
            hls_path /tmp/hls;
            hls_nested on;
            hls_variant _low BANDWIDTH=160000;
            hls_variant _mid BANDWIDTH=320000;
            hls_variant _hi  BANDWIDTH=640000;
        }
        application dash {
            live on;
            dash on;
            dash_path /tmp/dash;
            dash_nested on;
        }
    }
}
This is the command I use for streaming:
ffmpeg -re -i 2014\ SPRING.mp4 -c copy -f flv rtmp://52.221.221.163:1935/dash/spring
How can I reduce the delay, and make it play from the same timestamp as the streamer?
Can I achieve under 5s latency?
UPDATE
I tried changing the playlist length and fragment length, using these directives:
dash_playlist_length 10s;
dash_fragment 2s;
But I still have latency problems; sometimes it's smaller than before, sometimes it's the same.
Can I achieve under 5s latency?
No. DASH is a segmented protocol, meaning your media is chopped up into relatively large chunks. The player has to download some chunks before it can start playing them, and your encoder has to upload entire chunks before those chunks even appear in the manifest. DASH is the wrong tool for the job if latency is important to you, and any attempt at reducing latency by cranking the chunk size down adds massive overhead to your project.
How can I reduce the delay, and make it play from the same timestamp as the streamer?
You can't. Physics! It's impossible to play the same thing at the exact same time as it is being encoded. You're sending data over a packet-switched network, with many encoding/decoding steps in the way, all of which require a buffer because they work in chunks. The only way to play back what's coming in simultaneously is to go analog... at least there your only delay is the speed of light.
The best you can do is switch to a protocol designed for low latency, like WebRTC. Just be sure you understand the tradeoffs. Your codecs will be optimized for latency rather than quality, so quality will suffer. WebRTC over UDP (optional, but common) means some packets will get lost, and your viewing experience will suffer for it; when you care about latency, losing a chunk here or there doesn't matter much, what matters is that you keep going. You can instead use WebRTC over TCP and keep your reliability at only a slight increase in latency.
Decide what really matters to you. In almost every case, it isn't actually low latency. You can't have it both ways; there are tradeoffs to every approach. You must decide what is best for your specific situation.
I also had the same issue with the VLC media player. Most of the latency is introduced by the client player; you can use ffplay with buffering disabled to check:
ffplay -fflags nobuffer rtmp://192.168.1.66/myapp/live
My results:
latency with VLC: 6~7 s
latency with ffplay: 500 ms
For more, refer to narlex's comment on the "how to reduce latency" issue on GitHub.
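As a side note (not from the original answer), VLC's client-side buffer can also be shortened with its network-caching option, given in milliseconds, e.g.:

vlc --network-caching=300 rtmp://192.168.1.66/myapp/live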
You may need to change the GOP size in your ffmpeg command. The default GOP size for ffmpeg is 250, which means there will be a key frame every 250 frames. If your output is 25 fps, then in the worst case you will have a key frame every 10 s (if scene-cut detection is enabled, you may get shorter key frame intervals).
For both HLS and DASH, segments must start with a key frame, so you will end up with a lot of segments of 10 s duration. You need to reduce the GOP size, and with it the segment duration, in order to reduce latency.
Try modifying your ffmpeg command as follows to see if it helps: -g 50 at 25 fps gives a key frame every 2 seconds, which lines up with the dash_fragment 2s setting above.
exec ffmpeg -re -i rtmp://localhost:1935/live/$name
    -c:a libfdk_aac -b:a 32k  -c:v libx264 -g 50 -b:v 128k -f flv rtmp://localhost:1935/hls/$name_low
    -c:a libfdk_aac -b:a 64k  -c:v libx264 -g 50 -b:v 256k -f flv rtmp://localhost:1935/hls/$name_mid
    -c:a libfdk_aac -b:a 128k -c:v libx264 -g 50 -b:v 512k -f flv rtmp://localhost:1935/hls/$name_hi
    -c:a libfdk_aac -b:a 128k -c:v libx264 -g 50 -b:v 512k -f flv rtmp://localhost:1935/dash/$name_dash;

Transcode H264 stream into mpeg2 with ffmpeg and nginx-rtmp module

I am using the nginx web server and the nginx-rtmp module for managing my video stream, encoded in H264. Here is my nginx conf:
rtmp {
    server {
        listen 1935;
        application big {
            live on;
            exec ffmpeg -re -i rtmp://localhost:1935/$app/$name
                -vcodec libx264 -vprofile baseline
                -acodec libvo_aacenc -ac 1 -ar 44100
                -f flv rtmp://localhost:1935/hls/${name};
        }
        application hls {
            live on;
            hls_path /usr/local/nginx/html/video;
        }
    }
}
It works well in the browser. However, because my mobile client is Adobe AIR, it would only work on Android and not on Apple devices, because Apple doesn't support H264 encoding through AIR applications. So I was trying to transcode the stream to something supported, for example MPEG-2. This is how I changed my ffmpeg command:
exec ffmpeg -re -i rtmp://localhost:1935/$app/$name
    -vcodec mpeg2video -acodec copy -b:v 10M -b:a 128k
    -f mpegts rtmp://localhost:1935/hls/${name};
However, it just won't show the video, neither in a browser nor on the device; my assumption is that the transcode probably failed.
Maybe I am missing something? Any ideas are highly appreciated.
Thank you.
You probably got your answer already, but just in case: you are not using the module properly.
1) On iOS you need to point your browser to http://localhost:80/hls/${name} to get the HLS stream.
2) Your config is missing the http section needed to serve the HLS stream.
See here for details: How can we transcode live rtmp stream to live hls stream using ffmpeg?
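For point 2, a minimal http section serving the fragments written to hls_path above might look like the sketch below; the port and location name are assumptions. Note also that an rtmp:// output requires -f flv: RTMP cannot carry an MPEG-TS mux, which is another reason the modified command fails.

http {
    server {
        listen 80;
        location /hls {
            # serve the playlist and segments the rtmp hls app writes
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            alias /usr/local/nginx/html/video;   # matches hls_path above
            add_header Cache-Control no-cache;
        }
    }
}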
