Reduced HLS Latency causes transcoded playlist issues - nginx

Software info:
* Ubuntu 16.04
* nginx 1.15.1
Recently I was reading around about low-latency HLS streaming on nginx and found a solution here: Reduce HLS latency from +30 seconds.
This reduces the latency to ~7 seconds. I also want to transcode the stream; latency on the transcoded versions doesn't really matter (all I want is a low-latency source, and if the transcoded versions can be low latency too, that's a bonus). But when I do so, the source has no issues while the transcoded versions do: the browser player tries to fetch an early fragment that has already been deleted, causing 404 errors. How do I solve this so that I get ~7 seconds of latency on the source and have the transcoded versions working at the same time?
My current configuration:
FFmpeg transcode (arguments of a single exec ffmpeg command; with -c:v copy, encoder options such as -preset, -b:v and -tune have no effect, and x264 takes combined tunes as one comma-separated flag rather than repeated -tune options):
-c:v copy -c:a copy -f flv rtmp://localhost/stream/$name_source
-c:v libx264 -preset ultrafast -tune zerolatency,fastdecode -s 852x480 -b:v 1000K -c:a copy -f flv rtmp://localhost/stream/$name_medium
-c:v libx264 -preset ultrafast -tune zerolatency,fastdecode -s 1280x720 -b:v 3500K -c:a copy -f flv rtmp://localhost/stream/$name_high
-c:v libx264 -preset ultrafast -tune zerolatency,fastdecode -s 426x240 -b:v 400K -c:a copy -f flv rtmp://localhost/stream/$name_low
HLS:
hls_fragment 1s;
hls_playlist_length 4s;
# BANDWIDTH is in bits per second and should roughly match each variant's bitrate
hls_variant _source BANDWIDTH=6000000;
hls_variant _high BANDWIDTH=3500000;
hls_variant _medium BANDWIDTH=1000000;
hls_variant _low BANDWIDTH=400000;

I'm able to get it to work when I set the fragment and playlist length to the same amount of time:
hls_fragment 2s;
hls_playlist_length 2s;
This is about the lowest you can go while also transcoding: roughly ~5-6 seconds of latency on the source, and ~13 seconds on the transcoded versions.
It's still not recommended, though, as it can cause some issues.
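
One likely contributor to those 404s (my guess; it isn't confirmed in the thread): the transcoded variants set no -g, so x264's default 250-frame GOP produces much longer segments than hls_fragment asks for, and the variant playlists drift behind the source until fragments are deleted out from under the player. Forcing the same key frame cadence on every variant keeps segment boundaries aligned. A minimal sketch, assuming a 30 fps source:

# Key frame every 30 frames = 1 s at 30 fps, matching hls_fragment 1s;
# -sc_threshold 0 prevents extra scene-cut key frames from shifting the cadence.
ffmpeg -i rtmp://localhost/live/stream \
    -c:v libx264 -preset ultrafast -tune zerolatency \
    -g 30 -keyint_min 30 -sc_threshold 0 -b:v 1000K -c:a copy \
    -f flv rtmp://localhost/stream/stream_medium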

Related

How to limit bitrate for NGINX RTMP server

We have the NGINX RTMP module installed, and while testing it we found that the output bitrate was around 7 Mbps irrespective of the input stream's bitrate. As we have a lot of people watching these streams, I would like to know how to reduce it to about 4 Mbps.
Also, does NGINX's RTMP module support H.265 instead of the standard H.264, which could help bring the bitrate down to about 2 Mbps?
You can transcode the incoming RTMP stream with ffmpeg using -maxrate and -b:v; this way you can control the maximum bitrate. Here is a simple example (for this example, use another application called show as well):
application live {
    live on;
    exec_push ffmpeg -i rtmp://localhost/$app/$name -async 1 -vsync -1
        -c:a libfdk_aac -b:a 128k -c:v libx264 -b:v 2000k -maxrate 3000k
        -preset superfast -profile:v baseline -f flv
        rtmp://localhost:1935/show/$name_with_maxrate;
}
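
One caveat worth adding to this answer (my note, not from the original): for libx264, -maxrate only takes effect together with -bufsize, which sets the VBV buffer the cap is enforced over; without it, ffmpeg ignores the VBV settings. A sketch of the same command with the buffer set (stream names are placeholders, and the built-in aac encoder stands in for libfdk_aac):

ffmpeg -i rtmp://localhost/live/stream \
    -c:a aac -b:a 128k \
    -c:v libx264 -b:v 2000k -maxrate 3000k -bufsize 6000k \
    -preset superfast -profile:v baseline \
    -f flv rtmp://localhost:1935/show/stream_with_maxrate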

NGINX and FFMPEG generate dynamic adaptive streaming

In this configuration file
https://github.com/TareqAlqutami/rtmp-hls-server/blob/master/conf/nginx.conf#L24-L30
each received stream is transcoded for adaptive streaming: a single ffmpeg command takes the input and transforms the source into 4 different streams with different bitrates and qualities (the scaling settings respect the aspect ratio).
How can we dynamically generate variants? I.e., for a 1080p input generate all variants, but for a 240p input generate no variants.
My setup works without error.
You need to configure the log and see what error it gives.
Another solution is to check manually: you may be using a codec that is not installed.
I will check your setup; maybe I can contribute here.
application live {
    live on; # Allows live input
    exec ffmpeg -i rtmp://localhost/live/$name -threads 8
        -c:v libx264 -profile:v baseline -b:v 768K -s 640x360 -vf "drawtext=fontcolor=red:fontsize=20:fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='360':x=10:y=10" -f flv -c:a aac -ac 1 -strict -2 -b:a 96k rtmp://localhost/show/$name_360
        -c:v libx264 -profile:v baseline -b:v 1024K -s 960x540 -vf "drawtext=fontcolor=red:fontsize=20:fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='480':x=10:y=10" -f flv -c:a aac -ac 1 -strict -2 -b:a 128k rtmp://localhost/show/$name_480
        -c:v libx264 -profile:v baseline -b:v 1920K -s 1280x720 -vf "drawtext=fontcolor=red:fontsize=20:fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='720':x=10:y=10" -f flv -c:a aac -ac 1 -strict -2 -b:a 128k rtmp://localhost/show/$name_720
        -c:v libx264 -profile:v baseline -b:v 4000K -s 1920x1080 -vf "drawtext=fontcolor=red:fontsize=20:fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='1080':x=10:y=10" -f flv -c:a aac -ac 1 -strict -2 -b:a 128k rtmp://localhost/show/$name_1080;
}
application show {
    live on; # Allows live input from above
    hls on; # Enable HTTP Live Streaming
    # hls_fragment 5s;
    # Pointing this to an SSD is better as this involves lots of IO
    hls_path /dest;
    #hls_variant _240 BANDWIDTH=288000;
    hls_variant _360 BANDWIDTH=448000;
    hls_variant _480 BANDWIDTH=1152000;
    hls_variant _720 BANDWIDTH=2048000;
    hls_variant _1080 BANDWIDTH=4096000;
}
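
The dynamic-variant part of the question is not addressed above. One possible approach (a sketch of mine, not from the thread; the script path and bitrates are hypothetical): have exec call a wrapper script that probes the input height with ffprobe and only emits variants that downscale, so a 240p input produces nothing but a passthrough:

#!/bin/sh
# Invoked from nginx-rtmp as: exec /usr/local/bin/adaptive.sh $app $name;
APP="$1"; NAME="$2"
IN="rtmp://localhost/$APP/$NAME"

# Height of the incoming video stream, e.g. 1080 for a 1080p input.
HEIGHT=$(ffprobe -v error -select_streams v:0 \
    -show_entries stream=height -of csv=p=0 "$IN")
HEIGHT=${HEIGHT:-0}

# Only add a variant when the source is strictly taller than it.
ARGS=""
[ "$HEIGHT" -gt 360 ] && ARGS="$ARGS -c:v libx264 -b:v 768k -s 640x360 -c:a aac -b:a 96k -f flv rtmp://localhost/show/${NAME}_360"
[ "$HEIGHT" -gt 540 ] && ARGS="$ARGS -c:v libx264 -b:v 1024k -s 960x540 -c:a aac -b:a 128k -f flv rtmp://localhost/show/${NAME}_480"
[ "$HEIGHT" -gt 720 ] && ARGS="$ARGS -c:v libx264 -b:v 1920k -s 1280x720 -c:a aac -b:a 128k -f flv rtmp://localhost/show/${NAME}_720"

# exec replaces the shell, so nginx-rtmp can kill ffmpeg on unpublish.
exec ffmpeg -i "$IN" $ARGS -c copy -f flv "rtmp://localhost/show/${NAME}_src"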

End FFMPEG execution when RTMP input is closed

I have an arut nginx-rtmp server, inside which I call a Python script to run an FFmpeg command and create HLS packaging.
The problem is that when I stop the RTMP stream, the FFmpeg processes still run in the background.
The FFmpeg command is like this:
ffmpeg -re -v verbose -i rtmp://localhost:1935/live/testlive \
    -rw_timeout 500 -http_persistent 1 -method PUT -http_user_agent test \
    -f hls -hls_list_size 5 -hls_flags discont_start+delete_segments \
    -vf "scale=426:trunc(ow/a/2)*2" -c:a libfdk_aac -ar 48000 \
    -c:v h264 -profile:v main -crf 24 -sc_threshold 0 -g 48 -keyint_min 48 \
    -hls_time 4 -hls_playlist_type event -preset veryfast \
    -b:v 300k -maxrate 856k -bufsize 1200k -b:a 128k -vcodec libx264 \
    -hls_segment_filename http://mystream/123/v1/testlive/0_%03d.ts \
    http://mystream/123/v1/testlive/index.m3u8
The RTMP server should send a NetStream.Play.Stop event to subscribers (or disconnect them) to signal the end of the stream. If you are using the nginx RTMP module, look at the play_restart or idle_streams directives.
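
Another angle (my suggestion, not part of the quoted answer; application name and output path are placeholders): if the FFmpeg process is started by nginx-rtmp itself via exec_push instead of an external script, the module terminates the child automatically when the publisher disconnects, and exec_kill_signal controls which signal is sent:

application live {
    live on;
    # exec_push children are killed when the publisher stops; send
    # SIGTERM instead of the default SIGKILL so ffmpeg can flush and
    # finalize its HLS output before exiting.
    exec_kill_signal term;
    exec_push ffmpeg -re -i rtmp://localhost:1935/live/$name
        -c copy -f hls -hls_time 4 -hls_list_size 5
        /tmp/hls/$name.m3u8;
}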

Low Latency DASH Nginx RTMP

I use the arut nginx-rtmp-module (https://github.com/arut/nginx-rtmp-module) on the media server, stream to the dash application with FFmpeg, and then test the stream by playing it in VLC.
It waits around 30 seconds to start playing, and it plays from the beginning, not from the current timestamp.
This is my current config in the rtmp block:
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            exec ffmpeg -re -i rtmp://localhost:1935/live/$name
                -c:a libfdk_aac -b:a 32k -c:v libx264 -b:v 128K -f flv rtmp://localhost:1935/hls/$name_low
                -c:a libfdk_aac -b:a 64k -c:v libx264 -b:v 256k -f flv rtmp://localhost:1935/hls/$name_mid
                -c:a libfdk_aac -b:a 128k -c:v libx264 -b:v 512K -f flv rtmp://localhost:1935/hls/$name_hi
                -c:a libfdk_aac -b:a 128k -c:v libx264 -b:v 512K -f flv rtmp://localhost:1935/dash/$name_dash;
        }
        application hls {
            live on;
            hls on;
            hls_path /tmp/hls;
            hls_nested on;
            hls_variant _low BANDWIDTH=160000;
            hls_variant _mid BANDWIDTH=320000;
            hls_variant _hi BANDWIDTH=640000;
        }
        application dash {
            live on;
            dash on;
            dash_path /tmp/dash;
            dash_nested on;
        }
    }
}
This is the command I use for streaming:
ffmpeg -re -i "2014 SPRING.mp4" -c copy -f flv rtmp://52.221.221.163:1935/dash/spring
How can I reduce the delay, and make it play from the same timestamp as the streamer?
Can I achieve under 5s latency?
UPDATE
Tried to change the playlist length and fragment length, using these directives:
dash_playlist_length 10s;
dash_fragment 2s;
But I still have latency problems; sometimes it's smaller than before, sometimes it's the same.
Can I achieve under 5s latency?
No. DASH is a segmented protocol, meaning your media is chopped up into relatively large chunks. The player has to download some chunks before it can start playing them, and your encoder has to upload entire chunks before those chunks even appear in the manifest. Cranking the chunk size down to chase low latency only adds massive overhead; if latency is important to you, this is the wrong tool for the job.
How can I reduce the delay, and make it play from the same timestamp as the streamer?
You can't. Physics! It's impossible to play something at the exact same time as it is being encoded. You're sending data over a packet-switched network, with many encoding/decoding steps along the way, all of which require buffers because they work in chunks. The only way to play back what's coming in simultaneously is to go analog... at least there your only delay is the speed of light.
The best you can do is switch to a protocol designed for low latency, like WebRTC. Just be sure you understand the tradeoffs. Your codecs will be optimized for latency, not quality, so your quality will suffer. WebRTC over UDP (optional, but common) means that some packets will get lost, so your viewing experience suffers too. When you care about latency, it doesn't matter so much if you lose a chunk here or there; what matters is that you keep going. You can also use WebRTC over TCP and keep your reliability, at only a slight increase in latency.
Decide what really matters to you. In almost every case, it isn't actually low latency. You can't have it all ways. There are tradeoffs to every approach. You must decide what is best for your specific situation.
I also had the same issue with VLC media player. Most of the latency is introduced by the client player; you can use ffplay with no buffering to check:
ffplay -fflags nobuffer rtmp://192.168.1.66/myapp/live
My results:
latency of VLC: 6~7 s
latency of ffplay: ~500 ms
For more, refer to narlex's comment on the "how to reduce latency" issue on GitHub.
You may need to change the GOP size in your ffmpeg command. The default GOP size for ffmpeg is 250, which means there will be a key frame every 250 frames. If your output is 25 fps, then you will have a key frame every 10 s in the worst case (if scene-cut detection is enabled, you may get shorter key frame intervals).
For both HLS and DASH, segments must start with a key frame, so you will end up with a lot of 10-second segments. You need to reduce the GOP size (and with it the segment duration) in order to reduce latency.
Try modifying your ffmpeg command as follows to see if it helps:
exec ffmpeg -re -i rtmp://localhost:1935/live/$name
    -c:a libfdk_aac -b:a 32k -c:v libx264 -g 50 -b:v 128K -f flv rtmp://localhost:1935/hls/$name_low
    -c:a libfdk_aac -b:a 64k -c:v libx264 -g 50 -b:v 256k -f flv rtmp://localhost:1935/hls/$name_mid
    -c:a libfdk_aac -b:a 128k -c:v libx264 -g 50 -b:v 512K -f flv rtmp://localhost:1935/hls/$name_hi
    -c:a libfdk_aac -b:a 128k -c:v libx264 -g 50 -b:v 512K -f flv rtmp://localhost:1935/dash/$name_dash;
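
A further refinement on top of this (my addition, not in the original answer): pin the key frame cadence exactly, so segments can always be cut at the same length instead of letting scene-cut detection insert extra key frames:

# At 25 fps, -g 50 gives a key frame every 2 s; -keyint_min 50 and
# -sc_threshold 0 stop x264 from placing key frames anywhere else, so
# 2 s fragments (dash_fragment 2s) always land on a key frame boundary.
ffmpeg -i rtmp://localhost:1935/live/stream \
    -c:v libx264 -g 50 -keyint_min 50 -sc_threshold 0 -b:v 512K \
    -c:a aac -b:a 128k -f flv rtmp://localhost:1935/dash/stream_dash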

Use a variable hls_path in nginx

Right now I am saving all the files and folders created by HLS in a single directory, like this:
application live {
    live on; # Allows live input
    # Once we receive a stream, transcode for adaptive streaming.
    # This single ffmpeg command takes the input and transforms
    # the source into 4 different streams with different bitrates
    # and qualities. P.S. The scaling done here respects the aspect
    # ratio of the input.
    exec ffmpeg -i rtmp://127.0.0.1/$app/$name -async 1 -vsync -1
        -c:v libx264 -c:a aac -strict -2 -b:v 256k -b:a 32k -vf "scale=480:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -f flv rtmp://127.0.0.1/show/$name_low
        -c:v libx264 -c:a aac -strict -2 -b:v 768k -b:a 96k -vf "scale=720:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -f flv rtmp://127.0.0.1/show/$name_mid
        -c:v libx264 -c:a aac -strict -2 -b:v 1024k -b:a 128k -vf "scale=960:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -f flv rtmp://127.0.0.1/show/$name_high
        -c:v libx264 -c:a aac -strict -2 -b:v 1920k -b:a 128k -vf "scale=1280:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -f flv rtmp://127.0.0.1/show/$name_hd720
        -c copy -f flv rtmp://127.0.0.1/show/$name_src;
}
# This application is for splitting the stream into HLS fragments
application show {
    live on; # Allows live input from above
    hls on; # Enable HTTP Live Streaming
    hls_cleanup off;
    hls_nested on;
    # Pointing this to an SSD is better as this involves lots of IO
    #exec mkdir /mnt/HLS/;
    hls_path /mnt/HLS/;
    # Instruct clients to adjust resolution according to bandwidth
    hls_variant _low BANDWIDTH=288000; # Low bitrate, sub-SD resolution
    hls_variant _mid BANDWIDTH=448000; # Medium bitrate, SD resolution
    hls_variant _high BANDWIDTH=1152000; # High bitrate, higher-than-SD resolution
    hls_variant _hd720 BANDWIDTH=2048000; # High bitrate, HD 720p resolution
}
How do I make a new directory for every HLS stream, according to the stream's name?
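
No answer is quoted here, but for reference (my note, not from the thread): hls_path cannot contain variables in nginx-rtmp, while hls_nested on, which this config already sets, makes the module create one subdirectory per published stream under hls_path. The resulting layout looks like this (stream name hypothetical):

/mnt/HLS/
    mystream_low/     index.m3u8 plus its .ts fragments
    mystream_mid/     index.m3u8 plus its .ts fragments
    mystream_high/    index.m3u8 plus its .ts fragments
    mystream_hd720/   index.m3u8 plus its .ts fragments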
