nginx live adaptive bitrate streaming: not able to switch quality manually?

I am using nginx for live adaptive bitrate streaming. The live stream itself works fine: the chunks are being created and the master playlist is being generated as well, as shown in the screenshot.
My config:
application live {
live on; # Allows live input
exec_push /usr/bin/ffmpeg -i rtmp://localhost:1935/$app/$name
-force_key_frames "expr:gte(t,n_forced*3)" -c:v libx264 -vprofile baseline -vlevel 3.1 -s 640x360 -b:v 1200k -strict -2 -c:a aac -ar 44100 -ac 2 -b:a 96k -f flv rtmp://localhost/show/$name_hi
-force_key_frames "expr:gte(t,n_forced*3)" -c:v libx264 -vprofile baseline -vlevel 3.1 -s 240x360 -b:v 1200k -strict -2 -c:a aac -ar 44100 -ac 2 -b:a 96k -f flv rtmp://localhost/show/$name_low;
}
This is my master playlist:
These are the individual variant playlists (.m3u8 files):
But when I point video.js (with hls.js and the quality selector added) at the master playlist, the quality menu only shows:
auto
undefinedp
But when I use some other test stream from the internet, it shows all the available qualities.
Using my live stream generated with nginx/ffmpeg:
Using the other live stream:

You need to add the RESOLUTION attribute to the EXT-X-STREAM-INF tag in your master playlist. It is optional per RFC 8216 (https://www.rfc-editor.org/rfc/rfc8216#section-4.3.4.2), but the quality selector UI plugin requires it.
See: https://github.com/chrisboustead/videojs-hls-quality-selector/issues/8
Nginx RTMP module config example:
hls_variant <variant_name> BANDWIDTH=<bandwidth>,RESOLUTION=<resolution>;
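For instance, with the _hi/_low outputs from your exec_push block it could look roughly like this in the show application (the BANDWIDTH numbers are only placeholders, measure your own encodes; the resolutions match the -s values in your command):
hls_variant _hi BANDWIDTH=1400000,RESOLUTION=640x360;
hls_variant _low BANDWIDTH=400000,RESOLUTION=240x360;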

Related

How to limit bitrate for an NGINX RTMP server

We have the NGINX RTMP module installed, and while testing it we found that the output bitrate was around 7 Mbps regardless of the input stream's bitrate. Since a lot of people watch these streams, how can we reduce it to about 4 Mbps for this module?
Also, does NGINX's RTMP module support H.265 instead of the standard H.264, which could help bring the bitrate down to about 2 Mbps?
You can transcode the incoming RTMP stream with ffmpeg using -b:v and -maxrate; that way you control the maximum bitrate. Here is a simple example (it assumes a second application named show as well):
application live {
live on;
exec_push ffmpeg -i rtmp://localhost/$app/$name -async 1 -vsync -1
-c:a libfdk_aac -b:a 128k -c:v libx264 -b:v 2000k -maxrate 3000k -f flv -preset superfast -profile:v baseline rtmp://localhost:1935/show/$name_with_maxrate;
}
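One caveat not mentioned in the answer: libx264 ignores -maxrate unless a VBV buffer size is also given, so the video options would likely need to become something like (the bufsize value is only an illustration, commonly around 1-2x the maxrate):
-c:v libx264 -b:v 2000k -maxrate 3000k -bufsize 6000k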

NGINX and FFMPEG generate dynamic adaptive streaming

In this configuration file
https://github.com/TareqAlqutami/rtmp-hls-server/blob/master/conf/nginx.conf#L24-L30
each received stream is transcoded for adaptive streaming: a single ffmpeg command takes the input and transforms the source into 4 different streams with different bitrates and qualities (the settings respect the aspect ratio).
How can we dynamically generate variants? I.e., for a 1080p input generate all variants, but for a 240p input generate no variants.
My setup starts without errors.
You need to configure the log and see what error it gives.
Another option is to check manually: you may be using a codec that is not installed.
I will take a look at your setup, maybe I can contribute here. In the meantime, here is a config that works for me:
application live {
live on; # Allows live input
exec ffmpeg -i rtmp://localhost/live/$name -threads 8
-c:v libx264 -profile:v baseline -b:v 768K -s 640x360 -vf "drawtext= fontcolor=red: fontsize=20: fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: text='360': x=10: y=10:" -f flv -c:a aac -ac 1 -strict -2 -b:a 96k rtmp://localhost/show/$name_360
-c:v libx264 -profile:v baseline -b:v 1024K -s 960x540 -vf "drawtext= fontcolor=red: fontsize=20: fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: text='480': x=10: y=10:" -f flv -c:a aac -ac 1 -strict -2 -b:a 128k rtmp://localhost/show/$name_480
-c:v libx264 -profile:v baseline -b:v 1920K -s 1280x720 -vf "drawtext= fontcolor=red: fontsize=20: fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: text='720': x=10: y=10:" -f flv -c:a aac -ac 1 -strict -2 -b:a 128k rtmp://localhost/show/$name_720
-c:v libx264 -profile:v baseline -b:v 4000K -s 1920x1080 -vf "drawtext= fontcolor=red: fontsize=20: fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: text='1080': x=10: y=10:" -f flv -c:a aac -ac 1 -strict -2 -b:a 128k rtmp://localhost/show/$name_1080;
}
application show {
live on; # Allows live input from above
hls on; # Enable HTTP Live Streaming
# hls_fragment 5s;
# Pointing this to an SSD is better as this involves lots of IO
hls_path /dest;
#hls_variant _240 BANDWIDTH=288000;
hls_variant _360 BANDWIDTH=448000;
hls_variant _480 BANDWIDTH=1152000;
hls_variant _720 BANDWIDTH=2048000;
hls_variant _1080 BANDWIDTH=4096000;
}
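As in the first answer above, if your player's quality selector needs resolution labels you can also append RESOLUTION to each hls_variant line; the values below are assumptions matching the -s sizes used in the exec command:
hls_variant _360 BANDWIDTH=448000,RESOLUTION=640x360;
hls_variant _480 BANDWIDTH=1152000,RESOLUTION=960x540;
hls_variant _720 BANDWIDTH=2048000,RESOLUTION=1280x720;
hls_variant _1080 BANDWIDTH=4096000,RESOLUTION=1920x1080;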

watermark on transcoding rtmp video nginx

Hello, I am transcoding video with RTMP and I want to overlay some text. I put Arial.ttf in the home directory of the server but it doesn't work. This is my command:
-map 0:0 -map 0:1 -strict -2 -crf 26 -vcodec libx264 -preset superfast
-acodec aac -b:a 128k -vf scale=-1:720 -aspect 16:9 -g 50 -r 30 -ar 48000
-ac 2 -vf drawtext="fontfile=/home/Arial.ttf: text='TEXT': fontcolor=white: fontsize=24: box=1: boxcolor=black: x=10:y=10"
-f flv
Can someone help me, please?
Thanks
Make sure you use an absolute path to the font and that the file is actually there:
drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:
text='Test Text': x=100: y=50: fontsize=24: fontcolor=yellow@0.2:
box=1: boxcolor=red@0.2"
Redirect the output to a log file and check whether the command reports any errors.
You can also include a watermark image instead:
-i /path/to/watermark.png -filter_complex "overlay=main_w-overlay_w-4:4"
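One more thing to check, which is about the command in the question rather than the font: -vf is passed twice (once for scale, once for drawtext), and ffmpeg only applies the last -vf option per stream, so the scale filter is silently dropped. Combine both filters into a single chain, for example:
-vf "scale=-1:720,drawtext=fontfile=/home/Arial.ttf: text='TEXT': fontcolor=white: fontsize=24: box=1: boxcolor=black: x=10: y=10"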

Use FFMPEG to restream RTMP source to YouTube - no video stream in output

I am attempting to grab the .m3u8 file from an nginx-rtmp server and pass it along to YouTube rtmp. I believe this to be possible (for example here: https://stackoverflow.com/a/11978820/1552594 although this is on the same host). The command I am using is:
ffmpeg -analyzeduration 0 -i \
http://source.rtmp.server/hls/stream.m3u8 -pix_fmt yuv420p \
-f flv rtmp://a.rtmp.youtube.com/live2/xxxx-xxxx-xxxx-xxxx
However, the output contains only audio and YouTube doesn't like it. The command produces the following:
As you can see, there is no video stream in the output metadata, the stream mapping shows only audio, and the progress line shows 0 kB of video for 651 kB of audio.
Any help much appreciated.
MORE INFO
Improved version of the command lifted from this article:
https://judge2020.com/restreaming-a-m3u8-hls-stream-to-youtube-using-ffmpeg/
"Restreaming a m3u8 HLS stream to Youtube using FFMPEG" AKA exactly what I am trying to do.
The command I am sending is now:
ffmpeg -re -i "http://source.rtmp.server/hls/stream.m3u8" \
-strict -2 -c:v copy -c:a aac -ar 44100 -ab 128k -ac 2 -flags \
+global_header -bsf:a aac_adtstoasc -bufsize 3000k -f flv \
"rtmp://a.rtmp.youtube.com/live2/xxx-xxxx-xxxx-xxxx"
I got pretty much exactly the same result, except that the audio is now read and output using the AAC codec.
MORE MORE INFO
I have found that adding a mapping can force the video stream into the output:
ffmpeg -re -i "http://source.rtmp.server/hls/stream.m3u8" \
-strict -2 -c:v copy -c:a -map 0:0 -map 0:1 -ar 44100 -ab 128k -ac 2 \
-flags +global_header -bsf:a aac_adtstoasc -bufsize 1000k \
-f flv "rtmp://a.rtmp.youtube.com/live2/xxxx-xxxx-xxxx-xxxx"
This throws up the error that has presumably been resulting in the video stream being silently dropped:
Finally worked it out. The last error above was a red herring and was caused by the missing codec argument for the audio option -c:a.
The complete working command is as follows:
ffmpeg -probesize 100M -analyzeduration 20M -re \
-i "http://source.rtmp.server/hls/stream.m3u8" -strict -2 -c:v \
libx264 -pix_fmt yuv420p -c:a aac -map 0:0 -map 0:1 -ar 44100 \
-ab 128k -ac 2 -b:v 2567k -flags +global_header -bsf:a aac_adtstoasc \
-bufsize 1000k -f flv "rtmp://a.rtmp.youtube.com/live2/xxxx-xxxx-xxxx-xxxx"
The important parts are -probesize and -analyzeduration; these need tweaking until they work. The -re flag reads the input at its native frame rate, which is important when restreaming a live source. The video codec declarations also matter: without -c:v libx264 -pix_fmt yuv420p it throws errors about the output size being 0x0. Finally, the -map 0:0 -map 0:1 flags ensure that both streams are included in the output.

How can I create a master m3u8 playlist for my encrypted sub-playlists (created with ffmpeg)?

If I create three outputs with the following ffmpeg command for an encrypted HLS stream, how am I able to create a master.m3u8 variant playlist (with correct BANDWIDTH)?
./ffmpeg -re -i Test_1080p.mp4 \
-c:a aac -b:a 128k -c:v libx264 -s 1920x1080 -g 48 -keyint_min 48 -sc_threshold 0 -bf 3 -b_strategy 2 -b:v 7800k -maxrate 8600k -bufsize 7800k -f hls -hls_time 6 -hls_list_size 0 -hls_key_info_file enc.keyinfo ./1080p/index.m3u8 \
-c:a aac -b:a 128k -c:v libx264 -s 1280x720 -g 48 -keyint_min 48 -sc_threshold 0 -bf 3 -b_strategy 2 -b:v 4500k -maxrate 5000k -bufsize 4500k -f hls -hls_time 6 -hls_list_size 0 -hls_key_info_file enc.keyinfo ./720p/index.m3u8 \
-c:a aac -b:a 64k -c:v libx264 -s 640x360 -g 48 -keyint_min 48 -sc_threshold 0 -bf 3 -b_strategy 2 -b:v 730k -maxrate 800k -bufsize 730k -f hls -hls_time 6 -hls_list_size 0 -hls_key_info_file enc.keyinfo ./360p/index.m3u8
Here is an example I found, but I think the BANDWIDTH values are not correct for my output files. How do I calculate the correct bandwidth?
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=150000,RESOLUTION=640x360
http://example.com/360p/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=240000,RESOLUTION=1280x720
http://example.com/720p/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=640000,RESOLUTION=1920x1080
http://example.com/1080p/index.m3u8
Apple's variantplaylistcreator tool will not work in this case because it needs .plist files, which ffmpeg does not generate.
I think ffmpeg is not able to create a master.m3u8 playlist for the generated output files.
Update January 2018
You can now create master playlists directly with FFmpeg using master_pl_name and var_stream_map. See the documentation.
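As a rough sketch of what that could look like for two of the renditions above (the stream mapping, bitrates and output naming here are assumptions; check the HLS muxer documentation for your ffmpeg build):
ffmpeg -re -i Test_1080p.mp4 \
-map 0:v -map 0:a -map 0:v -map 0:a \
-c:v libx264 -c:a aac \
-filter:v:0 scale=1920:1080 -b:v:0 7800k -b:a:0 128k \
-filter:v:1 scale=640:360 -b:v:1 730k -b:a:1 64k \
-f hls -hls_time 6 -hls_list_size 0 -hls_key_info_file enc.keyinfo \
-master_pl_name master.m3u8 \
-var_stream_map "v:0,a:0 v:1,a:1" \
index_%v.m3u8
This produces index_0.m3u8 and index_1.m3u8 plus a master.m3u8 that references both variants.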
Original answer: FFmpeg doesn't create the master playlist itself, but you can write it manually like in your example.
The BANDWIDTH attribute represents the peak bitrate of the variant. For multiplexed streams like yours the value is the peak audio bitrate + peak video bitrate + mux overhead (including any encryption padding). If you have separate video/audio you must take into account the highest-bitrate combination of renditions.
The muxing overhead is shown when the ffmpeg command ends but only if you have a single output. Once you choose the encoding parameters you can run some tests and make an educated guess based on the results.
Keep in mind that, based on Apple's guidelines, the measured peak bit rate must be within 10% of the declared BANDWIDTH for VOD, and within 25% measured over 1 hour for live content.
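As a rough worked example (the overhead percentage is only an assumption, so measure your own output): for the 1080p rendition above, the peak video rate is capped at 8600k by -maxrate and the audio at 128k, so roughly 8600 + 128 = 8728 kbit/s; adding something like 5-10% for TS muxing and encryption padding gives a declared BANDWIDTH in the region of 9200000-9600000.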
