Right now I am saving all the files and folders created by HLS in a single directory, like this:
application live {
live on; # Allows live input
# Once receive stream, transcode for adaptive streaming
# This single ffmpeg command takes the input and transforms
# the source into 4 different streams with different bitrate
# and quality. P.S. The scaling done here respects the aspect
# ratio of the input.
exec ffmpeg -i rtmp://127.0.0.1/$app/$name -async 1 -vsync -1
-c:v libx264 -c:a aac -strict -2 -b:v 256k -b:a 32k -vf "scale=480:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -f flv rtmp://127.0.0.1/show/$name_low
-c:v libx264 -c:a aac -strict -2 -b:v 768k -b:a 96k -vf "scale=720:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -f flv rtmp://127.0.0.1/show/$name_mid
-c:v libx264 -c:a aac -strict -2 -b:v 1024k -b:a 128k -vf "scale=960:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -f flv rtmp://127.0.0.1/show/$name_high
-c:v libx264 -c:a aac -strict -2 -b:v 1920k -b:a 128k -vf "scale=1280:trunc(ow/a/2)*2" -tune zerolatency -preset veryfast -f flv rtmp://127.0.0.1/show/$name_hd720
-c copy -f flv rtmp://127.0.0.1/show/$name_src;
}
# This application is for splitting the stream into HLS fragments
application show {
live on; # Allows live input from above
hls on; # Enable HTTP Live Streaming
hls_cleanup off;
hls_nested on;
# Pointing this to an SSD is better as this involves lots of IO
#exec mkdir /mnt/HLS/;
hls_path /mnt/HLS/;
# Instruct clients to adjust resolution according to bandwidth
hls_variant _low BANDWIDTH=288000; # Low bitrate, sub-SD resolution
hls_variant _mid BANDWIDTH=448000; # Medium bitrate, SD resolution
hls_variant _high BANDWIDTH=1152000; # High bitrate, higher-than-SD resolution
hls_variant _hd720 BANDWIDTH=2048000; # High bitrate, HD 720p resolution
}
How do I make a new directory for every HLS stream, named after the stream?
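For what it's worth, nginx-rtmp already has a directive for exactly this: `hls_nested` (which is enabled in the config above) makes the module create a subdirectory of `hls_path` per stream name and write that stream's playlist and fragments there. A minimal sketch, using the same path as the config above:

```nginx
application show {
    live on;
    hls on;
    hls_path /mnt/HLS/;
    # With hls_nested on, a stream published as "mystream" is written to
    # /mnt/HLS/mystream/index.m3u8 plus its .ts fragments, instead of
    # /mnt/HLS/mystream.m3u8 directly under hls_path.
    hls_nested on;
}
```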
Okay, so I used the config below and everything works great; both YouTube and Facebook work.
rtmp {
server {
listen 1935;
chunk_size 8192;
application live {
record off;
live on;
push rtmp://a.rtmp.youtube.com/live2/djfghjkdfhgkjsdfglsjdfhj;
push rtmp://127.0.0.1:19350/rtmp/453uy4uty8ryt85ty85yt8; # Facebook
}
}
}
Now I have tried two separate ways to add a watermark. YouTube works fine every time, but Facebook does not stream at all, let alone with a watermark. The examples I have tried are below.
rtmp {
server {
listen 1935;
chunk_size 8192;
application live {
record off;
live on;
exec /bin/ffmpeg -i rtmp://127.0.0.1:1935/live/$name
-vf "movie=/etc/nginx/images/logo.png[logo];[0][logo]overlay=0:300"
-c:v libx264 -f flv rtmp://127.0.0.1:1935/push/$name;
}
application push {
live on;
push rtmp://a.rtmp.youtube.com/live2/djfghjkdfhgkjsdfglsjdfhj;
}
}
}
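Worth noting: in this first example, `application push` only forwards to YouTube, so nothing ever reaches Facebook. A sketch of the same layout with both push targets in the one application (stream keys copied from the config above):

```nginx
application push {
    live on;
    # The watermarked feed from ffmpeg lands here; forward it to both targets.
    push rtmp://a.rtmp.youtube.com/live2/djfghjkdfhgkjsdfglsjdfhj;
    push rtmp://127.0.0.1:19350/rtmp/453uy4uty8ryt85ty85yt8; # Facebook
}
```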
And another:
rtmp {
server {
listen 1935;
chunk_size 8192;
application live {
record off;
live on;
exec /bin/ffmpeg -i rtmp://127.0.0.1:1935/live/$name
-vf "movie=/etc/nginx/images/logo.png[logo];[0][logo]overlay=0:300"
-c:v libx264 -f flv rtmp://127.0.0.1:1935/push/$name;
exec /bin/ffmpeg -i rtmp://127.0.0.1:1935/live/$name
-vf "movie=/etc/nginx/images/logo.png[logo];[0][logo]overlay=0:300"
-c:v libx264 -f flv rtmp://127.0.0.1:1935/pushh/$name;
}
application push {
live on;
push rtmp://a.rtmp.youtube.com/live2/djfghjkdfhgkjsdfglsjdfhj;
}
application pushh {
live on;
push rtmp://127.0.0.1:19350/rtmp/453uy4uty8ryt85ty85yt8;
}
}
}
Now, for the life of me, I just cannot get my brain to work. I am very new to RTMP and have tried a dozen other ways before coming here for help. I know I am making some simple mistake somewhere, but on the other hand, having paid over $49 to restream.io for a shoddy service, I just have to learn this for my own servers.
exec /bin/ffmpeg -i rtmp://localhost/aaaaaaa -i /etc/nginx/images/logo.png
-filter_complex "overlay=10:10,split=2[out1][out2]"
-map '[out1]' -map 0:a -s 640x480 -c:v libx264 -c:a aac -ac 1 -strict -2 -b:v 256k -b:a 32k -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://localhost/push
-map '[out2]' -map 0:a -s 1280x720 -c:v libx264 -c:a aac -ac 1 -strict -2 -b:v 768k -b:a 96k -tune zerolatency -preset veryfast -crf 23 -f flv rtmp://localhost/pushh;
Working answer :)
I can't figure out why exec_record_done isn't being called here. on_record_done gets called, but not the exec_ one. I've checked the script files and the permissions are all OK. There's nothing in the nginx logs about exec_record_done... Any ideas or suggestions on how to debug further?
application live {
live on; # Allows live input
# For each received stream, transcode for adaptive streaming.
# This single ffmpeg command takes the input and transforms
# the source into 4 different streams with different bitrates
# and qualities. These settings respect the aspect ratio.
exec_push /usr/local/bin/ffmpeg -i rtmp://localhost:1935/$app/$name -async 1 -vsync -1
-c:v libx264 -c:a aac -b:v 256k -b:a 256k -vf "scale=480:trunc(ow/a/2)*2" -tune zerolatency -preset superfast -crf 23 -f flv rtmp://localhost:1935/show/$name_low
#-c:v libx264 -c:a aac -b:v 768k -b:a 256k -vf "scale=720:trunc(ow/a/2)*2" -tune zerolatency -preset superfast -crf 23 -f flv rtmp://localhost:1935/show/$name_mid
#-c:v libx264 -c:a aac -b:v 1024k -b:a 256k -vf "scale=960:trunc(ow/a/2)*2" -tune zerolatency -preset superfast -crf 23 -f flv rtmp://localhost:1935/show/$name_high
-c:v libx264 -c:a aac -b:v 1920k -b:a 256k -vf "scale=1280:trunc(ow/a/2)*2" -tune zerolatency -preset superfast -crf 23 -f flv rtmp://localhost:1935/show/$name_hd720
-c copy -f flv rtmp://localhost:1935/show/$name_src;
on_publish http://docker.for.mac.localhost:3000/_h/on_publish;
on_publish_done http://docker.for.mac.localhost:3000/_h/on_publish_done;
on_record_done http://docker.for.mac.localhost:3000/_h/on_record_done;
# exec_record_done /usr/local/bin/ffmpeg -i $path
# -c:v libx264 -crf 21 -preset veryfast -g 25 -sc_threshold 0
# -c:a aac -b:a 256k -ac 2
# -f hls -hls_time 6 -hls_playlist_type event /mnt/persist0/vod/$basename/$basename.m3u8;
#exec_record_done /usr/local/bin/rec2hls $path /mnt/persist0/vod $basename http://docker.for.mac.localhost:3000/_h/on_transcode_done/?key=$basename;
exec_record_done bash -c "/usr/local/bin/rec2hls $path /mnt/persist0/vod $basename http://docker.for.mac.localhost:3000/_h/on_transcode_done/?key=$basename";
drop_idle_publisher 10s;
recorder rec1 {
record all manual;
#record_suffix flv;
record_path /mnt/persist0/rec;
record_append on;
# record_unique on; # mainly for debugging, so we can see what's happening
# ^^^ Resume the same file in case of dropouts. The first recording filename will always be unique anyway, given how our watch_key (and the publish redirect URL) is generated.
}
}
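One way to narrow down where exec_record_done is failing: temporarily swap the handler for a trivial command that only appends to a writable log file. If the file never appears, nginx isn't firing the hook at all (with `record all manual`, recording only stops via the control interface or when the publisher disconnects); if it does appear, the problem is inside the rec2hls script or its environment. A hypothetical sketch (log path is illustrative):

```nginx
recorder rec1 {
    record all manual;
    record_path /mnt/persist0/rec;
    record_append on;
    # Debug hook: exec_* commands inherit no PATH and run as the nginx
    # worker user, so use absolute paths and a world-writable location.
    exec_record_done /bin/sh -c "echo path=$path basename=$basename >> /tmp/rec_done.log";
}
```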
In this configuration file:
https://github.com/TareqAlqutami/rtmp-hls-server/blob/master/conf/nginx.conf#L24-L30
the comment says: "for each received stream, transcode for adaptive streaming. This single ffmpeg command takes the input and transforms the source into 4 different streams with different bitrates and qualities. These settings respect the aspect ratio."
How can we dynamically generate variants? I.e., for a 1080p input generate all variants, but for a 240p input generate no variants.
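nginx-rtmp itself can't branch on input resolution, but `exec` can hand the stream to a wrapper script that probes it first and only spawns encoders for variants strictly below the source height. A sketch of that idea, assuming ffprobe/ffmpeg are installed and the script is invoked as `exec /usr/local/bin/adaptive_push.sh $app $name;` (the script name, variant table, and bitrates are illustrative, not from the linked config):

```shell
#!/bin/sh
# Hypothetical wrapper for nginx-rtmp's exec directive:
#   exec /usr/local/bin/adaptive_push.sh $app $name;

# Print one "height:video-bitrate" pair for every variant strictly below
# the input height, so a 240p input yields no variants at all.
variants_for_height() {
    h=$1
    for v in 240:288k 360:448k 480:1152k 720:2048k; do
        vh=${v%%:*}
        [ "$vh" -lt "$h" ] && echo "$v"
    done
    return 0
}

if [ $# -eq 2 ]; then
    app=$1 name=$2
    src="rtmp://127.0.0.1/$app/$name"
    # Probe the incoming stream's height once.
    height=$(ffprobe -v error -select_streams v:0 \
        -show_entries stream=height -of csv=p=0 "$src")
    # Always pass the source through untouched...
    ffmpeg -i "$src" -c copy -f flv "rtmp://127.0.0.1/show/${name}_src" &
    # ...and spawn one encoder per qualifying variant.
    for v in $(variants_for_height "$height"); do
        vh=${v%%:*} vb=${v#*:}
        ffmpeg -i "$src" -c:v libx264 -c:a aac -b:v "$vb" \
            -vf "scale=-2:$vh" -tune zerolatency -preset veryfast \
            -f flv "rtmp://127.0.0.1/show/${name}_${vh}" &
    done
    wait
fi
```

One caveat: nginx-rtmp kills the exec'd process on unpublish, so the wrapper keeps the ffmpeg children in its own process group by simply waiting on them.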
My setup starts up without errors.
You need to configure logging and see what error it gives. Another option is to check manually: you may be using a codec that is not installed. I will take a look at your setup; maybe I can contribute here.
application live {
live on; # Allows live input
exec ffmpeg -i rtmp://localhost/live/$name -threads 8
-c:v libx264 -profile:v baseline -b:v 768K -s 640x360 -vf "drawtext= fontcolor=red: fontsize=20: fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: text='360': x=10: y=10:" -f flv -c:a aac -ac 1 -strict -2 -b:a 96k rtmp://localhost/show/$name_360
-c:v libx264 -profile:v baseline -b:v 1024K -s 960x540 -vf "drawtext= fontcolor=red: fontsize=20: fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: text='480': x=10: y=10:" -f flv -c:a aac -ac 1 -strict -2 -b:a 128k rtmp://localhost/show/$name_480
-c:v libx264 -profile:v baseline -b:v 1920K -s 1280x720 -vf "drawtext= fontcolor=red: fontsize=20: fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: text='720': x=10: y=10:" -f flv -c:a aac -ac 1 -strict -2 -b:a 128k rtmp://localhost/show/$name_720
-c:v libx264 -profile:v baseline -b:v 4000K -s 1920x1080 -vf "drawtext= fontcolor=red: fontsize=20: fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: text='1080': x=10: y=10:" -f flv -c:a aac -ac 1 -strict -2 -b:a 128k rtmp://localhost/show/$name_1080;
}
application show {
live on; # Allows live input from above
hls on; # Enable HTTP Live Streaming
# hls_fragment 5s;
# Pointing this to an SSD is better as this involves lots of IO
hls_path /dest;
#hls_variant _240 BANDWIDTH=288000;
hls_variant _360 BANDWIDTH=448000;
hls_variant _480 BANDWIDTH=1152000;
hls_variant _720 BANDWIDTH=2048000;
hls_variant _1080 BANDWIDTH=4096000;
}
exec_push /usr/bin/ffmpeg -re -i rtmp://10.254.20.186:1935/$app/$name -ar 44100 -vcodec libx264 -g 25 -f flv rtmp://10.254.20.186/live/$name_hi;
I can use this code in my nginx server. It runs, but it stops working when I write libx265 instead of libx264.
The interesting thing is that I can run the following commands on my Linux computer without problems:
ffmpeg -i input.mp4 -c:v libx264 -crf 28 -c:a aac -b:a 128k output.mp4
ffmpeg -i input.mp4 -c:v libx265 -crf 28 -c:a aac -b:a 128k output.mp4
ffmpeg -i input.mp4 -c:v hevc -crf 28 -c:a aac -b:a 128k output264tt.mp4
ffmpeg -i input.mp4 -c:v h264 -crf 28 -c:a aac -b:a 128k output264tt.mp4
Software Info:
* ubuntu 16.04
* Nginx 1.15.1
So recently I was reading around about low latency for an HLS stream on nginx and found a solution here: Reduce HLS latency from +30 seconds.
This reduces the latency to ~7 seconds. I also want to transcode the stream, and the latency of the transcoded versions doesn't really matter (all I want is for the source to be low latency; if the transcoded versions can be too, that's a bonus). But when I do so, the source has no issue while the transcoded versions break: the browser player tries to fetch an earlier fragment that has already been deleted, which causes 404 errors. How do I solve this so that I can achieve ~7 seconds of latency for the source and have the transcoded versions working at the same time?
My current configuration:
FFmpeg transcode:
-c:v copy -preset:v ultrafast -b:v 6000K -c:a copy -tune zerolatency -f flv rtmp://localhost/stream/$name_source
-c:v libx264 -preset ultrafast -s 852x480 -tune fastdecode -b:v 1000K -c:a copy -tune zerolatency -f flv rtmp://localhost/stream/$name_medium
-c:v libx264 -preset ultrafast -s 1280x720 -tune fastdecode -b:v 3500K -c:a copy -tune zerolatency -f flv rtmp://localhost/stream/$name_high
-c:v libx264 -preset ultrafast -s 426x240 -b:v 400K -c:a copy -tune fastdecode -tune zerolatency -f flv rtmp://localhost/stream/$name_low
HLS:
hls_fragment 1s;
hls_playlist_length 4s;
hls_variant _source BANDWIDTH=600000;
hls_variant _high BANDWIDTH=350000;
hls_variant _medium BANDWIDTH=100000;
hls_variant _low BANDWIDTH=40000;
I'm able to get it to work when I set both the fragment and playlist length to the same amount of time:
hls_fragment 2s;
hls_playlist_length 2s;
The above is the lowest you can achieve while also transcoding: about ~5-6 seconds of latency for the source, while the transcoded versions sit at about ~13 seconds.
It is still not recommended, as it can cause some issues.
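One approach worth trying (an assumption, not something verified on this exact setup): publish the untouched source and the transcoded renditions to two separate HLS applications, so the source keeps the aggressive 1s/4s settings while the transcoded variants get a playlist long enough to absorb the encoder delay. App names, paths, and bandwidth values below are illustrative:

```nginx
# Low-latency HLS for the untouched source only.
application stream {
    live on;
    hls on;
    hls_path /tmp/hls_src;
    hls_fragment 1s;
    hls_playlist_length 4s;
}

# Transcoded renditions get a longer playlist so the player no longer
# requests fragments that were deleted while ffmpeg was lagging behind.
application abr {
    live on;
    hls on;
    hls_path /tmp/hls_abr;
    hls_fragment 2s;
    hls_playlist_length 12s;
    hls_variant _high BANDWIDTH=3500000;
    hls_variant _medium BANDWIDTH=1000000;
    hls_variant _low BANDWIDTH=400000;
}
```

The trade-off is that players must be pointed at two different playlists (the low-latency source vs. the adaptive set) instead of one combined master playlist.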