Live stream stops when a second viewer joins - nginx

I am trying to build a live stream platform using nginx and nginx-http-flv-module (with nginx-rtmp-module).
I have been using nginx-http-flv-module's guide.
I built an nginx server with rtmp and http-flv support.
My nginx.conf file:
#user nobody;
worker_processes 1;

error_log logs/error.log debug;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8080;
        server_name localhost;
        ...
        location /live {
            flv_live on; #open flv live streaming (subscribe)
            chunked_transfer_encoding on; #open 'Transfer-Encoding: chunked' response
            add_header 'Access-Control-Allow-Origin' '*'; #add additional HTTP header
            add_header 'Access-Control-Allow-Credentials' 'true'; #add additional HTTP header
            add_header 'Access-Control-Expose-Headers' 'Content-Length';
        }
    }
}

rtmp {
    server {
        listen 1935;
        ping 30s;
        notify_method get;

        application myapp {
            live on;
        }
    }
}
I start publishing my stream using OBS and play the stream in the browser using flv.js like this:
<video id="videoElement" controls autoplay></video>
...
<script>
    let videoElement = document.getElementById('videoElement');
    let flvPlayer = flvjs.createPlayer({
        type: 'flv',
        isLive: true,
        url: 'http://192.168.1.122:8080/live?port=1935&app=myapp&stream=test'
    });
    flvPlayer.attachMediaElement(videoElement);
    flvPlayer.load();
</script>
And everything works great! The stream plays in the browser as expected. The problem is that whenever a second viewer starts watching the stream (e.g. if I open it in another browser tab), the player stops playing and starts loading indefinitely. What could be causing this problem?

I am the author of nginx-http-flv-module. I am sorry: the bug was introduced by a commit on July 7, 2019, and it has already been fixed.
You can try the latest code.
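A typical way to pick up the fix is to re-fetch the module and rebuild nginx against it. A rough sketch (the nginx version and directory layout here are placeholders; reuse the configure flags from your existing build):
# fetch the latest module source
git clone https://github.com/winshining/nginx-http-flv-module.git
# rebuild nginx with the module compiled in (adjust the source directory and flags to your setup)
cd nginx-1.16.1
./configure --add-module=../nginx-http-flv-module
make && sudo make install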

Related

Can't get dash streaming to work from nginx-rtmp

I'm trying to broadcast a stream from OBS (codec set to x264) to nginx with an RTMP server and then view the stream as MPEG-DASH in VLC.
I've set up nginx with the rtmp module and that works. I can stream to nginx and receive the stream via rtmp in VLC. For that I used this URL: rtmp://127.0.0.1/live/stream
This is my config.
#user nobody;
worker_processes 1;

error_log logs/error.log debug;

events {
    worker_connections 1024;
}

rtmp {
    server {
        listen 1935;
        ping 30s;
        notify_method get;

        application live {
            live on;
            dash on;
            dash_path /tmp/dash;
        }
    }
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    sendfile off;
    tcp_nopush on;
    aio off;
    directio 512;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    server {
        listen 8080;

        location /dash {
            # Serve DASH fragments
            types {
                application/dash+xml mpd;
                video/mp4 mp4;
            }
            root /tmp;
            add_header Cache-Control no-cache;

            # CORS setup
            add_header 'Access-Control-Allow-Origin' '*' always;
            add_header 'Access-Control-Expose-Headers' 'Content-Length';

            # Allow CORS preflight requests
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }
        }
    }
}
I can see the stream.mpd in the file explorer, but VLC always says it can't open the source. I've tried both URLs, http://127.0.0.1/tmp/dash/stream.mpd and http://127.0.0.1/dash/stream.mpd, but neither worked.
I also tried it with HLS, but I couldn't get that to work either.
To avoid trouble with file permissions, I set the whole folder to chmod 777.
Any ideas what could be wrong or what I could try? Thank you.
I solved it, sort of...
I used the config from here: HTML5 live streaming
I had to change a few things in the file (the http block was missing, for example), but then it worked with HLS. It turns out it doesn't really make a difference whether I use MPEG-DASH or HLS, so I'm fine with it.
Another thing I found out is that I had always tried it without specifying the port. For RTMP that caused no problems, since I could use the standard port, but for HTTP I had to use port 8080 because 80 was already in use, and in VLC I didn't think to specify the port.
Now it works for me. ;)
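For reference, with the http block above listening on port 8080 and root /tmp under location /dash, the playback URL in VLC looks like this (assuming the publish stream name is stream):
http://127.0.0.1:8080/dash/stream.mpd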

Setting up nginx to stream HLS/RTMP simultaneously

I have an nginx web server with the RTMP module running perfectly and pushing out video to several RTMP destinations (Facebook, YouTube, Periscope, etc.).
I now need the stream to output in the HLS protocol so I can build a custom player for it without flash dependency. I've tried following a few other tutorials, but I am struggling.
I have the default config file that came with the installation of nginx with the following added to the end:
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;
            push rtmp://<streaming-service/key>;
        }
    }
}
What exactly do I need to add to this config file to retain my ability to push out video to these RTMP destinations as well as have a HLS feed I can use in a player without Flash?
I already have FFMPEG installed on the machine in order to send video to Periscope. I've seen solutions that do not need FFMPEG, but I just wanted to add that I did have it installed and am somewhat familiar with it.
EDIT: for more information, I am sending video through the server via a Teradek encoder.
I solved this with FFMPEG. The pull directive wasn't working properly. Here's the config file that worked for me:
#user nobody;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    server {
        listen 80;
        server_name localhost;

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp/;
            add_header Cache-Control no-cache;
            add_header 'Access-Control-Allow-Origin' '*';
        }

        location / {
            root html;
            index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;
            push rtmp://<location/key>;
            exec ffmpeg -i rtmp://localhost/live/test -vcodec libx264 -vprofile baseline -x264opts keyint=40 -acodec aac -strict -2 -f flv rtmp://localhost/hls/test;
        }

        application hls {
            live on;
            hls on;
            hls_path /tmp/hls/;
            hls_fragment 6s;
            hls_playlist_length 60s;
        }
    }
}
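To play the resulting HLS feed without Flash, a small hls.js-based page can point at the playlist written by the hls application above. This is only a sketch: the CDN URL, element id, and stream name test (taken from the exec line) are assumptions, and the host/port should match wherever the /hls location is served.
<video id="player" controls autoplay muted></video>
<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
<script>
    // Playlist produced by the "hls" application (hls_path /tmp/hls/, stream name "test")
    const src = 'http://localhost/hls/test.m3u8';
    const video = document.getElementById('player');
    if (Hls.isSupported()) {
        const hls = new Hls();
        hls.loadSource(src);
        hls.attachMedia(video);
    } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
        // Safari and iOS play HLS natively
        video.src = src;
    }
</script>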

nginx and uwsgi: large difference between upstream response time and request time

Disclaimer: this is technically related to a school project, but I've talked to my professor and he is also confused by this.
I have a nginx load balancer reverse proxying to several uwsgi + flask apps. The apps are meant to handle very high throughput/load. My response times from uwsgi are pretty good, and the nginx server has low CPU usage and load average, but the overall request time is extremely high.
I've looked into this issue and all the threads I've found say that this is always caused by the client having a slow connection. However, the requests are being made by a script on the same network, and this issue isn't affecting anyone else's setup, so it seems to me that it's a problem with my nginx config. This has me totally stumped though because it seems almost unheard of for nginx to be the bottleneck like this.
To give an idea of the magnitude of the problem, there are three primary request types: add image, search, and add tweet (it's a twitter clone).
For add image, the overall request time is ~20x longer than the upstream response time on average. For search, it's a factor of 3, and add tweet 1.5. My theory for the difference here is that the amount of data being sent back and forth is much larger for add image than either search or add tweet, and larger for search than add tweet.
My nginx.conf is:
user www-data;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 30000;

events {
    worker_connections 30000;
}

http {
    # Settings.
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_body_buffer_size 200K;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # SSL.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    # Logging
    log_format req_time '$remote_addr - $remote_user [$time_local] '
                        'REQUEST: $request '
                        'STATUS: $status '
                        'BACK_END: $upstream_addr '
                        'UPSTREAM_TIME: $upstream_response_time s '
                        'REQ_TIME: $request_time s '
                        'CONNECT_TIME: $upstream_connect_time s';
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log req_time;

    # GZIP business
    gzip on;
    gzip_disable "msie6";

    # Routing.
    upstream media {
        server 192.168.1.91:5000;
    }

    upstream search {
        least_conn;
        server 192.168.1.84:5000;
        server 192.168.1.134:5000;
    }

    upstream uwsgi_app {
        least_conn;
        server 192.168.1.85:5000;
        server 192.168.1.89:5000;
        server 192.168.1.82:5000;
        server 192.168.1.125:5000;
        server 192.168.1.86:5000;
        server 192.168.1.122:5000;
        server 192.168.1.128:5000;
        server 192.168.1.131:5000;
        server 192.168.1.137:5000;
    }

    server {
        listen 80;
        server_name localhost;

        location /addmedia {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass media;
        }

        location /media {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass media;
        }

        location /search {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass search;
        }

        location /time-search {
            rewrite /time-search(.*) /times break;
            include uwsgi_params;
            uwsgi_pass search;
        }

        location /item {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            if ($request_method = DELETE) {
                uwsgi_pass media;
            }
            if ($request_method = GET) {
                uwsgi_pass uwsgi_app;
            }
            if ($request_method = POST) {
                uwsgi_pass uwsgi_app;
            }
        }

        location / {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass uwsgi_app;
        }
    }
}
And my uwsgi ini is:
[uwsgi]
chdir = /home/ubuntu/app/
module = app
callable = app
master = true
processes = 25
socket = 0.0.0.0:5000
socket-timeout = 5
die-on-term = true
home = /home/ubuntu/app/venv/
uid = 1000
buffer-size=65535
single-interpreter = true
Any insights as to the cause of this problem would be greatly appreciated.
So, I think I figured this out. From reading the nginx docs (https://www.nginx.com/blog/using-nginx-logging-for-application-performance-monitoring/) it seems that there are three metrics to pay attention to: upstream_response_time, request_time, and upstream_connect_time. I was focused on the difference between upstream_response_time and request_time.
However, upstream_response_time is the time between the upstream accepting the request and returning a response. It doesn't include upstream_connect_time, the time it takes to establish a connection to the upstream server. In the context of uwsgi this is very important, because if there isn't a worker available to accept a request, the request gets put on a backlog. I think the time a request waits on the backlog may count as upstream_connect_time rather than upstream_response_time in nginx, because uwsgi hasn't read any of the bytes yet.
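For anyone trying to separate those phases, a log_format along these lines makes the breakdown visible. This is only a sketch (the format name is arbitrary), but $request_time, $upstream_connect_time, $upstream_header_time, and $upstream_response_time are all standard nginx variables:
log_format upstream_timing '$remote_addr [$time_local] "$request" $status '
                           'request_time=$request_time '
                           'upstream_connect_time=$upstream_connect_time '
                           'upstream_header_time=$upstream_header_time '
                           'upstream_response_time=$upstream_response_time';
access_log /var/log/nginx/access.log upstream_timing;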
Unfortunately, I can't be 100% certain, because I never got a "slow" run where I was logging upstream_connect_time. But the only things I changed that improved my score were just "make the uwsgi faster" changes (devote more cores to searching, increase replication factor in the DB to make searches faster)... So yeah, turns out the answer was just to increase throughput for the apps.

Nginx losing headers sometimes

I am using nginx as a reverse proxy server for an Android application (GET/POST requests only). Some of the data is carried in the headers. In some cases nginx loses the "id" or "fail_id" header.
config:
user user;
worker_processes 4;

error_log /var/log/nginx/error.log;

events {
    worker_connections 100000;
    use epoll;
}

http {
    upstream myproject {
        server 192.168.88.246:2053;
    }

    server {
        listen 2054;
        ssl on;
        ssl_certificate /home/user/android/cert/cert.pem;
        ssl_certificate_key /home/user/android/cert/key.pem;
        proxy_read_timeout 600;
        proxy_send_timeout 600;

        location / {
            proxy_pass http://myproject;
            proxy_pass_request_headers on;
        }
    }
}
Can I pass the original request headers through unchanged?
Updated:
A more detailed investigation found that nginx drops the "fail_id" header. All the other headers come through.
Problem solved!
Nginx's default config drops headers that contain underscores.
This directive solved the problem:
underscores_in_headers on;
Thank you for the underscores directive. I used the underscores_in_headers on; directive, and the header value with an underscore is now passed to my Node app.
Now I am able to access the header value (api_key) using a Postman request and an Angular request from the web.
But when the request is made from the Android app, with api_key set in the request header, I am unable to access api_key.
My config is:
server {
    listen 80;
    server_name uat.api.myserver.com;
    underscores_in_headers on;

    location / {
        proxy_pass http://localhost:9102;
    }
}

Tainted canvas when accessing HLS live stream on Galaxy Ace 2

I'm using avconv (ffmpeg) and nginx to stream frames from a camera over HLS and RTMP. Since my phone doesn't support Flash, it uses HTML5 video tags and HLS to stream the video. One feature I'm trying to support is recording the live stream and saving it to a file. However, I am unable to record the stream due to a cross-domain issue.
The live stream is coming from my machine on port 8080 (I'm referencing it using my internal IP, 10.150.x.x:8080/hls/mystream.m3u8) and the server is run on my machine through port 8000 (also referenced through internal IP). Because they are on different ports it is still viewed as cross domain.
In my nginx.conf I have added Access-Control-Allow-Origin: *
and I've also tried adding Access-Control-Allow-Methods GET, PUT, POST, DELETE, OPTIONS
and Access-Control-Allow-Headers Content-Type, Authorization, X-Requested-With
When I examine the headers using curl -I http://10.150.x.x:8080/hls/mystream.m3u8 and through firefox and chrome from my desktop I can see the appropriate headers. But when I look at the headers using the chrome dev tools for my phone I get "CAUTION: Provisional headers shown."
I attempt to capture the frames using canvas.toDataURL() and it is this function that is giving the security error.
Why is it that even though I have Access-Control-Allow-Origin: * in my nginx.conf I still get a cross domain issue?
nginx.conf:
#user nobody;
worker_processes 1;

error_log logs/error.log debug;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8080;
        server_name 10.150.x.x;
        #server_name bsid.ca;

        add_header 'Access-Control-Allow-Origin' "*";
        #add_header 'Access-Control-Allow-Methods' "GET, PUT, POST, DELETE, OPTIONS";
        #add_header 'Access-Control-Allow-Headers' "Content-Type, Authorization, X-Requested-With";

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /path/to/nginxVideo;
        }

        # sample handlers
        #location /on_play {
        #    if ($arg_pageUrl ~* 127.0.0.1) {
        #        return 201;
        #    }
        #    return 202;
        #}
        #location /on_publish {
        #    return 201;
        #}
        #location /vod {
        #    alias /var/myvideos;
        #}

        # rtmp stat
        location /stat {
            rtmp_stat all;
            rtmp_stat_stylesheet stat.xsl;
        }
        location /stat.xsl {
            # you can move stat.xsl to a different location
            root /usr/build/nginx-rtmp-module;
        }

        # rtmp control
        location /control {
            rtmp_control all;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

rtmp {
    server {
        listen 1935;
        ping 30s;
        notify_method get;

        application myapp {
            live on;

            # sample play/publish handlers
            #on_play http://127.0.0.1:8080/on_play;
            #on_publish http://127.0.0.1:8080/on_publish;

            # sample recorder
            #recorder rec1 {
            #    record all;
            #    record_interval 30s;
            #    record_path /tmp;
            #    record_unique on;
            #}

            # sample HLS
            hls on;
            hls_path /home/richard/Media/nginxVideo/hls;
            hls_base_url http://10.150.x.x:8080/hls/;
            hls_sync 2ms;
        }

        # Video on demand
        #application vod {
        #    play /var/Videos;
        #}

        # Video on demand over HTTP
        #application vod_http {
        #    play http://127.0.0.1:8080/vod/;
        #}
    }
}
Full error:
Uncaught SecurityError: Failed to execute 'toDataURL' on 'HTMLCanvasElement': Tainted canvases may not be exported.
UPDATE
After a lengthy discussion with Ray Nicholus it was determined that the issue was that I was setting the crossorigin attribute on my video element after the stream had begun. By setting it earlier I was able to access the frames without the need of a proxy.
Dev tools will not reveal most specifics about the request if it believes the response has not properly acknowledged the cross-origin request. All I can think of is that you are setting the crossorigin attribute after the bytes have started to stream in (at which point the video is already tainted), or that your server is not properly acknowledging the request. If the request is lacking an Origin header, the former is likely the case.
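In practice that means marking the video element as CORS-enabled before its source is attached, and only then drawing frames to a canvas. A minimal sketch (the element ids, canvas size, and stream URL are assumptions based on the question):
<video id="liveVideo" crossorigin="anonymous" controls autoplay></video>
<canvas id="grab" width="640" height="360"></canvas>
<script>
    const video = document.getElementById('liveVideo');
    // crossorigin is set in the markup above, before the source is assigned
    video.src = 'http://10.150.x.x:8080/hls/mystream.m3u8';

    function captureFrame() {
        const canvas = document.getElementById('grab');
        canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
        // Succeeds only if the media responses carry Access-Control-Allow-Origin
        return canvas.toDataURL('image/png');
    }
</script>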
