RTMP buffer/relay for network instability compensation - nginx

Imagine this scenario: a live RTMP broadcast must be conducted from a location where network problems are likely to occur. There will be a second link (LTE) which can be considered a "last resort" because it's not as reliable. Automatic link switching is in place, but everything takes time. I thought it would be possible to first broadcast to some kind of relay station with a 1-2 minute buffer, so that in case of losing the connection it would keep the stream alive for some time until the main location is reconnected to one of the links. I've tried nginx-rtmp-module and played with all kinds of options, but every time I disconnect the source from the network there is a hiccup on the stream (I've tested it on a YouTube live stream). The first time I try, I get a few seconds until the stream freezes, but from the second time on it's almost instant once the OBS machine loses its internet connection. The client buffer length on nginx has almost no impact other than the time I have to wait for the stream to show up on YouTube.
My config:

rtmp {
    ping 10s;

    server {
        listen 1935;
        buflen 20s;
        chunk_size 4096;

        application live {
            idle_streams off;
            live on;
            record off;
            push rtmp://a.rtmp.youtube.com/live2/my_super_duper_key;
        }
    }
}
I would be very grateful for any help. Maybe I should be using something different from nginx?

Related

Rate limiting in the built-in HTTP server of Rserve?

I'm looking into the built-in HTTP server of Rserve (1.8.5) after modifying .http.request() from FastRWeb. It works fine with the updated request function, but the issue is that whenever the number of concurrent requests is high, some or most of them throw the following errors:
WARNING: fork() failed in fork_http(): Cannot allocate memory
WARNING: fork() failed in Rserve_prepare_child(): Cannot allocate memory
This is because there is not enough free memory remaining, so it is necessary to limit the number of requests in one way or another.
I tried a couple of client layers: (1) Python's requests + hug libraries, and (2) Python's pyRserve + hug libraries, where the number of worker processes is adjusted to the number of CPUs. I also tried a reverse proxy with Nginx, in both single-container and multi-container setups (3) (4).
In all cases I observe some overhead (~300-450 ms) compared to running only Rserve with the built-in HTTP server.
I guess using it as it is would be the most efficient option, but I'm concerned that it just keeps trying to fork and returning errors. (Besides the errors being thrown quickly, it wouldn't be easy to auto-scale with typical metrics such as CPU utilization or mean response time.)
Can anyone tell me if there is a way to enforce rate limiting, with or without relying on another tool, that doesn't sacrifice performance?
My Rserve config is roughly as follows.
http.port 8000
socket /var/rserve/socket
sockmod 0666
control disable
Also here is a simplified nginx.conf.
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    upstream backend {
        server 127.0.0.1:8000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
I was misled by Locust (a load-testing tool): it showed cached output for the setup of Rserve with the built-in HTTP server.
Manual investigation shows that Rserve + Nginx returns a slightly better result.
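For what it's worth, one option I'm considering is capping concurrency at the proxy with nginx's stock limit_conn directives, so that excess requests get a 503 from nginx instead of triggering fork() failures in Rserve. Below is a minimal sketch built on the simplified nginx.conf above; the zone name, zone size and the limit of 64 connections are placeholders I would still have to tune, and I haven't measured its overhead against this setup.

worker_processes auto;

events {
    worker_connections 1024;
}

http {
    # One shared counter per virtual server (not per client IP).
    limit_conn_zone $server_name zone=perserver:10m;

    upstream backend {
        server 127.0.0.1:8000;
    }

    server {
        listen 80;

        location / {
            # Requests beyond the limit are answered with 503 at the proxy,
            # so Rserve never attempts the extra fork(). The value 64 is a placeholder.
            limit_conn perserver 64;
            proxy_pass http://backend;
        }
    }
}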

Add delay between retries when origin is down (Nginx, 502)

Pretty much the title. I've got a Node app behind nginx, and when I restart the app I would like nginx to delay the response and retry the request a couple of times with some delay in between. Everything I found would only instantly retry N times, but that obviously isn't useful when the app is down for a restart, which is my use case. Is there some way? I don't even care how hacky it is; I just need a solution that doesn't involve starting a second instance of the app and killing the first one once the second one has started.
Thanks!
You can add the same server multiple times as upstream entries and configure the proxy_next_upstream, proxy_next_upstream_timeout and proxy_next_upstream_tries options as well. Reference
upstream node_servers {
    server 127.0.0.1:12005;
    server 127.0.0.1:12005;
}
...
proxy_next_upstream http_502;
proxy_next_upstream_timeout 60;
proxy_next_upstream_tries 3;
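To give a bit more context, here is a rough sketch of how those pieces could sit together in one config; the listen port and the location are just examples, not taken from your setup:

upstream node_servers {
    # The same app listed twice, so nginx has a "next" upstream entry to retry.
    server 127.0.0.1:12005;
    server 127.0.0.1:12005;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_servers;
        # Conditions and limits for moving on to the next upstream entry.
        proxy_next_upstream http_502;
        proxy_next_upstream_timeout 60;
        proxy_next_upstream_tries 3;
    }
}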
However, I would recommend using a process manager like pm2, which supports graceful reload/restart. It is especially relevant if you are running your Node.js server on more than one CPU in clustered mode.

About Nginx status

This is my nginx status below:
Active connections: 6119
server accepts handled requests
418584709 418584709 455575794
Reading: 439 Writing: 104 Waiting: 5576
The value of Waiting is much higher than Reading and Writing. Is that normal?
Is it because keep-alive is enabled?
But when I send a large number of requests to the server, the values of Reading and Writing don't increase, so I think there must be a bottleneck in nginx or somewhere else.
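For reference, this output comes from nginx's stub_status module, exposed with something along these lines (the port and path here are only an example, not my exact config):

server {
    listen 8080;

    location /nginx_status {
        # ngx_http_stub_status_module prints the Active/Reading/Writing/Waiting counters.
        stub_status;          # older nginx versions need "stub_status on;"
        allow 127.0.0.1;
        deny all;
    }
}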
The Waiting value is Active - (Reading + Writing), i.e. connections still open, waiting for either a new request or the keepalive expiration.
You could change the keepalive default (which is 75 seconds):
keepalive_timeout 20s;
or tell the browser when it should close the connection by adding an optional second timeout to the header sent to the browser:
keepalive_timeout 20s 20s;
but in this nginx page about keepalive you can see that some browsers do not care about the header (anyway, your site wouldn't gain much from this optional parameter).
Keepalive is a way to reduce the overhead of creating connections, as most of the time a user will navigate through the site, etc. (plus the multiple requests from a single page to download CSS, JavaScript, images, and so on).
It depends on your site; you could reduce the keepalive, but keep in mind that establishing connections is expensive. This is a trade-off you have to refine depending on the site's statistics. You could also decrease the timeout little by little (75s -> 50, then a week later 30...) and see how the server behaves.
You don't really want to fix it, as "waiting" means keep-alive connections. They consume almost no resources (socket + about 2.5M of memory per 10000 connections in nginx).
Are the requests short-lived? It's possible they're reading/writing and then closing within a short amount of time.
If you're genuinely interested in fixing it, you can test whether nginx is the bottleneck by setting keep-alive to 0 in your nginx config:
keepalive_timeout 0;

Using nginx to simulate slow response time for testing purposes

I'm developing a facebook canvas application and I want to load-test it. I'm aware of the facebook restriction on automated testing, so I simulated the graph api calls by creating a fake web application served under nginx and altering my /etc/hosts to point graph.facebook.com to 127.0.0.1.
I'm using jmeter to load-test the application and the simulation is working ok. Now I want to simulate slow graph api responses and see how they affect my application. How can I configure nginx so that it inserts a delay to each request sent to the simulated graph.facebook.com application?
You can slow down localhost (network) traffic by adding delay.
Use the ifconfig command to see the network devices: on localhost it may be lo, and on a LAN it's eth0.
To add delay, use this command (adding a 1000 ms delay on the lo network device):
tc qdisc add dev lo root netem delay 1000ms
To change the delay, use this one:
tc qdisc change dev lo root netem delay 1ms
To see the current delay:
tc qdisc show dev lo
And to remove the delay:
tc qdisc del dev lo root netem delay 1000ms
My earlier answer works but it is more adapted to a case where all requests need to be slowed down. I've since had to come up with a solution that would allow me to turn on the rate limit only on a case-by-case basis, and came up with the following configuration. Make sure to read the entire answer before you use this, because there are important nuances to know.
location / {
    if (-f somewhere/sensible/LIMIT) {
        echo_sleep 1;
        # Yes, we need this here too.
        echo_exec /proxy$request_uri;
    }
    echo_exec /proxy$request_uri;
}

location /proxy/ {
    internal;
    # Ultimately, all this goes to a Django server.
    proxy_pass http://django/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
}
Important note: the presence or absence of forward slashes in the various paths makes a difference. For instance, proxy_pass http://django, without a trailing slash, does not do the same thing as the line in the code above.
The principle of operation is simple. If the file somewhere/sensible/LIMIT exists, then requests that match location / are paused for one second before moving on. So in my test suite, when I want a network slowdown, I create the file, and when I want to remove the slowdown, I remove it. (And I have cleanup code that removes it between each test.) In theory I'd much prefer using variables for this rather than a file, but the problem is that variables are reinitialized with each request. So we cannot have one location block that sets a variable to turn the limit on, and another to turn it off. (That's the first thing I tried, and it failed due to the lifetime of variables.) It would probably be possible to use the Perl module or Lua to persist variables, or to fiddle with cookies, but I've decided not to go down those routes.
Important notes:
It is not a good idea to mix directives from the echo module (like echo_sleep and echo_exec) with the stock directives of nginx that result in the production of a response. I initially had echo_sleep together with proxy_pass and got bad results. That's why we have the location /proxy/ block that segregates the stock directives from the echo stuff. (See this issue for a similar conflict that was resolved by splitting a block.)
The two echo_exec directives, inside and outside the if, are necessary due to how if works.
The internal directive prevents clients from directly requesting /proxy/... URLs.
I've modified an nginx config to use limit_req_zone and limit_req to introduce delays. The following reduces the rate of service to 20 requests per second (rate=20r/s). I've set burst=1000 so that my application would not get 503 responses.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=20r/s;
    [...]

    server {
        [...]

        location / {
            limit_req zone=one burst=1000;
            [...]
        }
    }
}
The documentation is here. I do not believe there is a way to specify a uniform delay using this method.

Is there a way to set TCP_NODELAY to Socket.flush(), NetConnection.call() or sendToURL()?

I'm writing a real-time app using a Flex/Flash client and my own server running on Linux.
I'd like to be able to send data from the Flex client in real time (in response to user actions). I've tried the following methods:
flash.net.NetConnection.call()
flash.net.sendToURL()
flash.net.Socket.write() followed by flash.net.Socket.flush()
In each case these calls always wait for the server to send an ACK before they can send data again. In other words, if you do:
var nc:NetConnection;
// Setup code left out
nc.call("foo", someData);
// Some more code left out
nc.call("foo", moreData);
The second nc.call() above won't send data to the server until the ACK for the first call has been received. I'd like to be able to send data immediately without waiting for that ACK.
If the round-trip time to the server is long (e.g. 300ms) I can only send data to the server 3 times a second. Ideally I'd like to be able to send data up to 30 times per second, but this is only possible with a RTT of around 30ms at the moment.
It doesn't matter if the server itself gets the data 300ms late - I realise I can't beat the speed of light.
Is there any way to get the Flash Player to send data without waiting for an ACK? In other environments this is done by setting the TCP_NODELAY flag on the socket but it seems I don't have that level of control in Flash/Flex.
Update: I think I may have stumbled on a workaround for this. I think the Flash Player tries to get the host browser to give it a separate TCP connection for each NetConnection object, subject to the connection limit of each browser (e.g. 2 for IE). The connection limit can be worked around by using sub-domains (I haven't tried this yet), so hopefully it should be possible to get closer to real-time behaviour by using a pool of NetConnections.
Thanks.
Alternatively, you might have a look at something like Hemlock instead:
http://hemlock-kills.com/
Hi, the sockets have the Nagle algorithm turned on. What this does is hold a "first" write for 200 ms so it can be coalesced with any subsequent writes inside this time window, which means fewer packets go out across the network. For most modern applications and network engineers this is totally dumb and inappropriate, as they will want to set TCP_NODELAY and control exactly when transmission happens, and are quite capable of bunching their bytes up in a buffer before writing them to the socket. The reason for this is likely that someone at Adobe once wanted to restrict this option to push people towards the RTMP protocol and their commercial/expensive LCDS system (I think you can set the client no-delay option from the server side of an RTMP connection). Ahem, Adobe, get real and please add TCP_NODELAY asap, as you are just harming the Flash ecosystem and not increasing profits!!!

Resources