Stream .m3u8 on a webpage with Clapprjs - nginx

I'm using nginx-rtmp to convert RTMP to HLS and play the stream on a web page with Clappr. But Clappr keeps requesting old .ts segments (causing 404 errors, because they have already been removed on the server). How can I fix this?
Sorry, this is the first time I'm using nginx-rtmp and streaming.
Nginx-rtmp config:
rtmp {
    server {
        listen 1935; # Listen on standard RTMP port
        chunk_size 4000;
        buflen 1s;

        application show {
            live on;
            record off;

            # Turn on HLS
            hls on;
            hls_path /nginx/hls/;
            hls_fragment 600ms;
            hls_playlist_length 5s;

            # disable consuming the stream from nginx as rtmp
            deny play all;
        }
    }
}
Code to play the stream on a web page:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>videojs-contrib-hls embed</title>
    <link href="video-js.css" rel="stylesheet">
    <script type="text/javascript" src="https://cdn.jsdelivr.net/npm/clappr@latest/dist/clappr.min.js"></script>
</head>
<body>
    <div id="player"></div>
    <script>
        var player = new Clappr.Player({
            source: "<my url>",
            parentId: "#player",
        });
    </script>
</body>
</html>
Clappr requests an old .ts segment, but on the server that segment has already been deleted.

It seems you want to do low-latency HLS, so you set the fragment duration to 600ms, which might be causing the problem.
I tested this player; it doesn't start playing from the first segment, it starts from the third segment (livestream-22.ts) in the playlist, so I don't think the player itself is the problem.
In the configuration, I notice that the hls_fragment is very small:
hls_fragment 600ms;
hls_playlist_length 5s;
I think you want to do a low-latency live stream, but then you should also set the encoder GOP to 1s, for example by setting the keyframe interval in OBS.
Please note that OBS only supports a GOP of 1s or more, so the HLS fragment should also be set to 1000ms or more. I think the problem is introduced by this mismatch, so please change the config to the one below and test whether it works:
hls_fragment 1000ms;
hls_playlist_length 5s;
Please show me the live.m3u8 content if the segment is still not found.
By the way, if you want to do low-latency live streaming, please note that you can't only set the duration of the ts segments (hls_fragment) on the server; you should also set the keyframe interval (GOP) in OBS, or in whatever encoder you use, as in the sketch below.
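For instance, if you publish with ffmpeg instead of OBS, a minimal sketch of forcing a 1-second GOP could look like this (the input file, the 30fps frame rate, and the stream key test are assumptions, not from the question; the application name show matches the config above):
# Force a keyframe every 30 frames (a 1s GOP at 30fps) so segments can be cut at 1000ms
ffmpeg -re -i input.mp4 \
    -c:v libx264 -g 30 -keyint_min 30 -sc_threshold 0 \
    -c:a aac \
    -f flv rtmp://localhost/show/test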
If you use your own server to deliver the live stream, you can use HTTP-FLV or HTTP-TS, which also work well and are similar to HLS.
When delivering HLS via a CDN, which doesn't support HTTP-FLV or HTTP-TS, you should instead try players that start playing from the last segment, because for HLS the latency is determined by the player's behavior.
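For example, since Clappr plays HLS through hls.js, you can hint the player to stay near the live edge. This is a sketch, assuming your Clappr build supports the playback.hlsjsConfig pass-through; liveSyncDurationCount is an hls.js option counting segments back from the live edge:
var player = new Clappr.Player({
    source: "<my url>",
    parentId: "#player",
    playback: {
        hlsjsConfig: {
            // start and stay about 3 segment durations behind the live edge
            liveSyncDurationCount: 3
        }
    }
});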

Related

RTMP buffer/relay for network instability compensation

Imagine this scenario: a live RTMP broadcast must be conducted from a location where network problems are likely to occur. There will be a second link (LTE), which can be considered a last resort because it's not very reliable. Automatic link switching is in place, but everything takes time. I thought it would be possible to first broadcast to some kind of relay station with a 1-2 minute buffer, so that in case of losing the connection it would keep the stream alive for some time until the main location reconnects to one of the links. I've tried nginx-rtmp-module and played with all kinds of options, but every time I disconnect the source from the network there is a hiccup in the stream (I've tested it on a YouTube live stream). The first time I try, I get a few seconds until the stream freezes, but from the second time on it's almost instant when the OBS machine loses its connection to the internet. The client buffer length on nginx has almost no impact other than the time I have to wait for the stream to show up on YouTube.
My config:
rtmp {
    ping 10s;

    server {
        listen 1935;
        buflen 20s;
        chunk_size 4096;

        application live {
            idle_streams off;
            live on;
            record off;
            push rtmp://a.rtmp.youtube.com/live2/my_super_duper_key;
        }
    }
}
I would be very grateful for any help. Maybe I should be using something other than nginx?

Maximum recommended client_max_body_size value on Nginx

What is the maximum recommended value of client_max_body_size in Nginx for uploading large files?
The web app I'm working on right now expects uploads of at most 100MB. Should I set client_max_body_size to something like 150MB and upload in a single request, or use a slicing strategy and send 1MB chunks to the server, keeping client_max_body_size low?
This is subjective and use-case dependent. So the question you should ask yourself is: what is the maximum size beyond which you don't want to allow an upload? Then use that.
The next mistake people make is that they just set
client_max_body_size 150M;
in the nginx config in the server block. This is actually wrong, because you don't want to allow everyone to upload 150M of data to every URL. You will have a specific URL for which you want the upload to be allowed, so you should have a location block like the one below:
location /upload/largefileupload {
    client_max_body_size 150M;
}
For the rest of the URLs you can keep it as low as 2MB. This way you will be less susceptible to a generic DDoS attack (a large-body upload attack). See the URL below; a sketch of the combined configuration follows it.
https://www.tomaz.me/2013/09/15/avoiding-ddos-attacks-caused-by-large-http-request-bodies-by-enforcing-a-hard-limit-in-your-web-server.html
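A minimal sketch of that layout, assuming the roughly 100MB uploads from the question (the server_name and the backend address are placeholders, not from the original post):
server {
    listen 80;
    server_name example.com;              # placeholder

    # conservative default for every URL
    client_max_body_size 2m;

    # only the dedicated upload endpoint accepts large bodies
    location /upload/largefileupload {
        client_max_body_size 150M;
        proxy_pass http://127.0.0.1:8080; # wherever the app listens (assumption)
    }
}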

Nginx timeouts don't seem to apply to a location

If I have the following timeout rules in my http block:
keepalive_timeout 1s;
send_timeout 1s;
And the following location:
location = /slow {
    echo_sleep 10;
    echo "So slow";
}
I would expect /slow to trigger a 408 or 504 (timeout), but the request is actually honoured. That tells me I'm handling timeouts incorrectly. So how would I limit how long nginx spends processing a request?
The documentation clearly says:
"Sets a timeout for transmitting a response to the client. The timeout is set only between two successive write operations, not for the transmission of the whole response. If the client does not receive anything within this time, the connection is closed."
Your location runs echo_sleep 10; and only then echo "So slow";, so nothing is written to the client during the sleep; the send_timeout clock only starts once echo begins writing the response, not at echo_sleep.
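If the goal is instead to bound how long nginx waits for a slow backend to produce a response, the relevant directive for proxied locations is proxy_read_timeout, which fires between successive reads from the upstream and returns a 504. A sketch, assuming a hypothetical upstream address:
location /api/ {
    proxy_pass http://127.0.0.1:8000; # hypothetical upstream
    # give up with a 504 if the upstream stays silent
    # for more than 2 seconds between successive reads
    proxy_read_timeout 2s;
}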

Nginx client_body_buffer_size and client_max_body_size optimizations for large POST requests

I have an API that receives anywhere from 1K to 20MB of data in each transmission. I also have a website that would only ever receive less than 10K in a single transmission. Both the API and the website are behind the same Nginx proxy server.
From the docs for client_body_buffer_size:
"If the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file."
This means that any time I receive a request larger than the default buffer size, it will be written to disk.
Given that I can receive large payloads, would it be best to set the client_body_buffer_size equal to client_max_body_size, which for me is 20MB? I assume this would prevent nginx from writing the request to disk every time.
Are there any consequences to setting the client_body_buffer_size so high? Would this affect the website, which never receives such large requests?
I would recommend using a smaller client_body_buffer_size (bigger than 10k but not by much, perhaps the x64 default of 16k), since a bigger buffer eases DoS attack vectors: you would be allocating more memory for it, as opposed to disk, which is cheaper.
Please note that you can also set a different client_max_body_size and client_body_buffer_size on a specific server or location (see the Context notes in the docs), so your website wouldn't have to allow 20MB uploads.
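A sketch of that per-vhost split, assuming the API and website from the question (the server names are placeholders):
# API vhost: accepts large payloads, buffers the first 16k in memory
server {
    server_name api.example.com;    # placeholder
    client_max_body_size 20m;
    client_body_buffer_size 16k;
    [...]
}

# website vhost: never receives more than ~10k per request
server {
    server_name www.example.com;    # placeholder
    client_max_body_size 1m;
    client_body_buffer_size 16k;
    [...]
}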
Here's an interesting thread on client_body_buffer_size; it also points out that if the client body is bigger than your client_body_buffer_size, the nginx variable $request_body will be empty.
It depends on your server's memory and how much traffic you get.
A simple formula: MAX_RAM = client_body_buffer_size × concurrent_traffic − OS_RAM − FS_CACHE. For example, a 16k buffer held by 10,000 concurrent requests accounts for roughly 160MB on its own.
(Exactly the same applies to php-fpm pool tuning, or even MySQL/Elasticsearch.)
The key is to monitor everything (RAM/CPU/traffic) and change the settings according to your usage: start small, of course, then increase as far as you can.

Using nginx to simulate slow response time for testing purposes

I'm developing a Facebook canvas application and I want to load-test it. I'm aware of Facebook's restriction on automated testing, so I simulated the Graph API calls by creating a fake web application served under nginx and altering my /etc/hosts to point graph.facebook.com to 127.0.0.1.
I'm using JMeter to load-test the application and the simulation is working fine. Now I want to simulate slow Graph API responses and see how they affect my application. How can I configure nginx so that it inserts a delay into each request sent to the simulated graph.facebook.com application?
You can slow down localhost (the network) by adding delay.
Use the ifconfig command to see the network devices: on localhost it may be lo, and on a LAN it's eth0.
To add delay, use this command (it adds a 1000ms delay on the lo network device):
tc qdisc add dev lo root netem delay 1000ms
To change the delay, use this one:
tc qdisc change dev lo root netem delay 1ms
To see the current delay:
tc qdisc show dev lo
And to remove the delay:
tc qdisc del dev lo root netem delay 1000ms
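A quick way to check that the delay is active is a plain ping; note that on lo the delay is applied in each direction, so the round-trip time will show roughly double the configured value:
ping -c 3 127.0.0.1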
My earlier answer works, but it is better suited to a case where all requests need to be slowed down. I've since had to come up with a solution that would allow me to turn on the rate limit only on a case-by-case basis, and came up with the following configuration. Make sure to read the entire answer before you use this, because there are important nuances to know.
location / {
    if (-f somewhere/sensible/LIMIT) {
        echo_sleep 1;
        # Yes, we need this here too.
        echo_exec /proxy$request_uri;
    }
    echo_exec /proxy$request_uri;
}

location /proxy/ {
    internal;
    # Ultimately, all this goes to a Django server.
    proxy_pass http://django/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
}
Important note: the presence or absence of forward slashes in the various paths makes a difference. For instance, proxy_pass http://django, without a trailing slash, does not do the same thing as the line in the code above.
The principle of operation is simple. If the file somewhere/sensible/LIMIT exists, then requests that match location / are paused for one second before moving on. So in my test suite, when I want a network slowdown, I create the file, and when I want to remove the slowdown, I remove it (and I have cleanup code that removes it between tests; the toggle commands are sketched below). In theory I'd much prefer using variables for this rather than a file, but the problem is that variables are reinitialized with each request. So we cannot have one location block that sets a variable to turn the limit on, and another to turn it off. (That's the first thing I tried, and it failed due to the lifetime of variables.) It would probably be possible to use the Perl module or Lua to persist variables, or to fiddle with cookies, but I've decided not to go down those routes.
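In a test suite, toggling the slowdown is then just file creation and removal, for example:
# enable the 1-second delay
touch somewhere/sensible/LIMIT
# disable it again
rm -f somewhere/sensible/LIMIT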
Important notes:
It is not a good idea to mix directives from the echo module (like echo_sleep and echo_exec) with the stock directives of nginx that result in the production of a response. I initially had echo_sleep together with proxy_pass and got bad results. That's why we have the location /proxy/ block that segregates the stock directives from the echo stuff. (See this issue for a similar conflict that was resolved by splitting a block.)
The two echo_exec directives, inside and outside the if, are necessary due to how if works.
The internal directive prevents clients from directly requesting /proxy/... URLs.
I've modified an nginx config to use limit_req_zone and limit_req to introduce delays. The following reduces the rate of service to 20 requests per second (rate=20r/s). I've set burst=1000 so that my application would not get 503 responses.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=20r/s;
    [...]

    server {
        [...]

        location / {
            limit_req zone=one burst=1000;
            [...]
        }
    }
}
The documentation is here. I do not believe there is a way to specify a uniform delay using this method.
