ffmpeg hls stream to nginx webdav. Remove old segments - nginx

I'm trying to stream an mp4 file in a loop to my nginx server, and I need old segments to be removed:
ffmpeg -re -stream_loop -1 -i /data/samples/BigBuckBunny.mp4 -c copy -f hls -hls_time 5 -hls_flags delete_segments -hls_list_size 5 http://127.0.0.1:8080/upload/stream.m3u8
Everything is OK, but when ffmpeg tries to remove an old segment I get this error in nginx:
[error] 22#22: *73174 DELETE with body is unsupported, client:
127.0.0.1, server: _, request: "DELETE /upload/stream16.ts HTTP/1.1", host: "127.0.0.1:8080"
My nginx config:
location /upload {
    root /data/live;
    dav_access user:rw group:rw all:rw;
    dav_methods PUT DELETE MKCOL COPY MOVE;
    create_full_put_path on;
    charset utf-8;
    autoindex on;
}
ffmpeg 4.4.1
nginx 1.21.4
What am I doing wrong?

It seems that the ffmpeg HTTP layer that underlies the HLS muxer defaults to chunked transfer encoding. When the DELETE request is made there is no body, but ffmpeg still sends the request with a single zero-length chunk.
The error message in nginx could be slightly more helpful. It's true that nginx does not support WebDAV DELETE requests with a body, but it also rejects DELETE requests that are merely marked as chunked transfer encoded, regardless of whether a body is actually present (see: https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_dav_module.c#L315), hence the error.
It looks like it should be possible to disable this behaviour in ffmpeg using the chunked_post option, but it doesn't appear to be working. I'm not sure whether that is a bug; it seems like a bit of a hack anyway.
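For reference, here is roughly how one would try that option on the command line (a sketch, assuming the http protocol's chunked_post option gets applied to the output URL; as noted, it did not seem to take effect in my tests):
ffmpeg -re -stream_loop -1 -i /data/samples/BigBuckBunny.mp4 -c copy \
    -f hls -hls_time 5 -hls_flags delete_segments -hls_list_size 5 \
    -chunked_post 0 \
    http://127.0.0.1:8080/upload/stream.m3u8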

Related

Unable to get nginx-vod-module plugin to work

This is my first time trying nginx-vod-module, or any video streaming for that matter.
I just want to play static mp4 videos which I place on the server, but via HLS instead of direct mp4 access. No actual live streaming.
Q1. Am I right in understanding that an mp4 video which I place locally on my server will automatically get broken down into segments for HLS?
My nginx installation is here: /opt/kaltura/nginx
The mp4 file is placed at /opt/kaltura/nginx/test/vid.mp4
In ../nginx/conf/server.conf, I have this:
location /hls/ {
    alias test/;
    vod hls;
    vod_bootstrap_segment_durations 2000;
    vod_bootstrap_segment_durations 2000;
    vod_bootstrap_segment_durations 2000;
    vod_bootstrap_segment_durations 4000;
    include /opt/kaltura/nginx/conf/cors.conf;
}
location / {
    root html;
}
Now, I am able to access the m3u8 file:
curl http://104.167xxxxx/hls/vid.mp4/index.m3u8
But when I try to open this file via VLC, I see these errors in errors.log:
2020/10/31 15:00:08 [error] 12749#0: *60 mp4_parser_validate_stsc_atom: zero entries, client: 49.207 ..., server: ubuntu, request: "GET /hls/vid.mp4/seg-1-v1.ts HTTP/1.1", host: "104.167. ..."
2020/10/31 15:00:08 [error] 12752#0: *61 mp4_parser_validate_stsc_atom: zero entries, client: 49.207 ..., server: ubuntu, request: "GET /hls/vid.mp4/seg-2-v1.ts HTTP/1.1", host: "104.167. ..."
2020/10/31 15:00:09 [error] 12749#0: *62 mp4_parser_validate_stsc_atom: zero entries, client: 49.207 ..., server: ubuntu, request: "GET /hls/vid.mp4/seg-3-v1.ts HTTP/1.1", host: "104.167. ..."
2020/10/31 15:00:10 [error] 12751#0: *63 mp4_parser_validate_stsc_atom: zero entries, client: 49.207 ..., server: ubuntu, request: "GET /hls/vid.mp4/seg-4-v1.ts HTTP/1.1", host: "104.167. ..."
Q2: Is HTTPS a must for this to work?
Q3: I don't see any /hls/vid.mp4 folder created anywhere on the server. Do I have to manually run ffmpeg separately to create the HLS segments?
What am I doing wrong?
I'm no Kaltura expert, but hopefully this will help narrow down some of your issues:
A1: Yes, Kaltura will repackage a plain mp4 into transport stream segments for HLS.
A2: No, this works over plain HTTP; I've run many tests over HTTP myself, and it does not require HTTPS.
A3: No, you don't need to run ffmpeg manually. I believe ffmpeg is a prerequisite, so it should be installed, but you do not need to chunk the mp4 yourself; the Kaltura plugin does this on the fly, which is why no /hls/vid.mp4 folder ever appears on disk.
I've not seen the particular error message you posted, so I'm afraid I can't help with that.

Curl ends with "curl: (6) Could not resolve host: HTTP"

I've been following a blog on how to compile ModSecurity with nginx. I tried to verify that everything works by creating the file /etc/nginx/conf.d/echo.conf, which contains:
server {
    listen localhost:8085;
    location / {
        default_type text/plain;
        return 200 "Thank you for requesting ${request_uri}\n";
    }
}
I ran the following in cmd:
sudo nginx -s reload
curl -D - http://localhost:8085 HTTP/1.1 200 OK
and I got
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Wed, 10 Jun 2020 19:31:08 GMT
Content-Type: text/plain
Content-Length: 27
Connection: keep-alive
Thank you for requesting /
curl: (6) Could not resolve host: HTTP
I have been on this for hours and can't figure out what to do. The two solutions I've found were
IPv6 enabled
Wrong DNS server
I reran the command with --ipv4 (curl --ipv4 -D - http://localhost:8085 HTTP/1.1 200 OK) with no success.
I also changed the nameserver in /etc/resolv.conf to 8.8.8.8 instead of 127.0.0.53 which also didn't work.
Any clues on what to do?
That error message appears because of the command syntax you used. With curl it is enough to run:
curl -D - http://localhost:8085
to make an HTTP request to the web server you specify (localhost in this case). Any additional arguments that are not options are treated as extra URLs to query, so curl tries to fetch HTTP as if you had typed http://HTTP, which simply will not work, at least until you define an entry for an HTTP host in your /etc/hosts, for example.
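You can see this directly (same server as above):
# correct: one URL, response headers dumped via -D -
curl -D - http://localhost:8085
# broken: "HTTP/1.1" is parsed as a second, schemeless URL with host "HTTP"
curl -D - http://localhost:8085 HTTP/1.1
# curl: (6) Could not resolve host: HTTP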

How to make an HTTP GET request manually with netcat?

So, I have to retrieve the temperature of any one of the cities from http://www.rssweather.com/dir/Asia/India.
Let's assume I want to retrieve Kanpur's.
How to make an HTTP GET request with Netcat?
I'm doing something like this.
nc -v rssweather.com 80
GET http://www.rssweather.com/wx/in/kanpur/wx.php HTTP/1.1
I don't know if I'm even headed in the right direction. I couldn't find any good tutorials on making an HTTP GET request with netcat, so I'm posting it here.
Of course you could dig into the standards via Google, but if you only want to fetch a single URL, it isn't worth the effort.
You could also start a netcat in listening mode on a port:
nc -l 64738
(Sometimes nc -l -p 64738 is the correct argument list)
...and then make a request to that port with a real browser. Just type http://localhost:64738 into your browser and see.
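When the browser connects, nc prints the raw request the browser sends, something like this (exact headers vary by browser):
GET / HTTP/1.1
Host: localhost:64738
Connection: keep-alive
(...more headers, then an empty line)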
In your case the actual problem is that HTTP/1.1 doesn't close the connection automatically; it waits for the next URL you want to retrieve. The solution is simple:
Use HTTP/1.0:
GET /this/url/you/want/to/get HTTP/1.0
Host: www.rssweather.com
<empty line>
or use a Connection: request header to tell the server you want it to close the connection after this request:
GET /this/url/you/want/to/get HTTP/1.1
Host: www.rssweather.com
Connection: close
<empty line>
Extension: after GET, write only the path part of the request. The hostname from which you want to get data belongs in the Host: header, as you can see in my examples. This is because multiple websites can run on the same web server, so the browser needs to tell the server which site it wants to load the page from.
This works for me:
$ nc www.rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.0
Host: www.rssweather.com
And then hit double <enter>, i.e. once for the remote http server and once for the nc command.
source: pentesterlabs
You don't even need to use/install netcat. Create a TCP socket via an unused file descriptor (I use 88 here), write the request into it, then read the response back from the same fd:
# bash-specific: open a bidirectional TCP connection to rssweather.com:80 as fd 88
exec 88<>/dev/tcp/rssweather.com/80
# write the request; most servers tolerate bare \n in place of \r\n
echo -e "GET /dir/Asia/India HTTP/1.1\nHost: www.rssweather.com\nConnection: close\n\n" >&88
# read the response from the fd, stripping HTML tags
sed 's/<[^>]*>/ /g' <&88
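Note that /dev/tcp is a bash feature, not a real device, so this won't work under plain sh or dash. If a server insists on strict CRLF line endings, printf is a safer way to write the request:
printf 'GET /dir/Asia/India HTTP/1.1\r\nHost: www.rssweather.com\r\nConnection: close\r\n\r\n' >&88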
On MacOS, you need the -c flag as follows:
Little-Net:~ minfrin$ nc -c rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.1
Host: rssweather.com
Connection: close
[empty line]
The response then appears as follows:
HTTP/1.1 200 OK
Date: Thu, 23 Aug 2018 13:20:49 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
The -c flag is described as "Send CRLF as line-ending".
To be HTTP/1.1 compliant, you need the Host header, as well as the "Connection: close" if you want to disable keepalive.
Test it out locally with python3 http.server
This is also a fun way to test it out. On one shell, launch a local file server:
python3 -m http.server 8000
Then on the second shell, make a request:
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
The Host: header is required in HTTP 1.1.
This shows an HTML listing of the directory, just as you would see from:
firefox http://localhost:8000
Next you can try to list files and directories and observe the response:
printf 'GET /my-subdir/ HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
printf 'GET /my-file HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
Every time you make a successful request, the server prints:
127.0.0.1 - - [05/Oct/2018 11:20:55] "GET / HTTP/1.1" 200 -
confirming that it was received.
example.com
This IANA-maintained domain is another good test URL:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc example.com 80
and compare with: http://example.com/
https SSL
nc does not seem to be able to handle https URLs. Instead, you can use:
sudo apt-get install nmap
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | ncat --ssl github.com 443
See also: https://serverfault.com/questions/102032/connecting-to-https-with-netcat-nc/650189#650189
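Another option, assuming openssl is installed (it nearly always is), is openssl s_client, which handles the TLS layer in the same pipe-friendly way:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\nConnection: close\r\n\r\n' | openssl s_client -quiet -connect github.com:443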
If you try nc, it just hangs:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 443
and trying port 80:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 80
just gives a redirect response to the https version:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com/
Connection: keep-alive
Tested on Ubuntu 18.04.

Make nginx gzip only if asked

It seems like once gzip is turned on, nginx always gzips, even if the request doesn't set Accept-Encoding: gzip in the header.
Is there any way to make it not return gzip when the request doesn't send Accept-Encoding?
Thanks!
Try viewing the headers with curl -I http://example.com/ (where example.com is replaced with your site). Verify that Content-Encoding: gzip isn't among the resulting headers.
Often other test clients add headers (including Accept-Encoding) automatically, which can make testing misleading.
Side note: if you want to verify that the server does return gzip encoding when requested, you can use the alternate test query curl -I --compressed http://example.com/.
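A more direct comparison (again with example.com standing in for your site) is to make real GET requests and watch for the Content-Encoding header:
# no Accept-Encoding sent: the body should come back uncompressed
curl -s -o /dev/null -D - http://example.com/ | grep -i content-encoding
# advertise gzip explicitly: a compressible response should now be gzipped
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://example.com/ | grep -i content-encoding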

NGINX + uWSGI Connection Reset by Peer

I'm trying to host Bottle Application on NGINX using uWSGI.
Here's my nginx.conf
location /myapp/ {
    include uwsgi_params;
    uwsgi_param X-Real-IP $remote_addr;
    uwsgi_param Host $http_host;
    uwsgi_param UWSGI_SCRIPT myapp;
    uwsgi_pass 127.0.0.1:8080;
}
I'm running uwsgi as this
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py
I'm sending a POST request, using Dev HTTP Client for that. The request hangs forever when I send it to
http://localhost/myapp
uWSGI server receives the request and prints
[pid: 4683|app: 0|req: 1/1] 127.0.0.1 () {50 vars in 806 bytes} [Thu Oct 25 12:29:36 2012] POST /myapp => generated 737 bytes in 11 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
but in nginx error log
2012/10/25 12:20:16 [error] 4364#0: *11 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /myApp/myapp/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:8080", host: "localhost"
What to do?
Make sure to consume your POST data in your application.
For example, if you have a Django/Python application:
def my_view(request):
    # ensure the POST body is read, even if you don't need it;
    # without this you get: failed (104: Connection reset by peer)
    data = request.body
    return HttpResponse("Hello World")
Some details: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
You cannot post data from the client without reading it in your application. While this is not a problem for uWSGI itself, nginx will fail. You can 'fake' it using the --post-buffering option of uWSGI to automatically read data from the socket (if available), but you'd better 'fix' your app (even if I don't consider this a bug).
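Applied to the command line from the question, that workaround would look roughly like this (a sketch; --post-buffering tells uWSGI to read and buffer request bodies itself):
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py --post-buffering 8192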
This problem occurs when the body of a request is not consumed, since uwsgi cannot know whether it will still be needed at some point. So uwsgi will keep holding on to the data either until it is consumed or until nginx resets the connection (because upstream timed out).
The author of uwsgi explains it here:
08:21 < unbit> plaes: does your DELETE request (not-response) have a body ?
08:40 < unbit> and do you read that body in your app ?
08:41 < unbit> from the nginx logs it looks like it has a body and you are not reading it in the app
08:43 < plaes> so DELETE request shouldn't have the body?
08:43 < unbit> no i mean if a request has a body you have to read/consume it
08:44 < unbit> otherwise the socket will be clobbered
So to fix this, make sure to always either read the whole request body, or not send a body if it is not necessary (for a DELETE, e.g.).
Don't use threads!
I had the same problem with the Global Interpreter Lock in Python under uWSGI.
When I don't use threads, there are no connection resets.
Example uWSGI config (1 GB RAM on the server):
[root@mail uwsgi]# cat myproj_config.yaml
uwsgi:
  print: Myproject Configuration Started
  socket: /var/tmp/myproject_uwsgi.sock
  pythonpath: /sites/myproject/myproj
  env: DJANGO_SETTINGS_MODULE=settings
  module: wsgi
  chdir: /sites/myproject/myproj
  daemonize: /sites/myproject/log/uwsgi.log
  max-requests: 4000
  buffer-size: 32768
  harakiri: 30
  harakiri-verbose: true
  reload-mercy: 8
  vacuum: true
  master: 1
  post-buffering: 8192
  processes: 4
  no-orphans: 1
  touch-reload: /sites/myproject/log/uwsgi
