Adaptive bit rate streaming not working in nginx-vod-module in NGINX server - nginx

I have installed Nginx and configured VOD for adaptive streaming using nginx-vod-module. When requesting the master.m3u8 file, the same .ts files are served regardless of network bandwidth.
The master.m3u8 file has the following content:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1914317,RESOLUTION=1280x544,CODECS="avc1.64001f,mp4a.40.2"
http://localhost/content/Input.mp4/index-v1-a1.m3u8
The Nginx configuration is:
location /content {
vod hls;
vod_mode local;
root /usr/share/nginx/html;
gzip on;
gzip_types application/vnd.apple.mpegurl;
expires 100d;
add_header Last-Modified "Sun, 19 Nov 2000 08:52:00 GMT";
}
How can I enable adaptive bitrate streaming using nginx-vod-module, and what's the best way to verify it?

You encode multiple versions of your Input.mp4 at different resolutions/bitrates; the aspect ratio should stay the same. E.g.: Input_high.mp4, Input_low.mp4
You edit the master m3u8 playlist and add each rendition with its specific bitrate and resolution:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=...,RESOLUTION=...,CODECS="..."
/content/Input_low.mp4.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=...,RESOLUTION=...,CODECS="..."
/content/Input_high.mp4.m3u8
When nginx-vod-module receives a request for filename.mp4.m3u8, it automatically segments filename.mp4 for HLS and creates the playlist for you. E.g.: /content/Input_low.mp4.m3u8 serves /content/Input_low.mp4
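Filled in with concrete numbers, a complete master playlist might look like the sketch below. The high rendition reuses the values from the question; the low rendition's bandwidth, resolution, and codec string are illustrative placeholders - take the real values from your own encodes:

```
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=700000,RESOLUTION=640x272,CODECS="avc1.42e01e,mp4a.40.2"
/content/Input_low.mp4.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1914317,RESOLUTION=1280x544,CODECS="avc1.64001f,mp4a.40.2"
/content/Input_high.mp4.m3u8
```

The player picks the variant matching its measured bandwidth, so a simple way to verify switching is to throttle the connection (e.g. in browser dev tools) and watch which rendition's segments are requested.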

Videos greater than 2 MB are not processed by the Nginx server to backend & to AWS S3 bucket

We have been developing an enterprise application for the last two years. Based on microservice architecture, we have nine services with their respective databases and an Angular frontend on NGINX that calls/connects microservices. During our development, we implemented these services and their databases on the Hetzner cloud server with 4GB RAM and 2 CPUs over the internal network, and everything has been working seamlessly. We are uploading all images, pdf, and videos on AWS S3, and it has been smooth sailing. Videos of all sizes were uploaded and played without any issues.
We liked Hetzner and decided to go to production with them as well. We took the first server, installed Proxmox on it, and deployed LXC containers and our services. I tested again here, and no problems were found.
We then decided to take another server, deployed Proxmox, and clustered them. This is where the problem started, after we hired a network guy who configured a bridged network between the containers on both nodes. Each container pings the other fine, and telnet also connects over the internal network. The MTU set on this bridge is 1400.
Primary problem: we are NOT able to upload videos over 2 MB to S3 anymore from this network.
Other problems: these are intermittent issues, noted in the logs.
NGINX:
504 Gateway Time-out errors and the like, on multiple services: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 223.235.101.169, server: abc.xyz.com, request: "GET /courses/course HTTP/1.1", upstream: "http://10.10.XX.XX:8080//courses/course/toBeApprove", host: "abc.xyz.com", referrer: "https://abc.xyz.com/"
Tomcat:
com.amazonaws.services.s3.model.AmazonS3Exception: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: Amazon S3; Status Code: 400; Error Code: RequestTimeout; Request ID: 7J2EHKVDWQP3367G; S3 Extended Request ID: xGGCQhESxh/Mo6ddwtGYShLIeCJYbgCRT8oGleQu/IfguEfbZpTQXG/AIzgLnG2F5YuCqk7vVE8=), S3 Extended Request ID: xGGCQhESxh/Mo6ddwtGYShLIeCJYbgCRT8oGleQu/IfguEfbZpTQXG/AIzgLnG2F5YuCqk7vVE8=
(we increased all known timeouts, both in nginx and tomcat)
MySQL:
2022-09-08T04:24:27.235964Z 8 [Warning] [MY-010055] [Server] IP address '10.10.XX.XX' could not be resolved: Name or service not known
Other key points to note: we allow videos up to 100 MB to be uploaded, so the corresponding limits are set in the nginx and Tomcat configurations.
Nginx: client_max_body_size 100m;
And Tomcat:
<Connector port="8080" protocol="HTTP/1.1" maxPostSize="102400" maxHttpHeaderSize="102400" connectionTimeout="20000" redirectPort="8443" />
Over the last 15 days of reading and trials, we stopped all firewalls while debugging: ufw on the OS, the Proxmox firewall, and even the data center firewall.
This is our nginx.conf
http {
proxy_http_version 1.1;
proxy_set_header Connection "";
##
client_body_buffer_size 16K;
client_header_buffer_size 1k;
client_max_body_size 100m;
client_header_timeout 100s;
client_body_timeout 100s;
large_client_header_buffers 4 16k;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 300;
send_timeout 600;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
gzip on;
gzip_comp_level 2;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/x-javascript text/xml text/css application/xml;
}
These are our primary test/debugging trials.
**1. Testing with a small video (273 KB)**
a. Nginx log- clean, nothing related to operations
b. Tomcat log-
Start- CoursesServiceImpl - addCourse - Used Memory:73
add course 703
image file not null org.springframework.web.multipart.support.StandardMultipartHttpServletRequest$StandardMultipartFile@15476ca3
image save to s3 bucket
image folder name images
buckets3 lmsdev-cloudfront/images
image s3 bucket for call
imageUrl https://lmsdev-cloudfront.s3.amazonaws.com/images/703_4_istockphoto-1097843576-612x612.jpg
video file not null org.springframework.web.multipart.support.StandardMultipartHttpServletRequest$StandardMultipartFile@13419d27
video save to s3 bucket
video folder name videos
input Stream java.io.ByteArrayInputStream@4da82ff
buckets3 lmsdev-cloudfront/videos
video s3 bucket for call
video url https://lmsdev-cloudfront.s3.amazonaws.com/videos/703_4_giphy360p.mp4
Before Finally - CoursesServiceImpl - addCourse - Used Memory:126
After Finally- CoursesServiceImpl - addCourse - Used Memory:49
c. S3 bucket
[S3 bucket][1]
[1]: https://i.stack.imgur.com/T7daW.png
**3. Testing with a video of 2 MB (fractionally less)**
a. Progress bar keeps running about 5 minutes, then
b. Nginx logs-
2022/09/10 16:15:34 [error] 3698306#3698306: *24091 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 223.235.101.169, server: login.pathnam.education, request: "POST /courses/courses/course HTTP/1.1", upstream: "http://10.10.10.10:8080//courses/course", host: "login.pathnam.education", referrer: "https://login.pathnam.education/"
c. Tomcat logs-
Start- CoursesServiceImpl - addCourse - Used Memory:79
add course 704
image file not null org.springframework.web.multipart.support.StandardMultipartHttpServletRequest$StandardMultipartFile@352d57e3
image save to s3 bucket
image folder name images
buckets3 lmsdev-cloudfront/images
image s3 bucket for call
imageUrl https://lmsdev-cloudfront.s3.amazonaws.com/images/704_4_m_Maldives_dest_landscape_l_755_1487.webp
video file not null org.springframework.web.multipart.support.StandardMultipartHttpServletRequest$StandardMultipartFile@45bdb178
video save to s3 bucket
video folder name videos
input Stream java.io.ByteArrayInputStream@3a85dab9
And after a few minutes:
com.amazonaws.SdkClientException: Unable to execute HTTP request: Connection timed out (Write failed)
d. S3 Bucket – No entry
We then tried to upload the same video from our test server, and it was instantly uploaded to the S3 bucket.
Reading posts about similar problems, most relate to php.ini configuration and are thus not relevant to us.
I have now solved the issue: the MTU set in the LXC container differed from what was configured on the virtual switch. Proxmox does not offer a way to set the MTU while creating an LXC container (you would expect the bridge MTU to be used), so it is easy to miss.
Go to the conf file of the container; in my case the container ID is 100:
nano /etc/pve/lxc/100.conf
Find and edit this line:
net0: name=eno1,bridge=vmbr4002,firewall=1,hwaddr=0A:14:98:05:8C:C5,ip=192.168.0.2/24,type=veth
and add the mtu value at the end, matching the switch:
net0: name=eno1,bridge=vmbr4002,firewall=1,hwaddr=0A:14:98:05:8C:C5,ip=192.168.0.2/24,type=veth,mtu=1400
(1400 is the value on my vswitch)
Reboot the container to make the change permanent.
After that, everything worked like a charm for me. I hope this helps someone who also uses the Proxmox interface to create containers and therefore missed configuring this via the CLI (a suggested enhancement to Proxmox).
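A quick way to check for an MTU mismatch like this (a sketch; 10.10.XX.XX stands for the peer container's anonymized address from the question) is to send a non-fragmentable ping at the largest payload the configured MTU allows:

```shell
# An IPv4 header (20 bytes) plus an ICMP header (8 bytes) must fit inside
# the MTU, so the largest unfragmented ICMP payload is MTU - 28.
MTU=1400
PAYLOAD=$((MTU - 28))
echo "max unfragmented ICMP payload: $PAYLOAD bytes"
# On one container, run (printed here rather than executed, since it
# needs the real peer address):
echo "ping -M do -s $PAYLOAD 10.10.XX.XX"
```

If a ping of that size fails while a smaller one succeeds, some hop on the path is using a smaller MTU than the bridge is configured for.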

Log the video duration of a mp4 file in NGINX access log

I am trying to log the video duration of an mp4 file in the NGINX access log, which should happen when a client GETs an mp4 file from the server. I have configured a custom log format as follows:
log_format nginx_custom_log '$remote_addr ... $request_uri $status $sent_http_content_type $video_duration';
access_log /var/log/nginx/access.log nginx_custom_log;
I can add a custom HTTP header - video_duration - pointing to the path of a video file and manually assign the value, but this requires changing the nginx configuration and reloading nginx every time a video is added:
location /path/video.mp4 {
add_header video_duration 123456;
}
The following record is written to NGINX access log:
192.168.0.1 ... /path/video.mp4 206 video/mp4 123456
I also tried configuring the X-Content-Duration HTTP header (which is no longer supported by Firefox) in NGINX configuration but no value has been logged.
I found a module named ngx_http_mp4_module. It allows specifying arguments like ?start=238.88&end=555.55, which leads me to believe NGINX is capable of reading the metadata of an mp4 file.
Is there a way to log the duration of a mp4 video file in NGINX access log, similar to how a file's content-length (in bytes) and content-type (video/mp4) can be logged?
Thanks to Alex for suggesting adding a metadata file and reading it using Lua. Here's the implementation for reference:
nginx.conf
location ~* \.(mp4)$ {
set $directoryPrefix '/usr/local/openresty/nginx/html';
set $metaFile '$directoryPrefix$request_uri.duration';
set $mp4_header "NOT_SET";
rewrite_by_lua_block {
local f = assert(io.open(ngx.var.metaFile, "r"))
ngx.var.mp4_header = f:read("*line")
f:close()
}
add_header mp4_duration $mp4_header;
}
log_format nginxlog '$remote_addr ... "$request_uri" $status $sent_http_content_type $sent_http_mp4_duration';
access_log /usr/local/openresty/nginx/logs/access.log nginxlog;
Please note that $metaFile refers to the metadata file containing the duration:
$directoryPrefix: /usr/local/openresty/nginx/html
$request_uri: /video/VideoABC.mp4
$metaFile: /usr/local/openresty/nginx/html/video/VideoABC.mp4.duration
access.log
192.168.0.1 ... "/video/VideoABC.mp4" 206 video/mp4 1234567890
mp4 & metadata file path
root@ubuntu: cd /usr/local/openresty/nginx/html/video/
root@ubuntu: ls
VideoABC.mp4 VideoABC.mp4.duration
root@ubuntu: cat VideoABC.mp4.duration
1234567890
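The sidecar files can be generated offline. A minimal sketch (the paths and the duration value are hard-coded for illustration; in practice you would obtain the duration from a tool such as ffprobe, e.g. ffprobe -v error -show_entries format=duration -of csv=p=0 VideoABC.mp4):

```shell
# Create a sidecar .duration file next to each mp4; the Lua block in the
# config above reads this file and exposes its first line as a header.
mkdir -p /tmp/videos && cd /tmp/videos
touch VideoABC.mp4                         # stand-in for a real video file
echo "1234567890" > VideoABC.mp4.duration  # duration metadata sidecar
cat VideoABC.mp4.duration                  # -> 1234567890
```

A small batch job can loop over all mp4 files and write one sidecar per video whenever content is added, so nginx never needs a reload.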

How can I use nginx brotli_static with proxy_pass?

nginx is compiled with Brotli enabled. In my nginx.conf
http {
...
brotli_static on;
}
My .br files are located on the upstream server reached via proxy_pass.
location / {
...
proxy_pass http://app;
}
And .br files have been generated on that app server:
$ ls -lh public/js/dist/index.js*
-rw-r--r-- 1 mike wheel 1.2M Apr 4 09:07 public/js/dist/index.js
-rw-r--r-- 1 mike wheel 201K Apr 4 09:07 public/js/dist/index.js.br
Pulling down the uncompressed file works:
wget https://example.com/js/dist/index.js
Pulls down the 1,157,704-byte uncompressed file.
wget -S --header="accept-encoding: gzip" https://example.com/js/dist/index.js
Pulls down a 309,360-byte gzipped file.
But:
wget -S --header="accept-encoding: br" https://example.com/js/dist/index.js
Still gets the full 1,157,704-byte uncompressed file.
I had hoped brotli_static would proxy the .br file requests too - sending a GET request to the backend for the .br equivalent resource - but this doesn't seem to work.
Can brotli_static work through proxy_pass?
Based on a comment on gzip_static by Maxim Dounin (an nginx core engineer) - and I imagine brotli_static behaves similarly - brotli_static only handles files, not HTTP resources:
That is, gzip_static is only expected to work when nginx is about to return regular files.
So it looks like combining brotli_static with proxy_pass isn't possible.
Your nginx config needs a section telling it to serve the static content folder itself; you don't want your app server to do that.
I believe you'll need to place it before location / so that it takes precedence.
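Since brotli_static only serves files from disk, one workaround is to keep a local copy of the static assets on the nginx host and serve them there, proxying everything else. A sketch (the local root path is a hypothetical assumption - adjust it to wherever the app's public directory is copied or mounted):

```
location /js/dist/ {
    root /var/www/app/public;   # local copy of the app's static assets (assumed path)
    brotli_static on;           # serves index.js.br when the client sends "Accept-Encoding: br"
}
location / {
    proxy_pass http://app;      # everything dynamic still goes to the app server
}
```

With this layout, the exact-prefix static location matches before location /, so asset requests never reach the backend.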

HLS using Nginx RTMP Module not working

So I installed NGINX and the RTMP module on my Mac in /usr/local/nginx. The RTMP stream works fine, just not the HLS version. Here is my config file:
events {
worker_connections 1024;
}
rtmp {
server {
listen 1936;
chunk_size 4000;
application small {
live on;
# Video with reduced resolution comes here from ffmpeg
}
# video on demand
application vod {
play /var/flvs;
}
application vod2 {
play /var/mp4s;
}
# Many publishers, many subscribers
# no checks, no recording
application videochat {
live on;
# The following notifications receive all
# the session variables as well as
# particular call arguments in HTTP POST
# request
# Make HTTP request & use HTTP retcode
# to decide whether to allow publishing
# from this connection or not
on_publish http://localhost:8080/publish;
# Same with playing
on_play http://localhost:8080/play;
# Publish/play end (repeats on disconnect)
on_done http://localhost:8080/done;
# All above mentioned notifications receive
# standard connect() arguments as well as
# play/publish ones. If any arguments are sent
# with GET-style syntax to play & publish
# these are also included.
# Example URL:
# rtmp://localhost/myapp/mystream?a=b&c=d
# record 10 video keyframes (no audio) every 2 minutes
record keyframes;
record_path /tmp/vc;
record_max_frames 10;
record_interval 2m;
# Async notify about an flv recorded
on_record_done http://localhost:8080/record_done;
}
# HLS
# For HLS to work please create a directory in tmpfs (/tmp/hls here)
# for the fragments. The directory contents is served via HTTP (see
# http{} section in config)
#
# Incoming stream must be in H264/AAC. For iPhones use baseline H264
# profile (see ffmpeg example).
# This example creates RTMP stream from movie ready for HLS:
#
# ffmpeg -loglevel verbose -re -i movie.avi -vcodec libx264
# -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1
# -f flv rtmp://localhost:1935/hls/movie
#
# If you need to transcode live stream use 'exec' feature.
#
application hls {
live on;
hls on;
hls_path /tmp/hls;
}
# MPEG-DASH is similar to HLS
application dash {
live on;
dash on;
dash_path /tmp/dash;
}
}
}
# HTTP can be used for accessing RTMP stats
http {
server {
listen 8080;
# This URL provides RTMP statistics in XML
location /stat {
rtmp_stat all;
# Use this stylesheet to view XML as web page
# in browser
rtmp_stat_stylesheet stat.xsl;
}
location /stat.xsl {
# XML stylesheet to view RTMP stats.
# Copy stat.xsl wherever you want
# and put the full directory path here
root /path/to/stat.xsl/;
}
location /hls {
# Serve HLS fragments
types {
application/vnd.apple.mpegurl m3u8;
video/mp2t ts;
}
root /tmp;
add_header Cache-Control no-cache;
}
location /dash {
# Serve DASH fragments
root /tmp;
add_header Cache-Control no-cache;
}
}
}
I am using the hls application to stream to. When I view the stream at rtmp://ip:1936/hls/test I can see it fine. When I try to view http://ip:1936/hls/test.m3u8 I cannot see it. I created a folder for HLS at /usr/local/nginx/tmp/hls. I'm wondering if this is the right place, as nothing is being created in the folder? Could it be a permissions issue?
I am using OBS to stream, which uses x264 encoding for video, but I'm not sure if the audio is AAC.
A similar issue is being had here: https://groups.google.com/forum/#!topic/nginx-rtmp/dBKh4akQpcs
but no answer :(.
Any help is appreciated. Thanks.
Your HLS content is served over port 8080, and RTMP over port 1936, meaning:
rtmp://ip:1936/hls/test for RTMP
or http://ip:8080/hls/test.m3u8 for HLS
I got it fixed by switching to recording, with the following settings:
Next to that, I also changed what JPhix mentioned.
Looking at your config, you place files under the /tmp folder, not under /usr/local/nginx (full path). If the problem persists, a good strategy is to start with one application using all of the protocols (HLS, MPEG-DASH), like the config examples on GitHub.
P.S.: this module only supports H.264 and AAC.

Blank POST with nginx upload module and chunked upload

I am using the nginx upload module to accept large uploads for a PHP application. I have configured nginx following this blog post (modified for my needs).
Here is (the applicable portion of) my nginx configuration:
server {
# [ ... ]
location /upload {
set $upload_field_name "file";
upload_pass /index.php;
upload_store /home/example/websites/example.com/storage/uploads 1;
upload_resumable on;
upload_max_file_size 0;
upload_set_form_field $upload_field_name[filename] "$upload_file_name";
upload_set_form_field $upload_field_name[path] "$upload_tmp_path";
upload_set_form_field $upload_field_name[content_type] "$upload_content_type";
upload_aggregate_form_field $upload_field_name[size] "$upload_file_size";
upload_pass_args on;
upload_cleanup 400-599;
client_max_body_size 200M;
}
}
In the client side JavaScript, I am using 8MB chunks.
With this configuration, I am able to upload any file that is one chunk or smaller. However, when I try to upload any file larger than one chunk, the server's response for each intermediate chunk is blank, and the final chunk triggers the call to the PHP application without any incoming POST data.
What am I missing?
It turns out that @Brandan's blog post actually leaves out one important directive:
upload_state_store /tmp;
I added that and now everything works as expected.
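For reference, a sketch of the corrected location block, abbreviated to the directives that matter for resumable (chunked) uploads; the other directives from the question stay as they were:

```
location /upload {
    upload_pass /index.php;
    upload_store /home/example/websites/example.com/storage/uploads 1;
    upload_resumable on;
    upload_state_store /tmp;    # persists per-upload chunk state between requests
    client_max_body_size 200M;
}
```

Without upload_state_store, the module has nowhere to track which chunks of a resumable upload have already arrived, which matches the observed symptom of blank intermediate responses.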
