Nginx limit upload speed

I use Nginx as a reverse proxy.
How do I limit the upload speed in Nginx?

I would like to share how to limit the upload speed of a reverse proxy in Nginx.
Limiting download speed is a piece of cake, but upload is not.
Here is the configuration to limit upload speed.
Edit your config file, usually:
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
# 1)
# Add a stream
# This stream is used to limit upload speed
stream {
upstream site {
server your.upload-api.domain1:8080;
server your.upload-api.domain1:8080;
}
server {
listen 12345;
# 19 MiB/min = ~332k/s
proxy_upload_rate 332k;
proxy_pass site;
# you can also proxy_pass directly, without an upstream block:
# proxy_pass your.upload-api.domain1:8080;
}
}
http {
server {
# 2)
# Proxy to the stream that limits upload speed
location = /upload {
# with buffering off, the request body is passed to the upstream immediately
proxy_request_buffering off;
# It will pass to the stream
# Then the stream passes to your.api.domain1:8080/upload?$args
proxy_pass http://127.0.0.1:12345/upload?$args;
}
# See? Limiting the download speed is easy; no stream block needed
location /download {
keepalive_timeout 28800s;
proxy_read_timeout 28800s;
proxy_buffering off;
# 75 MiB/min = ~1300 kilobytes/s
proxy_limit_rate 1300k;
proxy_pass http://your.api.domain1:8080;
}
}
}
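To sanity-check the limit, you can push a large file through the proxy and watch the transfer rate curl reports; it should hover around the configured 332 kilobytes per second. This is just a quick test sketch: the test file path, the host name, and the assumption that /upload accepts a raw POST body are all illustrative.
$ dd if=/dev/zero of=/tmp/100M.bin bs=1M count=100   # create a 100 MiB test file
$ curl -o /dev/null --data-binary @/tmp/100M.bin http://your.domain/upload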
If your Nginx doesn't support the stream module, you may need to build it in.
static:
$ ./configure --with-stream
$ make && sudo make install
dynamic:
$ ./configure --with-stream=dynamic
https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/
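If you are not sure whether your binary already has it, a quick check looks roughly like this (and, for a dynamic build, you also need the load_module line; the module path shown is the conventional one and may differ on your system):
$ nginx -V 2>&1 | grep -o with-stream
# at the top of nginx.conf, outside any block, if the module was built as dynamic:
load_module modules/ngx_stream_module.so;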
Note:
If you have a client such as HAProxy in front, with Nginx as the server, you may see 504 timeouts in HAProxy and 499 (client closed connection) in Nginx while uploading large files at a low, limited upload speed.
You should increase (or add) timeout server to 605s or more in HAProxy, because we want HAProxy not to close the connection while Nginx is still busy passing the upload to your server.
https://stackoverflow.com/a/44028228/10258377
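A minimal sketch of the HAProxy side; the backend name and server address are illustrative, only the timeout matters:
backend upload_backend
    # keep the connection open while Nginx throttles the upload
    timeout server 605s
    server nginx1 127.0.0.1:80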
Some references:
https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_upload_rate
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_limit_rate
You will find other ways to limit the upload speed by adding third-party modules, but they are more complex and don't work as well:
https://www.nginx.com/resources/wiki/modules/
https://www.nginx.com/resources/wiki/modules/upload/
https://github.com/cfsego/limit_upload_rate
Thank me later ;)

Related

roku sending request, but nginx not responding or logging

I am trying to display a series of short mp4's on my Roku. The idea is to serve them from nginx running on a local host. The Roku hangs on "retrieving" the video. I have used Wireshark to witness the requests coming from the Roku, and they continuously repeat. However, nginx does not respond, nor does it log the request in access.log or error.log.
I feel that the Roku scripts are sufficiently coded, as I can confirm that the playlist is received by the video node, the screen is focused, and the request for the video is being made through port 80. I can request the URL through a browser and the mp4 plays in the browser if made with http: or in a player if made with rtmp:.
This is the simple nginx configuration in conf.d:
server {
listen 80 default_server;
location / { # the '/' matches all requests
root /myNginx/sites; # the request URI would be appended to the root
index 01_default_test.html; # the index directive provides a default file or list of files to look for
}
location /videos/ { # for testing use 'http://10.0.0.13/videos/sample.mp4'
mp4; # activates the http_mp4 module for streaming the video
root /VODs; # allows adding 'videos' to the uri and the file name
}
}
I appended this to the nginx.conf file:
rtmp {
server {
listen 1935;
chunk_size 4000;
# Video on demand
application VOD { # rtmp://10.0.0.13/VOD/sample03.mp4
play /VOD/videos/;
}
}
}
Not sure where to go from here. Does anyone know why nginx seems to be ignoring the requests? I am using Ubuntu, and the firewall is currently inactive.

Nginx memcached with fallback to remote service

I can't get Nginx working with the memcached module. The requirement is to query a remote service, cache the data in memcached, and never fetch the remote endpoint again until the backend invalidates the cache. I have 2 containers with memcached v1.4.35 and one with Nginx v1.11.10.
The configuration is the following:
upstream http_memcached {
server 172.17.0.6:11211;
server 172.17.0.7:11211;
}
upstream remote {
server api.example.com:443;
keepalive 16;
}
server {
listen 80;
location / {
set $memcached_key "$uri?$args";
memcached_pass http_memcached;
error_page 404 502 504 = @remote;
}
location @remote {
internal;
proxy_pass https://remote;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
If I set the memcached upstream incorrectly, I get HTTP 499 instead, plus warnings:
*3 upstream server temporarily disabled while connecting to upstream
It seems that with the described configuration Nginx can reach memcached successfully but can't write to or read from it. I can write to and read from memcached with telnet successfully.
Can you help me please?
My guesses on what's going on with your configuration
1. 499 codes
HTTP 499 is nginx's custom code meaning the client terminated the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120)
We can easily reproduce it, just
nc -k -l 172.17.0.6 11211
and curl your resource; curl will hang for a while, then press Ctrl+C and you'll have this message in your access logs
2. upstream server temporarily disabled while connecting to upstream
It means nginx didn't manage to reach your memcached server and removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'd constantly see it in your error logs (I see it every time with error_log ... info).
Since you see these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to be true.
Consider explicitly setting memcached_bind (http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind)
and using the -b option with telnet, so your telnet test leaves from the same source address and you're correctly testing the memcached servers for availability.
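For example (the addresses are illustrative, and -b support depends on your telnet client):
# in the location block
memcached_bind 172.17.0.1;
# and from the shell, bind the same source address for the test
$ telnet -b 172.17.0.1 172.17.0.6 11211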
3. nginx can reach memcached successfully but can't write or read from it
Nginx can only read from memcached via its built-in module
(http://nginx.org/en/docs/http/ngx_http_memcached_module.html):
The ngx_http_memcached_module module is used to obtain responses from
a memcached server. The key is set in the $memcached_key variable. A
response should be put in memcached in advance by means external to
nginx.
4. overall architecture
It's not fully clear from your question how the overall schema is supposed to work.
nginx's upstream uses weighted round-robin by default.
That means each request will query just one of your memcached servers, in turn.
You can change that by setting memcached_next_upstream not_found, so a missing key will be considered an error and all of your servers will be polled. That's probably OK for a farm of 2 servers, but it's unlikely to be what you want for 20 servers.
The same is ordinarily the case for memcached client libraries: they pick a server out of a pool according to some hashing scheme, so your key ends up on only 1 server out of the pool.
5. what to do
I've managed to set up a similar configuration in 10 minutes on my local box and it works as expected. To make debugging easier, I'd get rid of the docker containers to avoid networking overcomplication, run 2 memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening in your case.
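Roughly, the debugging loop looks like this (the log paths match the config below; everything else is illustrative):
# terminal 1 and 2: memcached in the foreground, verbose, UDP disabled
$ memcached -p 11211 -U 0 -vv
$ memcached -p 11212 -U 0 -vv
# terminal 3: watch the nginx logs
$ tail -f /var/log/nginx/server.error.log /var/log/nginx/server.access.log
# terminal 4: issue requests
$ curl -v http://server.lan/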
6. working solution
nginx config:
(https and http/1.1 are not used here, but it doesn't matter)
upstream http_memcached {
server 127.0.0.1:11211;
server 127.0.0.1:11212;
}
upstream remote {
server 127.0.0.1:8080;
}
server {
listen 80;
server_name server.lan;
access_log /var/log/nginx/server.access.log;
error_log /var/log/nginx/server.error.log info;
location / {
set $memcached_key "$uri?$args";
memcached_next_upstream not_found;
memcached_pass http_memcached;
error_page 404 = @remote;
}
location @remote {
internal;
access_log /var/log/nginx/server.fallback.access.log;
proxy_pass http://remote;
proxy_set_header Connection "";
}
}
server.py:
this is my dummy server (python):
from random import randint
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello: {}\n'.format(randint(1, 100000))
This is how to run it (just need to install flask first)
FLASK_APP=server.py flask run -p 8080
filling in my first memcached server:
$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.
checking:
note that we get a result every time although we stored data
only in the first server
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
this one is not in the cache so we'll get a response from server.py
$ curl http://server.lan/?q=1 && echo
Hello: 32337
whole picture (from a screenshot of my terminal): the 2 windows on the right are running
memcached -p 11211 -U 0 -vv
and
memcached -p 11212 -U 0 -vv

file size too big for nginx, 413 error

I am running a Rails app on nginx and sending an image to my Rails API from my iOS app.
The iOS app keeps receiving this response from nginx:
{ status code: 413, headers {
Connection = close;
"Content-Length" = 207;
"Content-Type" = "text/html";
Date = "Sun, 17 Jul 2016 23:16:07 GMT";
Server = "nginx/1.4.6 (Ubuntu)";
}
So I did sudo vi /etc/nginx/nginx.conf and added a large client_max_body_size.
Now my nginx.conf reads:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
#fastcgi_read_timeout 300;
client_max_body_size 1000000M;
...
I ran sudo service nginx reload and got [ OK ]
But my iOS app still gets the same response.
If I use a tiny image the iOS app gets a 200 response.
Question
Why does nginx give the 413 error when the client_max_body_size is so big?
Try putting
client_max_body_size 1000M;
in the server {} block where the site's nginx config is stored, usually /etc/nginx/sites-available/mysite.
As mentioned, set the max size to whatever you need it to be.
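A minimal sketch of where the directive goes; the server_name, upstream address, and size are illustrative, so adapt them to your own site config:
server {
    listen 80;
    server_name example.com;
    # allow request bodies up to 1000 MB; adjust to your largest upload
    client_max_body_size 1000M;
    location / {
        proxy_pass http://127.0.0.1:3000;  # your Rails upstream
    }
}
Then test and reload:
$ sudo nginx -t && sudo service nginx reload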

HLS using Nginx RTMP Module not working

So I installed NGINX and the RTMP module on my Mac in the /usr/local/nginx location. The RTMP stream works fine, just not the HLS version. Here is my config file:
events {
worker_connections 1024;
}
rtmp {
server {
listen 1936;
chunk_size 4000;
application small {
live on;
# Video with reduced resolution comes here from ffmpeg
}
# video on demand
application vod {
play /var/flvs;
}
application vod2 {
play /var/mp4s;
}
# Many publishers, many subscribers
# no checks, no recording
application videochat {
live on;
# The following notifications receive all
# the session variables as well as
# particular call arguments in HTTP POST
# request
# Make HTTP request & use HTTP retcode
# to decide whether to allow publishing
# from this connection or not
on_publish http://localhost:8080/publish;
# Same with playing
on_play http://localhost:8080/play;
# Publish/play end (repeats on disconnect)
on_done http://localhost:8080/done;
# All above mentioned notifications receive
# standard connect() arguments as well as
# play/publish ones. If any arguments are sent
# with GET-style syntax to play & publish
# these are also included.
# Example URL:
# rtmp://localhost/myapp/mystream?a=b&c=d
# record 10 video keyframes (no audio) every 2 minutes
record keyframes;
record_path /tmp/vc;
record_max_frames 10;
record_interval 2m;
# Async notify about an flv recorded
on_record_done http://localhost:8080/record_done;
}
# HLS
# For HLS to work please create a directory in tmpfs (/tmp/hls here)
# for the fragments. The directory contents is served via HTTP (see
# http{} section in config)
#
# Incoming stream must be in H264/AAC. For iPhones use baseline H264
# profile (see ffmpeg example).
# This example creates RTMP stream from movie ready for HLS:
#
# ffmpeg -loglevel verbose -re -i movie.avi -vcodec libx264
# -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1
# -f flv rtmp://localhost:1935/hls/movie
#
# If you need to transcode live stream use 'exec' feature.
#
application hls {
live on;
hls on;
hls_path /tmp/hls;
}
# MPEG-DASH is similar to HLS
application dash {
live on;
dash on;
dash_path /tmp/dash;
}
}
}
# HTTP can be used for accessing RTMP stats
http {
server {
listen 8080;
# This URL provides RTMP statistics in XML
location /stat {
rtmp_stat all;
# Use this stylesheet to view XML as web page
# in browser
rtmp_stat_stylesheet stat.xsl;
}
location /stat.xsl {
# XML stylesheet to view RTMP stats.
# Copy stat.xsl wherever you want
# and put the full directory path here
root /path/to/stat.xsl/;
}
location /hls {
# Serve HLS fragments
types {
application/vnd.apple.mpegurl m3u8;
video/mp2t ts;
}
root /tmp;
add_header Cache-Control no-cache;
}
location /dash {
# Serve DASH fragments
root /tmp;
add_header Cache-Control no-cache;
}
}
}
I am using the hls application to stream to. When I view the stream located at rtmp://ip:1936/hls/test I can see it fine. When I try to view http://ip:1936/hls/test.m3u8 I cannot see it. I created a folder for HLS at /usr/local/nginx/tmp/hls. I'm wondering if this is the right place, as nothing is being created in the folder. Could it be permission issues?
I am using OBS to stream, which uses x264 video encoding, but I'm not sure whether the audio is AAC.
A similar issue is described here: https://groups.google.com/forum/#!topic/nginx-rtmp/dBKh4akQpcs
but it has no answer :(.
Any help is appreciated. Thanks.
Your HLS content is served over port 8080 and RTMP over port 1936,
meaning you should use rtmp://ip:1936/hls/test for RTMP
or http://ip:8080/hls/test.m3u8 for HLS
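A quick way to check both halves (the IP and stream name are illustrative): confirm the playlist is reachable through the http{} server and that fragments are actually being written to the hls_path.
$ curl -I http://ip:8080/hls/test.m3u8
$ ls -l /tmp/hls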
I got it fixed by changing to recording, with the settings shown in my screenshot (not reproduced here). Next to that, I also changed what JPhix mentioned.
Looking at your config: hls_path is /tmp/hls, so the fragments go under the /tmp folder, not under /usr/local/nginx (it is a full path). If the problem persists, a good strategy is to start with one application that enables all the codecs (HLS, MPEG-DASH), like the config examples on GitHub.
P.S.: this module only supports H.264 and AAC.
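Since the module only handles H.264 video and AAC audio, one way to rule out the encoder is to push a known-good stream with ffmpeg (the input file, IP, and stream name are illustrative):
$ ffmpeg -re -i movie.mp4 -c:v libx264 -profile:v baseline -c:a aac -ar 44100 -f flv rtmp://ip:1936/hls/test
If /tmp/hls then fills with .ts fragments and a test.m3u8, the original problem is likely on the OBS/audio side.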

Nginx Load Balancer issue

We are using Nginx as a load balancer for multiple Riak nodes. The setup worked fine for some time (a few hours) before Nginx started giving 502 Bad Gateway errors. On checking, the individual nodes seemed to be working. We found out that the problem was with the nginx buffer size, so we increased the buffer size to 16k; it worked fine for one more day before we started getting 502 errors for everything.
My Nginx configuration is as follows
upstream riak {
server 127.0.0.1:8091 weight=3;
server 127.0.0.1:8092;
server 127.0.0.1:8093;
server 127.0.0.1:8094;
}
server {
listen 8098;
server_name 127.0.0.1:8098;
location / {
proxy_pass http://riak;
proxy_buffer_size 16k;
proxy_buffers 8 16k;
}
}
Any help is appreciated, thank you.
Check if you are running out of fd's on the nginx box. Check with netstat whether you have too many connections in the TIME_WAIT state. If so, you will need to reduce your tcp_fin_timeout value from the default 60 seconds to something smaller.
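A rough sketch of those checks (the timeout value is illustrative, not a recommendation):
# file-descriptor limit of the nginx master process
$ cat /proc/$(pgrep -o nginx)/limits | grep 'open files'
# how many connections are sitting in TIME_WAIT
$ netstat -ant | grep -c TIME_WAIT
# lower the FIN timeout from the default 60s, e.g. to 30s
$ sudo sysctl -w net.ipv4.tcp_fin_timeout=30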
