I am running a Rails app on nginx and sending an image to my Rails API from my iOS app.
The iOS app keeps receiving this response from nginx:
{ status code: 413, headers {
Connection = close;
"Content-Length" = 207;
"Content-Type" = "text/html";
Date = "Sun, 17 Jul 2016 23:16:07 GMT";
Server = "nginx/1.4.6 (Ubuntu)";
} }
So I did sudo vi /etc/nginx/nginx.conf and added a large client_max_body_size.
Now my nginx.conf reads:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
#fastcgi_read_timeout 300;
client_max_body_size 1000000M;
...
I ran sudo service nginx reload and got [ OK ]
But my iOS app still gets the same response.
If I use a tiny image, the iOS app gets a 200 response.
Question
Why does nginx give the 413 error when the client_max_body_size is so big?
Try putting
client_max_body_size 1000M;
in the server {} block of the site's nginx config, usually /etc/nginx/sites-available/mysite.
As mentioned, set the max size to whatever you need it to be.
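For example, a minimal sketch (the server_name and the size limit are placeholders; pick a value that covers your largest expected upload):

server {
    listen 80;
    server_name mysite.example.com;   # hypothetical
    # allow request bodies up to 1000M for this site
    client_max_body_size 1000M;
    ...
}

Then test and reload:

sudo nginx -t
sudo service nginx reload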
I am running nginx/1.19.6 on Ubuntu.
I am struggling to get the upstream module to work without returning a 404.
My *.conf files are located in /etc/nginx/conf.d/
FILE factory_upstream.conf:
upstream factoryserver {
server factory.service.corp.com:443;
}
FILE factory_service.conf:
server
{
listen 80;
root /data/www;
proxy_cache factorycache;
proxy_cache_min_uses 1;
proxy_cache_methods GET HEAD POST;
proxy_cache_valid 200 72h;
#proxy_cache_valid any 5m;
location /factory/ {
access_log /var/log/nginx/access-factory.log main;
proxy_set_header x-apikey abcdefgh12345678;
### Works when expressed as a literal:
# proxy_pass https://factory.service.corp.com/;
### 404 when using the upstream name.
proxy_pass https://factoryserver/;
}
}
I have error logging set to debug, but after reloading the configuration and attempting a call, there are no new records in the error log.
nginx -t # Scans OK
nginx -s reload # no errors
cat /var/log/nginx/error.log
...
2021/03/16 11:29:52 [notice] 26157#26157: signal process started
2021/03/16 11:38:20 [notice] 27593#27593: signal process started
The access-factory.log does record the request:
127.1.2.3 --;[16/Mar/2021:11:38:50 -0400];";GET /factory/api/manifest/get-full-manifest/PID/904227100052 HTTP/1.1" ";/factory/api/manifest/get-full-manifest/PID/904227100052" ;404; - ;/factory/api/manifest/get-full-manifest/PID/904227100052";-" ";ID="c4cfc014e3faa803c8fe136d6eae347d ";-" ";10.8.9.123:443" ";404" ";-"
To help with debugging, I cached the 404 responses by enabling "proxy_cache_valid any 5m;" (shown commented out in the example above).
When I use the upstream name, the cache file contains the following:
<##$ non-printable characters $%^>
KEY: https://factoryserver/api/manifest/get-full-manifest/PID/904227100052
HTTP/1.1 404 Not Found
Server: nginx/1.17.8
Date: Tue, 16 Mar 2021 15:38:50 GMT
...
The key contains the name 'factoryserver'. I don't know if that matters or not. Does it?
The server version is different from what I see when I run nginx -v, which reports: nginx version: nginx/1.19.6
Does the difference between the version in the cache file and on the command line indicate anything?
When I switch back to the literal server name in the proxy_pass, I get a 200 response with the requested data. The Key in the cache file then contains the literal upstream server name.
<##$ non-printable characters $%^>
KEY: https://factory.service.corp.com/api/manifest/get-full-manifest/PID/904227100052
HTTP/1.1 200
Server: nginx/1.17.8
Date: Tue, 16 Mar 2021 15:59:30 GMT
...
I will have multiple upstream servers, each providing different services. The configuration will be deployed to multiple factories, each with its own upstream servers.
I would like the deployment team to only have to update the *_upstream.conf files, keeping the *_service.conf files static from deployment site to deployment site (see the sketch after this list):
factory_upstream.conf
product_upstream.conf
shipping_upstream.conf
abc123_upstream.conf
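For example, a site-specific upstream file might look like this sketch (the hostname is hypothetical); the matching factory_service.conf stays identical across sites:

# factory_upstream.conf, the only file edited per deployment site
upstream factoryserver {
    server factory.site-a.corp.com:443;   # per-site hostname
}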
Why do I get a 404 when using a named upstream server?
Based on the nginx version in the cached response not matching what you see on the command line, it seems that the 404 may be coming from the upstream server. I.e., your proxying is working, but the upstream server is returning a 404. To troubleshoot further, I would check the nginx logs on the upstream server and verify that the incoming request is what you expect.
Note that when using proxy_pass, it makes a big difference whether you have a / at the end or not. With a trailing slash, nginx treats the given path as the URI to send the upstream request to, and the URI matched by the location block (/factory/) is not included; without one, nginx passes the full URI as-is.
proxy_pass https://factoryserver/ results in https://factory.service.corp.com:443/
proxy_pass https://factoryserver results in https://factory.service.corp.com:443/factory/
Docs: https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
So maybe when switching between using an upstream and specifying the literal server name, you're inadvertently being inconsistent with the trailing slash. It's a really easy detail to miss, especially when you don't know it's important.
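To illustrate, a minimal sketch of the two variants (the URIs in the comments are hypothetical):

location /factory/ {
    # trailing slash: the matched /factory/ prefix is replaced by /
    # /factory/api/x  ->  https://factoryserver/api/x
    proxy_pass https://factoryserver/;

    # no trailing slash: the full original URI is passed through
    # /factory/api/x  ->  https://factoryserver/factory/api/x
    # proxy_pass https://factoryserver;
}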
Please provide more information about the server configuration where you proxy-pass the requests. The only difference I see right now is that you specify the port (443) in your upstream server.
I am trying to log the video duration of an mp4 file in the NGINX access log, which should happen when a client GETs an mp4 file from the server. I have configured a custom log format as follows:
log_format nginx_custom_log '$remote_addr ... $request_uri $status $sent_http_content_type $video_duration';
access_log /var/log/nginx/access.log nginx_custom_log;
I can add a custom HTTP header, video_duration, in a location matching the path of a video file and manually assign the value, but this requires changing the nginx configuration and reloading nginx every time a video is added:
location /path/video.mp4 {
add_header video_duration 123456;
}
The following record is written to NGINX access log:
192.168.0.1 ... /path/video.mp4 206 video/mp4 123456
I also tried configuring the X-Content-Duration HTTP header (which is no longer supported by Firefox) in NGINX configuration but no value has been logged.
I found a module named ngx_http_mp4_module. It allows specifying arguments like ?start=238.88&end=555.55, which leads me to believe NGINX is capable of reading the metadata of an mp4 file.
Is there a way to log the duration of a mp4 video file in NGINX access log, similar to how a file's content-length (in bytes) and content-type (video/mp4) can be logged?
Thanks to Alex for suggesting adding a metadata file and reading it using Lua. Here's the implementation for reference:
nginx.conf
location ~* \.(mp4)$ {
set $directoryPrefix '/usr/local/openresty/nginx/html';
set $metaFile '$directoryPrefix$request_uri.duration';
set $mp4_header "NOT_SET";
# read the duration from the sidecar .duration file
# and store it in $mp4_header
rewrite_by_lua_block {
local f = assert(io.open(ngx.var.metaFile, "r"))
ngx.var.mp4_header = f:read("*line")
f:close()
}
# expose the duration as a response header so it can be
# logged via $sent_http_mp4_duration
add_header mp4_duration $mp4_header;
}
log_format nginxlog '$remote_addr ... "$request_uri" $status $sent_http_content_type $sent_http_mp4_duration';
access_log /usr/local/openresty/nginx/logs/access.log nginxlog;
Please note that $metaFile refers to the metadata file containing the duration:
$directoryPrefix: /usr/local/openresty/nginx/html
$request_uri: /video/VideoABC.mp4
$metaFile: /usr/local/openresty/nginx/html/video/VideoABC.mp4.duration
access.log
192.168.0.1 ... "/video/VideoABC.mp4" 206 video/mp4 1234567890
mp4 & metadata file path
root@ubuntu: cd /usr/local/openresty/nginx/html/video/
root@ubuntu: ls
VideoABC.mp4 VideoABC.mp4.duration
root@ubuntu: cat VideoABC.mp4.duration
1234567890
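For reference, one way to generate such .duration sidecar files, assuming ffprobe (from FFmpeg) is available on the server (it writes the duration in seconds):

root@ubuntu: ffprobe -v error -show_entries format=duration -of csv=p=0 VideoABC.mp4 > VideoABC.mp4.duration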
I use Nginx as a reverse proxy.
How do I limit the upload speed in Nginx?
I would like to share how to limit the upload speed of a reverse proxy in Nginx.
Limiting the download speed is a piece of cake, but not the upload speed.
Here is the configuration to limit the upload speed.
Find your configuration file:
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
# 1)
# Add a stream
# This stream is used to limit upload speed
stream {
upstream site {
server your.upload-api.domain1:8080;
server your.upload-api.domain2:8080;
}
server {
listen 12345;
# 19 MiB/min = ~332k/s
proxy_upload_rate 332k;
proxy_pass site;
# you can also proxy_pass directly without an upstream block:
# proxy_pass your.upload-api.domain1:8080;
}
}
http {
server {
# 2)
# Proxy to the stream that limits upload speed
location = /upload {
# with request buffering off, the body is streamed to the
# upstream immediately instead of being spooled to disk first
proxy_request_buffering off;
# pass to the stream listener on port 12345,
# which forwards to your.upload-api.domain1:8080/upload?$args
proxy_pass http://127.0.0.1:12345/upload?$args;
}
# You see? Limiting the download speed is easy, no stream needed
location /download {
keepalive_timeout 28800s;
proxy_read_timeout 28800s;
# proxy_limit_rate only takes effect when proxy_buffering is on
proxy_buffering on;
# 75 MiB/min = ~1300 kB/s
proxy_limit_rate 1300k;
proxy_pass http://your.api.domain1:8080;
}
}
}
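To verify the limit, you can upload a large file and watch the upload rate that curl reports (the file name and host are placeholders):

$ curl -o /dev/null -F 'file=@bigfile.bin' http://your.server/upload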
If your Nginx doesn't support the stream module, you may need to add it.
static:
$ ./configure --with-stream
$ make && sudo make install
dynamic:
$ ./configure --with-stream=dynamic
https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/
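To check whether your existing binary already includes the stream module, and to load it if you built it as a dynamic module:

$ nginx -V 2>&1 | grep -o with-stream

# for a dynamic build, load it at the top of nginx.conf:
load_module modules/ngx_stream_module.so;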
Note:
If you have a client such as HAProxy in front of Nginx as the server, you may face 504 timeouts in HAProxy and 499 (client closed connection) in Nginx while uploading large files at a low upload speed limit.
You should increase or add timeout server 605s or more in HAProxy, because we want HAProxy not to close the connection while Nginx is still busy uploading to your server.
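In HAProxy configuration terms, that is roughly (a sketch; tune the value to your slowest expected upload):

# haproxy.cfg
defaults
    timeout server 605s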
https://stackoverflow.com/a/44028228/10258377
Some references:
https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_upload_rate
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_limit_rate
You will find other ways to limit the upload speed by adding third-party modules, but they are complex and don't work well:
https://www.nginx.com/resources/wiki/modules/
https://www.nginx.com/resources/wiki/modules/upload/
https://github.com/cfsego/limit_upload_rate
Thank me later ;)
I can't get Nginx working with the memcached module. The requirement is to query a remote service, cache the data in memcached, and never fetch the remote endpoint again until the backend invalidates the cache. I have 2 containers with memcached v1.4.35 and one with Nginx v1.11.10.
The configuration is the following:
upstream http_memcached {
server 172.17.0.6:11211;
server 172.17.0.7:11211;
}
upstream remote {
server api.example.com:443;
keepalive 16;
}
server {
listen 80;
location / {
set $memcached_key "$uri?$args";
memcached_pass http_memcached;
error_page 404 502 504 = @remote;
}
location @remote {
internal;
proxy_pass https://remote;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
If I deliberately set the memcached upstream incorrectly, I get HTTP 499 instead, along with warnings:
*3 upstream server temporarily disabled while connecting to upstream
It seems that with the described configuration Nginx can reach memcached successfully but can't write to or read from it. I can write to and read from memcached with telnet successfully.
Can you help me please?
My guesses on what's going on with your configuration
1. 499 codes
HTTP 499 is nginx's custom code meaning the client terminated the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120).
We can easily reproduce it, just
nc -k -l 172.17.0.6 11211
and curl your resource: curl will hang for a while; press Ctrl+C, and you'll see this code in your access logs.
2. upstream server temporarily disabled while connecting to upstream
It means nginx didn't manage to reach your memcached and removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'll constantly see it in your error logs (I see it every time with error_log ... info).
Since you do see these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to be true.
Consider explicitly setting http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind
and using the -b option with telnet to make sure you're testing the memcached servers' availability from the same source address.
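For example (the bind address is hypothetical; use one that is actually routable to the memcached containers):

# in the location / block:
memcached_bind 172.17.0.1;

# then test availability from the same source address:
$ telnet -b 172.17.0.1 172.17.0.6 11211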
3. nginx can reach memcached successfully but can't write or read from it
Nginx can only read from memcached via its built-in module
(http://nginx.org/en/docs/http/ngx_http_memcached_module.html):
The ngx_http_memcached_module module is used to obtain responses from
a memcached server. The key is set in the $memcached_key variable. A
response should be put in memcached in advance by means external to
nginx.
4. overall architecture
It's not fully clear from your question how the overall schema is supposed to work.
nginx's upstream module uses weighted round-robin by default.
That means each request queries only one of your memcached servers.
You can change this by setting memcached_next_upstream not_found, so a missing key is considered an error and all of your servers are polled. That's probably OK for a farm of 2 servers, but it's unlikely to be what you want for 20 servers.
The same is ordinarily the case for memcached client libraries: they pick a server out of the pool according to some hashing scheme, so your key ends up on only 1 server out of the pool.
5. what to do
I managed to set up a similar configuration in 10 minutes on my local box, and it works as expected. To simplify debugging, I'd get rid of the Docker containers to avoid networking overcomplication, run 2 memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening in your case.
6. working solution
nginx config:
(https and HTTP/1.1 are not used here, but it doesn't matter)
upstream http_memcached {
server 127.0.0.1:11211;
server 127.0.0.1:11212;
}
upstream remote {
server 127.0.0.1:8080;
}
server {
listen 80;
server_name server.lan;
access_log /var/log/nginx/server.access.log;
error_log /var/log/nginx/server.error.log info;
location / {
set $memcached_key "$uri?$args";
memcached_next_upstream not_found;
memcached_pass http_memcached;
error_page 404 = @remote;
}
location @remote {
internal;
access_log /var/log/nginx/server.fallback.access.log;
proxy_pass http://remote;
proxy_set_header Connection "";
}
}
server.py:
This is my dummy server (Python):
from random import randint
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello: {}\n'.format(randint(1, 100000))
This is how to run it (you just need to install Flask first):
FLASK_APP=server.py flask run -p 8080
filling in my first memcached server:
$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.
Checking (note that we get a result every time, although we stored the data only in the first server):
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
This one is not in the cache, so we'll get a response from server.py:
$ curl http://server.lan/?q=1 && echo
Hello: 32337
The whole picture (from a screenshot): the 2 windows on the right are
memcached -p 11211 -U 0 -vv
and
memcached -p 11212 -U 0 -vv
I work on Windows 10, using the built-in Bash on Ubuntu.
I installed OpenResty and started its nginx with the command "nginx -p openresty-test".
My nginx.conf content is:
worker_processes 2;
error_log logs/error.log;
events {
worker_connections 1024;
}
http {
server {
listen 6699;
location / {
default_type text/html;
content_by_lua_block {
ngx.say("HelloWorld")
}
}
}
}
Problem:
I try curl -i "http://localhost:6699"; it blocks without printing anything and never returns.
I use a web browser to visit localhost:6699; it blocks too.
I run "ps -ef | grep nginx" show:
27 1 0 2433 ? 00:00:00 nginx: master process ng
29 2 0 2433 ? 00:00:00 grep --color=auto nginx
As you can see, there is only a master process and no worker process. I suspect that is the cause of my problem.
What should I do?