Nginx client_max_body_size not working - nginx

This should be a quick fix.
For some reason I still can't get a request larger than 1MB to succeed without nginx returning 413 Request Entity Too Large.
For example, with the configuration file below and a request of roughly 2MB, I get the following error message in my nginx error.log:
*1 client intended to send too large body: 2666685 bytes,
I have tried the configuration below and restarted my nginx server, but I still get the 413 error.
Is there anything I am doing wrong?
server {
    listen 8080;
    server_name *****/api;  # (*omitted*)
    client_body_in_file_only clean;
    client_body_buffer_size 32K;
    charset utf-8;
    client_max_body_size 500M;
    sendfile on;
    send_timeout 300s;
    listen 443 ssl;

    location / {
        try_files $uri @(*omitted*);
    }

    location @parachute_server {
        include uwsgi_params;
        uwsgi_pass unix:/var/www/(*omitted*)/(*omitted*).sock;
    }
}
Thank you in advance for the help!

I'm surprised you haven't received a response but my hunch is you already have it set somewhere else in another config file.
Take a look at nginx - client_max_body_size has no effect

Weirdly, it worked after adding the same directive, client_max_body_size 100M, in all of the blocks: http, server, and location.
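For reference, here is a minimal sketch of what that looks like (the listen port and the 100M value are carried over from the question and the comment above as assumptions). In practice, setting the directive once at the http level is usually enough, since it is inherited by server and location blocks unless one of them overrides it:

http {
    client_max_body_size 100M;   # inherited by every server/location below unless overridden

    server {
        listen 8080;
        client_max_body_size 100M;   # explicit repeat, as in the comment above

        location / {
            client_max_body_size 100M;   # ditto; the most specific matching value applies
        }
    }
}

After editing, run nginx -t to validate the config and then reload or restart nginx so the change actually takes effect.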

Related

Struggling to set up caching for a PyPi server via NGINX/uWSGI

I'm trying to configure caching of a PyPi server via NGINX/uWSGI and failing miserably.
My /sites-available/pypi config is as follows:
uwsgi_cache_path /mnt/pypi/nginx-cache
                 levels=1:2
                 keys_zone=pypiserver_cache:10m
                 max_size=10g
                 inactive=60m
                 use_temp_path=off;

server {
    listen 80 default_server;
    listen 443 default_server ssl;

    ssl_certificate /etc/ssl/certs/domain.pem;
    ssl_certificate_key /etc/ssl/private/domain.key;

    client_max_body_size 5M;

    location / {
        uwsgi_cache pypiserver_cache;
        uwsgi_buffering on;
        uwsgi_cache_key $request_uri;
        add_header X-uWSGI-Cache $upstream_cache_status;
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/internal_pypi.socket;
    }
}
NGINX runs and reports no errors, but requesting the same package multiple times does not cache it (proven by curling the URL and observing the header X-uWSGI-Cache: MISS), and nothing is being stored in /mnt/pypi/nginx-cache.
Let me know if I can provide any more helpful info, thanks!
References:
https://github.com/pypiserver/pypiserver#serving-thousands-of-packages
http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html
This is resolved. In my case, the pypiserver Python file needed some changes.
import pypiserver

application = pypiserver.app(
    root="/mnt/pypi/directory",
    redirect_to_fallback=False,
    password_file="path/to/file",
    cache_control=3600,  # this needed to be added
)
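That fix works because nginx only caches upstream responses that carry caching headers (Cache-Control or Expires), and pypiserver's cache_control option makes it send them. As an nginx-side alternative (a sketch of my own, not taken from the posts above), the cache lifetime can instead be forced in the location block with uwsgi_cache_valid:

location / {
    uwsgi_cache pypiserver_cache;
    uwsgi_cache_key $request_uri;
    # assumption: force-cache successful responses for an hour even when the
    # upstream sends no Cache-Control/Expires header
    uwsgi_cache_valid 200 302 60m;
    add_header X-uWSGI-Cache $upstream_cache_status;
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/internal_pypi.socket;
}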

How to replace Nginx default error 400 "The plain HTTP request was sent to HTTPS port" page with Play! Framework backend.

I have a website using Play! framework with multiple domains proxying to the backend, example.com and example.ca.
I have all http requests on port 80 being rewritten to https on port 443. This is all working as expected.
But when I type into the address bar http://example.com:443, I'm served nginx's default error page, which says
400 Bad Request
The plain HTTP request was sent to HTTPS port
nginx
I'd like to serve my own error page for this, but I just can't seem to get it working. Here's a snippet of my configuration.
upstream my-backend {
    server 127.0.0.1:9000;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
    keepalive_timeout 70;
    server_name example.com;
    add_header Strict-Transport-Security max-age=15768000; # six months

    location / {
        proxy_pass http://my-backend;
    }

    error_page 400 502 error.html;

    location = /error.html {
        root /usr/share/nginx/html;
    }
}
It works when my Play! application is shut down, but when it's running it always serves up the default nginx page.
I've tried adding the error page configuration to another server block like this
server {
    listen 443;
    ssl off;
    server_name example.com;
    error_page [..]
}
But that fails with the browser complaining about the certificate being wrong.
Ultimately, I'd like to be able to catch and handle any errors which aren't handled by my Play! application with a custom page or pages. I'd also like this solution to work if the user manually enters the site's IP into the address bar instead of the server name.
Any help is appreciated.
I found the answer to this here https://stackoverflow.com/a/12610382/4023897.
In my particular case, where I want to serve a static error page under these circumstances, my configuration is as follows
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
    keepalive_timeout 70;
    server_name example.com;
    add_header Strict-Transport-Security max-age=15768000; # six months

    location = /error.html {
        root /usr/share/nginx/html;
        autoindex off;
    }

    location / {
        proxy_pass http://my-backend;
    }

    # If they come here using HTTP, bounce them to the correct scheme
    error_page 497 https://$host:$server_port/error.html;
}
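To also catch requests that arrive by raw IP rather than by server name (the last part of the question), one option, sketched here as an assumption rather than something taken from the linked answer, is a catch-all default_server block that reuses the same certificate and error page:

server {
    listen 443 ssl default_server;
    server_name _;    # catch-all for requests made by IP or an unknown Host header
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    error_page 497 https://$host:$server_port/error.html;

    location = /error.html {
        root /usr/share/nginx/html;
    }

    location / {
        return 302 https://example.com$request_uri;   # assumption: push stray traffic to the main host
    }
}

Note that browsers will still warn about the certificate when the site is opened by IP, since the certificate only covers example.com.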

nginx proxy request buffering is not working as expected

This is my nginx config:
worker_processes auto;
user nginx;

events {
    worker_connections 1024;
    use epoll;
}

http {
    tcp_nodelay on;
    add_header Cache-Control no-cache;

    upstream servers {
        server 127.0.0.1:9999;
        server 127.0.0.1:9998;
    }

    proxy_request_buffering on;

    server {
        listen 80;
        client_max_body_size 200M;
        client_body_buffer_size 200M;
        server_name localhost;

        location / {
            try_files $uri @proxy_upload;
        }

        location @proxy_upload {
            proxy_pass_request_body on;
            proxy_pass http://servers;
        }
    }
}
I am trying to upload files with something like a chunked upload: the files are more than 2G and the client sends them in chunks. The nginx config and the script are working, but nginx is not behaving as expected.
I turned proxy_request_buffering on, so I expect nginx to buffer all 200M of the file and then pass it to the back-end (which is Tornado/Python) at once, but nginx passes it to the back-end in 1M or 2M chunks. This behavior leads to much higher CPU usage, higher system load and lower upload speed, and it is not much different from setting proxy_request_buffering to off, so I think I am doing something wrong here.
Why is nginx not buffering correctly, and how am I supposed to make nginx buffer the whole request and then pass it on at once?
I tried to use post_action, but I couldn't pass the request body to the back-end.
UPDATE: Nginx is buffering correctly, in the sense that it passes the request body to the back-end as soon as the whole file has been uploaded by the client, but it passes the body to the back-end in smaller chunks rather than all at once. It has the whole body, but it won't pass it at once. How can I tell nginx to pass the request body to the back-end at once?
Looks like you need to turn on proxy_buffering, set proxy_buffers to control the number and size of the buffers, and probably proxy_buffer_size to ensure it doesn't overuse memory:
location @proxy_upload {
    proxy_buffering on;
    proxy_buffers 10 200m;
    proxy_buffer_size 1000m;
    proxy_pass_request_body on;
    proxy_pass http://servers;
}
Not sure how much traffic your upstream server gets, but I could imagine buffers this large aren't that efficient... What web server are you using upstream that requires such a large buffer? Perhaps try increasing the buffer 1m at a time until you hit a decent sweet spot.
From: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers
And: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size
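A note on the request side (my own reading, not part of the answer above): with proxy_request_buffering on, nginx reads the entire body before it contacts the upstream, keeping it in memory up to client_body_buffer_size and spooling the rest to a temporary file, and it then streams that buffered body to the upstream in successive writes; there is no directive that hands the whole body over in a single write. The spool location can at least be controlled, roughly like this:

http {
    # assumption: once a request body exceeds client_body_buffer_size, nginx
    # spools it to a temporary file under this path before forwarding it
    client_body_temp_path /var/cache/nginx/client_temp;
    client_body_buffer_size 200M;
    proxy_request_buffering on;
}

So the 1M-2M writes observed on the Tornado side are likely normal forwarding behaviour rather than a buffering failure.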

Forge / Nginx DO WWW to non-WWW redirect issue

I just transferred a site to a DO server provisioned by Forge. I installed an SSL certificate and noticed that navigating to https://www.example.com results in a Server Not Found error, while http://example.com returns 200. I attempted to force non-WWW in the Nginx config file but cannot seem to make anything work. I also restarted Nginx after every attempt.
Here is my current Nginx config file:
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name .example.com;
    root /home/forge/default/current/public;

    # FORGE SSL (DO NOT REMOVE!)
    ssl_certificate /etc/nginx/ssl/default/56210/server.crt;
    ssl_certificate_key /etc/nginx/ssl/default/56210/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/default-error.log error;

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
The server was set up with the default site, default, rather than example.com. I realized this after launching the site to production and installing the SSL cert, and I am trying to avoid any downtime by changing this after the fact. I am not sure if the site being called default makes any difference here, but it's key to note.
So, https:// or http://example.com works fine. www.example.com returns a Server Not Found error on all browsers I've tested. I also noticed that there is a www.default file in /etc/nginx/sites-enabled, I tried changing it to the following and restarting nginx:
server {
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com/$request_uri;
}
Still receiving Server Not Found no matter what. Here is the error on Chrome:
The server at www.example.com can't be found, because the DNS lookup failed. DNS is the network service that translates a website's name to its Internet address. This error is most often caused by having no connection to the Internet or a misconfigured network. It can also be caused by an unresponsive DNS server or a firewall preventing Google Chrome from accessing the network.
Well, apparently I just needed to take a break. After I finished off my lunch, it occurred to me that Chrome was giving me the answer all along - it was a DNS issue. I added an A record for www pointing to my IP address on Digital Ocean, problem solved.
I believe the www DNS record is missing by default on DO servers provisioned by Laravel Forge.
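For completeness, once the www A record resolves, the www-to-non-www redirect itself can stay in nginx. A sketch, assuming the existing Forge certificate also covers www.example.com (it may not by default, in which case browsers will warn before the redirect runs):

server {
    listen 80;
    listen 443 ssl;
    server_name www.example.com;

    # assumption: reuse the certificate paths from the config above
    ssl_certificate /etc/nginx/ssl/default/56210/server.crt;
    ssl_certificate_key /etc/nginx/ssl/default/56210/server.key;

    return 301 https://example.com$request_uri;
}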

unknown directive "server" in /etc/nginx/nginx.conf:4

With nginx/0.7.65 I'm getting this error on line 4. Why doesn't it recognize server?
#### CHAT_FRONT ####
server {
    listen 7000 default deferred;
    server_name example.com;
    root /home/deployer/apps/chat_front/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### CHAT_STORE ####
server {
    listen 7002 default deferred;
    server_name store.example.com;
    root /home/deployer/apps/chat_store/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### LOGIN ####
server {
    listen 7004 default deferred;
    server_name login.example.com;
    root /home/deployer/apps/login/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### PERMISSIONS ####
server {
    listen 7006 default deferred;
    server_name permissions.example.com;
    root /home/deployer/apps/permissions/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### SEARCH ####
server {
    listen 7008 default deferred;
    server_name search.example.com;
    root /home/deployer/apps/search/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### ANALYTICS ####
server {
    listen 7010 default deferred;
    server_name analytics.example.com;
    root /home/deployer/apps/analytics/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
The server directive must be contained in the context of the http module. Additionally, you are missing the top-level events block, which has one obligatory setting, and a number of directives that belong in the http block of your config. While the nginx documentation is not particularly helpful for creating a config from scratch, there are working examples there.
Source: nginx documentation on server directive
Adding a top-level events entry got around the problem:
events { }
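Putting the two answers together, a minimal skeleton for /etc/nginx/nginx.conf would look roughly like this (the include path is an assumption; the six server blocks from the question belong inside the http block):

events {
    worker_connections 1024;   # commonly the only setting needed here
}

http {
    # the server { ... } blocks from the question go here, either inline
    # or pulled in via an include, for example:
    include /etc/nginx/conf.d/*.conf;
}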
Try adjusting the line endings and the encoding of your configuration file. In my case, ANSI encoding and possibly "Linux style" line endings (only LF characters, not both CR and LF) were required.
I know it is a rather old question. However, the recommendation of the accepted answer (that the server directive must be contained in the context of the http module) might be confusing, because there are a lot of examples (on the Nginx website and in this Microsoft guide) where the server directive is not contained in the context of the http module.
Possibly the answer from sd z ("I rewrote the *.conf file and it worked") stems from the same cause: there was a .conf file with incorrect encoding or line endings, which was corrected once the file was rewritten.
I ran into an issue similar to Hoborg's, whose comment pointed me in the right direction. In my case, I had copied and pasted a config from another post to use for a Docker container hosting an Angular app. This was the default config plus one extra line. However, when I pasted, a few extra characters were included at the beginning of the paste (the UTF-8 BOM, EF BB BF). The IDE I was using (Visual Studio) did not display anything to indicate these extra bytes, so it wasn't apparent that they existed. Removing these extra bytes resolved the issue, which I did with a hex editor (HxD) because it was convenient at the time. The Windows-style line endings did not seem to be an issue in my case.
The default config had server as the outermost directive, so the top answer and the error message were indeed confusing. That part remained the same as it was originally in my case, and the original file (without the 3 extra bytes) had not thrown the error.
I rewrote the *.conf file and it worked.
