What is the maximum recommended value of client_max_body_size on Nginx for upload of large files?
The web app I'm working on right now will expect uploads of 100 MB max. Should I set client_max_body_size to something like 150m and upload in a single request, or use a slicing strategy and send 1 MB chunks to the server, keeping client_max_body_size low?
This is a subjective thing and use-case dependent. So the question you should ask yourself is: what is the maximum size beyond which you don't want to allow an upload? Then use that.
The next mistake people make is that they just set
client_max_body_size 150M;
in the nginx config at the server block level. This is actually wrong, because you don't want everyone to be able to upload 150M of data to every URL. You will have a specific URL for which you want large uploads to be allowed, so you should have a location block like the one below:
location /upload/largefileupload {
    client_max_body_size 150M;
}
For the rest of the URLs you can keep it as low as 2MB. This way you will be less susceptible to a generic DDoS attack (a large-request-body attack). See the URL below:
https://www.tomaz.me/2013/09/15/avoiding-ddos-attacks-caused-by-large-http-request-bodies-by-enforcing-a-hard-limit-in-your-web-server.html
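Putting it together, a minimal sketch of the whole server block (the endpoint path comes from the example above; the values are illustrative, not recommendations):

server {
    client_max_body_size 2M;             # conservative default for every URL

    location /upload/largefileupload {
        client_max_body_size 150M;       # only this endpoint accepts big uploads
    }
}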
How should I set the value for the below directives?
I am using the LEMP stack.
fastcgi_send_timeout
fastcgi_read_timeout
fastcgi_connect_timeout
From the documentation:

fastcgi_connect_timeout: time to establish a connection to the upstream (in your case, FPM)

fastcgi_send_timeout: time to upload the whole request until it is accepted by FPM

fastcgi_read_timeout: time from when FPM accepts the request until the whole response is transmitted (downloaded) to nginx
To fine-tune these:

fastcgi_connect_timeout: use low values when FPM is located on the same machine; for a different machine, ping the PHP-FPM machine from the nginx machine to determine the average response time, and add a few seconds for safety.

fastcgi_send_timeout: first estimate how large the request will be; if there are no uploads, low values are fine; with many large file uploads, try bigger values.

fastcgi_read_timeout: the time for PHP to process your request and send it back to nginx. If you are doing heavy lifting in the PHP script, bigger values are recommended; the same goes for large responses, e.g. downloading big files.
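A minimal sketch of how these might look in a PHP location block (the socket path and values are illustrative, not recommendations):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;   # hypothetical FPM socket path
    fastcgi_connect_timeout 5s;                # FPM on the same machine, so keep it low
    fastcgi_send_timeout 180s;                 # allow time to forward big uploads to FPM
    fastcgi_read_timeout 120s;                 # allow time for slow PHP work / big responses
}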
Hi, we need to increase proxy_buffer_size and related parameters for the Ingress/NGINX on an IBM Kubernetes implementation.
Ingress/NGINX is throwing us the error: upstream sent too big header while reading response header from upstream, client ...
The app we're running is Meteor-based, which is known for creating large headers related to Browser Policy. To solve this we need to change the location settings to include:
# Increase the proxy buffers for meteor browser-policy.
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
More info here if needed: http://dweldon.silvrback.com/browser-policy. Note the gist of this is that we should not be turning buffering off, but increasing the buffer sizes.
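(For reference, in a plain nginx config these would sit inside the proxied location; a rough sketch, where the upstream name is hypothetical:)

location / {
    proxy_pass http://meteor_app;    # hypothetical upstream for the Meteor app
    proxy_buffer_size 128k;          # holds the (large) response header
    proxy_buffers 4 256k;            # buffers for the response body
    proxy_busy_buffers_size 256k;    # must stay below total proxy_buffers minus one buffer
}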
Currently IBM does not support these custom parameters, so we'd like to inject them via the nginx.org/location-snippets annotation, as per https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/customization
We'd like a way to set the above proxy buffer sizes; please let us know if/how this can be done.
Another alternative, I think, would be to use the nginx.org/proxy-buffer-size annotation.
Thanks.
Current answer from IBM support: IBM does not support these directives and is looking to add these capabilities in the future... no ETA provided by IBM.
Update: we have been told that IBM has added this capability and have been asked to test it out... busy trying to get it working. Will update here when I have it working/solved.
Another update: the annotations work... but are kind of useless, because the NGINX top-level conf hard-codes proxy_buffers to 8 4k, which means there is still not enough capacity to increase the buffer sizes. It throws the following error: "proxy_busy_buffers_size" must be less than the size of all "proxy_buffers" minus one buffer.
We have asked IBM to please let us override the top-level settings via a ConfigMap. We'll wait and see.
I have an API that receives anywhere from 1K to 20MB of data in each transmission. I also have a website that would only ever receive less than 10K in a single transmission. Both the API and the website are behind the same Nginx proxy server.
From the docs for client_body_buffer_size
"If the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file."
This means that any time I receive a request larger than the default buffer size, it will be written to disk.
Given that I can receive large payloads, would it be best to set the client_body_buffer_size equal to client_max_body_size, which for me is 20MB? I assume this would prevent nginx from writing the request to disk every time.
Are there any consequences to setting the client_body_buffer_size so high? Would this affect the website, which never receives such large requests?
I would recommend using a smaller client_body_buffer_size (bigger than 10k but not by much; maybe the x86-64 default of 16k), since a bigger buffer would ease DoS attacks: you would be allocating more memory for it, as opposed to disk, which is cheaper.
Please note that you can also set a different client_max_body_size and client_body_buffer_size for a specific server or location (see the Context section of the docs), so your website wouldn't allow 20MB uploads.
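For example, a minimal sketch with hypothetical hosts, giving the API and the website different limits:

server {
    listen 80;
    server_name api.example.com;     # hypothetical API host
    client_max_body_size 20m;        # large payloads allowed here
    client_body_buffer_size 16k;     # anything bigger spills to a temp file
}

server {
    listen 80;
    server_name www.example.com;     # hypothetical website host
    client_max_body_size 1m;         # the site never receives large bodies
    client_body_buffer_size 16k;
}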
Here's an interesting thread on client_body_buffer_size; it also reminds us that if the client body is bigger than your client_body_buffer_size, the nginx variable $request_body will be empty.
It depends on your server memory and how much traffic you have.
A simple sizing rule: client_body_buffer_size × concurrent connections must fit within MAX_RAM − OS_RAM − FS_CACHE.
(Exactly the same approach applies to php-fpm pool tuning, or even to mysql/elasticsearch.)
The key is to monitor all the things (RAM/CPU/traffic) and change settings according to your usage: start small, of course, then increase as far as you can.
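To put rough (hypothetical) numbers on the formula above: with a 16k buffer and 10,000 concurrent uploading connections, the body buffers alone account for roughly 160 MB of RAM; raise the buffer to 1 MB and the same concurrency needs roughly 10 GB.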
Is the PHP setting max_input_time relevant when nginx is the webserver in front?
The whole story:
Take the case that a visitor is uploading a file. The nginx webserver, listening on port 80, will get the request first.
Nginx itself has a client_header_timeout setting, which should not be that relevant here, since file uploads are carried in the request body. The client_body_timeout is the maximum amount of time the client gets to send this request body, which contains the file and some other POST data. The size of this data can be limited by client_max_body_size, right?
PHP now waits for the data. This time is limited by max_input_time. And when it has all the data, it checks that the request body does not exceed its post_max_size limit, parses it, and checks that the file does not exceed the upload_max_filesize limit. And now the PHP script will be executed, which should not take longer than max_execution_time.
But when does my FastCGI backend get the data? Is it after the request header is loaded, after the request body is loaded, or at what point does it get triggered?
Or, to put the question another way: is the PHP configuration max_input_time relevant at all when I have PHP running via PHP-FPM behind an nginx webserver? Do I have to increase this value when a visitor has bad bandwidth but wants to upload a huge file, or is it enough to increase the nginx setting client_body_timeout?
Please correct me if the assumption is not correct!
Just to give an answer that's convenient for me:
I tried to upload an 18 MB file and it got through within 50 seconds, while the FastCGI side was limited to 10 seconds. So, for me, it seems that nginx buffers the whole request before it sends it to the FastCGI backend.
So, in short: no, I don't need to increase max_input_time in my case.
This may vary from configuration to configuration. It would be good to have someone who knows the code tell us which options this depends on.
On IRC, nobody could really tell me when nginx sends the data to the FastCGI backend...
EDIT:
Just wanted to add another resource that confirms my finding here:
Unfortunately PHP gets the data only after the upload is completed and [...]
See the accepted answer in Does session upload progress work with nginx and php-fpm?
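For what it's worth, in modern nginx this buffering behavior is explicit and configurable: the fastcgi_request_buffering directive (available since nginx 1.7.11, default on) makes nginx read the entire request body before contacting the FastCGI backend. So, under that default, the settings that actually govern a slow upload are on the nginx side; a sketch with illustrative values:

client_max_body_size 100m;       # upper bound on the upload size
client_body_timeout 300s;        # max time between two successive reads of the body
fastcgi_request_buffering on;    # the default: buffer the whole body before contacting FPM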
I'm developing a Facebook canvas application and I want to load-test it. I'm aware of the Facebook restriction on automated testing, so I simulated the Graph API calls by creating a fake web application served under nginx and altering my /etc/hosts to point graph.facebook.com to 127.0.0.1.
I'm using JMeter to load-test the application and the simulation is working OK. Now I want to simulate slow Graph API responses and see how they affect my application. How can I configure nginx so that it inserts a delay into each request sent to the simulated graph.facebook.com application?
You can slow down localhost (network) traffic by adding delay.
Use the ifconfig command to see the network devices: on localhost it is usually lo, and on a LAN it is typically eth0.
To add a delay, use this command (adding a 1000 ms delay on the lo network device):
tc qdisc add dev lo root netem delay 1000ms
To change the delay:
tc qdisc change dev lo root netem delay 1ms
To see the current delay:
tc qdisc show dev lo
And to remove the delay:
tc qdisc del dev lo root netem delay 1000ms
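To verify the delay is active (assuming the 1000 ms delay above):
ping -c 3 127.0.0.1
Both the echo request and the reply pass through lo, so the reported round-trip time will be roughly twice the configured one-way delay.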
My earlier answer works, but it is better suited to a case where all requests need to be slowed down. I've since needed a solution that would let me turn on the slowdown only on a case-by-case basis, and came up with the following configuration. Make sure to read the entire answer before you use this, because there are important nuances to know.
location / {
    if (-f somewhere/sensible/LIMIT) {
        echo_sleep 1;
        # Yes, we need this here too.
        echo_exec /proxy$request_uri;
    }
    echo_exec /proxy$request_uri;
}

location /proxy/ {
    internal;
    # Ultimately, all this goes to a Django server.
    proxy_pass http://django/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
}
Important note: the presence or absence of forward slashes in the various paths makes a difference. For instance, proxy_pass http://django, without a trailing slash, does not do the same thing as the line in the code above.
The principle of operation is simple. If the file somewhere/sensible/LIMIT exists, then requests that match location / are paused for one second before moving on. So in my test suite, when I want a network slowdown, I create the file, and when I want to remove the slowdown, I remove it. (And I have cleanup code that removes it between tests.)
In theory I'd much prefer using variables for this rather than a file, but the problem is that variables are reinitialized with each request. So we cannot have one location block that sets a variable to turn the limit on and another to turn it off. (That's the first thing I tried, and it failed due to the lifetime of variables.) It would probably be possible to use the Perl module or Lua to persist variables, or to fiddle with cookies, but I've decided not to go down those routes.
Important notes:
It is not a good idea to mix directives from the echo module (like echo_sleep and echo_exec) with the stock directives of nginx that result in the production of a response. I initially had echo_sleep together with proxy_pass and got bad results. That's why we have the location /proxy/ block that segregates the stock directives from the echo stuff. (See this issue for a similar conflict that was resolved by splitting a block.)
The two echo_exec directives, inside and outside the if, are necessary due to how if works.
The internal directive prevents clients from directly requesting /proxy/... URLs.
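One more note: echo_sleep and echo_exec come from the third-party echo-nginx-module, not stock nginx, so you need an nginx build that includes that module (OpenResty bundles it).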
I've modified an nginx config to use limit_req_zone and limit_req to introduce delays. The following reduces the rate of service to 20 requests per second (rate=20r/s). I've set burst=1000 so that my application does not get 503 responses.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=20r/s;
    [...]
    server {
        [...]
        location / {
            limit_req zone=one burst=1000;
            [...]
        }
    }
}
The documentation is at http://nginx.org/en/docs/http/ngx_http_limit_req_module.html. I do not believe there is a way to specify a uniform delay using this method.
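Note how the delay arises here: without nodelay, requests exceeding the rate are queued (up to burst) and released at 20 per second, so a request arriving with N requests already queued waits roughly N/20 seconds; with burst=1000 the worst-case wait is about 50 seconds.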