Nginx: huge size of /usr/share/nginx/on

I have a Debian dedicated server running WordPress behind Nginx.
During a du/df check I found a very large file, /usr/share/nginx/on, about 400 GB in size.
Is that okay? If not, what is this file and how can I reduce its size?
Thanks.

This is an access log, the result of an NGINX misconfiguration. This isn't fine.
In your NGINX configuration, you likely have:
access_log on;
This is incorrect and should be changed to the desired path of an access log. (Or simply remove the directive if you don't care about access logs).
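A minimal corrected sketch (the log path here is just the usual convention; any writable path works):

access_log /var/log/nginx/access.log;  # write the access log to an explicit path
# or, if you don't want access logs at all:
access_log off;

Nginx treats the bare word on as a file name relative to its prefix directory, which is presumably why the log ended up at /usr/share/nginx/on.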

Related

Trying to set file upload limit in mup/nginx-proxy

I am running into a file upload error with files > 10M. I have followed the advice here: http://meteor-up.com/docs.html#advanced-configuration, which explains how to raise the limit in the nginx proxy by setting clientUploadLimit: '50M'.
I pushed the changes using mup proxy reconfig-shared, and it told me it had restarted the proxy. It didn't work; I still get the 413 (Request Entity Too Large) error.
I checked inside the nginx-proxy docker instance, and the file /etc/nginx/conf.d/my_proxy.conf has the correct entry client_max_body_size 50M. I restarted the EC2 box to make sure, but it's still not working.
This article https://www.tecmint.com/limit-file-upload-size-in-nginx/ suggests that the setting needs to go inside an http block, like this:
By default, Nginx has a limit of 1MB on file uploads. To set file upload size, you can use the client_max_body_size directive, which is part of Nginx’s ngx_http_core_module module. This directive can be set in the http, server or location context.
http {
client_max_body_size 100M;
}
I can't see how to achieve this, as the .conf file is read-only and somehow locked.
Any ideas on how to proceed?
I suppose I could try a custom nginx.conf file, but I'm not sure what should go in there, and in fact whether it will even improve the situation.
Any help is appreciated :)
I'm happy to report that I solved it... I will explain how.
I was setting the limit on the shared nginx reverse proxy in the mup.js file:
proxy: {
domains: 'website.com,www.website.com',
shared: { clientUploadLimit: '50M' }
}
But it turns out that there is an option to set it for each independent server like this:
proxy: {
domains: 'website.com,www.website.com',
clientUploadLimit: '50M'
}
The limit was being set to 10M by default. I found this by shelling into the nginx-proxy docker image and searching with grep -R client_max_body_size /etc/nginx, which showed all the places where it was set (one per vhost).
So I changed the mup.js file for my server, did a mup stop, then a mup setup (to re-apply the settings), and then a mup deploy.
This is speculation, but have you tried opening a root shell in the docker container, changing the permissions to give write access to root or your user (chmod 760 /etc/nginx/nginx.conf), and editing the nginx file there?
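If you try that route, a rough sketch (assuming the container is actually named nginx-proxy; check with docker ps):

docker exec -it nginx-proxy /bin/sh   # open a shell inside the proxy container
chmod 760 /etc/nginx/nginx.conf       # let the owner write to the file
# edit the file, then reload nginx from inside the container:
nginx -s reload

Bear in mind that changes made inside the container are lost when it is recreated, so the mup.js fix above is the durable one.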

Redirect default (80) port to 5000 - Flask + NGINX + Ubuntu

I'm successfully able to run a Flask app at IP:5000, a simple Hello World program that shows its output in my browser.
Now, what I would like to do is configure NGINX as a reverse proxy so that accessing just the IP, which defaults to port 80, forwards to port 5000 and shows the output of my application.
In other words...
This is working : IP:5000 -> Output = Hello world
This isn't working: IP -> This site can’t be reached
The server settings that I want to add would be something like this.
server {
listen 80;
server_name MY_IP;
location / {
proxy_pass http://127.0.0.1:5000;
}
}
However, I'm not sure where to add this. Should it go inside the http block in /etc/nginx/nginx.conf?
Update: Based on the answers given below, I've managed to do the following.
I did restart nginx after this. However, I'm still facing the same issue: the app works on IP:5000 but does not work on IP.
The configuration you have mentioned should go in a separate file, say example.com.conf, under /etc/nginx/conf.d. You could put all the configuration in /etc/nginx/nginx.conf and it would work; it's just that, for readability, we create separate configuration files, which are included automatically when placed inside conf.d.
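For reference, the auto-inclusion happens because the stock nginx.conf ships with an include line like this inside its http block (the exact path can vary by distribution):

http {
...
include /etc/nginx/conf.d/*.conf;
}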
OK, the problem is fixed. As @senaps and @Mukanahallipatna had mentioned, I created the new configuration file under conf.d.
However, the most important step I was missing is this part, mentioned in the link below.
It is recommended that you enable the most restrictive profile that will still allow the traffic you've configured. Since we haven't configured SSL for our server yet, in this guide, we will only need to allow traffic on port 80.
Reference Link
sudo ufw allow 'Nginx HTTP'
Now, everything is working fine.
Put the working blocks in a file named any_name.conf inside the folder /etc/nginx/conf.d and it will be loaded automatically.
You will need to restart nginx.
Update:
What are you using to serve Flask? If you are using uWSGI, then you should use a configuration like this:
include uwsgi_params;
uwsgi_pass unix:path_to_your.sock;
Other options for uwsgi_pass are:
uwsgi_pass localhost:9000; #normal
uwsgi_pass uwsgi://localhost:9000;
uwsgi_pass suwsgi://[2001:db8::1]:9090; #uwsgi over ssl
If you are using gunicorn to serve your Flask app, then your current config should be fine. Check that your app is actually running and that you can load your index page on port 5000; if you can, check for other problems. Your config looks good, so maybe the problem is that Flask is not running.

nginx+php "The connection was reset" on file upload

I get an error that says "The connection was reset" immediately when I upload a file over a certain size; I think the threshold is around 4 MB.
My web server runs on nginx. I tried setting client_max_body_size to 1G, and even to 0, with no success.
I'd be glad to hear a solution.
Thanks!
I just had to restart the nginx service by using sudo service nginx restart and it solved itself!
In my case, the file was bigger than the size NGINX allows via the client_max_body_size setting. To change this setting, open the file /etc/nginx/nginx.conf and add the following inside the http section:
http {
...
client_max_body_size 128m; #Any desired size in MB
...
}
In nginx 1.0 and above, this setting is not included by default in the nginx.conf file.

nginx errors readv() and recv() failed

I use nginx along with fastcgi. I see a lot of the following errors in the error logs
readv() failed (104: Connection reset by peer) while reading upstream
recv() failed (104: Connection reset by peer) while reading response header from upstream
I don't see any problems using the application. Are these errors serious, and how do I get rid of them?
I was using php-fpm in the background, and slow scripts were being killed after a configured timeout. Thus, scripts taking longer than the specified time were killed, and nginx reported a recv() or readv() error because the connection had been closed from the php-fpm engine/process side.
Update:
Since nginx version 1.15.3 you can fix this by setting the keepalive_requests option of your upstream to the same number as your php-fpm's pm.max_requests:
upstream name {
...
keepalive_requests number;
...
}
Original answer:
If you are using nginx to connect to php-fpm, one possible cause can also be having nginx's fastcgi_keep_conn parameter set to on (especially if you have a low pm.max_requests setting in php-fpm):
http|server|location {
...
fastcgi_keep_conn on;
...
}
This may cause the described error every time a child process of php-fpm restarts (due to pm.max_requests being reached) while nginx is still connected to it. To test this, set pm.max_requests to a really low number (like 1) and see if you get even more of the above errors.
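For that test, the pool setting lives in your php-fpm pool config, e.g. /etc/php5/fpm/pool.d/www.conf on older Debian-style setups (the path varies by version); a sketch:

; recycle each worker after a single request, to provoke the error
pm.max_requests = 1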
The fix is quite simple - just deactivate fastcgi_keep_conn:
fastcgi_keep_conn off;
Or remove the parameter completely (since the default value is off). This does mean your nginx will reconnect to php-fpm on every request, but the performance impact is negligible if you have both nginx and php-fpm on the same machine and connect via unix socket.
Regarding this error:
readv() failed (104: Connection reset by peer) while reading upstream and recv() failed (104: Connection reset by peer) while reading response header from upstream
there was one more case where I could still see it.
Quick setup overview:
CentOS 5.5
PHP with PHP-FPM 5.3.8 (compiled from scratch with some 3rd-party modules)
Nginx 1.0.5
After looking at the PHP-FPM error logs as well and enabling catch_workers_output = yes in the php-fpm pool config, I found the root cause in this case was actually the amfext module (PHP module for Flash).
There's a known bug in this module that can be corrected by altering the amf.c file.
After fixing this PHP extension issue, the error above was no longer an issue.
This is a very vague error as it can mean a few things. The key is to look at all possible logs and figure it out.
In my case, which is probably somewhat unique, I had a working nginx + php / fastcgi config. I wanted to compile a new updated version of PHP with PHP-FPM and I did so. The reason was that I was working on a live server that couldn't afford downtime. So I had to upgrade and move to PHP-FPM as seamlessly as possible.
Therefore I had two instances of PHP:
One talking directly to fastcgi over TCP, 127.0.0.1:9000 (PHP 5.3.4)
One configured with PHP-FPM over a Unix socket, unix:/dir/to/socket-fpm (PHP 5.3.8)
Once I started up PHP-FPM (PHP 5.3.8) on an nginx vhost using a socket connection instead of TCP, I started getting this upstream error on any fastcgi page taking longer than x minutes, whether it was using FPM or not. Typically it was pages doing large SELECTs in MySQL that took ~2 min to load. Bad, I know, but that's down to the back-end DB design.
What I did to fix it was add this in my vhost configuration:
fastcgi_read_timeout 5m;
Now, this can be added in the nginx global fastcgi settings as well; it depends on your setup. http://wiki.nginx.org/HttpFcgiModule
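For example, to apply it globally instead of per vhost (a sketch; 5m is simply the value that worked above):

http {
...
fastcgi_read_timeout 5m;
}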
Answer #2.
Interestingly enough, fastcgi_read_timeout 5m; fixed one vhost for me.
However, I was still getting the error in another vhost, just by running phpinfo();.
What fixed this for me was copying over a default production php.ini file and adding the config I needed into it.
What I had before was an old copy of my php.ini from the previous PHP install.
Once I put the default php.ini in place and added just the extensions and config I needed, the problem was solved and the nginx readv() and recv() errors were gone.
I hope one of these two fixes helps someone.
It can also be a very simple problem: an infinite loop somewhere in your code, or an endless attempt to connect to an external host from your page.
Sometimes this problem happens because of a large volume of requests. By default, pm.max_requests in php5-fpm may be 100 or lower.
To solve it, increase its value according to your site's traffic, for example to 500; see the sketch after the restart command below.
After that, you have to restart the service:
sudo service php5-fpm restart
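A sketch of the pool change itself, assuming a Debian-style php5-fpm layout (the file path varies by distribution):

; /etc/php5/fpm/pool.d/www.conf
; respawn each worker after it has served this many requests
pm.max_requests = 500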
Others have mentioned the fastcgi_read_timeout parameter, which is located in the nginx.conf file:
http {
...
fastcgi_read_timeout 600s;
...
}
In addition to that, I also had to change the setting request_terminate_timeout in the file: /etc/php5/fpm/pool.d/www.conf
request_terminate_timeout = 0
Source of information (there are also a few other recommendations for changing php.ini parameters, which may be relevant in some cases): https://ma.ttias.be/nginx-and-php-fpm-upstream-timed-out-failed-110-connection-timed-out-or-reset-by-peer-while-reading/

nginx - client_max_body_size has no effect

nginx keeps saying client intended to send too large body. Googling and RTM pointed me to client_max_body_size. I set it to 200m in nginx.conf as well as in the vhost conf and restarted Nginx a couple of times, but I'm still getting the error message.
Did I overlook something? The backend is php-fpm (post_max_size and upload_max_filesize are set accordingly).
Following the nginx documentation, you can set client_max_body_size 20m (or any value you need) in any of the following contexts: http, server, location.
NGINX large uploads are finally working successfully on our hosted WordPress sites (as per suggestions from nembleton & rjha94).
I thought it might be helpful to add a little clarification to their suggestions. For starters, please be certain you have included your increased upload directive in ALL THREE separate definition blocks (server, location & http). Each should have a separate line entry. The result will look something like this (where the ... reflects other lines in the definition block):
http {
...
client_max_body_size 200M;
}
(in my ISPconfig 3 setup, this block is in the /etc/nginx/nginx.conf file)
server {
...
client_max_body_size 200M;
}
location / {
...
client_max_body_size 200M;
}
(in my ISPconfig 3 setup, these blocks are in the /etc/nginx/conf.d/default.conf file)
Also, make certain that your server's php.ini file is consistent with these NGINX settings. In my case, I changed the setting in php.ini's File_Uploads section to read:
upload_max_filesize = 200M
Note: if you are managing an ISPconfig 3 setup (my setup is on CentOS 6.3, as per The Perfect Server), you will need to manage these entries in several separate files. If your configuration is similar to one in the step-by-step setup, the NGINX conf files you need to modify are located here:
/etc/nginx/nginx.conf
/etc/nginx/conf.d/default.conf
My php.ini file was located here:
/etc/php.ini
I continued to overlook the http {} block in the nginx.conf file. Apparently, overlooking this had the effect of limiting uploads to the 1M default. After making the associated changes, you will also want to restart your NGINX and PHP FastCGI Process Manager (PHP-FPM) services. On the above configuration, I use the following commands:
/etc/init.d/nginx restart
/etc/init.d/php-fpm restart
As of March 2016, I ran into this issue trying to POST json over https (from python requests, not that it matters).
The trick is to put client_max_body_size 200M; in at least two places, http {} and server {}:
1. The http block, typically in /etc/nginx/nginx.conf.
2. The server block in your vhost. For Debian/Ubuntu users who installed via apt-get (and other distro package managers that install nginx with vhosts by default), that's /etc/nginx/sites-available/mysite.com; for those who do not have vhosts, it's probably your nginx.conf or a file in the same directory.
3. The location / block in the same place as 2. You can be more specific than /, but if it's not working at all, I'd recommend applying this to / and then, once it's working, being more specific.
Remember - if you have SSL, that will require you to set the above for the SSL server and location too, wherever that may be (ideally the same as 2.). I found that if your client tries to upload on http, and you expect them to get 301'd to https, nginx will actually drop the connection before the redirect due to the file being too large for the http server, so it has to be in both.
Recent comments suggest that there is an issue with this on SSL with newer nginx versions, but I'm on 1.4.6 and everything is good :)
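A minimal sketch of the "set it in both" advice, with hypothetical names and paths (adjust sizes and cert locations to your setup):

server {
listen 80;
server_name example.com;
client_max_body_size 200M;  # needed here too, or the upload dies before the 301
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/nginx/ssl/example.com.crt;
ssl_certificate_key /etc/nginx/ssl/example.com.key;
client_max_body_size 200M;
location / {
client_max_body_size 200M;
...
}
}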
You need to apply the following changes:
Update php.ini (find the right ini file from phpinfo();) and increase post_max_size and upload_max_filesize to the size you want:
sed -i "s/post_max_size =.*/post_max_size = 200M/g" /etc/php5/fpm/php.ini
sed -i "s/upload_max_filesize =.*/upload_max_filesize = 200M/g" /etc/php5/fpm/php.ini```
Update the NginX settings for your website and add a client_max_body_size value in your location, http, or server context:
location / {
client_max_body_size 200m;
...
}
Restart NginX and PHP-FPM:
service nginx restart
service php5-fpm restart
NOTE: Sometimes (in my case, almost every time) you need to kill the php-fpm processes if they aren't refreshed properly by the service command. To do that, you can get the list of processes (ps -elf | grep php-fpm) and kill them one by one (kill -9 12345), or use the following command to do it for you:
ps -elf | grep php-fpm | grep -v grep | awk '{ print $4 }' | xargs kill -9
Please check whether you are setting the client_max_body_size directive inside the http {} block and not inside the location {} block. I set it inside the http {} block and it works.
Someone correct me if this is bad, but I like to lock everything down as much as possible, and if you've only got one target for uploads (as is usually the case), then just target your changes at that one file. This works for me on the Ubuntu nginx-extras mainline 1.7+ package:
location = /upload.php {
client_max_body_size 102M;
fastcgi_param PHP_VALUE "upload_max_filesize=102M \n post_max_size=102M";
(...)
}
I had a similar problem recently and found out that client_max_body_size 0; can solve such an issue. This sets client_max_body_size to no limit. But the best practice is to improve your code, so there is no need to increase this limit.
I met the same problem, but I found it had nothing to do with nginx. I am using Node.js as the backend server, with nginx as a reverse proxy, and the 413 code was triggered by the node server. Node uses koa to parse the body, and koa limits the urlencoded length:
formLimit: limit of the urlencoded body. If the body ends up being larger than this limit, a 413 error code is returned. Default is 56kb.
Setting formLimit to a larger value solved the problem.
Assuming you have already set client_max_body_size and the various PHP settings (upload_max_filesize / post_max_size, etc.) as in the other answers, then restarted or reloaded NGINX and PHP without any result, run this:
nginx -T
This will give you any unresolved errors in your NGINX configs. In my case, I struggled with the 413 error for a whole day before I realized there were some other unresolved SSL errors in the NGINX config (wrong paths for certs) that needed to be corrected. Once I fixed the unresolved issues reported by nginx -T and reloaded NGINX, eureka! That fixed it.
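A quick sequence for that check (the grep is just a convenient way to confirm the directive actually made it into the merged config):

sudo nginx -t                               # validate the configuration
sudo nginx -T | grep client_max_body_size   # dump the merged config and look for the directive
sudo service nginx reload                   # reload once the config is clean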
I'm setting up a dev server to play with that mirrors our outdated live one, I used The Perfect Server - Ubuntu 14.04 (nginx, BIND, MySQL, PHP, Postfix, Dovecot and ISPConfig 3)
After experiencing the same issue, I came across this post, and nothing was working. I changed the value in every recommended file (nginx.conf, ispconfig.vhost, /sites-available/default, etc.).
Finally, changing client_max_body_size in my /etc/nginx/sites-available/apps.vhost and restarting nginx is what did the trick. Hopefully it helps someone else.
In case you are using Kubernetes, add the following annotations to your Ingress:
annotations:
nginx.ingress.kubernetes.io/client-max-body-size: "5m"
nginx.ingress.kubernetes.io/client-body-buffer-size: "8k"
nginx.ingress.kubernetes.io/proxy-body-size: "5m"
nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
Confirm the changes were applied:
kubectl -n <namespace> describe ingress <ingress-name>
References:
Client Body Buffer Size
Custom max body size
I had the same issue: the client_max_body_size directive was ignored.
My silly error was that I had put a file inside /etc/nginx/conf.d which did not end with .conf. Nginx does not load such files by default.
If you have tried the above options with no success, and you're using IIS (iisnode) to host your node app, putting this code in web.config resolved the problem for me.
Here is the reference: https://www.inflectra.com/support/knowledgebase/kb306.aspx
Also, you can change the allowed length, because right now I think it's 2 GB. Modify it to your needs:
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="2147483648" />
</requestFiltering>
</security>
The following config worked for me. Notice I only set client_max_body_size 50M; once, contrary to what others are saying...
File: /etc/nginx/conf.d/sites.conf
server {
listen 80 default_server;
server_name portal.myserver.com;
return 301 https://$host$request_uri;
}
server {
resolver 127.0.0.11 valid=30s;
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /secret/portal.myserver.com.crt;
ssl_certificate_key /secret/portal.myserver.com.pem;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server_name portal.myserver.com;
client_max_body_size 50M;
location /fileserver/ {
set $upstream http://fileserver:6976;
proxy_pass $upstream;
}
}
If you are using the Windows version of nginx, you can try killing all nginx processes and restarting them to see if that helps.
I encountered the same issue in my environment, but resolved it with this solution.
