I have installed Nginx 1.12.2 on CentOS 7. I have an extremely simple nginx config and it is not working at all. I have set up several nginx instances on Ubuntu in the past without any issue, so I wonder if it has something to do with CentOS.
I have double-checked that the "root" directory exists and that the files exist with proper permissions, but I am getting a 404 error. For debugging purposes, I tried putting "return 200 $uri" in the location block, and it returns the proper URI, but try_files doesn't work.
/var/www/mydomain/public/test.html exists with proper permissions
For debugging: when I put "return 200 $uri" in the location block, the URI shows up when I hit the domain
Hitting mydomain.com/test.html gives 404
server {
listen 80;
root /var/www/mydomain/public;
index index.html index.htm;
server_name mydomain.com;
location / {
# return 200 "$uri";
try_files $uri $uri/;
}
}
A few things:
Check your NGINX error log at /var/log/nginx/error.log; it will likely show which file path NGINX actually tried to access, and you can draw conclusions from that.
Be aware of the presence of /etc/nginx/conf.d/default.conf, which ships with the package. It contains the default server, which is what NGINX uses when no server_name matches, but it is a sample file rather than a real config. I tend to just run echo > /etc/nginx/conf.d/default.conf to "remove" it in a safe way. (If you simply delete the file, a package update will restore it; if you nullify it like this, package upgrades won't touch it.)
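For example, both checks boil down to a couple of commands (the paths are the CentOS package defaults; adjust them if your layout differs):
# Watch the error log while you reproduce the 404
tail -f /var/log/nginx/error.log
# Neutralize the packaged default server without deleting the file
echo > /etc/nginx/conf.d/default.conf
nginx -t && systemctl reload nginx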
I'm able to run a Flask app successfully on IP:5000, a simple Hello World program that shows its output in my browser.
Now, what I would like to do is configure NGINX as a proxy so that accessing just the IP (which implies the default port 80) forwards to port 5000 and shows the output of my application.
In other words...
This is working : IP:5000 -> Output = Hello world
This isn't working: IP -> This site can’t be reached
The server settings that I want to add would be something like this.
server {
listen 80;
server_name MY_IP;
location / {
proxy_pass http://127.0.0.1:5000;
}
}
However, I'm not sure where to add this. Should it go inside the http block in /etc/nginx/nginx.conf?
Updates: Based on the answers given below, I've managed to do the following.
I did restart nginx after this; however, I'm still facing the same issue. The app works on IP:5000 but does not work on the bare IP.
The configuration you have mentioned should go in a separate file, say example.com.conf, under /etc/nginx/conf.d. You could put all the configuration in /etc/nginx/nginx.conf and it would work; we create separate configuration files purely for readability, and they are included automatically when placed inside conf.d.
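The auto-include works because the stock nginx.conf ships with a line roughly like this inside the http {} block (the exact pattern may vary by distribution):
http {
    ...
    include /etc/nginx/conf.d/*.conf;
}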
OK, the problem is fixed. As @senaps and @Mukanahallipatna had mentioned, I created the new configuration file under conf.d.
However, the most important step I was missing was the part mentioned in the link below.
It is recommended that you enable the most restrictive profile that will still allow the traffic you've configured. Since we haven't configured SSL for our server yet, in this guide, we will only need to allow traffic on port 80.
Reference Link
sudo ufw allow 'Nginx HTTP'
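To confirm the rule took effect, ufw can list the active profiles (the output format may vary slightly by version):
sudo ufw status
# Expect a line similar to:
# Nginx HTTP                 ALLOW       Anywhere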
Now, everything is working fine.
Put the working blocks in a file named any_name.conf inside /etc/nginx/conf.d and it will be loaded automatically.
You will then need to restart nginx.
Update:
What are you using to serve Flask? If you are using uWSGI, then you should use configuration like this:
include uwsgi_params;
uwsgi_pass unix:path_to_your.sock;
Other options for uwsgi_pass are:
uwsgi_pass localhost:9000; #normal
uwsgi_pass uwsgi://localhost:9000;
uwsgi_pass suwsgi://[2001:db8::1]:9090; #uwsgi over ssl
If you are using Gunicorn to serve your Flask app, then your current config should be fine. Check that your app is actually running and that you can load your index page on port 5000; if so, look elsewhere for the problem. Your config looks good, so maybe the Flask app simply isn't running?
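As a quick sanity check before blaming nginx (app:app below is a placeholder for your actual module:variable):
# Is anything answering locally on port 5000?
curl -I http://127.0.0.1:5000/
# A typical Gunicorn invocation, bound to localhost only
gunicorn --bind 127.0.0.1:5000 app:app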
Nginx: Built with passenger-install-nginx-module
Passenger Version: 5.0.28
OS: Ubuntu 14.04
I have symlinked each of my apps into their own set of environment folders:
/Repository
/development.manager
/app
...
/test.manager
/staging.manager
...
The actual folder lives at another location on my HDD; all of the folders above are symlinks pointing to that one folder.
The problem is that Nginx doesn't seem to set the Passenger environment variable properly. Checking the logs, it throws an app error that doesn't make sense (and the nginx config is the only thing that has changed since things broke). Also, the error page states:
Because you are running this web application in staging or production
mode, the details of the error have been omitted from this web page
for security reasons.
This means it's not using the development environment, even though the root directory in the logs shows development.manager. This happens when I access the URL http://manager-development/.
Here's the relevant excerpt from my nginx sites-enabled configuration:
server {
listen 80;
server_name ~^manager-(?<environment>development|test)$;
passenger_app_env $environment;
passenger_ruby /home/vagrant/.rvm/gems/ruby-2.3.1@manager/wrappers/ruby;
passenger_enabled on;
root /home/vagrant/apps/$environment.manager/public;
client_max_body_size 30M;
}
I have a feeling the solution might be a combination of an answer I provided here as well as a possibly misconfigured nginx block.
EDIT: I explicitly raised an error in my Rails app that output the environment as a string, and it is literally "$environment"...
I've given up on this approach, since nginx apparently does not expand variables in certain directives (passenger_app_env among them). I'm now using a custom Bash/Ruby script that iterates over my environments/app names and generates explicit configuration blocks.
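For reference, the script's output is one fully spelled-out server block per environment, with no nginx variables left in the Passenger directives. Roughly (the values below are illustrative):
server {
    listen 80;
    server_name manager-development;
    passenger_app_env development;    # literal value instead of $environment
    passenger_ruby /home/vagrant/.rvm/gems/ruby-2.3.1@manager/wrappers/ruby;
    passenger_enabled on;
    root /home/vagrant/apps/development.manager/public;
    client_max_body_size 30M;
}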
I'm having trouble getting spiderable to work with PhantomJS on my Ubuntu server. I saw this troubleshooting entry on Meteorpedia:
Ensure that the ROOT_URL that your Meteor server is configured to use
is accessible from the server itself. (Since v0.8.1.3[1])
I think this could be a possible answer to why it is not working. What exactly is the purpose of this environment variable?
My application is publicly accessible on http://gentlenode.com/ but my proxy_pass on nginx is set to http://gentlenode/.
# HTTPS Server
server {
listen 443;
server_name gentlenode.com;
# ...
location / {
proxy_pass http://gentlenode/;
proxy_http_version 1.1;
# ...
}
}
Should I set ROOT_URL to http://gentlenode.com/, to http://gentlenode/ or to http://localhost/?
You can find my nginx configuration here: https://gist.github.com/LeCoupa/9877434
The ROOT_URL environment variable should be set to the URL that clients will use to access your application. So in your case, it would be http://gentlenode.com or https://gentlenode.com.
The ROOT_URL environment variable is read by Meteor.absoluteUrl, which is used in many (core) packages. Thus, setting ROOT_URL may be a requirement if you use these packages. spiderable is one such package.
// Line 62 of spiderable_server.js
var url = Spiderable._urlForPhantom(Meteor.absoluteUrl(), req.url);
I'll admit that we don't use spiderable so I'm not 100% certain if this will fix your problem, but here's what we do...
We set ROOT_URL to the URL which clients will use to initially connect. In your case, the nginx config automatically upgrades all HTTP requests to HTTPS, so all requests will be seen by your app under https://gentlenode.com. I think you should start your server after running:
export ROOT_URL=https://gentlenode.com
Your proxy_pass section may be correct. We spell out the local port explicitly, so we'd write:
proxy_pass http://localhost:58080;
If you have something that works already, this may not be necessary. I don't know all the quirks of nginx well enough to say if that part matters.
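For completeness, a Meteor-friendly proxy block usually also forwards the WebSocket upgrade headers for DDP; a minimal sketch (the port is a placeholder, and your gist may already cover this):
location / {
    proxy_pass http://localhost:58080;
    proxy_http_version 1.1;
    # WebSocket support for Meteor's DDP connection
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}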
nginx keeps saying client intended to send too large body. Googling and reading the manual pointed me to client_max_body_size. I set it to 200m in nginx.conf as well as in the vhost conf and restarted nginx a couple of times, but I'm still getting the error message.
Did I overlook something? The backend is php-fpm (post_max_size and upload_max_filesize are set accordingly).
Following the nginx documentation, you can set client_max_body_size 20m (or any value you need) in the following contexts: http, server, location.
NGINX large uploads are successfully working on hosted WordPress sites, finally (as per suggestions from nembleton & rjha94)
I thought it might be helpful to add a little clarification to their suggestions. For starters, be certain you have included your increased upload directive in ALL THREE separate definition blocks (http, server & location). Each should have a separate line entry. The result will look something like this (where the ... reflects other lines in the definition block):
http {
...
client_max_body_size 200M;
}
(in my ISPconfig 3 setup, this block is in the /etc/nginx/nginx.conf file)
server {
...
client_max_body_size 200M;
}
location / {
...
client_max_body_size 200M;
}
(in my ISPconfig 3 setup, these blocks are in the /etc/nginx/conf.d/default.conf file)
Also, make certain that your server's php.ini file is consistent with these NGINX settings. In my case, I changed the setting in php.ini's File_Uploads section to read:
upload_max_filesize = 200M
Note: if you are managing an ISPconfig 3 setup (my setup is on CentOS 6.3, as per The Perfect Server), you will need to manage these entries in several separate files. If your configuration is similar to the one in the step-by-step setup, the NGINX conf files you need to modify are located here:
/etc/nginx/nginx.conf
/etc/nginx/conf.d/default.conf
My php.ini file was located here:
/etc/php.ini
I continued to overlook the http {} block in the nginx.conf file. Apparently, overlooking this had the effect of limiting uploads to the 1M default. After making the associated changes, you will also want to restart your NGINX and PHP FastCGI Process Manager (PHP-FPM) services. With the above configuration, I use the following commands:
/etc/init.d/nginx restart
/etc/init.d/php-fpm restart
As of March 2016, I ran into this issue trying to POST JSON over https (from Python requests, not that it matters).
The trick is to put client_max_body_size 200M; in these places:
1. The http {} block
Typically in /etc/nginx/nginx.conf.
2. The server {} block in your vhost
For Debian/Ubuntu users who installed via apt-get (and other distro package managers which install nginx with vhosts by default), that's /etc/nginx/sites-available/mysite.com; for those who do not have vhosts, it's probably your nginx.conf or a file in the same directory as it.
3. The location / {} block in the same place as 2
You can be more specific than /, but if it's not working at all, I'd recommend applying this to / and then, once it's working, becoming more specific.
Remember: if you have SSL, you will need to set the above for the SSL server and location too, wherever that may be (ideally the same as 2). I found that if your client tries to upload over http and you expect them to get 301'd to https, nginx will actually drop the connection before the redirect, because the file is too large for the http server, so it has to be set in both.
Recent comments suggest there is an issue with this on SSL with newer nginx versions, but I'm on 1.4.6 and everything is good :)
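To illustrate the point about duplicating the directive on the SSL side, a minimal sketch (server names and cert paths are placeholders):
server {
    listen 80;
    server_name mysite.com;
    client_max_body_size 200M;    # without this, large uploads die before the 301
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name mysite.com;
    ssl_certificate /etc/ssl/mysite.com.crt;        # placeholder
    ssl_certificate_key /etc/ssl/mysite.com.key;    # placeholder
    client_max_body_size 200M;
    ...
}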
You need to apply the following changes:
Update php.ini (find the right ini file via phpinfo();) and increase post_max_size and upload_max_filesize to the size you want:
sed -i "s/post_max_size =.*/post_max_size = 200M/g" /etc/php5/fpm/php.ini
sed -i "s/upload_max_filesize =.*/upload_max_filesize = 200M/g" /etc/php5/fpm/php.ini```
Update NGINX settings for your website and add a client_max_body_size value in your location, http, or server context.
location / {
client_max_body_size 200m;
...
}
Restart NginX and PHP-FPM:
service nginx restart
service php5-fpm restart
NOTE: Sometimes (in my case, almost every time) you need to kill the php-fpm processes if the service command didn't refresh them properly. To do that, you can get the list of processes (ps -elf | grep php-fpm) and kill them one by one (kill -9 12345), or use the following command to do it for you:
ps -elf | grep php-fpm | grep -v grep | awk '{ print $4 }' | xargs kill -9
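If pkill is available on your system, a shorter equivalent of that pipeline is (php5-fpm is the service name from this setup; adjust for your PHP version):
pkill -9 php-fpm
service php5-fpm start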
Please check whether you are setting the client_max_body_size directive inside the http {} block and not only inside the location {} block. I set it inside the http {} block and it works.
Someone correct me if this is bad, but I like to lock everything down as much as possible, and if you've only got one target for uploads (as is usually the case), then just target your changes at that one file. This works for me on the Ubuntu nginx-extras mainline 1.7+ package:
location = /upload.php {
client_max_body_size 102M;
fastcgi_param PHP_VALUE "upload_max_filesize=102M \n post_max_size=102M";
(...)
}
I had a similar problem recently and found that client_max_body_size 0; can solve such an issue. This sets client_max_body_size to no limit. But the best practice is to improve your code, so there is no need to increase this limit.
I met the same problem, but found it had nothing to do with nginx. I am using Node.js as the backend server with nginx as a reverse proxy, and the 413 code was triggered by the Node server. Node uses koa to parse the body, and koa limits the urlencoded length:
formLimit: limit of the urlencoded body. If the body ends up being larger than this limit, a 413 error code is returned. Default is 56kb.
Setting formLimit to a bigger value can solve this problem.
Assuming you have already set client_max_body_size and the various PHP settings (upload_max_filesize / post_max_size, etc.) from the other answers, and then restarted or reloaded NGINX and PHP without any result, run this:
nginx -T
This will give you any unresolved errors in your NGINX configs. In my case, I struggled with the 413 error for a whole day before I realized there were some other unresolved SSL errors in the NGINX config (wrong paths for the certs) that needed to be corrected. Once I fixed the unresolved issues reported by nginx -T and reloaded NGINX, EUREKA!! That fixed it.
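Since nginx -T prints the fully merged configuration, it is also a quick way to confirm which client_max_body_size values are actually in effect:
nginx -T | grep client_max_body_size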
I'm setting up a dev server to play with that mirrors our outdated live one, I used The Perfect Server - Ubuntu 14.04 (nginx, BIND, MySQL, PHP, Postfix, Dovecot and ISPConfig 3)
After experiencing the same issue, I came across this post and nothing was working. I changed the value in every recommended file (nginx.conf, ispconfig.vhost, /sites-available/default, etc.)
Finally, changing client_max_body_size in my /etc/nginx/sites-available/apps.vhost and restarting nginx is what did the trick. Hopefully it helps someone else.
In case you are using Kubernetes with the NGINX ingress controller, add the following annotations to your Ingress (proxy-body-size is the annotation that maps to nginx's client_max_body_size):
annotations:
  nginx.ingress.kubernetes.io/proxy-body-size: "5m"
  nginx.ingress.kubernetes.io/client-body-buffer-size: "8k"
  nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
Confirm the changes were applied:
kubectl -n <namespace> describe ingress <ingress-name>
References:
Client Body Buffer Size
Custom max body size
Had the same issue: the client_max_body_size directive was being ignored.
My silly error was that I had put a file inside /etc/nginx/conf.d which did not end in .conf. Nginx does not load such files by default.
If you've tried the above options without success, and you're using IIS (iisnode) to host your node app, putting this code in web.config resolved the problem for me:
Here is the reference: https://www.inflectra.com/support/knowledgebase/kb306.aspx
Also, you can change the allowed length; as set here it is 2 GB (2147483648 bytes). Modify it to your needs.
<security>
    <requestFiltering>
        <requestLimits maxAllowedContentLength="2147483648" />
    </requestFiltering>
</security>
The following config worked for me. Notice I only set client_max_body_size 50M; once, contrary to what others are saying...
File: /etc/nginx/conf.d/sites.conf
server {
listen 80 default_server;
server_name portal.myserver.com;
return 301 https://$host$request_uri;
}
server {
resolver 127.0.0.11 valid=30s;
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /secret/portal.myserver.com.crt;
ssl_certificate_key /secret/portal.myserver.com.pem;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server_name portal.myserver.com;
client_max_body_size 50M;
location /fileserver/ {
set $upstream http://fileserver:6976;
proxy_pass $upstream;
}
}
If you are using the Windows version of nginx, you can try killing all nginx processes and restarting to see if that helps.
I encountered the same issue in my environment and resolved it with this solution.
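On Windows that usually boils down to something like the following, run from the directory containing nginx.exe (assuming a standard zip install):
taskkill /F /IM nginx.exe
start nginx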