I am in the process of deploying my MERN app to a Digital Ocean droplet (Ubuntu 20.04 server).
I followed the steps in the following tutorial to install Nginx. [I have completed all the previous steps]
https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04
When I visit http://134.122.112.22, I see the Nginx landing page, as expected.
However, after setting up server blocks, when I visit http://sundaray.io, I get the following error.
sundaray.io is my domain name.
When I run the command cat /var/log/nginx/error.log, I see the following:
How can I fix the error?
EDIT-1
SERVER BLOCK
In order to create the server block, I executed the following commands in order:
mkdir -p /var/www/sundaray.io/html
nano /etc/nginx/sites-available/sundaray.io
Then, I pasted in the following configuration block.
server {
    listen 80;
    listen [::]:80;

    root /var/www/sundaray.io/html;
    index index.html index.htm index.nginx-debian.html;

    server_name sundaray.io www.sundaray.io;

    location / {
        try_files $uri $uri/ =404;
    }
}
ERROR
Executing the command cat /var/log/nginx/error.log gave me the following result:
EDIT-2
Executing chown -R nginx:nginx /var/www/sundaray.io/html threw the following error:
EDIT-3
Executing ps -elf |grep nginx gave the following result:
EDIT-4
When I executed the command ls /var/www/sundaray.io/html, I got the following result:
1. chmod 777 is NEVER a good idea.
NGINX operates under a run user. To check which user that is, run:
ps -elf | grep nginx. Normally it will be nginx. Instead of 777 (open for everyone), use chmod -R 755 /path/to/folder and chown -R nginx:nginx /path/to/folder.
Agreed, 777 can be handy for troubleshooting, but don't leave it in place.
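For example, a sketch of that check-and-fix sequence on your droplet (note: on Ubuntu the run user is usually www-data rather than nginx, so substitute whatever the first command reports):
# Find out which user the nginx workers run as
ps -elf | grep nginx
# Hand the web root to that user (www-data assumed here; use the user reported above)
sudo chown -R www-data:www-data /var/www/sundaray.io/html
sudo chmod -R 755 /var/www/sundaray.io/html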
But back to your problem.
2. Directory listing is disabled.
The error is telling you nginx cannot list the directory contents, which is the default behavior. Make sure that this path exists:
root /var/www/sundaray.io/html;
AND that at least one of these index files is located there:
index index.html index.htm index.nginx-debian.html;
Without any of these files, NGINX can't serve a default index file for /. Put something in /var/www/sundaray.io/html, for example:
printf "<html>\n<body>\n<h1>Hello from NGINX</h1>\n</body>\n</html>" > /var/www/sundaray.io/html/index.html && chown nginx:nginx /var/www/sundaray.io/html/index.html
This should generate an index.html for you.
If you just want to test your server configuration without any files:
location / {
    return 200 "Hello on $host$uri\n";
}
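Whenever you change the server block, remember to validate and reload nginx so the change takes effect (standard commands on Ubuntu 20.04; adjust if you manage nginx differently):
sudo nginx -t                  # check the configuration for syntax errors
sudo systemctl reload nginx    # apply the change without dropping connections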
Related
I am trying to build a nginx image from scratch (instead of using the official nginx image)
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN rm -v /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
COPY ./files/ /var/www/html/
CMD service nginx start
And this is my nginx.conf file under current directory.
server {
root /var/www/html
location / {
index.html
}
}
And my dummy index.html file under ./files folder
<p1>hello world</p1>
I run this command
docker build -t hello-world .
And
docker run -p 80:80 hello-world
But I got an error saying:
* Starting nginx nginx
...fail!
What may be the issue?
Don't use "service xyz start"
To run a server inside a container, don't use the service command. That is a script which will run the requested server in the background, and then exit. When the script exits, the container will stop (because that script was the primary process).
Instead, directly run the command that the service script would have started for you. Unless it exits or crashes, the container should remain running.
CMD ["/usr/sbin/nginx"]
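Equivalently, you can skip appending daemon off; to nginx.conf and keep nginx in the foreground via the command itself, which is what the official nginx images do:
CMD ["nginx", "-g", "daemon off;"]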
nginx.conf is missing the events section
This is required. Something like:
events {
worker_connections 1024;
}
The server directive is not a top-level element
You have server { } at the top level of the nginx.conf, but it has to be inside a protocol definition such as http { } to be valid.
http {
server {
...
nginx directives end with a semicolon
These are missing at the end of the root statement and your index.html line.
Missing the "index" directive
To define the index file, use index, not just the filename by itself.
index index.html;
There is no HTML element "p1"
I assume you meant to use <p> here.
<p>hello world</p>
Final result
Dockerfile:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN rm -v /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
COPY ./files/ /var/www/html/
CMD ["/usr/sbin/nginx"]
nginx.conf:
http {
    server {
        root /var/www/html;

        location / {
            index index.html;
        }
    }
}

events {
    worker_connections 1024;
}

daemon off;
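With those fixes in place, the build and run commands from the question should work; a quick smoke test (assuming the container is published on port 80 as above) could be:
docker build -t hello-world .
docker run -p 80:80 hello-world
# In another terminal:
curl http://localhost/        # should return: <p>hello world</p>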
You can use the official nginx image from Docker Hub directly; just start your Dockerfile with the line FROM nginx.
Here is an example Dockerfile you can use:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY static-html-directory /usr/share/nginx/html
EXPOSE 80
As you can see, there is no need to add a CMD to run your nginx server.
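If you only need to change the server block rather than the whole nginx.conf, an alternative is to drop your own file into /etc/nginx/conf.d/, which the official image already includes (default.conf here is your own file, named as you like):
FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf
COPY static-html-directory /usr/share/nginx/html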
I am trying to host the Swagger UI on a docker container using Nginx.
When I access my webpage via hostAddress.com, it returns the page as plain text, and inspecting it shows that none of the JavaScript or CSS files can be found, even though they appear to be present in the container (I have shelled into the container to check).
My dockerfile
FROM nginx
COPY src /usr/share/nginx/html
COPY config/nginx.conf /etc/nginx
EXPOSE 80
nginx.conf
events {
    worker_connections 4096;  ## Default: 1024
}

http {
    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;
        index index.html index.htm;
        include /etc/nginx/mime.types;

        location /swagger {
            try_files $uri /index.html;
        }

        # Static file caching. All static files with the following extensions will be cached for 1 day
        location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
            expires 1d;
        }
    }
}
Here is what you can do to achieve that.
As mentioned in the comment, you can download the artifacts from
https://github.com/ianneub/docker-swagger-ui/blob/master
Create a directory, say nginx
Copy Dockerfile and run.sh into that directory
Edit the Dockerfile to customize the location as shown below:
FROM nginx:1.9
ENV SWAGGER_UI_VERSION 2.1.2-M2
ENV URL **None**
RUN apt-get update \
&& apt-get install -y curl \
&& curl -L https://github.com/swagger-api/swagger-ui/archive/v${SWAGGER_UI_VERSION}.tar.gz | tar -zxv -C /tmp \
&& mkdir /usr/share/nginx/html/swagger \
&& cp -R /tmp/swagger-ui-${SWAGGER_UI_VERSION}/dist/* /usr/share/nginx/html/swagger \
&& rm -rf /tmp/*
COPY run.sh /run.sh
CMD ["/run.sh"]
If you compare the above with the original Dockerfile, lines 9 and 10 were changed to include the additional path, i.e., swagger. Of course, you may change it as needed.
Next, run the following docker commands to build and run
Build Image:
docker build -t myswagger .
Run it
docker run -it --rm -p 3000:80 --name testmyswagger -e "URL=http://petstore.swagger.io/v2/swagger.json" myswagger
Now you should be able to access your swagger using http://localhost:3000/swagger/index.html
I have Nginx setup and displaying the test page properly. If I try to change the root path, I get a 403 Forbidden error, even though all permissions are identical. Additionally, the nginx user exists.
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    index index.html index.htm;

    server {
        listen 80;
        server_name localhost;
        root /var/www/html; #changed from the default /usr/share/nginx/html
    }
}
namei -om /usr/share/nginx/html/index.html
f: /usr/share/nginx/html/index.html
dr-xr-xr-x root root /
drwxr-xr-x root root usr
drwxr-xr-x root root share
drwxr-xr-x root root nginx
drwxr-xr-x root root html
-rw-r--r-- root root index.html
namei -om /var/www/html/index.html
f: /var/www/html/index.html
dr-xr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root root www
drwxr-xr-x root root html
-rw-r--r-- root root index.html
error log
2014/03/23 12:45:08 [error] 5490#0: *13 open() "/var/www/html/index.html" failed (13: Permission denied), client: XXX.XX.XXX.XXX, server: localhost, request: "GET /index.html HTTP/1.1", host: "ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com"
I experienced the same problem and it was due to SELinux.
To check if SELinux is running:
# getenforce
To disable SELinux until next reboot:
# setenforce Permissive
Restart Nginx and see if the problem persists. If you would like to permanently alter the setting, you can edit /etc/sysconfig/selinux.
If SELinux is your problem, you can run the following to allow nginx to serve your www directory (make sure you turn SELinux back on before testing this, i.e., # setenforce Enforcing):
# chcon -Rt httpd_sys_content_t /path/to/www
If you're still having issues, take a look at the boolean flags in getsebool -a; in particular, you may need to turn on httpd_can_network_connect for network access:
# setsebool -P httpd_can_network_connect on
For me it was enough to allow http to serve my www directory.
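If you are not sure whether SELinux is really the culprit, a quick way to look for recent denials (assuming auditd is running, as it usually is on RHEL/CentOS/Fedora) is:
# Show recent SELinux (AVC) denials; look for entries mentioning nginx and your www path
sudo ausearch -m avc -ts recent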
First of all, you have to run the following command to allow nginx to access the filesystem:
sudo setsebool -P httpd_read_user_content 1
You can check the SELinux context of the files or directories with the following command:
ls -Z
If it is still not accessible, you can try changing the SELinux context of the files and folders with the following command:
chcon -Rt httpd_sys_content_t /path/to/www
However, the above command does not apply to files under FUSE or NFS filesystems.
To enable serving files from FUSE mounts, you can use:
setsebool httpd_use_fusefs 1
To enable serving files from NFS mounts, you can use:
setsebool httpd_use_nfs 1
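You can verify the current values of these booleans before and after changing them, for example:
# Show the current values of the relevant SELinux booleans
getsebool httpd_read_user_content httpd_use_fusefs httpd_use_nfs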
I ran into the same problem. If you're using Fedora/RedHat/CentOS, this might help you:
For SELinux, run: setsebool -P httpd_read_user_content 1
Hope this helps.
This is an addition to Prowla's answer, but I don't have enough reputation to comment:
If /path/to/www is inside a user's home directory, you should try:
setsebool -P httpd_enable_homedirs=1
This solved my problem
Source: http://forums.fedoraforum.org/archive/index.php/t-250779.html
There are 2 possible reasons for denied access:
Access is denied by DAC. Double check user, group and file permissions. Make sure the nginx process, when running as the user specified in its config file, can access the new html root path.
Access is denied by MAC. The most widely used MAC system is SELinux. To check whether it caused the problem, you can stop the nginx process and run this command:
setenforce Permissive
Then start nginx again to see if access is granted.
Alternatively, you can check the file context:
setenforce Enforcing
ls -Zd /usr/share/nginx/html /var/www/html
If the two contexts differ, you may need to change the context for the new html root path:
chcon -R -t httpd_sys_content_t /var/www/html
Restart nginx and see if it works fine. If so, you can make the change permanent:
semanage fcontext -a -t httpd_sys_content_t '/var/www/html(/.*)?'
restorecon -Rv /var/www/html
Some of these commands need to be run as root.
Well, it seems logical: all the files are owned by root. Try changing the owner to the nginx user; I just wanted to make sure it's not a directory-listing permission issue first.
sudo chown -R nginx:nginx /var/www/html
I ran into this problem when I added a new user with the folder /home/new_user as a new virtual host. Making sure all of these folders (/home, /home/new_user, /home/new_user/xxx...) are 755 resolved my problem. In the end, the /var/log/nginx/error.log file described my problem accurately.
Remember that you need to allow other users to traverse the entire path. Also remember that Dropbox sets 700 on its root directory, so chmod 755 ~/Dropbox solved my problem.
Folks using the /home/{user} directory to serve their website need to set chmod 755 on their /home/{user} directory to make this work.
Also, if SELinux is enabled on the server, please use the commands below:
sudo setsebool -P httpd_can_network_connect on
chcon -Rt httpd_sys_content_t /path/to/www
I was using:
sudo service nginx start
If I use:
sudo nginx
...everything works fine. Can anyone explain the difference between these two?
I ran into the same problem:
Checked nginx.conf to verify the user
Permissions were set properly
Made sure the "x" permission was set for the entire path
Did a restart from the command line (I'd been using Webmin all this time) and noticed this error:
aed#aed:/var/www/test.local$ sudo service nginx restart
* Restarting nginx nginx
nginx: [warn] conflicting server name "test.local" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "test.local" on 0.0.0.0:80, ignored
Apparently there was a duplicate definition and thus my attempt to access "test.local" failed.
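If you see the same warning, a quick way to locate the duplicate definition (assuming the standard Debian/Ubuntu layout with sites-enabled and conf.d) is:
grep -R "server_name" /etc/nginx/sites-enabled/ /etc/nginx/conf.d/ 2>/dev/null
sudo nginx -t    # re-check the configuration after removing the duplicate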
Works fine for me on nginx:
semanage permissive -a httpd_t
Another possible reason (NOT IN THIS CASE) is that index.html is a symlink pointing to another directory.
ls -lrt /usr/share/nginx/html/
rsync-ing the real files into that directory will easily solve the problem,
or you can adjust nginx's symlink checking in nginx.conf:
http {
disable_symlinks off;
}
I met another issue (don't know why yet, but it might be useful for someone else).
I first put the folder under /home/my_name/www/site_name, changed the owner, and changed the permissions.
Then I checked the SELinux stuff.
None of the above solved my problem.
Finally, I moved the folder to /srv/www/site_name, and all is good now.
Modify the nginx.conf file, change the user name to your account name, and restart nginx. It works!
This solved the same problem for me:
Restart Nginx and try again. If it fails, check the logs again.
Is there a way to have the master process log to STDOUT STDERR instead of to a file?
It seems that you can only pass a filepath to the access_log directive:
access_log /var/log/nginx/access.log
And the same goes for error_log:
error_log /var/log/nginx/error.log
I understand that this simply may not be a feature of nginx; I'd be interested in a concise solution that uses tail, for example. It is preferable, though, that it comes from the master process, because I am running nginx in the foreground.
Edit: it seems nginx now supports error_log stderr; as mentioned in Anon's answer.
You can send the logs to /dev/stdout. In nginx.conf:
daemon off;
error_log /dev/stdout info;
http {
access_log /dev/stdout;
...
}
edit: You may need to run ln -sf /proc/self/fd /dev/ if running certain docker containers, then use /dev/fd/1 or /dev/fd/2
If the question is docker related... the official nginx docker images do this by making softlinks towards stdout/stderr
RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
REF: https://microbadger.com/images/nginx
Syntax: error_log file | stderr | syslog:server=address[,parameter=value] | memory:size [debug | info | notice | warn | error | crit | alert | emerg];
Default:
error_log logs/error.log error;
Context: main, http, stream, server, location
http://nginx.org/en/docs/ngx_core_module.html#error_log
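For instance, a minimal line you could put in the main context of nginx.conf, following the syntax above:
error_log stderr info;   # send the error log to standard error at "info" level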
Don't use: /dev/stderr
This will break your setup if you're going to use systemd-nspawn.
For debugging purposes:
/usr/sbin/nginx -g "daemon off;error_log /dev/stdout debug;"
For normal use:
/usr/sbin/nginx -g "daemon off;error_log /dev/stdout info;"
Also required, under the server block in the config file:
access_log /dev/stdout;
When running Nginx in a Docker container, be aware that a volume mounted over the log dir defeats the purpose of creating a softlink between the log files and stdout/stderr in your Dockerfile, as described in Boeboe's answer.
In that case you can either create the softlink in your entrypoint (executed after volumes are mounted) or not use a volume at all (e.g. when logs are already collected by a central logging system).
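A minimal sketch of such an entrypoint script (the file name docker-entrypoint.sh and the foreground command are assumptions; wire it up with ENTRYPOINT in your Dockerfile and adapt to your image):
#!/bin/sh
# Recreate the symlinks after any volumes have been mounted
ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log
# Hand control to nginx in the foreground
exec nginx -g 'daemon off;'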
In the PHP-FPM docker image, I've seen this approach:
# cat /usr/local/etc/php-fpm.d/docker.conf
[global]
error_log = /proc/self/fd/2
[www]
; if we send this to /proc/self/fd/1, it never appears
access.log = /proc/self/fd/2
In the official Nginx docker image this is already in place.
For more information, you can see the Dockerfile for the 1-alpine version, which soft-links the access log and error log to stdout and stderr respectively. Other docker tags have it as well.
ref: https://github.com/nginxinc/docker-nginx/blob/1.23.1/stable/alpine/Dockerfile#L118-L119
Just want to help somebody out. Yes, you just want to serve static files using nginx, and you got everything right in nginx.conf:
location /static {
autoindex on;
#root /root/downloads/boxes/;
alias /root/downloads/boxes/;
}
But, in the end, it failed: you got "403 Forbidden" in the browser...
----------------------------------------The Answer Below:----------------------------------------
The Solution is very Simple:
Way 1: Run nginx as the owner of '/root/downloads/boxes/'
In nginx.conf :
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
YES, in the first line "#user nobody;", just delete the "#" and change "nobody" to your own username on Linux/OS X, e.g. change it to "root" for a test. Then restart nginx.
Attention: you'd better not run nginx as root! That is only for testing; it is dangerous if an attacker gets in.
For more reference, see nginx (engine X) – What a Pain in the BUM! [13: Permission denied]
Way 2: Change the owner of '/root/downloads/boxes/' to 'www-data' or 'nobody'
In Terminal:
ps aux | grep nginx
Get the username nginx is running as. It will be 'www-data' or 'nobody', depending on how nginx was packaged. Then run in the Terminal (using 'www-data' as an example):
chown -R www-data:www-data /root/downloads/boxes/
------------------------------One More Important Thing Is:------------------------------
The parent directories "/", "/root" and "/root/downloads" must grant execute (x) permission to 'www-data' or 'nobody', i.e.:
ls -al /root
chmod o+x /root
chmod o+x /root/downloads
For more reference , see Resolving "403 Forbidden" error and Nginx 403 forbidden for all files
You should give nginx permissions to read the file. That means you should give the user that runs the nginx process permissions to read the file.
This user that runs the nginx process is configurable with the user directive in the nginx config, usually located somewhere on the top of nginx.conf:
user www-data;
http://wiki.nginx.org/CoreModule#user
The second argument you give to user is the group, but if you don't specify it, it uses the same one as the user, so in my example the user and the group both are www-data.
Now the files you want to serve with nginx should have the correct permissions. Nginx should have permissions to read the files. You can give the group www-data read permissions to a file like this:
chown :www-data my-file.html
http://linux.die.net/man/1/chown
With chown you can change the user and group owner of a file. In this command I only change the group, if you would change the user too you would specify the username before the colon, like chown www-data:www-data my-file.html. But setting the group permissions correct should be enough for nginx to be able to read the file.
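To double-check that the worker user can actually read the file after the ownership change, you can test it directly (www-data and my-file.html here are just the examples used above):
# Try to read the file as the nginx worker user
sudo -u www-data cat my-file.html > /dev/null && echo "readable"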
Since Nginx is serving the static files directly, it needs access to the appropriate directories. We need to give it execute permission on our home directory.
The safest way to do this is to add the Nginx user to our own user group. We can then add the execute permission for the group owner of our home directory, giving Nginx just enough access to serve the files:
CentOS / Fedora
sudo usermod -a -G your_user nginx
chmod 710 /home/your_user
Set SELinux to globally permissive mode, run:
sudo setenforce 0
for more info, please visit
https://www.nginx.com/blog/using-nginx-plus-with-selinux/
Ubuntu / Debian
sudo usermod -a -G your_user www-data
sudo chown -R :www-data /path/to/your/static/folder
Following the accepted answer, run
sudo chown -R :www-data static_folder
to change the group owner of all files in that folder.
For me it was SELinux; I had to run the following (RHEL/CentOS on AWS):
sudo setsebool -P httpd_can_network_connect on
chcon -Rt httpd_sys_content_t /var/www/
I ran into this issue with a Django project. Changing user permissions and groups didn't work. However, moving the entire static folder from my project to /var/www did.
Copy your project static files to /var/www/static
# cp -r /project/static /var/www/static
Point nginx to proper directory
# sudo nano /etc/nginx/sites-available/default
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location /static/ {
        root /var/www;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
Test nginx config and reload
# sudo nginx -t
# sudo systemctl reload nginx
After digging into the very useful answers, I decided to collect everything related to permissions into a recipe: specifically, the simplest solution with maximal security (= minimal permissions).
Suppose we deploy the site as user admin, that is, she owns the site dir and everything within it. We do not want to run nginx as this user (too many permissions); that is OK for testing, not for prod.
By default, Nginx runs its workers as the user nginx, that is, the config contains the line user nginx;
By default, the user nginx is in the group with the same name: nginx.
We want to give minimal permissions to the user nginx without changing file ownership. This seems to be the most secure of the naive options.
In order to serve static files, the minimal required permissions in the folder hierarchy (look at the group permissions) should be like this (check with the command namei -l /home/admin/WebProject/site/static/hmenu.css):
dr-xr-xr-x root root /
drwxr-xr-x root root home
drwxr-x--- admin nginx admin
drwx--x--- admin nginx WebProject
drwx--x--- admin nginx site
drwx--x--- admin nginx static
-rwxr----- admin nginx hmenu.css
Next, how to get this beautiful picture? (A consolidated sketch of all four steps follows below.) To change group ownership for dirs, we first apply sudo chown :nginx /home/admin/WebProject/site/static and then repeat the command, stripping dirs from the right one by one.
To change permissions for dirs, we apply sudo chmod g+x /home/admin/WebProject/site/static and again strip dirs.
Change group for the files in the /static dir: sudo chown -R :nginx /home/admin/WebProject/site/static
Finally, change permissions for the files in the /static dir: sudo chmod g+r /home/admin/WebProject/site/static/*
(Of course one can create a dedicated group and change the user name, but this would obscure the narration with unimportant details.)
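A minimal sketch consolidating the four steps above into one script (assuming the same admin/nginx names and the /home/admin/WebProject/site/static path; adjust to your layout):
#!/bin/sh
STATIC=/home/admin/WebProject/site/static
# Walk up from the static dir to /home/admin, giving group nginx ownership and traverse (x) permission
d="$STATIC"
while [ "$d" != "/home" ]; do
    sudo chown :nginx "$d"
    sudo chmod g+x "$d"
    d=$(dirname "$d")
done
# Group ownership and read permission for the files inside /static
sudo chown -R :nginx "$STATIC"
sudo chmod g+r "$STATIC"/*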
Setting user root in nginx can be really dangerous, and having to set permissions on the whole directory hierarchy can be cumbersome (imagine the folder's full path being nested more than 10 subfolders deep).
What I'd do is mirror the folder you want to share under /usr/share/nginx/any_folder_name, with permissions for nginx's configured user (usually www-data). You can do that with bindfs.
In your case I would do:
sudo bindfs -u www-data -g www-data /root/downloads/boxes/ /usr/share/nginx/root_boxes
It will mount /root/downloads/boxes onto /usr/share/nginx/root_boxes with all permissions for the user www-data. Now set that path in your location block config:
location /static {
autoindex on;
alias /usr/share/nginx/root_boxes/;
}
Try the accepted answer by gitaarik, and if it still gives 403 Forbidden or 404 Not Found while your location target is /, read on.
I also experienced this issue, but none of the permission changes mentioned above solved my problem. It was solved by adding the root directive because I was defining the root location (/) and accidentally used the alias directive when I should have used the root directive.
The configuration is accepted, but gives 403 Forbidden, or 404 Not Found if auto-indexing is enabled for /:
location / {
alias /my/path/;
index index.html;
}
Correct definition:
location / {
root /my/path/;
index index.html;
}
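The difference matters because root appends the full request URI to the path, while alias replaces the matched location prefix; a sketch with a hypothetical /static/ location to illustrate:
location /static/ {
    root /my/path;       # GET /static/a.css  ->  /my/path/static/a.css
}

location /static/ {
    alias /my/path/;     # GET /static/a.css  ->  /my/path/a.css
}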
You can just do what I did:
CentOS / Fedora
sudo usermod -a -G your_user_name nginx
chmod 710 /home/your_user_name
Ubuntu / Debian
sudo usermod -a -G your_user_name www-data
sudo chown -R :www-data /path/to/your/static_folder
And in your nginx file that serve your site make sure that your location for static is like this:
location /static/ {
root /path/to/your/static_folder;
}
I banged my head against this 403 problem for quite some time.
I'm using CentOS on DigitalOcean.
I thought the fix was just to set SELINUX=disabled in /etc/selinux/config, but I was wrong. Somehow, I messed up my droplet.
This works for me!
sudo chown nginx:nginx /var/www/mydir
My nginx runs as the nginx user and nginx group, but adding the nginx group to the public folder did not work for me.
I checked the permissions as the nginx user:
su nginx -s /bin/bash
I found that I needed to add the group along the full path. My path starts at /root, so I needed to do the following:
chown -R :nginx /root