Certbot (letsencrypt) Could not open file sites-enabled/default - nginx

I'm using an Amazon EC2 instance with two vhosts; both have valid certificates.
Now I want to add a third virtual host. I added the nginx config file, but when I run the following command, certbot looks for the "default" nginx config file, while in my case it's named "web":
certbot --nginx -d mysite.be www.mysite.be
Can I add a command so that certbot looks in my web file for the valid config instead of the default one?

You could create a symlink so the config is picked up from sites-enabled:
ln -s /etc/nginx/sites-available/your_file_name /etc/nginx/sites-enabled
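If the symlink pattern is unclear, here is a sandboxed sketch of it (the sites-available/sites-enabled names mirror the real /etc/nginx layout; substitute your real paths and the "web" file name from the question on an actual server):

```shell
# Demonstrate the sites-available -> sites-enabled symlink pattern in a
# scratch directory; on a real server these would be /etc/nginx/... paths.
tmp=$(mktemp -d)
mkdir -p "$tmp/sites-available" "$tmp/sites-enabled"
printf 'server { listen 80; }\n' > "$tmp/sites-available/web"
ln -s "$tmp/sites-available/web" "$tmp/sites-enabled/web"
# The enabled entry is the same file, so nginx (and certbot) see one config:
cat "$tmp/sites-enabled/web"
```

After linking the real file, run nginx -t and reload nginx before invoking certbot --nginx again.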

Related

certbot SSL certificate stops working on nginx configuration update

I have a Django application set up with CI/CD via Bitbucket on AWS EC2 using AWS CodeDeploy.
In the AWS CodeDeploy hooks, under AfterInstall:
hooks:
  AfterInstall:
    - location: scripts/ngnix.sh
      timeout: 6000
      runas: ubuntu
and the nginx.sh script is
#!/usr/bin/env bash
mkdir -p /etc/nginx/sites-enabled
mkdir -p /etc/nginx/sites-available
sudo mkdir -p /etc/nginx/log/
sudo unlink /etc/nginx/sites-enabled/*
sudo cp /path_to_app/configs/nginx.conf /etc/nginx/sites-available/app-host.conf
sudo ln -s /etc/nginx/sites-available/app-host.conf /etc/nginx/sites-enabled/app-host.conf
sudo /etc/init.d/nginx stop
sudo /etc/init.d/nginx start
sudo /etc/init.d/nginx status
But every time this script is run via the CI/CD pipeline, SSL stops working and the website becomes inaccessible over https.
To re-enable SSL, I have to manually run
sudo certbot --nginx
and re-configure the SSL certificate.
What could be causing SSL to stop working, and how can I automate the fix?
Certbot procures SSL certificates from Let's Encrypt and keeps those certificates on your machine. You can run sudo certbot certificates to see the certificate paths.
Found the following certs:
  Certificate Name: example.com
    Domains: example.com, www.example.com
    Expiry Date: 2017-02-19 19:53:00+00:00 (VALID: 30 days)
    Certificate Path: /etc/letsencrypt/live/example.com/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/example.com/privkey.pem
You need to store the files located at Certificate Path and Private Key Path in a persisted volume so they don't get wiped out every time you deploy your app. In your case I think these certificate files are getting wiped out, and that is the reason you have to run sudo certbot --nginx to procure a new certificate.
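One way to keep HTTPS working across deploys is to reference the Let's Encrypt paths directly in the nginx config that the deploy script copies, so certbot does not have to rewrite the config after every deployment. A sketch, assuming a certificate was already issued for example.com, /etc/letsencrypt persists across deploys, and the Django app listens on a local port (the upstream address is an assumption):

```nginx
# Hypothetical server block for the deployed app-host.conf; the
# ssl_certificate paths are the standard certbot locations shown above.
server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;  # assumed Django upstream
    }
}
```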

List all redirects in Nginx

I have an Nginx server that has some 50 redirect config files it pulls from.
Is there any way to pull this data as a single list of the server names being listened on once Nginx is running? Or is my best option to manually compile the data?
I have SSH'd in but can't see anywhere obvious that this data could be. Is there a command I could use?
Add include /etc/nginx/sites-enabled/*; to your nginx.conf (or whichever path your sites are located in).
After that, check your configs with
nginx -t
and reload with
service nginx reload
If you meant that you want to see the complete config in one go, you can use the command below:
nginx -T
This will tell you if there are any errors in the config and, if there are none, will also print the whole config.
Edit-1: 5th Jul 2018
There is nothing like apachectl -S in nginx. The only other thing you can try is to filter the complete config:
nginx -T | grep server_name
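To turn that filter into an actual list of unique server names, you can post-process the output. A sketch (the here-doc stands in for real `nginx -T` output so the pipeline is self-contained):

```shell
# Sample of what `nginx -T` prints; on a real server, replace nginx_dump
# with `nginx -T 2>/dev/null`.
nginx_dump() {
cat <<'EOF'
server {
    server_name example.com www.example.com;
}
server {
    server_name blog.example.com;
}
EOF
}

# Extract server_name lines, strip the directive and the trailing ';',
# split multi-name lines into one name per line, and de-duplicate.
nginx_dump \
  | grep -E '^[[:space:]]*server_name' \
  | sed -e 's/^[[:space:]]*server_name[[:space:]]*//' -e 's/;[[:space:]]*$//' \
  | tr ' ' '\n' | sort -u
# prints blog.example.com, example.com, www.example.com (one per line)
```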

nginx: why multiple conf files?

There are multiple nginx conf files in a single installation. Here is what I found:
/opt/nginx/conf/nginx.conf
/etc/nginx/nginx.conf
/etc/nginx/sites-available/default
more in /etc/nginx/conf.d
more in /etc/nginx/sites-available
What's the use of these multiple conf files? What happens if they conflict? Which one is the master copy?
Start with /etc/nginx/nginx.conf; all of the other files are included from it. See this document for details.
Use nginx -T to see the complete configuration as nginx sees it.
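The include chain typically looks like this (a sketch of the stock Debian/Ubuntu layout; exact paths vary by distribution and build):

```nginx
# /etc/nginx/nginx.conf is the master copy; it pulls in the others:
http {
    include /etc/nginx/conf.d/*.conf;    # drop-in snippets
    include /etc/nginx/sites-enabled/*;  # symlinks into sites-available
}
```

There is no conflict resolution to worry about: many directives may appear only once at a given level, and nginx -t flags duplicates as errors rather than silently picking a winner.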

docker custom nginx container failed to start

I am trying to build an nginx image from scratch (instead of using the official nginx image):
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN rm -v /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
COPY ./files/ /var/www/html/
CMD service nginx start
And this is my nginx.conf file in the current directory:
server {
    root /var/www/html
    location / {
        index.html
    }
}
And my dummy index.html file under ./files folder
<p1>hello world</p1>
I run this command
docker build -t hello-world .
And
docker run -p 80:80 hello-world
But I got error saying
* Starting nginx nginx
...fail!
What may be the issue?
Don't use "service xyz start"
To run a server inside a container, don't use the service command. That is a script which will run the requested server in the background, and then exit. When the script exits, the container will stop (because that script was the primary process).
Instead, directly run the command that the service script would have started for you. Unless it exits or crashes, the container should remain running.
CMD ["/usr/sbin/nginx"]
nginx.conf is missing the events section
This is required. Something like:
events {
    worker_connections 1024;
}
The server directive is not a top-level element
You have server { } at the top level of the nginx.conf, but it has to be inside a protocol definition such as http { } to be valid.
http {
    server {
        ...
nginx directives end with a semicolon
These are missing at the end of the root statement and your index.html line.
Missing the "index" directive
To define the index file, use index, not just the filename by itself.
index index.html;
There is no HTML element "p1"
I assume you meant to use <p> here.
<p>hello world</p>
Final result
Dockerfile:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN rm -v /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
COPY ./files/ /var/www/html/
CMD ["/usr/sbin/nginx"]
nginx.conf:
http {
    server {
        root /var/www/html;
        location / {
            index index.html;
        }
    }
}
events {
    worker_connections 1024;
}
daemon off;
You can directly use the official nginx image from Docker Hub; just start your Dockerfile with this line: FROM nginx
Here is an example Dockerfile that you can use:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY static-html-directory /usr/share/nginx/html
EXPOSE 80
As you can see, there is no need to use a CMD to run your nginx server; the base image already defines one.

Nginx serve static file and got 403 forbidden

Just want to help somebody out. Yes, you just want to serve a static file using nginx, and you got everything right in nginx.conf:
location /static {
    autoindex on;
    #root /root/downloads/boxes/;
    alias /root/downloads/boxes/;
}
But in the end you failed: you got "403 Forbidden" in the browser...
----------------------------------------The Answer Below:----------------------------------------
The Solution is very Simple:
Way 1: Run nginx as the user who owns '/root/downloads/boxes/'
In nginx.conf :
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
Yes, in the first line "#user nobody;", just delete the "#" and change "nobody" to your own username on Linux/OS X, e.g. change it to "root" for testing. Then restart nginx.
Attention: you'd better not run nginx as root! That is only for testing; in production it is a security risk.
For more reference , see nginx (engine X) – What a Pain in the BUM! [13: Permission denied]
Way 2 : Change '/root/downloads/boxes/' owner to 'www-data' or 'nobody'
In Terminal:
ps aux | grep nginx
Get the username nginx is running as. It should be 'www-data' or 'nobody', depending on the version of nginx. Then run in the Terminal (using 'www-data' as an example):
chown -R www-data:www-data /root/downloads/boxes/
------------------------------One More Important Thing Is:------------------------------
The parent directories "/", "/root", and "/root/downloads" should give the execute (x) permission to 'www-data' or 'nobody', i.e.:
ls -al /root
chmod o+x /root
chmod o+x /root/downloads
For more reference, see Resolving "403 Forbidden" error and Nginx 403 forbidden for all files
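The effect of o+x on a directory can be seen in a sandbox (a minimal demonstration in a scratch directory; on the real server the directories in question are /, /root, and /root/downloads):

```shell
# Execute (x) on a directory means "may traverse into it". Adding o+x
# lets non-owner users (like nginx's worker user) reach files below it.
tmp=$(mktemp -d)
chmod 700 "$tmp"
stat -c '%A' "$tmp"    # drwx------  (others cannot traverse)
chmod o+x "$tmp"
stat -c '%A' "$tmp"    # drwx-----x  (others may now traverse)
```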
You should give nginx permissions to read the file. That means you should give the user that runs the nginx process permissions to read the file.
This user that runs the nginx process is configurable with the user directive in the nginx config, usually located somewhere on the top of nginx.conf:
user www-data;
http://wiki.nginx.org/CoreModule#user
The second argument you give to user is the group; if you don't specify it, the group defaults to the same name as the user, so in my example both the user and the group are www-data.
Now the files you want to serve with nginx should have the correct permissions. Nginx should have permissions to read the files. You can give the group www-data read permissions to a file like this:
chown :www-data my-file.html
http://linux.die.net/man/1/chown
With chown you can change the user and group owner of a file. In this command I only change the group; if you wanted to change the user too, you would specify the username before the colon, like chown www-data:www-data my-file.html. But setting the group permissions correctly should be enough for nginx to be able to read the file.
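As a self-contained sketch of that chown-group approach (using the current user's primary group as a stand-in for www-data, which may not exist on every machine):

```shell
# Change only the group owner (note the leading colon) and give the
# group read permission; on a real server the group would be www-data.
f=$(mktemp)
grp=$(id -gn)          # stand-in group for this demo
chown ":$grp" "$f"
chmod g+r "$f"
stat -c '%G' "$f"      # prints the group name
```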
Since Nginx is handling the static files directly, it needs access to the appropriate directories. We need to give it execute permission on our home directory.
The safest way to do this is to add the Nginx user to our own user group. We can then add the execute permission for the group owner of our home directory, giving just enough access for Nginx to serve the files:
CentOS / Fedora
sudo usermod -a -G your_user nginx
chmod 710 /home/your_user
To set SELinux to globally permissive mode, run:
sudo setenforce 0
For more info, please visit
https://www.nginx.com/blog/using-nginx-plus-with-selinux/
Ubuntu / Debian
sudo usermod -a -G your_user www-data
sudo chown -R :www-data /path/to/your/static/folder
Following the accepted answer, run
sudo chown -R :www-data static_folder
to change the group owner of all files in that folder.
For me it was SELinux; I had to run the following (RHEL/CentOS on AWS):
sudo setsebool -P httpd_can_network_connect on
chcon -Rt httpd_sys_content_t /var/www/
I ran into this issue with a Django project. Changing user permissions and groups didn't work; however, moving the entire static folder from my project to /var/www did.
Copy your project static files to /var/www/static
# cp -r /project/static /var/www/static
Point nginx to proper directory
# sudo nano /etc/nginx/sites-available/default
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location /static/ {
        root /var/www;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
Test nginx config and reload
# sudo nginx -t
# sudo systemctl reload nginx
After digging into the very useful answers, I decided to collect everything related to permissions as a recipe: specifically, the simplest solution with maximal security (= minimal permissions).
Suppose we deploy the site as user admin, that is, admin owns the site dir and everything within it. We do not want to run nginx as this user (too many permissions; OK for testing, not for prod).
By default nginx runs its workers as the user nginx, that is, the config contains the line user nginx;
By default the user nginx is in the group with the same name: nginx.
We want to give minimal permissions to user nginx without changing file ownership. This seems to be the most secure of naive options.
In order to serve static files, the minimal required permissions in the folder hierarchy (see the group permissions) should look like this (output of the command namei -l /home/admin/WebProject/site/static/hmenu.css):
dr-xr-xr-x root root /
drwxr-xr-x root root home
drwxr-x--- admin nginx admin
drwx--x--- admin nginx WebProject
drwx--x--- admin nginx site
drwx--x--- admin nginx static
-rwxr----- admin nginx hmenu.css
Next, how do we get this beautiful picture? To change group ownership of the dirs, we first apply sudo chown :nginx /home/admin/WebProject/site/static and then repeat the command, stripping dirs from the right one by one.
To change permissions on the dirs, we apply sudo chmod g+x /home/admin/WebProject/site/static and again strip dirs.
Change group for the files in the /static dir: sudo chown -R :nginx /home/admin/WebProject/site/static
Finally, change permissions for the files in the /static dir: sudo chmod g+r /home/admin/WebProject/site/static/*
(Of course one can create a dedicated group and change the user name, but this would obscure the narration with unimportant details.)
Setting user root in nginx can be really dangerous, and having to set permissions on the whole directory hierarchy can be cumbersome (imagine the folder's full path being nested more than 10 subfolders deep).
What I'd do is mirror the folder you want to share under /usr/share/nginx/any_folder_name, with permissions for nginx's configured user (usually www-data). You can do that with bindfs.
In your case I would do:
sudo bindfs -u www-data -g www-data /root/downloads/boxes/ /usr/share/nginx/root_boxes
It will mount /root/downloads/boxes onto /usr/share/nginx/root_boxes with full permissions for user www-data. Now set that path in your location block config:
location /static {
    autoindex on;
    alias /usr/share/nginx/root_boxes/;
}
Try the accepted answer by @gitaarik, and if it still gives 403 Forbidden or 404 Not Found and your location target is /, read on.
I also experienced this issue, but none of the permission changes mentioned above solved my problem. It was solved by adding the root directive, because I was defining the root location (/) and had accidentally used the alias directive when I should have used the root directive.
The configuration is accepted, but gives 403 Forbidden, or 404 Not Found if auto-indexing is enabled for /:
location / {
    alias /my/path/;
    index index.html;
}
Correct definition:
location / {
    root /my/path/;
    index index.html;
}
You can just do what I did:
CentOS / Fedora
sudo usermod -a -G your_user_name nginx
chmod 710 /home/your_user_name
Ubuntu / Debian
sudo usermod -a -G your_user_name www-data
sudo chown -R :www-data /path/to/your/static_folder
And in the nginx file that serves your site, make sure the location for static looks like this:
location /static/ {
    root /path/to/your/static_folder;
}
I banged my head on this 403 problem for quite some time.
I'm using CentOS on DigitalOcean.
I thought the fix was just to set SELINUX=disabled in /etc/selinux/config, but I was wrong. Somehow, I screwed up my droplet.
This works for me!
sudo chown nginx:nginx /var/www/mydir
My nginx runs as the nginx user and nginx group, but adding the nginx group to the public folder did not work for me.
I checked the permissions as the nginx user:
su nginx -s /bin/bash
I found that I needed to add the group along the full path. My path starts at /root, so I needed to do the following:
chown -R :nginx /root
