nginx ubuntu user password

I have a Node.js backend hosted on AWS EC2 Ubuntu 20.04 instances.
When I SSH into my server, everything works as expected. Today I tried configuring nginx, so I created a website.com file inside sites-available.
website.com
server {
    listen 80;
    listen [::]:80;

    root /home/ubuntu/apps/yelp-app/client/build;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name api.website.com www.api.website.com;

    location / {
        try_files $uri /index.html;
    }

    location /api {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
After saving that file, I ran the following command:
sudo ln -s /etc/nginx/sites-available/website.com /etc/nginx/sites-enabled/
From the docs, in order to enable the new site I need to restart nginx using the following:
systemctl restart nginx
Unfortunately, it keeps asking for the ubuntu user's password, which I never set.
Can someone help me out?
When I run journalctl -xe -u nginx, this is what I get:
-- Subject: A start job for unit nginx.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit nginx.service has begun execution.
--
-- The job identifier is 23231.
Jan 11 12:44:45 ip-172-31-40-105 nginx[164236]: nginx: [emerg] a duplicate default server for 0.0.0.0:80 in >
Jan 11 12:44:45 ip-172-31-40-105 nginx[164236]: nginx: configuration file /etc/nginx/nginx.conf test failed
Jan 11 12:44:45 ip-172-31-40-105 systemd[1]: nginx.service: Control process exited, code=exited, status=1/FA>
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStartPre= process belonging to unit nginx.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Jan 11 12:44:45 ip-172-31-40-105 systemd[1]: nginx.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit nginx.service has entered the 'failed' state with result 'exit-code'.
Jan 11 12:44:45 ip-172-31-40-105 systemd[1]: Failed to start A high performance web server and a reverse pro>
-- Subject: A start job for unit nginx.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit nginx.service has finished with a failure.
--
-- The job identifier is 23231 and the job result is failed.

First become root, then restart it.
sudo su
systemctl restart nginx

The solution provided by Mehmet works, but if you'd rather not switch users, use the following:
sudo service nginx restart
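Note that with either command, the restart will still fail until the configuration error shown in the journal is fixed: "a duplicate default server for 0.0.0.0:80" means two enabled server blocks are both marked as the default for port 80 (or the same file is being included twice). Assuming the stock Debian/Ubuntu layout, a quick way to find the conflicting blocks is:
grep -R default_server /etc/nginx/sites-enabled/
Remove the default_server flag from (or disable) all but one of the matches, then verify before restarting:
sudo nginx -t
sudo systemctl restart nginx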

Related

Cannot restart Nginx after deleting letsencrypt certificate

I am getting connection refused while accessing the website.
While trying to resolve the issue, I tried to restart the nginx server and got the error message below.
systemd[1]: Starting A high performance web server and a reverse proxy server...
nginx[4400]: nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/mobilitytechnews.net-0001/fullchain.p>
nginx[4400]: nginx: configuration file /etc/nginx/nginx.conf test failed
systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
systemd[1]: nginx.service: Failed with result 'exit-code'.
systemd[1]: Failed to start A high performance web server and a reverse proxy server.
nginx.conf file:
#
# server {
#     listen localhost:143;
#     protocol imap;
#     proxy on;
# }
#}
nginx has already told you exactly what's wrong:
nginx[4400]: nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/mobilitytechnews.net-0001/fullchain.pem"
It means exactly what it says: somewhere in your configuration there is an ssl_certificate directive pointing to this file.
Also, I'm sure the nginx configuration you showed is incomplete, since nginx will never try to load a certificate by itself, without a corresponding directive in its configuration.
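A quick way to track down the offending directive (assuming the standard /etc/nginx layout) is to search the whole configuration tree for references to the deleted certificate:
sudo grep -R ssl_certificate /etc/nginx/
Fix or disable the server block that still points at the deleted letsencrypt path, then run sudo nginx -t until the configuration test passes; nginx should start again after that.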

Flask + Gunicorn + Nginx: Group www-data not installed

I am new to web services, especially when it comes to deployment options.
I made a Flask application web server, and now I would like to deploy it in production mode. I went with the Gunicorn + Nginx option and followed this tutorial on Medium.
I installed nginx with:
~ >>> sudo pacman -S nginx
~ >>> sudo systemctl start nginx
~ >>> sudo systemctl enable nginx
Everything worked well, but when I created my systemd service webserver.service, the Group=www-data line made the service exit with status=216/GROUP.
Here is the webserver.service file:
[Unit]
Description=Gunicorn instance to serve the test server webserver
After=network.target

[Service]
User=user
Group=www-data
WorkingDirectory=/home/user/webserver/
Environment="PATH=/home/user/webserver/.env/bin"
ExecStart=/home/user/webserver/.env/bin/gunicorn --workers 3 --bind unix:app.sock -m 007 wsgi:app

[Install]
WantedBy=multi-user.target
Here is the full log:
~ >>> sudo systemctl start webserver
~ >>> sudo systemctl enable webserver
~ >>> sudo systemctl status webserver
● webserver.service - Gunicorn instance to serve the test server webserver
Loaded: loaded (/etc/systemd/system/webserver.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sun 2020-07-05 14:50:02 CEST; 20min ago
Main PID: 5464 (code=exited, status=216/GROUP)
juil. 05 14:50:02 user systemd[1]: Started Gunicorn instance to serve the test server webserver.
juil. 05 14:50:02 user systemd[5464]: webserver.service: Failed to determine group credentials: No such process
juil. 05 14:50:02 user systemd[5464]: webserver.service: Failed at step GROUP spawning /home/user/webserver/.env/bin/gunicorn: No such process
juil. 05 14:50:02 user systemd[1]: webserver.service: Main process exited, code=exited, status=216/GROUP
juil. 05 14:50:02 user systemd[1]: webserver.service: Failed with result 'exit-code'.
In fact, when I list my groups, the www-data group required by Nginx is missing:
~ >>> groups
sys network power docker lp wheel user
So obviously the above unit file won't work with the www-data group.
What I tried
1. A different group
I tried changing the group option to Group=root, and it worked. I then finished the tutorial without any errors.
I thought that fixed my issue, but I couldn't access my server in my browser at http://www.my_domain_webserver.com, so I guess the www-data group is mandatory for Nginx and Gunicorn to work together.
My nginx location block:
server {
    listen 80;
    server_name my_domain_webserver.com www.my_domain_webserver.com;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/user/webserver/app.sock;
    }
}
2. Reloading Daemon
I also tried re-executing the daemon with systemctl daemon-reexec, but it didn't solve my issue.
My project tree is:
webserver
├── app.py
├── app.sock
└── wsgi.py
Why is the group www-data missing?
Do I need to add special nginx.conf files? I didn't modify any of them.
Thanks for your help!
You could try changing the owner of the folder to the www-data user:
sudo chown www-data /home/user/webserver
That worked for me...
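A side note on the root cause: the pacman commands in the question suggest Arch Linux, where nginx typically runs as the http user and no www-data group exists out of the box, which is exactly why systemd fails at the GROUP step. A minimal sketch of two possible fixes (the unit name is taken from the question):
# Option 1: create the missing system group, then restart the service
sudo groupadd --system www-data
sudo systemctl restart webserver
# Option 2: match the distribution's nginx user instead,
# i.e. set Group=http in webserver.service, then:
sudo systemctl daemon-reload
sudo systemctl restart webserver
Either way, the user nginx runs as must be able to reach the app.sock socket created by Gunicorn.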

Setting up a load balancer using nginx

I am learning how to set up a load balancer using nginx on AWS.
I set up a basic Ubuntu 18.04 server on AWS, and then did the following:
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install nginx -y
I then replaced /etc/nginx/nginx.conf with the following:
upstream backend {
    server xxx.24.20.11;
    server xxx.24.20.12;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
I then tried restarting the nginx server by doing:
sudo service nginx stop
sudo service nginx start
but I'm getting the error message:
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xe" for details.
So, I did
systemctl status nginx.service
And here's what I got:
nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2019-04-03 01:13:27 UTC; 23s ago
Docs: man:nginx(8)
Process: 1822 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
Process: 1748 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 1865 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Main PID: 1752 (code=exited, status=0/SUCCESS)
Apr 03 01:13:27 load-balancer.xxxx.com systemd[1]: Starting A high performance web server and a reverse proxy server...
Apr 03 01:13:27 load-balancer.xxx.com nginx[1865]: nginx: [emerg] "upstream" directive is not allowed here in /etc/nginx/nginx.conf:2
Apr 03 01:13:27 load-balancer.xxx.com nginx[1865]: nginx: configuration file /etc/nginx/nginx.conf test failed
Apr 03 01:13:27 load-balancer.xxx.com systemd[1]: nginx.service: Control process exited, code=exited status=1
Apr 03 01:13:27 load-balancer.xxx.com systemd[1]: nginx.service: Failed with result 'exit-code'.
Apr 03 01:13:27 load-balancer.xxx.com systemd[1]: Failed to start A high performance web server and a reverse proxy server.
I looked at two separate tutorials, and they both use the "upstream" directive. Any ideas?
Edit:
I returned nginx.conf to its original format:
sudo cp /etc/nginx/nginx.original /etc/nginx/nginx.conf
Then, I did the following:
sudo su
echo > /etc/nginx/sites-available/load-balancer.conf
I then added the following to /etc/nginx/sites-available/load-balancer.conf
http {
    upstream backend {
        server docker-one.xxxxxxx.com;
        server docker-two.xxxxxxx.com;
    }

    server {
        listen 80;
        server_name load-balancer.xxxxxxx.com;

        location / {
            proxy_pass http://backend;
        }
    }
}
load-balancer.xxxxxxx.com is the domain name I am using for testing, and docker-one and docker-two are the two domains that will be running the actual web app.
I then created a symlink:
ln -s /etc/nginx/sites-available/load-balancer.conf /etc/nginx/sites-enabled/
Then I rebooted the server. When it was back up, I did the following:
sudo service nginx stop
sudo service nginx start
I got an error message telling me the nginx service failed, so I did:
systemctl status nginx.service
Which gave me the following error:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2019-04-03 19:42:34 UTC; 19s ago
Docs: man:nginx(8)
Process: 1549 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Apr 03 19:42:34 load-balancer.xxxxxxx.com systemd[1]: Starting A high performance web server and a reverse proxy server...
Apr 03 19:42:34 load-balancer.xxxxxxx.com nginx[1549]: nginx: configuration file /etc/nginx/nginx.conf test failed
Apr 03 19:42:34 load-balancer.xxxxxxx.com systemd[1]: nginx.service: Control process exited, code=exited status=1
Apr 03 19:42:34 load-balancer.xxxxxxx.com systemd[1]: nginx.service: Failed with result 'exit-code'.
Apr 03 19:42:34 load-balancer.xxxxxxx.com systemd[1]: Failed to start A high performance web server and a reverse proxy server.
By overwriting the whole nginx.conf you deleted the http context. upstream is only allowed inside the http {} block, as per https://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream
Debian/Ubuntu uses /etc/nginx/sites-available/* for site definitions; that's where you need to create your own config file, one per vhost if needed, and then enable it by symlinking it into /etc/nginx/sites-enabled/*. Or, for simplicity, make your changes in the existing /etc/nginx/sites-available/default config file. Save the original until you get it working, and reload nginx after said changes. Restore the original /etc/nginx/nginx.conf too, of course. Note that files in sites-enabled are included from inside the http {} context of nginx.conf, so your load-balancer.conf must not wrap its contents in another http {} block - that nested block is why your second attempt also fails the config test.
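A minimal sketch of what the site file could look like once the http {} wrapper is dropped (host names carried over from the question):
# /etc/nginx/sites-available/load-balancer.conf
# No http {} wrapper: nginx.conf already includes this file inside http {}.
upstream backend {
    server docker-one.xxxxxxx.com;
    server docker-two.xxxxxxx.com;
}

server {
    listen 80;
    server_name load-balancer.xxxxxxx.com;

    location / {
        proxy_pass http://backend;
    }
}
After symlinking it into sites-enabled, sudo nginx -t should report the test as successful and sudo service nginx start should work.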

Certbot renew: nginx: [error] open() "/run/nginx.pid" failed (2: No such file or directory)

Certbot and nginx versions:
Certbot installed using the certbot.eff.org install guide.
Certbot version: 0.22.2
Nginx version: 1.10.3
Getting SSL certificates works fine:
certbot --nginx
But when renewing certbot certificates with
certbot renew --dry-run
nginx fails to start, causing:
nginx: [error] open() "/run/nginx.pid" failed (2: No such file or directory)
I have tried changing post-hook and pre-hook in /etc/letsencrypt/renewal/*com.conf/
commenting out installer=nginx
changing the authenticator to nginx and standalone
adding post and pre hooks in /etc/letsencrypt/renewal-hooks/pre/ and /etc/letsencrypt/renewal-hooks/post/ to stop and start the nginx service.
It seems nginx is either not starting properly or not stopping properly.
After renewal completes, nginx fails with (code=exited, status=1/FAILURE).
The nginx error log shows:
If you get an error while running certbot renew, try to execute:
sudo service nginx restart
Then test your nginx configuration file(s) until you see "nginx: configuration file /etc/nginx/nginx.conf test is successful":
sudo nginx -t
Pay attention to the paths to certificates and other referenced files,
and then reload the configuration:
sudo nginx -s reload
It's not recommended to modify configuration files in /etc/letsencrypt/, but creating (if it doesn't exist) and modifying the cli.ini file there works for me. You can specify post-hook in this file once and it will work for all your certificates; see my current file:
# /etc/letsencrypt/cli.ini
max-log-backups = 0
authenticator = webroot
webroot-path = /var/www/html
post-hook = service nginx reload
text = True
I hope this will help future readers. The source of the solution is here (however, the article is in Russian).
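As a possible refinement (my own assumption, not part of the source article): certbot runs hooks through a shell, so the post-hook line could guard the reload with a config test, keeping nginx on its last good configuration if a renewal ever renders a broken one:
# hypothetical variant of the post-hook line in /etc/letsencrypt/cli.ini
post-hook = nginx -t && service nginx reload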
I had the same issue on Ubuntu 16.04.
I just removed the post and pre hooks in /etc/letsencrypt/renewal/*.conf and changed the authenticator to nginx - two entries had standalone.
And now it is working fine.
Edit:
The recommended way to update the renewal config is to reissue the certificate using:
certbot -i nginx -d example.com -d www.example.com certonly
You can run this command before reloading nginx:
sudo nginx -c /etc/nginx/nginx.conf
or
sudo nginx -c /usr/local/etc/nginx/nginx.conf
Then you can start nginx normally:
sudo nginx -s reload
Good luck.
I had the same error...
When I installed certbot, I followed the instructions and put in a cronjob (5 3 15 * *):
certbot renew --pre-hook "service nginx stop" --post-hook "service nginx start"
This morning nginx was dead, and the log showed
open() "/run/nginx.pid" failed (2: No such file or directory)
I did not connect the two at first, but am I right in understanding that certbot triggers the nginx failure?
Find all nginx processes with ps -ef | grep nginx, then kill them:
sudo kill -9 xxx xxx xxx or sudo pkill nginx
sudo systemctl restart nginx
sudo nginx -t
I had this problem and followed a similar tack to those outlined here.
I had had certbot install a certificate, but in certonly --nginx mode; I supplied my own nginx server blocks. certbot worked, but an nginx failure cast doubt on the accuracy of my provisioning.
This certbot call "restarts" nginx with a modified server block configuration so it can answer the HTTP-01 challenges. I know this because when it fails, it logs "nginx restart failed:" just before the bind() failures I'm about to show. My nginx server was down when provisioning succeeded.
I couldn't get systemctl or service to start it, and systemctl status nginx would only ever show "failed".
Whilst I could get nginx up and serving with nginx -s reload, I wanted systemd to manage it for me.
No amount of systemctl {start|restart|stop|quit} nginx would work. The status remained failed and showed errors with bind():
Oct 07 10:04:13 HostXYZ systemd[1]: Starting A high performance web server and a reverse proxy server...
Oct 07 10:04:13 HostXYZ nginx[17096]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Unknown error)
Oct 07 10:04:13 HostXYZ nginx[17096]: nginx: [emerg] bind() to [::]:80 failed (98: Unknown error)
Oct 07 10:04:13 HostXYZ nginx[17096]: nginx: [emerg] bind() to [::]:443 failed (98: Unknown error)
Oct 07 10:04:13 HostXYZ nginx[17096]: nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Unknown error)
That would repeat in the journalctl output four or five times.
I checked the processes and saw:
:~$ ps aux | grep nginx
root 12960 0.0 0.6 77216 9816 ? Ss Oct06 0:00 nginx: master process nginx -c /etc/nginx/nginx.conf
www-data 16944 0.0 0.5 77360 8604 ? S 08:43 0:00 nginx: worker process
That process appeared to be occupying the ports needed by my systemd service. My systemd service doesn't use that -c /etc/nginx/nginx.conf. It uses:
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
nginx -s stop and nginx -s quit would not rid me of the rogue process. Instead they both gave the error the OP had:
:~$ sudo nginx -s stop
nginx: [error] open() "/run/nginx.pid" failed (2: No such file or directory)
Both my systemd service unit and /etc/nginx/nginx.conf gave /run/nginx.pid as the PIDFile/pid, but for some reason it wasn't being created.
What I needed to do:
sudo killall nginx
sudo systemctl start nginx
That knocked out the other nginx process (I think it came from nginx -s reload, but I couldn't shut it down with the corresponding command). It looked like this:
:~$ sudo killall nginx
:~$ ps aux | grep nginx
john 17140 0.0 0.1 4008 2004 pts/0 S+ 10:10 0:00 grep --color=auto nginx
:~$ sudo systemctl start nginx
:~$ sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-10-07 10:10:25 UTC; 1s ago
...
:~$ ps aux | grep nginx
root 11481 0.0 0.1 76484 2588 ? Ss 10:10 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 11482 0.0 0.2 76876 4284 ? S 10:10 0:00 nginx: worker process
:~$ cat /run/nginx.pid
11481

NGINX & Consul-Template in Docker

I'm having trouble with consistent service discovery using EC2, AWS, Docker, Consul-Template, Consul, and NGINX.
I have multiple services, each running on its own EC2 instance. On these instances I run the following containers (in this order):
cAdvisor (monitoring)
node-exporter (monitoring)
Consul (running in agent mode)
Registrator
My service
Custom container running both nginx and consul-template
The custom container has the following Dockerfile:
FROM nginx:1.9
#Install Curl
RUN apt-get update -qq && apt-get -y install curl
#Install Consul Template
RUN curl -L https://github.com/hashicorp/consul-template/releases/download/v0.10.0/consul-template_0.10.0_linux_amd64.tar.gz | tar -C /usr/local/bin --strip-components 1 -zxf -
#Setup Consul Template Files
RUN mkdir /etc/consul-templates
COPY ./app.conf.tmpl /etc/consul-templates/app.conf
# Remove all other conf files from nginx
RUN rm /etc/nginx/conf.d/*
#Default Variables
ENV CONSUL consul:8500
CMD /usr/sbin/nginx -c /etc/nginx/nginx.conf && consul-template -consul=$CONSUL -template "/etc/consul-templates/app.conf:/etc/nginx/conf.d/app.conf:/usr/sbin/nginx -s reload"
The app.conf file looks like this:
{{range services}}
upstream {{.Name}} {
    least_conn;{{range service .Name}}
    server {{.Address}}:{{.Port}};{{end}}
}
{{end}}

server {
    listen 80 default_server;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location / {
        proxy_pass http://cart/cart/;
    }

    location /cart {
        proxy_pass http://cart/cart;
    }

    {{range services}}
    location /api/{{.Name}} {
        proxy_read_timeout 180;
        proxy_pass http://{{.Name}}/{{.Name}};
    }
    {{end}}
}
Everything seems to start up perfectly OK, but at some point after startup (which I've yet to identify), consul-template seems to decide that there are no available servers for a particular service. This means that the upstream section for that service contains no servers, and I end up with this in the logs:
2015/12/04 07:09:34 [emerg] 77#77: no servers are inside upstream in /etc/nginx/conf.d/app.conf:336
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/app.conf:336
2015/12/04 07:09:34 [ERR] (runner) error running command: exit status 1
Consul Template returned errors:
1 error(s) occurred:
* exit status 1
2015/12/04 07:09:34 [DEBUG] (logging) setting up logging
2015/12/04 07:09:34 [DEBUG] (logging) config:
{
"name": "consul-template",
"level": "WARN",
"syslog": false,
"syslog_facility": "LOCAL0"
}
2015/12/04 07:09:34 [emerg] 7#7: no servers are inside upstream in /etc/nginx/conf.d/app.conf:336
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/app.conf:336
After this, NGINX will no longer accept requests.
I'm sure I'm missing something obvious, but I've tied myself in mental knots about the sequence of events. What I think might be happening is that NGINX crashes, but because consul-template is still running, the Docker container doesn't restart. I don't actually care whether the container itself restarts or just NGINX does.
Can someone help?
Consul Template will exit once the command it runs after writing the template returns a non-zero exit code. See here for the documentation.
The documentation suggests putting a || true just after the restart (or reload) command. This keeps Consul Template running regardless of the exit code.
You could consider wrapping the restart in its own shell script that first tests the configuration (with nginx -t) before triggering a reload. You could even move the initial start of nginx into this script, since it only makes sense to start nginx once the first valid configuration has been written.
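A minimal sketch of such a wrapper, assuming it is baked into the image and substituted for /usr/sbin/nginx -s reload in the -template argument (the name /reload-nginx.sh is hypothetical, and the pid path is the stock default from the official image's nginx.conf):
#!/bin/sh
# /reload-nginx.sh - test the rendered config, then start or reload nginx.
if ! nginx -t; then
    # Exit 0 so consul-template keeps running and nginx keeps serving
    # the last known-good configuration (same effect as the || true trick).
    echo "nginx config test failed; keeping previous configuration" >&2
    exit 0
fi
# Reload if a master process is already running, otherwise do the first start.
if [ -s /var/run/nginx.pid ] && kill -0 "$(cat /var/run/nginx.pid)" 2>/dev/null; then
    nginx -s reload
else
    nginx
fi
The Dockerfile would COPY the script in, mark it executable, and its CMD would then shrink to something like:
CMD consul-template -consul=$CONSUL -template "/etc/consul-templates/app.conf:/etc/nginx/conf.d/app.conf:/reload-nginx.sh"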
