duplicate upstream "backend" in /etc/nginx/sites-enabled installing Mastodon - nginx

I have already installed Mastodon, and I'm now at the step of setting up nginx according to this doc: https://github.com/mastodon/documentation/blob/master/content/en/admin/install.md
I already edited the mastodon file to put in my domain and uncommented the certificate lines, so it now looks like this:
# Uncomment these lines once you acquire a certificate:
ssl_certificate /etc/letsencrypt/live/my.domain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/my.domain/privkey.pem;
In the upper part of the file I have this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream backend {
    server 127.0.0.1:3000 fail_timeout=0;
}

upstream streaming {
    server 127.0.0.1:4000 fail_timeout=0;
}
In /etc/nginx/sites-enabled/ there are three files: default, whose contents I don't know; and mastodon and my.domain.conf, which look exactly the same as described above.
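For reference, the stock /etc/nginx/nginx.conf on Debian/Ubuntu loads every file in that directory; assuming an unmodified nginx.conf, the relevant include lines inside the http block look like this:

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;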
Once I had modified the mastodon file to use my.domain wherever it appears, I was told to restart nginx, so I ran:
sudo systemctl restart nginx
I got this:
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xeu nginx.service" for details.
so I ran "systemctl status nginx.service" and got this:
× nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2022-11-12 22:28:59 UTC; 15s ago
Docs: man:nginx(8)
Process: 144802 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
CPU: 40ms
Nov 12 22:28:59 my-instance-1 systemd[1]: Starting A high performance web server and a reverse proxy server...
Nov 12 22:28:59 my-instance-1 nginx[144802]: nginx: [emerg] duplicate upstream "backend" in /etc/nginx/sites-enabled/my.domain.conf:6
Nov 12 22:28:59 my-instance-1 nginx[144802]: nginx: configuration file /etc/nginx/nginx.conf test failed
Nov 12 22:28:59 my-instance-1 systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
Nov 12 22:28:59 my-instance-1 systemd[1]: nginx.service: Failed with result 'exit-code'.
Nov 12 22:28:59 my-instance-1 systemd[1]: Failed to start A high performance web server and a reverse proxy server.
I don't know why it says I have a duplicate upstream "backend" when it appears only once in the file. I need to be able to restart nginx.
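For reference, any second definition elsewhere under /etc/nginx can be hunted down with:

grep -rn "upstream backend" /etc/nginx/
sudo nginx -t    # re-run the config test after any change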
UPDATE
Current state: in the mastodon file in sites-AVAILABLE I replaced every example.com with my.domain (around lines 28, 37 and 38). The cert lines are uncommented, and nginx is throwing this error.
On my current instance I created a user, but the mail verification link fails with ERR_CERT_COMMON_NAME_INVALID.
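In case it is relevant, the names the certificate actually covers can be checked with (paths are the Let's Encrypt ones from above):

sudo openssl x509 -in /etc/letsencrypt/live/my.domain/fullchain.pem -noout -subject
sudo openssl x509 -in /etc/letsencrypt/live/my.domain/fullchain.pem -noout -text | grep -A1 'Subject Alternative Name'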

Related

nginx: [emerg] unknown directive "match", why does it appear?

This is my config:
log_format mqtt '$remote_addr [$time_local] $protocol $status $bytes_received '
                '$bytes_sent $upstream_addr';

upstream hive_mq {
    server 192.168.11.200:1883;  # node1
    server 127.0.0.1:1883;       # node2
    zone tcp_mem 64k;
}

match mqtt_conn {
    # Send CONNECT packet with client ID "nginx health check"
    send \x10\x20\x00\x06\x4d\x51\x49\x73\x64\x70\x03\x02\x00\x3c\x00\x12\x6e\x67\x69\x6e\x78\x20\x68\x65\x61\x6c\x74\x68\x20\x63\x68\x65\x63\x6b;
    expect \x20\x02\x00\x00;  # Entire payload of CONNACK packet
}

server {
    listen 8081;
    proxy_pass hive_mq;
    proxy_connect_timeout 1s;
    health_check match=mqtt_conn;
    access_log /var/log/nginx/mqtt_access.log mqtt;
    error_log /var/log/nginx/mqtt_error.log;  # Health check notifications
}
But when I reload the config, it fails as follows:
nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/nginx.service.d
└─php-fpm.conf
Active: failed (Result: exit-code) since Fri 2022-12-02 03:20:49 UTC; 2s ago
Process: 51793 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 51821 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 51819 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 51794 (code=killed, signal=KILL)
Dec 02 03:20:49 localhost systemd[1]: Starting The nginx HTTP and reverse proxy server...
Dec 02 03:20:49 localhost nginx[51821]: nginx: [emerg] unknown directive "match" in /etc/nginx/nginx.conf:171
Dec 02 03:20:49 localhost nginx[51821]: nginx: configuration file /etc/nginx/nginx.conf test failed
Dec 02 03:20:49 localhost systemd[1]: nginx.service: Control process exited, code=exited status=1
Dec 02 03:20:49 localhost systemd[1]: nginx.service: Failed with result 'exit-code'.
Dec 02 03:20:49 localhost systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
Note that I do have include modules/*.conf, and the module files are in place. Am I missing something?
How do I address this config problem in nginx?
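For reference, the files matched by include modules/*.conf normally contain nothing but load_module lines, roughly like the following (paths here are illustrative, not copied from this machine):

load_module "/usr/lib64/nginx/modules/ngx_stream_module.so";
load_module "/usr/lib64/nginx/modules/ngx_http_image_filter_module.so";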

Upstream closing down connections for uwsgi, Flask and Nginx stack

I am trying to run a basic Flask app using Nginx 1.14.0 on Ubuntu Server 18.04.
The app itself runs fine in the test environment, but now that I'm deploying it with uwsgi and nginx I just get either the default nginx landing page or a 502 Bad Gateway.
I removed the nginx default config from /etc/nginx/sites-available and deleted the symlink from /etc/nginx/sites-enabled.
I set up a replacement config for my site in /etc/nginx/sites-available, as below.
What am I missing in terms of config to make nginx redirect to my site?
server {
    listen 80;
    server_name www.myserver.com myserver.com;

    root /srv/server/myserver/;
    index index.html;

    location /static {
        alias /srv/server/myserver/static;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/srv/server/myserver/myserver.sock;
        uwsgi_read_timeout 30;
        uwsgi_connect_timeout 30;
    }
}
I created a symlink: sudo ln -s /etc/nginx/sites-available/myserver /etc/nginx/sites-enabled
/srv/server is owned by www-data, via sudo chown -R www-data:www-data /srv/server
and this is myserver.ini:
[uwsgi]
http = 0.0.0.0:80
harakiri = 30
module = wsgi:app
master = true
processes = 5
binary-path = /srv/server/myserver/venv/bin/uwsgi
virtualenv = /srv/server/myserver/myserverenv
module = myserver:app
uid = www-data
gid = www-data
socket = myserver.sock
chmod-socket = 0775
vacuum = true
die-on-term = true
And this is myserver.service:
[Unit]
Description=uWSGI instance for myserver
[Service]
User=www-data
Group=www-data
After=network.target
WorkingDirectory=/srv/server/myserver
Environment="PATH=/srv/server/myserver/myserverenv/bin"
ExecStart=/srv/server/myserver/myserverenv/bin/uwsgi --ini myserver.ini
[Install]
WantedBy=multi-user.target
As this is on my local machine, I added the line below to /etc/hosts so I can access the site via its FQDN in the browser while testing, and I have allowed HTTP and HTTPS through ufw.
127.0.0.1 www.myserver.com myserver.com
I have stopped, started, restarted etc via sudo systemctl restart nginx
Error logs from /var/log/nginx/error.log:
2020/04/17 15:42:24 [error] 26747#26747: *1 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: www.myserver.com, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/srv/server/myserver/myserver.sock:", host: "www.myserver.com"
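The socket nginx is pointed at can be checked with (paths taken from the nginx config above):

ls -l /srv/server/myserver/myserver.sock      # does the socket exist, and who owns it?
namei -l /srv/server/myserver/myserver.sock   # permissions on every path component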
EDIT:
I tried restarting uwsgi and got the below error when running either as www-data or via sudo:
3therk1ll@3therk1ll:/var/log/nginx$ sudo -u www-data systemctl status uwsgi
● uwsgi.service - uWSGI instance for myserver
Loaded: loaded (/etc/systemd/system/uwsgi.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-04-17 16:30:42 BST; 5s ago
Process: 27147 ExecStart=/srv/server/myserver/myserverenv/bin/uwsgi --ini myserver.ini (code=exited, status=1/FAILURE)
Main PID: 27147 (code=exited, status=1/FAILURE)
3therk1ll@3therk1ll:/var/log/nginx$ sudo systemctl status uwsgi
● uwsgi.service - uWSGI instance for myserver
Loaded: loaded (/etc/systemd/system/uwsgi.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-04-17 16:30:42 BST; 1min 10s ago
Process: 27147 ExecStart=/srv/server/myserver/myserverenv/bin/uwsgi --ini myserver.ini (code=exited, status=1/FAILURE)
Main PID: 27147 (code=exited, status=1/FAILURE)
Apr 17 16:30:42 3therk1ll uwsgi[27147]: dropping root privileges as early as possible
Apr 17 16:30:42 3therk1ll uwsgi[27147]: your processes number limit is 7645
Apr 17 16:30:42 3therk1ll uwsgi[27147]: your memory page size is 4096 bytes
Apr 17 16:30:42 3therk1ll uwsgi[27147]: detected max file descriptor number: 1024
Apr 17 16:30:42 3therk1ll uwsgi[27147]: lock engine: pthread robust mutexes
Apr 17 16:30:42 3therk1ll uwsgi[27147]: thunder lock: disabled (you can enable it with --thunder-lock)
Apr 17 16:30:42 3therk1ll uwsgi[27147]: error removing unix socket, unlink(): Permission denied [core/socket.c line 198]
Apr 17 16:30:42 3therk1ll uwsgi[27147]: bind(): Address already in use [core/socket.c line 230]
Apr 17 16:30:42 3therk1ll systemd[1]: uwsgi.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 16:30:42 3therk1ll systemd[1]: uwsgi.service: Failed with result 'exit-code'.
Both nginx and uwsgi are trying to bind port 80, so either change uwsgi's port to a different value or simply delete the http = 0.0.0.0:80 line from the uwsgi config, since nginx talks to uwsgi over the unix socket.
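A minimal sketch of the socket-only variant of myserver.ini, based on the file above (paths unchanged, the http line removed; nginx keeps reaching the app through uwsgi_pass and the unix socket):

[uwsgi]
module = myserver:app
master = true
processes = 5
virtualenv = /srv/server/myserver/myserverenv
uid = www-data
gid = www-data
socket = myserver.sock
chmod-socket = 0775
vacuum = true
die-on-term = true

With the http line gone, nginx is the only thing listening on port 80.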

CentOS 7 - NGINX - DNS Load Balance

I am working on building a DNS load-balancing service on CentOS 7 using NGINX.
I had this working on Ubuntu but started getting spotty results, so I wanted to move to CentOS.
The problem I am running into is that something has port 53 tied up, and I can't figure out what.
This makes sense, because Ubuntu had the same problem, but there it was an easy fix: just turn off the service holding port 53.
I've been digging and googling my bum off but can't seem to find the smoking gun.
What service is holding port 53 by default on CentOS?
Any help is much appreciated. Thank you.
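For reference, a listener on port 53 can be checked for with (run as root so the owning process name shows up):

ss -tulpn | grep ':53'
# or, if ss is not available:
netstat -tulpn | grep ':53'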
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/nginx.service.d
└─nginx.conf
Active: failed (Result: exit-code) since Wed 2019-12-18 16:11:02 EST; 13min ago
Process: 1863 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 1861 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: Starting The nginx HTTP and reverse proxy server...
Dec 18 16:11:02 dnsload.dutil.com nginx[1863]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Dec 18 16:11:02 dnsload.dutil.com nginx[1863]: nginx: [emerg] bind() to 0.0.0.0:53 failed (13: Permission denied)
Dec 18 16:11:02 dnsload.dutil.com nginx[1863]: nginx: configuration file /etc/nginx/nginx.conf test failed
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: nginx.service: Control process exited, code=exited status=1
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: nginx.service: Failed with result 'exit-code'.
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
stream {
    upstream dns_servers {
        least_conn;
        zone dns_mem 64k;

        server 192.168.100.240:53 fail_timeout=60s;
        server 192.168.100.241:53 fail_timeout=60s;
        server 192.168.100.239:53 fail_timeout=60s;
    }

    server {
        listen 53 udp;
        listen 53;  # tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log debug;
        proxy_responses 1;
        proxy_timeout 1s;
    }
}
PowerDNS DNSDIST
https://dnsdist.org/
I found this to be an AMAZING solution for DNS load balancing!
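A minimal dnsdist config sketch for the same three backends (Lua syntax; the addresses are copied from the nginx upstream above, everything else is left at dnsdist defaults):

-- /etc/dnsdist/dnsdist.conf
setLocal("0.0.0.0:53")         -- listen on 53 for UDP and TCP
newServer("192.168.100.240")   -- backend resolvers
newServer("192.168.100.241")
newServer("192.168.100.239")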

Errors trying to start nginx after updating Ubuntu

I am trying to troubleshoot my nginx and it is not going too well.
I start off by running sudo systemctl start nginx and I get
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.
and so I continue and try systemctl status nginx.service to get:
Dec 13 00:29:18 systemd[1]: nginx.service: Control process exited, code=exited status=1
Dec 13 00:29:18 systemd[1]: Failed to start A high performance web server and a reverse proxy server.
Dec 13 00:29:18 systemd[1]: nginx.service: Unit entered failed state.
Dec 13 00:29:18 systemd[1]: nginx.service: Failed with result 'exit-code'
and so I continue and test the config with sudo nginx -t -c /etc/nginx/nginx.conf,
resulting in
nginx: [emerg] unknown directive "passenger_enabled" in /etc/nginx/sites-enabled/nginx_aggrigator.conf:164
nginx: configuration file /etc/nginx/nginx.conf test failed
I am trying to run this on Ubuntu 16.04, if that is any help.
Like the error says:
unknown directive "passenger_enabled"
nginx_aggrigator.conf:164
So start by checking this file:
/etc/nginx/sites-enabled/nginx_aggrigator.conf
and look at line 164 to find the part where you declare passenger_enabled, as this is where your error is.
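A couple of commands make that check quick (file name and line number taken from the error above):

sed -n '160,168p' /etc/nginx/sites-enabled/nginx_aggrigator.conf   # show line 164 in context
grep -rn 'passenger' /etc/nginx/                                   # is the Passenger module loaded or configured anywhere?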

Issues starting up nginx with conf files

I had set up my nginx server fine last week, until I noticed I was receiving DoS attacks against it. At that point I noticed my nginx server was failing to start. I have tried everything I can think of and am unsure what to do to resolve the issue, apart from reading the documentation, which does not help.
Documentation on Nginx
The main nginx.conf appears to be empty, and I cannot save to it for some reason.
root@ubuntu-vpc-do-moon:~# /etc/init.d/nginx status
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2019-11-04 10:54:44 UTC; 1min 43s ago
Docs: man:nginx(8)
Process: 2550 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: Starting A high performance web server and a reverse proxy server...
Nov 04 10:54:44 ubuntu-vpc-do-moon nginx[2550]: nginx: [emerg] open() "/etc/nginx/sites-enabled/nginx.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:62
Nov 04 10:54:44 ubuntu-vpc-do-moon nginx[2550]: nginx: configuration file /etc/nginx/nginx.conf test failed
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: nginx.service: Control process exited, code=exited status=1
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: nginx.service: Failed with result 'exit-code'.
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: Failed to start A high performance web server and a reverse proxy server.
I removed nginx from Ubuntu and did a clean installation onto the server. I managed to sort the server blocks out this time, so all good.
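In case a clean reinstall is not an option for someone else, the missing include the log complains about can be tracked down with (paths from the status output above):

sed -n '62p' /etc/nginx/nginx.conf        # the include directive nginx chokes on
ls -l /etc/nginx/sites-enabled/           # dangling symlinks show up here
find /etc/nginx/sites-enabled -xtype l    # list only the broken links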
