Issues starting up nginx with conf files - nginx

I had set up my nginx server fine last week, until I noticed I was receiving DoS attacks against it. At that point I also noticed the nginx server was failing to start. I have tried everything I can think of and am unsure how to resolve the issue, apart from reading the documentation, which has not helped.
Documentation on Nginx
The main nginx.conf appears to be empty and I cannot save to it for some reason.
root@ubuntu-vpc-do-moon:~# /etc/init.d/nginx status
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2019-11-04 10:54:44 UTC; 1min 43s ago
Docs: man:nginx(8)
Process: 2550 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: Starting A high performance web server and a reverse proxy server...
Nov 04 10:54:44 ubuntu-vpc-do-moon nginx[2550]: nginx: [emerg] open() "/etc/nginx/sites-enabled/nginx.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:62
Nov 04 10:54:44 ubuntu-vpc-do-moon nginx[2550]: nginx: configuration file /etc/nginx/nginx.conf test failed
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: nginx.service: Control process exited, code=exited status=1
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: nginx.service: Failed with result 'exit-code'.
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: Failed to start A high performance web server and a reverse proxy server.

Removed nginx from Ubuntu and did a clean installation onto the server. Managed to sort the server blocks out this time, so all good.
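For anyone hitting the same error, the log points at a dangling include: /etc/nginx/nginx.conf line 62 pulls in /etc/nginx/sites-enabled/nginx.conf, which no longer exists. A reinstall works, but a lighter cleanup might look like the sketch below (paths taken from the log above):
ls -l /etc/nginx/sites-enabled/                 # a dangling symlink shows a missing target
sudo rm /etc/nginx/sites-enabled/nginx.conf     # remove the broken include named in the error
sudo nginx -t                                   # re-test the configuration
sudo systemctl start nginx                      # start the service once the test passes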

Related

nginx: [emerg] unknown directive "match", why does it appear?

This is my config:
log_format mqtt '$remote_addr [$time_local] $protocol $status $bytes_received '
                '$bytes_sent $upstream_addr';

upstream hive_mq {
    server 192.168.11.200:1883; #node1
    server 127.0.0.1:1883;      #node2
    zone tcp_mem 64k;
}

match mqtt_conn {
    # Send CONNECT packet with client ID "nginx health check"
    send \x10\x20\x00\x06\x4d\x51\x49\x73\x64\x70\x03\x02\x00\x3c\x00\x12\x6e\x67\x69\x6e\x78\x20\x68\x65\x61\x6c\x74\x68\x20\x63\x68\x65\x63\x6b;
    expect \x20\x02\x00\x00; # Entire payload of CONNACK packet
}

server {
    listen 8081;
    proxy_pass hive_mq;
    proxy_connect_timeout 1s;
    health_check match=mqtt_conn;
    access_log /var/log/nginx/mqtt_access.log mqtt;
    error_log /var/log/nginx/mqtt_error.log; # Health check notifications
}
But when I reload the settings, it returns failed as follows:
nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/nginx.service.d
└─php-fpm.conf
Active: failed (Result: exit-code) since Fri 2022-12-02 03:20:49 UTC; 2s ago
Process: 51793 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 51821 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 51819 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 51794 (code=killed, signal=KILL)
Dec 02 03:20:49 localhost systemd[1]: Starting The nginx HTTP and reverse proxy server...
Dec 02 03:20:49 localhost nginx[51821]: nginx: [emerg] unknown directive "match" in /etc/nginx/nginx.conf:171
Dec 02 03:20:49 localhost nginx[51821]: nginx: configuration file /etc/nginx/nginx.conf test failed
Dec 02 03:20:49 localhost systemd[1]: nginx.service: Control process exited, code=exited status=1
Dec 02 03:20:49 localhost systemd[1]: nginx.service: Failed with result 'exit-code'.
Dec 02 03:20:49 localhost systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
Note that I have include modules/*.conf in the config, and the loaded modules were shown in a screenshot (not reproduced here).
Am I missing something?
How do I address this config problem in nginx?
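For context, the match block and the health_check directive in the stream module are NGINX Plus features; in open-source nginx they are unknown directives, which matches the error at nginx.conf:171. A rough sketch of the same proxy using only open-source directives (passive health checking via max_fails/fail_timeout, and assuming the whole block sits inside a stream { } context) might look like this:
stream {
    log_format mqtt '$remote_addr [$time_local] $protocol $status $bytes_received '
                    '$bytes_sent $upstream_addr';

    upstream hive_mq {
        # Passive checks: take a node out of rotation for 30s after 3 failed attempts
        server 192.168.11.200:1883 max_fails=3 fail_timeout=30s; #node1
        server 127.0.0.1:1883      max_fails=3 fail_timeout=30s; #node2
        zone tcp_mem 64k;
    }

    server {
        listen 8081;
        proxy_pass hive_mq;
        proxy_connect_timeout 1s;
        access_log /var/log/nginx/mqtt_access.log mqtt;
        error_log /var/log/nginx/mqtt_error.log;
    }
}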

duplicate upstream "backend" in /etc/nginx/sites-enabled installing Mastodon

I have already installed Mastodon, and I'm now at the step of setting up nginx according to this doc: https://github.com/mastodon/documentation/blob/master/content/en/admin/install.md
I have already edited the mastodon file to put in my domain and uncommented the certificate lines, so it looks like this:
# Uncomment these lines once you acquire a certificate:
ssl_certificate /etc/letsencrypt/live/my.domain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/my.domain/privkey.pem;
In the upper part of the file I have this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream backend {
    server 127.0.0.1:3000 fail_timeout=0;
}

upstream streaming {
    server 127.0.0.1:4000 fail_timeout=0;
}
In /etc/nginx/sites-enabled/ there are three files: default (whose contents I don't really know), and mastodon and my.domain.conf, which look exactly the same, as described above.
Once I had modified the mastodon file to put my.domain wherever it appears, I was told to restart nginx, so I ran:
sudo systemctl restart nginx
I got this:
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xeu nginx.service" for details.
So I ran "systemctl status nginx.service" and got this:
× nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2022-11-12 22:28:59 UTC; 15s ago
Docs: man:nginx(8)
Process: 144802 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
CPU: 40ms
Nov 12 22:28:59 my-instance-1 systemd[1]: Starting A high performance web server and a reverse proxy server...
Nov 12 22:28:59 my-instance-1 nginx[144802]: nginx: [emerg] duplicate upstream "backend" in /etc/nginx/sites-enabled/my.domain.conf:6
Nov 12 22:28:59 my-instance-1 nginx[144802]: nginx: configuration file /etc/nginx/nginx.conf test failed
Nov 12 22:28:59 my-instance-1 systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
Nov 12 22:28:59 my-instance-1 systemd[1]: nginx.service: Failed with result 'exit-code'.
Nov 12 22:28:59 my-instance-1 systemd[1]: Failed to start A high performance web server and a reverse proxy server.
I don't know why it says I have a duplicate upstream "backend" when it appears only once in the file. I need to be able to restart nginx.
UPDATE
The current state is shown in the picture. In the mastodon file in sites-AVAILABLE I changed every example.com to my.domain (around lines 28, 37 and 38). The cert lines are uncommented and nginx is throwing this error.
My current instance looks like this: I created a user, but the mail verification link returned ERR_CERT_COMMON_NAME_INVALID.
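For what it's worth, the duplicate comes from nginx loading every file in sites-enabled at the http level: if both mastodon and my.domain.conf declare upstream backend, nginx sees the same upstream twice even though each individual file declares it only once. A rough cleanup sketch, assuming the mastodon file is the copy to keep (which file to drop is a judgment call):
ls -l /etc/nginx/sites-enabled/                 # list the enabled sites and their symlink targets
sudo rm /etc/nginx/sites-enabled/my.domain.conf # drop the duplicate copy of the same config
sudo nginx -t                                   # the duplicate upstream error should be gone
sudo systemctl restart nginx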

Deploy Flask with Gunicorn and NginX Gunicorn Error "Failed with result 'exit-code'"

I'm following this YouTube Tutorial to setup my Server with Flask and Nginx: https://www.youtube.com/watch?v=o2WA-A67Bks&t=360s
Right now I'm at the Gunicorn part. I tried to run the following commands:
systemctl daemon-reload
systemctl start index
systemctl enable index
systemctl status index
But it doesn't work. The error message looks like this:
● index.service - Gunicorn instance to serve One
Loaded: loaded (/etc/systemd/system/index.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2020-12-01 18:14:40 CET; 8s ago
Main PID: 24620 (code=exited, status=203/EXEC)
Dez 01 18:14:40 h2912959.stratoserver.net systemd[1]: Started Gunicorn instance to serve One.
Dez 01 18:14:40 h2912959.stratoserver.net systemd[24620]: index.service: Failed to execute command: No such file or directory
Dez 01 18:14:40 h2912959.stratoserver.net systemd[24620]: index.service: Failed at step EXEC spawning /var/www/html/myprojectenv/bin/gunicorn: No such file or directory
Dez 01 18:14:40 h2912959.stratoserver.net systemd[1]: index.service: Main process exited, code=exited, status=203/EXEC
Dez 01 18:14:40 h2912959.stratoserver.net systemd[1]: index.service: Failed with result 'exit-code'.
My index.service file looks like this:
[Unit]
Description=Gunicorn instance to serve One
After=network.target
[Service]
User=daemon
Group=www-data
WorkingDirectory=/var/www/html
Environment="PATH=/var/www/html/myprojectenv/bin"
ExecStart=/var/www/html/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app
[Install]
WantedBy=multi-user.target
Can anyone please help?
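As a general note, status=203/EXEC means systemd could not execute the path given in ExecStart, here /var/www/html/myprojectenv/bin/gunicorn, which usually means the virtualenv is missing or gunicorn was never installed into it. A quick check, assuming the paths from the unit file above:
ls -l /var/www/html/myprojectenv/bin/gunicorn       # does the binary actually exist?
python3 -m venv /var/www/html/myprojectenv          # create the virtualenv if it is missing entirely
/var/www/html/myprojectenv/bin/pip install gunicorn # install gunicorn into that venv
sudo systemctl daemon-reload                        # pick up any unit changes
sudo systemctl restart index                        # retry the service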

Main process exited code=exited status=203/exec

I followed a Flask + uWSGI + nginx tutorial like this one. However, it fails with this error:
Warning: The unit file, source configuration file or drop-ins of myproject.service changed on disk. Run 'systemctl daemon-reload' to reload units>
● myproject.service - uWSGI instance to serve myproject
Loaded: loaded (/etc/systemd/system/myproject.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-05-05 18:26:18 AEST; 20min ago
Main PID: 38304 (code=exited, status=203/EXEC)
May 05 18:26:18 ip-10-86-162-214.ap-southeast-2.compute.internal systemd[1]: Started uWSGI instance to serve myproject.
May 05 18:26:18 ip-10-86-162-214.ap-southeast-2.compute.internal systemd[1]: myproject.service: Main process exited code=exited status=203/exec
May 05 18:26:18 ip-10-86-162-214.ap-southeast-2.compute.internal systemd[1]: myproject.service: Failed with result 'exit-code'.
/etc/systemd/system/myproject.service
[Unit]
Description=uWSGI instance to serve myproject
After=network.target
[Service]
User=rh
WorkingDirectory=/home/rh/myproject
Environment="PATH=/home/rh/myproject/myprojectenv/bin"
ExecStart=/home/rh/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
[Install]
WantedBy=multi-user.target
The "rh" is the user I used.

CentOS 7 - NGINX - DNS Load Balance

Working on building a DNS load-balancing service on CentOS 7 using NGINX.
Had this working on Ubuntu but started getting spotty results and wanted to move to CentOS.
The problem I am running into is that something seems to have port 53 tied up, and I can't figure out what.
That would make sense, because Ubuntu has the same problem with an easy fix: just turn off the service that is holding port 53.
I've been digging and googling my bum off but can't seem to find the smoking gun.
What service is holding port 53 by default on CentOS?
Any help is much appreciated. Thank you.
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/nginx.service.d
└─nginx.conf
Active: failed (Result: exit-code) since Wed 2019-12-18 16:11:02 EST; 13min ago
Process: 1863 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 1861 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: Starting The nginx HTTP and reverse proxy server...
Dec 18 16:11:02 dnsload.dutil.com nginx[1863]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Dec 18 16:11:02 dnsload.dutil.com nginx[1863]: nginx: [emerg] bind() to 0.0.0.0:53 failed (13: Permission denied)
Dec 18 16:11:02 dnsload.dutil.com nginx[1863]: nginx: configuration file /etc/nginx/nginx.conf test failed
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: nginx.service: Control process exited, code=exited status=1
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: nginx.service: Failed with result 'exit-code'.
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
stream {
    upstream dns_servers {
        least_conn;
        zone dns_mem 64k;
        server 192.168.100.240:53 fail_timeout=60s;
        server 192.168.100.241:53 fail_timeout=60s;
        server 192.168.100.239:53 fail_timeout=60s;
    }

    server {
        listen 53 udp;
        listen 53; #tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log debug;
        proxy_responses 1;
        proxy_timeout 1s;
    }
}
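A side note on the log above: the failure is bind() to 0.0.0.0:53 failed (13: Permission denied), which is a different errno from "address already in use" (98). On CentOS 7 that usually means SELinux is denying nginx a bind on a non-HTTP port, rather than another service holding port 53. A rough diagnostic sketch (the semanage change at the end is a commonly suggested workaround, not something verified here):
sudo ss -tulpn | grep ':53'            # confirm whether anything is actually listening on 53
getenforce                             # check whether SELinux is enforcing
sudo grep nginx /var/log/audit/audit.log | grep denied   # look for a matching SELinux denial

# As a test only: switch to permissive mode; if nginx then starts, SELinux was the blocker
sudo setenforce 0
sudo systemctl restart nginx

# Possible permanent change (verify against your SELinux policy before applying):
# sudo semanage port -m -t http_port_t -p tcp 53
# sudo semanage port -m -t http_port_t -p udp 53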
PowerDNS DNSDIST
https://dnsdist.org/
I found this to be an amazing solution for DNS load balancing!

Resources