nginx server and ssh stop responding

I have a Flask server running on gunicorn behind nginx on a Raspberry Pi Zero.
My problem is that the Pi sometimes goes to sleep for a couple of minutes, during which the server cannot be reached and SSH no longer works.
So I disabled the Pi's Wi-Fi power saving with: sudo iw dev wlan0 set power_save off
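To keep that setting across reboots, one option is a small one-shot systemd unit (a sketch; the unit name is my own choice, and iw may live in /usr/sbin instead of /sbin on some images):

# /etc/systemd/system/wifi-power-save-off.service
[Unit]
Description=Disable Wi-Fi power saving on wlan0
After=network.target

[Service]
Type=oneshot
ExecStart=/sbin/iw dev wlan0 set power_save off

[Install]
WantedBy=multi-user.target

Enable it with: sudo systemctl enable --now wifi-power-save-off.service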
That helped, but because I was hitting 413 Request Entity Too Large errors, I added client_max_body_size to my nginx config file.
Now it's worse: the 'sleep' happens more frequently, and sometimes I have to reboot.
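(For context: client_max_body_size can be set at http, server, or location level, with the innermost value winning; the default is 1m, which is why a roughly 1 MB image triggered the 413. A minimal http-level example, with an example value:)

http {
    client_max_body_size 16M; # example value; requests larger than this get 413
}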
This is my reverse-proxy.conf:
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/ssl/certs/selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/selfsigned.key;
    error_log /var/www/flask/nginx.log debug;
    ssl_dhparam /etc/nginx/dhparam.pem;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect off;
    }

    location /upload {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect off;
        client_max_body_size 200M; # the file to upload is just a big image, around 1 MB
    }

    # increase timeout, 300s, 1d, default: 60s
    fastcgi_read_timeout 1d;
    proxy_read_timeout 1d;
}
These are the last lines in my nginx log file after a 'sleep':
2021/03/13 21:44:18 [debug] 8220#8220: *445 reusable connection: 1
2021/03/13 21:44:18 [debug] 8220#8220: *445 event timer add: 3: 65000:7060228
2021/03/13 21:44:38 [debug] 8220#8220: *445 http keepalive handler
2021/03/13 21:44:38 [debug] 8220#8220: *445 malloc: 018D46F0:1024
2021/03/13 21:44:38 [debug] 8220#8220: *445 SSL_read: -1
2021/03/13 21:44:38 [debug] 8220#8220: *445 SSL_get_error: 5
2021/03/13 21:44:38 [debug] 8220#8220: *445 peer shutdown SSL cleanly
2021/03/13 21:44:38 [info] 8220#8220: *445 client 192.168.1.72 closed keepalive connection (104: Connection reset by peer)
2021/03/13 21:44:38 [debug] 8220#8220: *445 close http connection: 3
2021/03/13 21:44:38 [debug] 8220#8220: *445 SSL_shutdown: 1
2021/03/13 21:44:38 [debug] 8220#8220: *445 event timer del: 3: 7060228
2021/03/13 21:44:38 [debug] 8220#8220: *445 reusable connection: 0
2021/03/13 21:44:38 [debug] 8220#8220: *445 free: 018D46F0
2021/03/13 21:44:38 [debug] 8220#8220: *445 free: 00000000
2021/03/13 21:44:38 [debug] 8220#8220: *445 free: 018F56F0, unused: 8
2021/03/13 21:44:38 [debug] 8220#8220: *445 free: 01933360, unused: 120
Kernel log (/var/log/syslog):
Mar 13 22:48:09 raspberrypi rngd[270]: stats: Time spent starving for entropy: (min=0; avg=0.000; max=0)us
Mar 13 22:58:08 raspberrypi systemd[1]: session-10.scope: Succeeded.
Mar 13 23:17:01 raspberrypi CRON[25098]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Mar 13 23:44:19 raspberrypi dhcpcd[385]: wlan0: hardware address 00:00:00:00:00:00 claims 192.168.1.64
Mar 13 23:44:21 raspberrypi dhcpcd[385]: wlan0: hardware address 00:00:00:00:00:00 claims 192.168.1.64
Mar 13 23:44:21 raspberrypi dhcpcd[385]: wlan0: 10 second defence failed for 192.168.1.64
Mar 13 23:44:21 raspberrypi avahi-daemon[260]: Withdrawing address record for 192.168.1.64 on wlan0.
Mar 13 23:44:21 raspberrypi avahi-daemon[260]: Leaving mDNS multicast group on interface wlan0.IPv4 with address 192.168.1.64.
Mar 13 23:44:21 raspberrypi dhcpcd[385]: wlan0: deleting route to 192.168.1.0/24
Mar 13 23:44:21 raspberrypi dhcpcd[385]: wlan0: deleting default route via 192.168.1.254
Mar 13 23:44:21 raspberrypi avahi-daemon[260]: Interface wlan0.IPv4 no longer relevant for mDNS.
Mar 13 23:44:21 raspberrypi dhcpcd[385]: wlan0: rebinding lease of 192.168.1.64
Mar 13 23:44:21 raspberrypi dhcpcd[385]: wlan0: probing address 192.168.1.64/24
Mar 13 23:44:26 raspberrypi dhcpcd[385]: wlan0: leased 192.168.1.64 for 86400 seconds
Mar 13 23:44:26 raspberrypi avahi-daemon[260]: Joining mDNS multicast group on interface wlan0.IPv4 with address 192.168.1.64.
Mar 13 23:44:26 raspberrypi avahi-daemon[260]: New relevant interface wlan0.IPv4 for mDNS.
Mar 13 23:44:26 raspberrypi avahi-daemon[260]: Registering new address record for 192.168.1.64 on wlan0.IPv4.
Mar 13 23:44:26 raspberrypi dhcpcd[385]: wlan0: adding route to 192.168.1.0/24
Mar 13 23:44:26 raspberrypi dhcpcd[385]: wlan0: adding default route via 192.168.1.254
Mar 13 23:48:09 raspberrypi rngd[270]: stats: bits received from HRNG source: 180064
Mar 13 23:48:09 raspberrypi rngd[270]: stats: bits sent to kernel pool: 123584
Mar 13 23:48:09 raspberrypi rngd[270]: stats: entropy added to kernel pool: 123584
Mar 13 23:48:09 raspberrypi rngd[270]: stats: FIPS 140-2 successes: 9
Mar 13 23:48:09 raspberrypi rngd[270]: stats: FIPS 140-2 failures: 0
Mar 13 23:48:09 raspberrypi rngd[270]: stats: FIPS 140-2(2001-10-10) Monobit: 0
Mar 13 23:48:09 raspberrypi rngd[270]: stats: FIPS 140-2(2001-10-10) Poker: 0
Mar 13 23:48:09 raspberrypi rngd[270]: stats: FIPS 140-2(2001-10-10) Runs: 0
Mar 13 23:48:09 raspberrypi rngd[270]: stats: FIPS 140-2(2001-10-10) Long run: 0
Mar 13 23:48:09 raspberrypi rngd[270]: stats: FIPS 140-2(2001-10-10) Continuous run: 0
Mar 13 23:48:09 raspberrypi rngd[270]: stats: HRNG source speed: (min=101.599; avg=254.741; max=920.244)Kibits/s
Mar 13 23:48:09 raspberrypi rngd[270]: stats: FIPS tests speed: (min=924.206; avg=3071.971; max=9096.996)Kibits/s
Mar 13 23:48:09 raspberrypi rngd[270]: stats: Lowest ready-buffers level: 2
Mar 13 23:48:09 raspberrypi rngd[270]: stats: Entropy starvations: 0
Mar 13 23:48:09 raspberrypi rngd[270]: stats: Time spent starving for entropy: (min=0; avg=0.000; max=0)us
Mar 13 23:57:38 raspberrypi dhcpcd[385]: wlan0: part of Router Advertisement expired
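The dhcpcd lines above show another device claiming 192.168.1.64 and the Pi losing the 10 second defence of its lease, i.e. an IP address conflict. One way to rule that out is a static address in /etc/dhcpcd.conf (a sketch; the addresses are taken from the log, and the chosen IP should sit outside the router's DHCP pool):

interface wlan0
static ip_address=192.168.1.64/24
static routers=192.168.1.254
static domain_name_servers=192.168.1.254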
Edit:
It's possible the problem comes from my computer, or from the Pi filtering my computer's IP, because sometimes I can SSH in or reach the HTTP server from my Android phone on the same network. There is no internet or firewall (ESET antivirus) problem on my computer.
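Next time it happens I can compare what the computer sees at the network layer (Linux client commands shown; on Windows the equivalents are ping and arp -a; the IP is taken from the logs above):

ping -c 3 192.168.1.64     # is the Pi reachable at all from this machine?
ip neigh show 192.168.1.64 # does ARP resolve, and to which MAC address?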

Related

nginx: [emerg] unknown directive "match", why does it appear?

This is my config:
log_format mqtt '$remote_addr [$time_local] $protocol $status $bytes_received '
                '$bytes_sent $upstream_addr';

upstream hive_mq {
    server 192.168.11.200:1883; # node1
    server 127.0.0.1:1883;      # node2
    zone tcp_mem 64k;
}

match mqtt_conn {
    # Send CONNECT packet with client ID "nginx health check"
    send \x10\x20\x00\x06\x4d\x51\x49\x73\x64\x70\x03\x02\x00\x3c\x00\x12\x6e\x67\x69\x6e\x78\x20\x68\x65\x61\x6c\x74\x68\x20\x63\x68\x65\x63\x6b;
    expect \x20\x02\x00\x00; # entire payload of the CONNACK packet
}

server {
    listen 8081;
    proxy_pass hive_mq;
    proxy_connect_timeout 1s;
    health_check match=mqtt_conn;
    access_log /var/log/nginx/mqtt_access.log mqtt;
    error_log /var/log/nginx/mqtt_error.log; # health check notifications
}
But when I reload the configuration, it fails as follows:
nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/nginx.service.d
└─php-fpm.conf
Active: failed (Result: exit-code) since Fri 2022-12-02 03:20:49 UTC; 2s ago
Process: 51793 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 51821 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 51819 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 51794 (code=killed, signal=KILL)
Dec 02 03:20:49 localhost systemd[1]: Starting The nginx HTTP and reverse proxy server...
Dec 02 03:20:49 localhost nginx[51821]: nginx: [emerg] unknown directive "match" in /etc/nginx/nginx.conf:171
Dec 02 03:20:49 localhost nginx[51821]: nginx: configuration file /etc/nginx/nginx.conf test failed
Dec 02 03:20:49 localhost systemd[1]: nginx.service: Control process exited, code=exited status=1
Dec 02 03:20:49 localhost systemd[1]: nginx.service: Failed with result 'exit-code'.
Dec 02 03:20:49 localhost systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
Note that I do have include modules/*.conf in nginx.conf and the module files are present. Am I missing something?
How do I address this config problem in nginx?
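As far as I know, the stream match block and active health_check are NGINX Plus features, so a first sanity check is what the installed binary was actually built with (standard nginx CLI, nothing specific to this setup):

nginx -V 2>&1 | tr ' ' '\n' | grep -- --with

If --with-stream appears but the directive is still rejected, the build most likely lacks the commercial health-check module that defines match.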

pfSense 2.5.0 upgrade broke my NordVPN gateway

Ever since I upgraded to pfSense 2.5.0, my NordVPN interface no longer works. Traffic does not get routed to the NordVPN gateway, as pfSense reports it as "down" with 100% packet loss. When checking Status -> OpenVPN, the connection is reported as UP, but the gateway is DOWN. I don't understand how this is possible, but the log provides some clues, although I can't tell from it what goes wrong.
OpenVPN Log (private IPs removed):
Feb 19 07:42:59 openvpn 79266 Initialization Sequence Completed
Feb 19 07:43:58 openvpn 79266 Authenticate/Decrypt packet error: missing authentication info
Feb 19 07:44:58 openvpn 79266 Authenticate/Decrypt packet error: missing authentication info
Feb 19 07:45:58 openvpn 79266 [nl852.nordvpn.com] Inactivity timeout (--ping-restart), restarting
Feb 19 07:45:58 openvpn 79266 SIGUSR1[soft,ping-restart] received, process restarting
Feb 19 07:45:58 openvpn 79266 Restart pause, 10 second(s)
Feb 19 07:46:08 openvpn 79266 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Feb 19 07:46:08 openvpn 79266 Outgoing Control Channel Authentication: Using 512 bit message hash 'SHA512' for HMAC authentication
Feb 19 07:46:08 openvpn 79266 Incoming Control Channel Authentication: Using 512 bit message hash 'SHA512' for HMAC authentication
Feb 19 07:46:08 openvpn 79266 TCP/UDP: Preserving recently used remote address: [AF_INET]194.127.172.103:1194
Feb 19 07:46:08 openvpn 79266 Socket Buffers: R=[42080->524288] S=[57344->524288]
Feb 19 07:46:08 openvpn 79266 UDPv4 link local (bound): [AF_INET]x.x.x.x:0
Feb 19 07:46:08 openvpn 79266 UDPv4 link remote: [AF_INET]y.y.y.y:1194
Feb 19 07:46:08 openvpn 79266 TLS: Initial packet from [AF_INET]y.y.y.y.z:1194, sid=2ce7940f f02613d1
Feb 19 07:46:08 openvpn 79266 VERIFY WARNING: depth=0, unable to get certificate CRL: CN=nl852.nordvpn.com
Feb 19 07:46:08 openvpn 79266 VERIFY WARNING: depth=1, unable to get certificate CRL: C=PA, O=NordVPN, CN=NordVPN CA5
Feb 19 07:46:08 openvpn 79266 VERIFY WARNING: depth=2, unable to get certificate CRL: C=PA, O=NordVPN, CN=NordVPN Root CA
Feb 19 07:46:08 openvpn 79266 VERIFY OK: depth=2, C=PA, O=NordVPN, CN=NordVPN Root CA
Feb 19 07:46:08 openvpn 79266 VERIFY OK: depth=1, C=PA, O=NordVPN, CN=NordVPN CA5
Feb 19 07:46:08 openvpn 79266 VERIFY KU OK
Feb 19 07:46:08 openvpn 79266 Validating certificate extended key usage
Feb 19 07:46:08 openvpn 79266 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
Feb 19 07:46:08 openvpn 79266 VERIFY EKU OK
Feb 19 07:46:08 openvpn 79266 VERIFY OK: depth=0, CN=nl852.nordvpn.com
Feb 19 07:46:08 openvpn 79266 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1582', remote='link-mtu 1634'
Feb 19 07:46:08 openvpn 79266 WARNING: 'auth' is used inconsistently, local='auth [null-digest]', remote='auth SHA512'
Feb 19 07:46:08 openvpn 79266 Control Channel: TLSv1.2, cipher TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384, 4096 bit RSA
Feb 19 07:46:08 openvpn 79266 [nl852.nordvpn.com] Peer Connection Initiated with [AF_INET]194.127.172.103:1194
Feb 19 07:46:09 openvpn 79266 SENT CONTROL [nl852.nordvpn.com]: 'PUSH_REQUEST' (status=1)
Feb 19 07:46:09 openvpn 79266 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,dhcp-option DNS 103.86.96.100,dhcp-option DNS 103.86.99.100,sndbuf 524288,rcvbuf 524288,explicit-exit-notify,comp-lzo no,route-gateway z.z.z.z,topology subnet,ping 60,ping-restart 180,ifconfig g.g.g.g 255.255.255.0,peer-id 3'
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: timers and/or timeouts modified
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: explicit notify parm(s) modified
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: compression parms modified
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: --sndbuf/--rcvbuf options modified
Feb 19 07:46:09 openvpn 79266 Socket Buffers: R=[524288->524288] S=[524288->524288]
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: --ifconfig/up options modified
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: route options modified
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: route-related options modified
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: peer-id set
Feb 19 07:46:09 openvpn 79266 OPTIONS IMPORT: adjusting link_mtu to 1657
Feb 19 07:46:09 openvpn 79266 Using peer cipher 'AES-256-CBC'
Feb 19 07:46:09 openvpn 79266 Data Channel: using negotiated cipher 'AES-256-CBC'
Feb 19 07:46:09 openvpn 79266 Outgoing Data Channel: Cipher 'AES-256-CBC' initialized with 256 bit key
Feb 19 07:46:09 openvpn 79266 Outgoing Data Channel: Using 512 bit message hash 'SHA512' for HMAC authentication
Feb 19 07:46:09 openvpn 79266 Incoming Data Channel: Cipher 'AES-256-CBC' initialized with 256 bit key
Feb 19 07:46:09 openvpn 79266 Incoming Data Channel: Using 512 bit message hash 'SHA512' for HMAC authentication
Feb 19 07:46:09 openvpn 79266 Preserving previous TUN/TAP instance: ovpnc8
Feb 19 07:46:09 openvpn 79266 NOTE: Pulled options changed on restart, will need to close and reopen TUN/TAP device.
Feb 19 07:46:09 openvpn 79266 Closing TUN/TAP interface
Feb 19 07:46:09 openvpn 79266 /usr/local/sbin/ovpn-linkdown ovpnc8 1500 1637 a.b.c.d 255.255.255.0 init
Feb 19 07:46:10 openvpn 79266 ROUTE_GATEWAY a.b.c.d/255.255.254.0 IFACE=re0 HWADDR=00:e2:6c:68:07:be
Feb 19 07:46:10 openvpn 79266 TUN/TAP device ovpnc8 exists previously, keep at program end
Feb 19 07:46:10 openvpn 79266 TUN/TAP device /dev/tun8 opened
Feb 19 07:46:10 openvpn 79266 /sbin/ifconfig ovpnc8 x.x.x.x y.y.y.y mtu 1500 netmask 255.255.255.0 up
Feb 19 07:46:10 openvpn 79266 /sbin/route add -net x.x.x.x x.x.x.x 255.255.255.0
Feb 19 07:46:10 openvpn 79266 /usr/local/sbin/ovpn-linkup ovpnc8 1500 1637 x.x.x.x 255.255.255.0 init
Feb 19 07:46:10 openvpn 79266 Initialization Sequence Completed
And the gateway log:
Feb 19 04:16:02 dpinger 68141 send_interval 500ms loss_interval 2000ms time_period 60000ms report_interval 0ms data_len 1 alert_interval 1000ms latency_alarm 500ms loss_alarm 20% dest_addr x.x.x.x bind_addr x.x.x.x identifier "NORDVPN_VPNV4 "
Feb 19 04:16:04 dpinger 68141 NORDVPN_VPNV4 x.x.x.x: Alarm latency 0us stddev 0us loss 100%
Feb 19 04:19:13 dpinger 16894 send_interval 500ms loss_interval 2000ms time_period 60000ms report_interval 0ms data_len 1 alert_interval 1000ms latency_alarm 500ms loss_alarm 20% dest_addr x.x.x.x bind_addr x.x.x.x identifier "WAN_DHCP "
Feb 19 04:19:13 dpinger 17398 send_interval 500ms loss_interval 2000ms time_period 60000ms report_interval 0ms data_len 1 alert_interval 1000ms latency_alarm 500ms loss_alarm 20% dest_addr x.x.x.x bind_addr x.x.x.x identifier "NORDVPN_VPNV4 "
Feb 19 04:19:15 dpinger 17398 NORDVPN_VPNV4 x.x.x.x: Alarm latency 0us stddev 0us loss 100%
In Firewall -> Rules -> LAN I adjusted the "default allow LAN to any" rule to use the gateway "NordVPN". Outbound NAT is set to manual, with the top rule taking the LAN net as source and using the NORDVPN interface.
Any help is appreciated. As mentioned, the current configuration worked fine in 2.4.5, the latest release before upgrading to 2.5.0. I'm considering downgrading at this point.
I changed the Fallback Data Encryption Algorithm from AES-256-GCM to AES-256-CBC, and it's working fine.
Go to VPN -> OpenVPN -> Client and edit the setting "Fallback Data Encryption Algorithm".
NordVPN has posted updated documentation for pfSense 2.5.0, titled: pfSense 2.5 Setup with NordVPN.
As #NDK mentioned in their answer, the updated docs show that you need to change the Fallback Data Encryption Algorithm to AES-256-CBC.
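For reference, if I read the pfSense GUI right, that setting maps to OpenVPN 2.5's data-ciphers-fallback option; in a raw client config the equivalent would be something like:

data-ciphers AES-256-GCM:AES-256-CBC
data-ciphers-fallback AES-256-CBC

The fallback cipher is what gets used when the server does not negotiate a data cipher, which matches the 'auth'/'link-mtu' inconsistency warnings in the log above.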

Upstream closing down connections for uwsgi, Flask and Nginx stack

I am trying to run a basic Flask app using Nginx 1.14.0 on Ubuntu Server 18.04.
The app itself runs fine in the test environment, but I am now trying to deploy it with uWSGI and nginx, and I am getting either the default nginx landing page or a 502 Bad Gateway.
I removed the nginx default config from /etc/nginx/sites-available and deleted the symlink from /etc/nginx/sites-enabled.
I set up the config for my site as below in /etc/nginx/sites-available.
What am I missing in terms of config to make nginx redirect to my site?
server {
    listen 80;
    server_name www.myserver.com myserver.com;
    root /srv/server/myserver/;
    index index.html;

    location /static {
        alias /srv/server/myserver/static;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/srv/server/myserver/myserver.sock;
        uwsgi_read_timeout 30;
        uwsgi_connect_timeout 30;
    }
}
I created the symlink with sudo ln -s /etc/nginx/sites-available/myserver /etc/nginx/sites-enabled.
/srv/server is owned by www-data via sudo chown -R www-data:www-data /srv/server,
and this is myserver.ini:
[uwsgi]
http = 0.0.0.0:80
harakiri = 30
module = wsgi:app
master = true
processes = 5
binary-path = /srv/server/myserver/venv/bin/uwsgi
virtualenv = /srv/server/myserver/myserverenv
module = myserver:app
uid = www-data
gid = www-data
socket = myserver.sock
chmod-socket = 0775
vacuum = true
die-on-term = true
myserver.service
[Unit]
Description=uWSGI instance for myserver
[Service]
User=www-data
Group=www-data
After=network.target
WorkingDirectory=/srv/server/myserver
Environment="PATH=/srv/server/myserver/myserverenv/bin"
ExecStart=/srv/server/myserver/myserverenv/bin/uwsgi --ini myserver.ini
[Install]
WantedBy=multi-user.target
As this is on my local machine, I added the line below to /etc/hosts to access the site via FQDN in the browser while I test, and I have allowed HTTP and HTTPS with ufw.
127.0.0.1 www.myserver.com myserver.com
I have stopped, started, restarted, etc. via sudo systemctl restart nginx.
Error logs from /var/log/nginx/error.log:
2020/04/17 15:42:24 [error] 26747#26747: *1 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: www.myserver.com, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/srv/server/myserver/myserver.sock:", host: "www.myserver.com"
EDIT:
I tried restarting uwsgi and got the below error when running either as www-data or via sudo:
3therk1ll@3therk1ll:/var/log/nginx$ sudo -u www-data systemctl status uwsgi
● uwsgi.service - uWSGI instance for myserver
Loaded: loaded (/etc/systemd/system/uwsgi.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-04-17 16:30:42 BST; 5s ago
Process: 27147 ExecStart=/srv/server/myserver/myserverenv/bin/uwsgi --ini myserver.ini (code=exited, status=1/FAILURE)
Main PID: 27147 (code=exited, status=1/FAILURE)
3therk1ll@3therk1ll:/var/log/nginx$ sudo systemctl status uwsgi
● uwsgi.service - uWSGI instance for myserver
Loaded: loaded (/etc/systemd/system/uwsgi.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-04-17 16:30:42 BST; 1min 10s ago
Process: 27147 ExecStart=/srv/server/myserver/myserverenv/bin/uwsgi --ini myserver.ini (code=exited, status=1/FAILURE)
Main PID: 27147 (code=exited, status=1/FAILURE)
Apr 17 16:30:42 3therk1ll uwsgi[27147]: dropping root privileges as early as possible
Apr 17 16:30:42 3therk1ll uwsgi[27147]: your processes number limit is 7645
Apr 17 16:30:42 3therk1ll uwsgi[27147]: your memory page size is 4096 bytes
Apr 17 16:30:42 3therk1ll uwsgi[27147]: detected max file descriptor number: 1024
Apr 17 16:30:42 3therk1ll uwsgi[27147]: lock engine: pthread robust mutexes
Apr 17 16:30:42 3therk1ll uwsgi[27147]: thunder lock: disabled (you can enable it with --thunder-lock)
Apr 17 16:30:42 3therk1ll uwsgi[27147]: error removing unix socket, unlink(): Permission denied [core/socket.c line 198]
Apr 17 16:30:42 3therk1ll uwsgi[27147]: bind(): Address already in use [core/socket.c line 230]
Apr 17 16:30:42 3therk1ll systemd[1]: uwsgi.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 16:30:42 3therk1ll systemd[1]: uwsgi.service: Failed with result 'exit-code'.
Both nginx and uWSGI try to bind port 80, so change uwsgi's port to a different value, or just delete the http = 0.0.0.0:80 line from the uwsgi config, since nginx talks to uwsgi over the unix socket.
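For example, a trimmed myserver.ini along these lines (values kept from the question; the duplicate module line is dropped as well):

[uwsgi]
# http = 0.0.0.0:80 removed: nginx owns port 80 and talks to uwsgi via the socket
module = myserver:app
master = true
processes = 5
harakiri = 30
virtualenv = /srv/server/myserver/myserverenv
uid = www-data
gid = www-data
socket = myserver.sock
chmod-socket = 0775
vacuum = true
die-on-term = true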

CentOS 7 - NGINX - DNS Load Balance

I'm working on building a DNS load-balancing service on CentOS 7 using NGINX.
I had this working on Ubuntu but started getting spotty results and wanted to move to CentOS.
The problem I am running into is that something has port 53 tied up, and I can't figure out what.
This makes sense because Ubuntu had the same problem, but there it was an easy fix: just turn off the service holding port 53.
I've been digging and googling my bum off but can't seem to find the smoking gun.
What service is holding port 53 by default on CentOS?
Any help is much appreciated. Thank you.
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/nginx.service.d
└─nginx.conf
Active: failed (Result: exit-code) since Wed 2019-12-18 16:11:02 EST; 13min ago
Process: 1863 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 1861 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: Starting The nginx HTTP and reverse proxy server...
Dec 18 16:11:02 dnsload.dutil.com nginx[1863]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Dec 18 16:11:02 dnsload.dutil.com nginx[1863]: nginx: [emerg] bind() to 0.0.0.0:53 failed (13: Permission denied)
Dec 18 16:11:02 dnsload.dutil.com nginx[1863]: nginx: configuration file /etc/nginx/nginx.conf test failed
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: nginx.service: Control process exited, code=exited status=1
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: nginx.service: Failed with result 'exit-code'.
Dec 18 16:11:02 dnsload.dutil.com systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
stream {
    upstream dns_servers {
        least_conn;
        zone dns_mem 64k;
        server 192.168.100.240:53 fail_timeout=60s;
        server 192.168.100.241:53 fail_timeout=60s;
        server 192.168.100.239:53 fail_timeout=60s;
    }

    server {
        listen 53 udp;
        listen 53; # tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log debug;
        proxy_responses 1;
        proxy_timeout 1s;
    }
}
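To see what is actually bound to port 53, standard tooling works (generic commands, not specific to this box):

sudo ss -tulpn | grep ':53'  # shows the owning PID/program for TCP and UDP listeners
sudo lsof -i :53             # alternative, where lsof is installed

Note that the error above is bind() failing with (13: Permission denied) rather than (98: Address already in use); on CentOS that usually points at SELinux denying nginx the port rather than another service holding it.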
PowerDNS dnsdist
https://dnsdist.org/
I found this to be an amazing solution to DNS load balancing!

502 Bad Gateway and failed to read PID from file /run/nginx.pid: Invalid argument using nginx and gunicorn

I already deployed nginx and gunicorn on my CentOS 7 server, but I get a 502 Bad Gateway error message. I'm using nginx/1.12.2. I already checked the status of both gunicorn and nginx.
gunicorn status
● deepagi.service - Gunicorn instance to serve deepagi
Loaded: loaded (/etc/systemd/system/deepagi.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2017-12-02 10:49:30 UTC; 41min ago
Main PID: 1829 (gunicorn)
CGroup: /system.slice/deepagi.service
├─1829 /root/deepagi/deepagienv/bin/python2 /root/deepagi/deepagienv/bin/gunicorn --workers 3 --bind unix:deepagi.sock -m 007 wsgi
├─1834 /root/deepagi/deepagienv/bin/python2 /root/deepagi/deepagienv/bin/gunicorn --workers 3 --bind unix:deepagi.sock -m 007 wsgi
├─1839 /root/deepagi/deepagienv/bin/python2 /root/deepagi/deepagienv/bin/gunicorn --workers 3 --bind unix:deepagi.sock -m 007 wsgi
└─1840 /root/deepagi/deepagienv/bin/python2 /root/deepagi/deepagienv/bin/gunicorn --workers 3 --bind unix:deepagi.sock -m 007 wsgi
Dec 02 10:49:30 DeepAGI systemd[1]: Started Gunicorn instance to serve deepagi.
Dec 02 10:49:30 DeepAGI systemd[1]: Starting Gunicorn instance to serve deepagi...
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1829] [INFO] Starting gunicorn 19.7.1
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1829] [INFO] Listening at: unix:deepagi.sock (1829)
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1829] [INFO] Using worker: sync
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1834] [INFO] Booting worker with pid: 1834
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1839] [INFO] Booting worker with pid: 1839
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1840] [INFO] Booting worker with pid: 1840
nginx status
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2017-12-02 11:16:18 UTC; 14min ago
Main PID: 2317 (nginx)
CGroup: /system.slice/nginx.service
├─2317 nginx: master process /usr/sbin/nginx
└─2318 nginx: worker process
Dec 02 11:16:18 DeepAGI systemd[1]: Starting The nginx HTTP and reverse proxy server...
Dec 02 11:16:18 DeepAGI nginx[2312]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Dec 02 11:16:18 DeepAGI nginx[2312]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Dec 02 11:16:18 DeepAGI systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
Dec 02 11:16:18 DeepAGI systemd[1]: Started The nginx HTTP and reverse proxy server.
But in the nginx status I see this error message:
Dec 02 11:16:18 DeepAGI systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
How do I solve this?
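For comparison, a typical nginx server block for a gunicorn unix socket looks roughly like this (the absolute socket path is a guess: gunicorn was bound to unix:deepagi.sock relative to its working directory):

server {
    listen 80;
    server_name example.com; # placeholder

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://unix:/root/deepagi/deepagi.sock;
    }
}

Note that the nginx worker usually cannot traverse /root, so a socket under /run or /var/run tends to avoid permission-related 502s.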
