Cannot access stub_status page with metrics from a remote machine - nginx

I have turned on stub_status on the NGINX server, like this.
In nginx.conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    location /status {
        stub_status on;
    }
}
And in status.conf:
server {
    listen xxxxx:8080;
    server_name _;

    location /status {
        stub_status;
    }
}
This page is accessible from the NGINX server itself:
# curl xxx.xxx.xxx:8080/status
Active connections: 5
server accepts handled requests
3914210 3914210 9979189
Reading: 0 Writing: 1 Waiting: 4
But it is not accessible from other remote machines:
~ ❯ curl -vvv xxx.xxx.xxx:8080/status
* Trying xxx.xxx.xxx:8080...
* connect to xxx.xxx.xxx port 8080 failed: Connection refused
* Failed to connect to xenoss.io port 8080 after 76 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to xenoss.io port 8080 after 76 ms: Connection refused
I have tested it from a couple of different remote machines, and none of them managed to connect.
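Since the local curl succeeds while every remote curl fails with "Connection refused", the TCP connection is most likely never reaching nginx: either the listen directive binds an address that is not reachable from outside (e.g. a loopback or internal address), or a host firewall is rejecting port 8080. A minimal sketch of the two checks, assuming a firewalld-based host (adjust the firewall commands to your distro):

# Confirm which address nginx is bound to on 8080
ss -tlnp | grep 8080

# If firewalld is running, open the port and reload
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload

If ss shows 127.0.0.1:8080 or another internal address, changing the directive to listen 8080; (all IPv4 addresses) would make the page reachable externally.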

Related

Nginx Certbot - certificate change, server goes on 400

I had a previous server configuration running under /etc/nginx/sites-available/mysite.conf:
# mysite.conf
# the upstream component nginx needs to connect to
upstream django_photomanager {
    server unix:///path_to/mysite.sock;
    #server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 443 ssl;

    # timeout settings
    proxy_read_timeout 3000;
    proxy_connect_timeout 3000;
    proxy_send_timeout 3000;

    # the domain name it will serve for
    server_name my_old_domain.com; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 250M; # adjust to taste

    # Django media
    location /media {
        alias path_to/src/media; # your Django project's media files - amend as required
    }
    location /static {
        alias path_to/src/static-collected;
    }
    location = /favicon.ico {
        log_not_found off;
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django_mysite;
        include /path_to/src/uwsgi_params; # the uwsgi_params file you installed
        uwsgi_read_timeout 3000;
    }

    ssl_certificate /etc/letsencrypt/live/my_old_site.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my_old_site.com/privkey.pem; # managed by Certbot
}

server {
    if ($host = my_old_site.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name my_old_site.com;
    return 302 https://$host$request_uri;
}
Now I changed the domain in the conf file and ran the certbot command:
sudo certbot certonly --nginx
The site conf therefore changed to:
# mysite.conf
# the upstream component nginx needs to connect to
upstream django_photomanager {
    server unix:///path_to/mysite.sock;
    #server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 443 ssl;

    # timeout settings
    proxy_read_timeout 3000;
    proxy_connect_timeout 3000;
    proxy_send_timeout 3000;

    # the domain name it will serve for
    server_name my_NEW_domain.com; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 250M; # adjust to taste

    # Django media
    location /media {
        alias path_to/src/media; # your Django project's media files - amend as required
    }
    location /static {
        alias path_to/src/static-collected;
    }
    location = /favicon.ico {
        log_not_found off;
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django_mysite;
        include /path_to/src/uwsgi_params; # the uwsgi_params file you installed
        uwsgi_read_timeout 3000;
    }

    ssl_certificate /etc/letsencrypt/live/my_NEW_site.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my_NEW_site.com/privkey.pem; # managed by Certbot
}

server {
    if ($host = my_NEW_site.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name my_NEW_site.com;
    return 302 https://$host$request_uri;
}
Basically, I just changed the server_name to point to the new domain for which I obtained the certificate.
Now when I reload the server (I also tried rebooting the machine) and try to access my_NEW_site.com, I get the following errors in /var/log/nginx/error.log:
2022/11/17 21:36:59 [info] 33146#33146: *1600 peer closed connection in SSL handshake (104: Unknown error) while SSL handshaking, client: 164.**.***.***, server: 0.0.0.0:443
2022/11/17 21:36:59 [info] 33146#33146: *1601 SSL_do_handshake() failed (SSL: error:0A0000C1:SSL routines::no shared cipher) while SSL handshaking, client: 164.********, server: 0.0.0.0:443
2022/11/17 21:36:59 [crit] 33146#33146: *1602 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 164.********, server: 0.0.0.0:443
I tried to reinstall the certificate, but the server still does not work correctly.
Any suggestions?
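One hedged guess worth ruling out: certbot certonly stores the new certificate under its own lineage name in /etc/letsencrypt/live/, which may not be the exact path now referenced by ssl_certificate, and handshake errors like "no shared cipher" and "bad key share" are commonly seen when nginx loads a certificate and key that do not form a valid pair. Two quick checks (only the hostname is taken from the question; nothing else is assumed):

# List the lineages certbot actually manages and their file paths
sudo certbot certificates

# Inspect the certificate nginx presents on 443
openssl s_client -connect my_NEW_site.com:443 -servername my_NEW_site.com 2>/dev/null | openssl x509 -noout -subject -dates

If the subject still shows the old domain, or certbot lists the new certificate under a different path, pointing ssl_certificate and ssl_certificate_key at the listed path and reloading nginx should clear the handshake errors.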

NGINX to redirect non http (raw socket) and http(web browser) traffic

The server receives both non-HTTP requests from raw sockets and HTTP requests from web browsers via port 443. The idea is to inspect each request and pass the non-HTTP requests to port 8086 and the HTTP requests to port 81.
This is my idea of how to implement it:
http {
    ...
    server {
        listen 80;
        # ... Redirect to port 443
    }

    # HTTPS server
    server {
        listen 443 ssl;
        server_name cims2.crysberg.com;

        ssl_certificate cert_chain.crt;
        ssl_certificate_key private.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # The raw socket doesn't have a user agent
        if ($http_user_agent = "") {
            ####################################
            ### redirect location to /rawSocket
            ####################################
        }

        location / {
            # pass the request to the apache server
            proxy_pass http://127.0.0.1:81;
        }

        location /rawSocket {
            proxy_pass http://127.0.0.1:8068;
        }
    }
}
My questions are:
1. Do I need to listen for the raw socket communication using a stream {} block?
2. If the answer to question 1 is yes, can I have both a stream {} block listening for the raw socket on 443 and a server block listening for HTTP requests on 443?
3. Is my idea of the implementation correct? How can I redirect raw socket requests to the /rawSocket location? I have tried "rewrite ^(.*) $1/rawSocket;" and tested it using "openssl s_client -connect xxx.com:443", but it doesn't work because the raw socket doesn't have the location/header.
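For what it's worth, a sketch of one way this is commonly done: since both kinds of clients speak TLS on 443, a stream {} block with ssl_preread can route on the ALPN list in the ClientHello (browsers advertise h2 or http/1.1; a bare TLS client usually advertises nothing). Assumptions: nginx 1.13.10+ built with the stream ssl_preread module, raw-socket clients that send no ALPN, and an internal port 8443 that is my invention. Note that the stream block forwards TLS without terminating it, so the raw-socket backend must handle TLS itself:

stream {
    # Route on the ALPN protocols from the TLS ClientHello
    map $ssl_preread_alpn_protocols $backend {
        ~\bh2\b        127.0.0.1:8443;  # HTTP clients
        ~\bhttp/1\.1\b 127.0.0.1:8443;
        default        127.0.0.1:8086;  # no/unknown ALPN -> raw socket
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;  # TLS is forwarded, not terminated, here
    }
}

http {
    server {
        listen 127.0.0.1:8443 ssl;  # internal HTTPS listener (hypothetical port)
        server_name cims2.crysberg.com;
        ssl_certificate cert_chain.crt;
        ssl_certificate_key private.key;

        location / {
            proxy_pass http://127.0.0.1:81;  # apache, as in the question
        }
    }
}

This also answers questions 1 and 2: a stream block is the natural place for this, and it cannot share 443 with an http server block on the same address, which is why the HTTP traffic re-enters nginx through the internal listener.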

nginx vhost a record

I've been using nginx with vhosts and currently have 5 config files (addresses are given as examples), in this exact order:
1. default
2. my website (www.mywebsite.com)
3. the address given by my web hosting service (myvps.webhoster.com)
4. the IPv4 address (e.g. 100.0.0.1)
5. the IPv6 address (e.g. 2001::1)
The DNS records are:
www.mywebsite.com A record --> 100.0.0.1
www.mywebsite.com AAAA record --> 2001::1
When I access my website using IPv4, I land on the vhost config for 100.0.0.1;
when I access my website using IPv6, I land on the default config;
when I access my IPv6 address directly, I also land on the default config.
Vhost files:
# default file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    ...
}

# 2nd file:
server {
    listen [::]:80;
    server_name mywebsite.com www.mywebsite.com;
    ...
}

# 4th file
server {
    listen 80;
    server_name 100.0.0.1;
    ...
}

# 5th file
server {
    listen [::]:80 ipv6only=on;
    server_name 2001::1;
    ...
}
I don't get why it goes to the default config...
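nginx picks a server block in two steps: first by the listen socket the connection arrived on, then by matching the Host header against server_name among the blocks bound to that socket; only when nothing matches does the default_server for that socket win. Notably, the 2nd file listens only on [::]:80, so IPv4 connections can never reach it no matter what the Host header says. A hedged sketch of the usual arrangement, letting the named vhost answer on both address families so matching no longer depends on which file binds which socket:

# 2nd file: listen on both IPv4 and IPv6 for the site name
server {
    listen 80;
    listen [::]:80;
    server_name mywebsite.com www.mywebsite.com;
    ...
}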

server_names_hash_bucket_size reported on port 80 not on port 443

For my nginx conf file, which looks like this:
server {
    ...
    listen 80;
    name_server www.mydomain.com;
    ...
}

server {
    ...
    listen 443;
    server_name phpmyadmin.mydomain.com;
}
I got the error message: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
By commenting out the 1st server block, nginx configtests the file successfully. So why is it complaining about www.mydomain.com when phpmyadmin.mydomain.com is longer?!
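When this error is genuine (many or long server names), the usual remedy is to raise the bucket size in the http block to the next power of two; a minimal sketch:

http {
    # Must be large enough to hold the longest server_name;
    # typical values are 32, 64, or 128
    server_names_hash_bucket_size 64;
    ...
}

As an aside grounded only in the config shown above: the first block spells the directive name_server rather than server_name, so it may be worth checking whether the real file contains the same transposition.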

Does the nginx "upstream" directive have a port setting?

I use upstream and proxy for load balancing.
The directive proxy_pass http://upstream_name uses the default port, which is 80.
However, if the upstream server does not listen on this port, then the request fails.
How do I specify an alternate port?
My configuration:
http {
    #...
    upstream myups {
        server 192.168.1.100:6666;
        server 192.168.1.101:9999;
    }

    #....
    server {
        listen 81;
        #.....
        location ~ /myapp {
            proxy_pass http://myups:81/;
        }
    }
}
nginx -t:
[warn]: upstream "myups" may not have port 81 in /opt/nginx/conf/nginx.conf:78.
In your upstream configuration you have ports defined (6666 and 9999); those are the ports your backend servers need to listen on. The proxy_pass directive doesn't need an additional port configuration in this case. Your nginx listens on port 81, which you've defined in the listen directive.
Is this what you were trying to do?
http {
    #...
    upstream upstream_1 {
        server 192.168.1.100:6666;
        server 192.168.1.101:9999;
    }

    upstream upstream_2 {
        server 192.168.1.100:6661; # other backend port if you use port 81
        server 192.168.1.101:9991;
    }

    server {
        listen 80;
        #.....
        location ~ /myapp {
            proxy_pass http://upstream_1;
        }
    }

    server {
        listen 81;
        #.....
        location ~ /myapp {
            proxy_pass http://upstream_2;
        }
    }
}
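In short (a hedged restatement of the above, not new behavior): the port is a property of each server entry inside the upstream block, not of the upstream name itself, so when proxy_pass references an upstream by name, leave the port off:

upstream myups {
    server 192.168.1.100:6666;  # backend ports live on each server line
    server 192.168.1.101:9999;
}

location ~ /myapp {
    proxy_pass http://myups;  # no port here; nginx uses the ones above
}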
