Nginx fails to connect when upstream resolves to IPv6

I have a configuration in nginx that performs a proxy_pass to google-analytics.com. As you know, google-analytics.com sometimes resolves to IPv4 and at other times to IPv6, and when it resolves to IPv6, nginx fails with this error (I have obfuscated the real IP of the upstream):

connect() to [2a00:xxx:xxx:809::xxx]:443 failed (101: Network is unreachable) while connecting to upstream
upstream server temporarily disabled while connecting to upstream

Why does nginx fail when the upstream in proxy_pass resolves to IPv6?
server {
    server_name upstream.nmmapper.com;

    location /.well-known/acme-challenge/ {
        allow all;
        root /var/www/letsencrypt;
        try_files $uri =404;
        break;
    }

    location = /analytics.js {
        proxy_set_header Accept-Encoding "";
        proxy_pass https://www.google-analytics.com/analytics.js;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}

Try adding an IPv6 listen directive, listen [::]:80:

server {
    listen 80;
    listen [::]:80;
    server_name upstream.nmmapper.com;
    ...
}

For SSL:

listen 443 ssl;
listen [::]:443 ssl;

To always connect over IPv4 you need to add a resolver with ipv6=off. However, by default nginx resolves the proxy_pass hostname only once, at startup. Including an (empty) variable in the hostname forces nginx to resolve it at runtime with the specified resolver directive instead:
location / {
    resolver 1.1.1.1 ipv6=off valid=30s;
    set $empty "";
    proxy_pass https://example.com$empty;
}
Source: https://serverfault.com/a/1006465/242991.
In your case this should work:
location = /analytics.js {
    resolver 1.1.1.1 ipv6=off valid=30s;
    set $empty "";
    proxy_set_header Accept-Encoding "";
    proxy_pass https://www.google-analytics.com/analytics.js$empty;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}
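Note that once the proxy_pass hostname contains a variable, nginx skips its startup resolution entirely, so the resolver directive becomes mandatory: without it, requests will fail at runtime because nginx has no configured way to look the name up.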

Related

nginx: not matching the correct way

I have the following nginx configuration.

GIVES WRONG RESULTS
upstream webapp {
    server webapp:8000;
}
upstream db {
    server phppgadmin:80;
}

server {
    listen 80;
    server_name db.*;

    location / {
        proxy_pass http://db;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}

server {
    listen 80;

    location / {
        proxy_pass http://webapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    location /static {
        autoindex on;
        alias /staticfiles/;
    }

    location /media {
        autoindex on;
        alias /mediafiles/;
    }
}
The IP address of my PC is xx.xx.xx.xx. What I observed is that

db.xx.xx.xx.xx - shows the db upstream
and also xx.xx.xx.xx - shows the db upstream

GIVES CORRECT RESULTS

Whereas when I change the order, it behaves properly:
upstream webapp {
    server webapp:8000;
}
upstream db {
    server phppgadmin:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://webapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    location /static {
        autoindex on;
        alias /staticfiles/;
    }

    location /media {
        autoindex on;
        alias /mediafiles/;
    }
}

server {
    listen 80;
    server_name db.*;

    location / {
        proxy_pass http://db;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}
Now
db.xx.xx.xx.xx - shows the db upstream
and xx.xx.xx.xx - shows the webapp upstream
QUESTION

I am unable to understand how, in the first case, xx.xx.xx.xx is matched by server_name db.*; or why the second ordering shows the intended behaviour.
Note: of course, in my /etc/hosts I have set up

xx.xx.xx.xx app.xx.xx.xx.xx
xx.xx.xx.xx db.xx.xx.xx.xx
Nginx selects the server block by port (and IP, if given) and by the Host header. If there is no match, it uses the block where default_server is set. In your case there is no match by Host and there is no default_server either, so nginx simply picks the first block. Either add a server_name to the block with the webapp upstream or make it the default one:

listen 80 default_server;
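Applied to your first (misbehaving) ordering, a minimal sketch of the fix, leaving the upstream definitions unchanged:

upstream webapp {
    server webapp:8000;
}
upstream db {
    server phppgadmin:80;
}

server {
    listen 80;
    server_name db.*;    # only Hosts beginning with "db." land here

    location / {
        proxy_pass http://db;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}

server {
    listen 80 default_server;    # catches any Host that matches no server_name

    location / {
        proxy_pass http://webapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}

With default_server set explicitly, the order of the blocks no longer matters.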

nginx: [emerg] host not found in upstream with local IPs

I've read similar questions with the same error, but nothing matches my problem, because my upstream servers have local IPs.
The server is a Proxmox machine with several different VMs. One is the nginx reverse gateway/proxy; the others are VMs running several Docker containers.
I want to set up a fallback (backup) for one container.
The config of the nginx reverse gateway/proxy covering these machines is:
server {
    listen 80;
    server_name my-web.page;
    return 301 http://www.my-web.page$request_uri;
}

server {
    listen 80;
    listen [::]:80;
    server_name www.my-web.page;

    location / {
        return 301 https://www.my-web.page$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name my-web.page;
    return 301 https://www.my-web.page$request_uri;
    ssl_certificate /etc/ssl/my/my-web.page.chained.crt;
    ssl_certificate_key /etc/ssl/my/my-web.page.key.pem;
}

upstream backend {
    server 192.168.200.210:8030 max_fails=1 fail_timeout=600s;
    server 192.168.200.211:8031 backup;
}

server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/my/my-web.page.chained.crt;
    ssl_certificate_key /etc/ssl/my/my-web.page.key.pem;
    server_name www.my-web.page;

    location ~ ^/$ {
        # rewrite only the root page, other urls see next rule
        return 301 https://www.my-web-page-microsite.de/;
    }

    location / {
        resolver 127.0.0.1 valid=30s;
        # pass to backend-client, failover to second container for the next 5 minutes
        proxy_pass http://backend;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Server-Address $server_addr;
        proxy_ssl_verify off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
If something is wrong with my backend client servers, nginx won't start.
Isn't there a possibility to override the check when starting/restarting nginx?
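There is no switch to disable that startup check itself, but the runtime-resolution trick from the answer at the top of this page can be adapted: when the address lives in a variable, nginx resolves and validates it only at request time, so it starts regardless of backend state. A minimal sketch, assuming a single backend (backend.internal is a hypothetical stand-in for whichever host nginx cannot find; note this bypasses the upstream block, so the backup failover would need separate handling):

location / {
    resolver 127.0.0.1 valid=30s;
    # resolved per request instead of at startup, so nginx starts
    # even if the backend is currently unknown or unreachable
    set $backend http://backend.internal:8030;
    proxy_pass $backend;
}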

How to get real client ip when using upstreams on one server?

I have a VPS running nginx; with ngx_stream_ssl_preread_module I have made SSL and non-SSL protocols work on the same port.
When I checked the access.log, I found a lot of lines starting with 127.0.0.1. Obviously this is not a real client IP.
I tried to modify my nginx.conf with proxy_set_header, real_ip_header, set_real_ip_from 127.0.0.1, etc., but they have no effect.
This is my original stream configuration in nginx.conf:
stream {
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
    }

    map $ssl_preread_protocol $upstream {
        default shadowsocks;
        "TLSv1.1" https;
        "TLSv1.2" https;
        "TLSv1.3" https;
    }

    upstream shadowsocks {
        server 127.0.0.1:7890;
    }

    upstream https {
        server 127.0.0.1:8888;
    }
}
I would try setting the proxy headers as follows:

server {
    listen 443 ssl default_server;
    ssl_preread on;
    proxy_redirect off;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Host $remote_addr;
        proxy_pass $upstream;
    }
}
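A caveat on the answer above: proxy_set_header and location are http-context directives, while ssl_preread exists only in the stream context, so that block will not pass nginx -t as written. A stream proxy forwards raw TCP and cannot inject HTTP headers; the usual way to preserve the client address through one is the PROXY protocol. A sketch, under the assumption that the https upstream on 8888 is a local nginx http server:

stream {
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
        proxy_protocol on;            # prepend client address info to the connection
    }
    # map and upstream blocks as in the original config
}

http {
    server {
        listen 127.0.0.1:8888 proxy_protocol;   # backend must expect the PROXY header
        set_real_ip_from 127.0.0.1;
        real_ip_header proxy_protocol;          # restore the real client IP for logs
        ...
    }
}

The shadowsocks backend on 7890 would likewise have to understand the PROXY protocol; if it does not, it needs its own stream server block without proxy_protocol on.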

Nginx config to access same IP but different port?

I have one virtual IP (keepalived IP z.z.z.z). I am creating two different webpages and want to access them using this virtual IP via nginx. I read that I can achieve this with different ports, but I do not want to show the port. So what I want is: when I hit x.x.x.x it should ask for a username and password, and when I enter them it should take me to the respective webpage.
My current config file for the 1st webpage:
upstream kibana {
    server x.x.x.x:30001;
    server y.y.y.y:30001;
}

server {
    listen 80;
    listen 443 ssl;
    server_name z.z.z.z;

    location / {
        auth_basic "protect kibana";
        auth_basic_user_file /etc/nginx/htpasswd.user;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://kibana;
    }
}
My edited conf file:
upstream kibana {
    server x.x.x.x:30001;
    server y.y.y.y:30001;
}

upstream kibana2 {
    server x.x.x.x:30002;
    server y.y.y.y:30002;
}

server {
    listen 80;
    listen 443 ssl;
    server_name z.z.z.z;

    location / {
        auth_basic "protect kibana";
        auth_basic_user_file /etc/nginx/htpasswd.user;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://kibana;
    }
}
As I am using another upstream, kibana2, how do I need to add another server stanza?

Regards
VG
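One way, sketched under the assumption that routing by URL path is acceptable (with a single IP and no hostnames, the path is the main thing left to dispatch on), is a second location in the same server block; /app2/ here is a hypothetical prefix:

server {
    listen 80;
    listen 443 ssl;
    server_name z.z.z.z;

    location / {
        auth_basic "protect kibana";
        auth_basic_user_file /etc/nginx/htpasswd.user;
        proxy_set_header Host $host;
        proxy_pass http://kibana;
    }

    location /app2/ {
        auth_basic "protect kibana2";
        auth_basic_user_file /etc/nginx/htpasswd.user;
        proxy_set_header Host $host;
        proxy_pass http://kibana2/;   # trailing slash strips /app2/ before proxying
    }
}

Note that the second webpage must tolerate being served from a subpath (Kibana, for instance, needs its base path configured accordingly).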

Setting up SSL on a load balancer

I currently have a load balancer with the NGINX setup:
upstream myapp1 {
    least_conn;
    server 192.168.0.20;
    server 192.168.0.30;
}

server {
    listen 80;

    location / {
        proxy_pass http://myapp1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    ...
}
and on the clusters (192.168.0.20, 192.168.0.30) the NGINX setup:

server {
    listen 80;
    root /var/www/website.co/public_html;
    index index.php index.html index.htm;
    server_name website.co www.website.co;
    include /etc/nginx/commonStuff.conf; # php settings etc.
}
This works perfectly for http connections.
I now want the server to work over an https connection for one domain (website.co). So I thought of adding this to the load balancer's NGINX settings:
server {
    listen 80;
    listen 443 ssl;
    server_name website.co www.website.co;
    ssl on;
    ssl_certificate /NAS/ssl/cert_chain_website.crt;
    ssl_certificate_key /NAS/ssl/website.key;

    location / {
        proxy_pass https://myapp1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
and change the listening port in the clusters' NGINX settings to 443, keeping everything else the same.
Now if I connect to http://website.co, or in fact any other virtual domain on my server, it returns

400 Bad Request
the plain HTTP request was sent to HTTPS port

So this suggests an issue with the redirect.
If I connect to https://website.co it returns:

404 Not Found

What am I doing wrong?
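A likely explanation, offered as a sketch rather than a confirmed diagnosis: ssl on; applies SSL to every listener in that block, including listen 80, which is exactly what produces the "400 plain HTTP request was sent to HTTPS port" response; and the clusters have no certificates configured, so proxying to them with https:// cannot succeed either. Terminating SSL at the load balancer and proxying plain HTTP to the unchanged clusters sidesteps both problems:

server {
    listen 80;
    listen 443 ssl;                # no "ssl on;", so port 80 still speaks plain HTTP
    server_name website.co www.website.co;

    ssl_certificate     /NAS/ssl/cert_chain_website.crt;
    ssl_certificate_key /NAS/ssl/website.key;

    location / {
        proxy_pass http://myapp1;  # clusters keep listening on port 80
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;   # tell backends the original scheme
    }
}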
