NGINX -- show cached IPs for host names in config files? - nginx

[SHORT VERSION] I understand that when NGINX reads a config file, it does DNS lookups on the hostnames in it, then stores the results (the IP addresses those hostnames resolved to) somewhere and uses them until the next time it reads the config file (which, to my understanding, is not until the next restart by default). Is there a way to see the hostname-to-IP mapping that my currently running NGINX service has? I am aware there are ways to configure NGINX to account for changes in a hostname's IPs. I wish to see what my NGINX currently thinks it should resolve my hostname to.
[Elaborated] I'm using the DNS name of an AWS ELB (classic) as the hostname for a proxy_pass. Since both the public and private IPs of an AWS ELB can change (without notice), whatever IP(s) NGINX mapped for that hostname at the start of its service will become outdated when such a change happens. I believe that IP change just happened to me, as my NGINX service is forwarding traffic to a cluster different from what is specified in its config. Restarting the NGINX service fixes the problem. But, again, I'm looking to SEE where NGINX currently thinks it should send the traffic, not how to fix or prevent it (plenty of resources online for working with dynamic upstreams, which I evidently should have consumed prior to deploying my NGINX services...).
Thank you in advance!

All you need is the resolver option.
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
With this option nginx will look up DNS changes without restarting. But this only works for the proxy_pass directive; it won't work if you are using an upstream block. DNS re-resolution of upstream servers is supported only in the NGINX Plus version.
If you want to know the IP of the upstream server, there are a few ways:
- in the Plus version you can use the status module or the upstream_conf module, but the Plus version is not free
- use a third-party status module
- write the IP to the log with each request: just add the $upstream_addr variable to a custom access log. $upstream_addr contains the IP address of the backend server used for the current request. Example config:
log_format upstreamlog '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $bytes_sent $upstream_addr';

server {
    ...
    access_log /tmp/test_access_log upstreamlog;
    resolver ip.of.local.resolver;

    location / {
        set $pass dns_name.of.backend;
        proxy_pass http://$pass;
    }
}
Note: always use a variable in proxy_pass; the resolver is only used in that case. Example log output:
127.0.0.1 - - [10/Jan/2017:02:12:15 +0300] "GET / HTTP/1.1" 200 503 213.180.193.3:80
127.0.0.1 - - [10/Jan/2017:02:12:25 +0300] "GET / HTTP/1.1" 200 503 213.180.193.3:80
.... IP address changed, nginx wasn't restarted ...
127.0.0.1 - - [10/Jan/2017:02:13:55 +0300] "GET / HTTP/1.1" 200 503 93.158.134.3:80
127.0.0.1 - - [10/Jan/2017:02:13:59 +0300] "GET / HTTP/1.1" 200 503 93.158.134.3:80
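To make the note above concrete, here is a minimal sketch (the backend hostname and resolver IP are placeholders, not taken from the answer) contrasting the static form, which is resolved only once when the configuration is loaded, with the variable form, which goes through the resolver at request time:

# resolved once at startup/reload; the resolver directive is ignored here
location /static/ {
    proxy_pass http://backend.example.com;
}

# resolved at request time via the resolver, honoring its caching rules
location /dynamic/ {
    resolver 203.0.113.53 valid=30s;
    set $backend backend.example.com;
    proxy_pass http://$backend;
}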

Related

Nginx Reverse Proxy is directing requests to a .255 address after several days

I have nginx configured to perform two functions:
1 - To serve a set of HTML and JavaScript pages. The JavaScript pages iteratively access an API through the nginx proxy (see function 2).
2 - To get around CORS restrictions from the client/browser, nginx acts as a proxy to the remote API.
Everything works perfectly when nginx is first started and will run for several days to a couple of weeks. At some point, the client is no longer able to get data from two of the API endpoints. The ones that continue to work are retrieved using a GET. The ones that stop working use a POST method.
I looked in the nginx access.log and found:
192.168.100.7 - - [08/Dec/2020:23:01:24 +0000] "POST /example/developer_api/v1/companies/search HTTP/1.1" 499 0 "http://192.168.100.71/example-wdc/ExampleCompanies.html" "Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/538.1 (KHTML, like Gecko) tabprotosrv Safari/538.1"
An HTTP 499 error: client closed request. This is 30 seconds after the previous successful GET request. I believe this is the originating client closing the connection before nginx has received and returned data from the API.
I used wireshark on the nginx server to capture the traffic.
I found the following suspect packet:
104 6.716880257 192.168.100.71 XXX.XXX.XXX.255 TCP 66 42920 → 443 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 WS=128
I think it is weird that the nginx proxy is sending a TCP SYN request to a broadcast address. The TCP SYN retried several times without any response. This explains the 499 from the originating client since Nginx hasn't had a response within 30 seconds.
I had a theory that the IP address had changed on the remote API server, which then confused nginx about where to forward the requests. I added a resolver with a timeout to nginx. This hasn't improved the situation.
So, I am stumped as to where to look next - any ideas, rabbit holes or weird theories will be appreciated.
I have included the nginx config below.
server {
    charset UTF-8;
    listen 80;
    root /var/www/tableau-web-data-connectors/webroot/;

    location /copper/ {
        add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
        add_header 'Access-Control-Allow-Origin' '192.168.100.71';
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-C$
        add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
        resolver 192.168.100.10 valid=720s;
        proxy_pass https://api.example.com/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host api.prosperworks.com;
        proxy_redirect off;
    }
}
With a static hostname in proxy_pass, nginx resolves api.example.com (and any other domain name in the configuration) just once, on start or on receiving a HUP signal, and the resolver directive is not consulted again after that. Dynamic re-resolution of upstream server entries works only in the commercial Nginx Plus; in the open-source version the hostname has to be put in a variable for the resolver to be used at request time.
There is "POST /example/developer_api/v1/companies/search" with referer "http://192.168.100.71/example-wdc/ExampleCompanies.html".
The client opened "/example-wdc/ExampleCompanies.html", then clicked on the search form but didn't wait for the result and closed the page. That's how the 499 appeared in the access_log. This is a common situation.
Perhaps it's simply that nginx does not re-resolve api.example.com when that name changes its IP for some reason, so it keeps using the old IP, which stops working. With a static proxy_pass, resolution only happens again when nginx is restarted or reloaded.
An IP ending in .255 is not always a broadcast address. Depending on the subnet mask, it may be a valid host address.
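For what it's worth, here is a minimal sketch of how that /copper/ location could be rewritten so the resolver is actually consulted at request time (the resolver address, backend name and Host header are taken from the question's config; the valid= value, the rewrite and the omission of the CORS add_header lines are my assumptions):

location /copper/ {
    resolver 192.168.100.10 valid=720s;
    set $api_backend api.example.com;   # a variable forces runtime resolution
    # with a variable in proxy_pass the /copper/ prefix is no longer
    # stripped automatically, so strip it explicitly:
    rewrite ^/copper/(.*)$ /$1 break;
    proxy_pass https://$api_backend;
    proxy_set_header Host api.prosperworks.com;
    proxy_ssl_session_reuse off;
}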

reverse proxy with nginx ssl passthrough

I have several IIS web servers, each hosting multiple web applications.
They each have a public certificate.
Every IIS server has a unique IP.
All IIS servers are placed in the same DMZ.
I have set up an nginx system in another DMZ.
My goal is to have nginx handle all requests from the Internet to the IIS servers and JUST pass through all the SSL and certificate checking to IIS, so it works as it did before nginx. I don't want nginx to break open the certificates, offload them, etc.
Before I try to rumble with the nginx reverse proxy to get it done (since I'm not very familiar with nginx), my question would be: is this possible?
Believe me, I've googled time and time again and could not find something which answers my question(s).
Or maybe I'm too dumb to google correctly. I've searched for passthrough, reverse proxy and offloading.
So far I've gathered that nginx probably needs some extra modules. Since I have an "apt-get" installation, I don't even know how to add them.
Never mind, I found the solution:
Issue:
Several web servers, with various applications on each, are running behind a firewall and responding only on port 443.
The web servers have a wildcard certificate; they are IIS web servers (whoooho, very brave) and have a public IP address each.
It is requested that the web servers no longer be exposed to the Internet and be moved to a DMZ.
Since IPv4 addresses are in short supply these days, it is not possible to get more of them.
Nginx should only pass the requests through: no certificate break, decrypt or re-encrypt between web server and reverse proxy or whatsoever.
Solution:
All web servers should be moved to an internal DMZ.
A single nginx reverse proxy should handle all requests and map them to the web servers based on their DNS entries. This makes the public IPv4 addresses obsolete.
All web servers get a private IP.
A wildcard certificate is just fine to handle all the DNS aliases being forwarded.
Steps to be done:
1. A single nginx reverse proxy should be placed in the external DMZ.
2. Configure nginx:
- Install nginx on a fully patched Debian with apt-get install nginx. At this point
you'll get nginx version 1.14. Of course you may compile it yourself too.
3. If you have installed nginx the apt-get way, it is already built with the modules you will need later: ngx_stream_ssl_preread, ngx_stream_map and the stream core module. Don't worry, they are in the package; you may check with nginx -V.
4. External DNS configuration:
- All DNS requests from the Internet should point to the nginx host,
e.g. webserver1.domain.com --> nginx
webserver2.domain.com --> nginx
webserver3.domain.com --> nginx
5. Configure the nginx reverse proxy:
cd to /etc/nginx/modules-enabled
vi a filename of your choice (e.g. passtru.conf; on Debian only *.conf files in this directory get included)
Content of this file:
stream {
    # route by the server name the client sends in the TLS SNI extension
    map $ssl_preread_server_name $name {
        webserver01.domain.com webserver01_backend;
        webserver02.domain.com webserver02_backend;
    }

    upstream webserver01_backend {
        server 192.168.0.1:443; # or DNS name
    }

    upstream webserver02_backend {
        server 192.168.0.2:443; # or DNS name
    }

    log_format basic '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/access.log basic;
    error_log /var/log/nginx/error.log;

    server {
        listen 443;
        proxy_pass $name;   # pass all requests to the upstream selected above via $name
        ssl_preread on;
    }
}
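A side note on the map above: if a client sends an SNI name that is not listed, $name ends up empty and the proxy_pass fails. A hedged sketch of the same map with a default entry (which backend to fall back to is just an assumption):

map $ssl_preread_server_name $name {
    webserver01.domain.com webserver01_backend;
    webserver02.domain.com webserver02_backend;
    default                webserver01_backend;   # fallback for unknown or missing SNI
}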
6. Unlink the default virtual webserver
rm /etc/nginx/sites-enabled/default
7. Redirect all HTTP traffic to HTTPS:
Create a file: vi /etc/nginx/conf.d/redirect.conf
and add the following code:
server {
    listen 80;
    return 301 https://$host$request_uri;
}
Test the configuration: nginx -t
Reload nginx: systemctl reload nginx
Open up a browser and check /var/log/nginx/access.log while calling the web servers.
Finish

Nginx vs Apache proxy pass

I am trying to convert my Apache config to nginx. For Apache I have the following:
<VirtualHost *:443>
    ServerName loc.goout.net
    <Location />
        ProxyPass http://localhost:8080/ retry=0
        ProxyPreserveHost On
    </Location>
    <Location /i/>
        ProxyPass https://dev.goout.net/i/ retry=0
        ProxyPreserveHost Off
    </Location>
...
Like this, if I fetch:
https://loc.goout.net/i/user/606456_001_min.jpg
It correctly fetches content from:
https://dev.goout.net/i/user/606456_001_min.jpg
So for nginx I am trying this:
server {
    listen 443 ssl;
    server_name loc.goout.net;

    proxy_buffering off;
    proxy_ssl_session_reuse off;
    proxy_redirect off;
    proxy_set_header Host dev.goout.net;

    location /i/ {
        proxy_pass https://dev.goout.net:443;
    }
}
But when I fetch the content, I will always get 502.
In the nginx logs I see the following:
[error] 7#7: *5 no live upstreams while connecting to upstream, client: 127.0.0.1, server: loc.goout.net, request: "GET /i/user/606456_001_min.jpg HTTP/1.1", upstream: "https://dev.goout.net/i/user/606456_001_min.jpg", host: "loc.goout.net"
Note the link: https://dev.goout.net/i/user/606456_001_min.jpg
- which works correctly. It seems to me it still doesn't connect with SSL. I also tried to define the upstream section as:
upstream backend {
    server dev.goout.net:443;
}
But it had no effect.
Note the server is behind a Cloudflare gateway; I hope it is not preventing the correct connection, but I guess that wouldn't work in Apache either.
tl;dr: SNI is off by default in nginx, as per http://nginx.org/r/proxy_ssl_server_name, but is required by Cloudflare.
It's generally not the best idea to have home-made proxies on top of Cloudflare — it's supposed to be the other way around.
However, what you're omitting is the actual error message that results from making the request like curl -v localhost:3227/i/user/606456_001_min.jpg — of course it is TLS-related:
2018/07/07 23:18:39 [error] 33345#33345: *3 SSL_do_handshake() failed (SSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure) while SSL handshaking to upstream, client: 127.0.0.1, server: loc.goout.net, request: "GET /i/user/606456_001_min.jpg HTTP/1.1", upstream: "https://[2400:cb00:2048:1::6818:7303]:443/i/user/606456_001_min.jpg", host: "localhost:3227"
This is because nginx is not really intended to be used to steal someone else's sites via proxy_pass, so some features that Cloudflare requires are turned off by default in nginx for general use; specifically, it's SNI, the Server Name Indication extension to TLS, that's making the difference here.
As per http://nginx.org/r/proxy_ssl_server_name, putting an extra proxy_ssl_server_name on; to your exact configuration does fix the issue (I've tested this myself — it works) — note that this requires nginx 1.7.0 or newer.
+ proxy_ssl_server_name on; # SNI is off by default
Additionally, note that you'll also have to ensure that the domain name resolution gets updated at runtime, as by default it is only resolved when the configuration is loaded or reloaded. You can use the trick of using variables within your http://nginx.org/r/proxy_pass to make nginx re-resolve the host as appropriate, but this also requires the http://nginx.org/r/resolver directive to specify the server to use for DNS resolution at runtime. So, your MVP would then be:
location /i/ {
    resolver 1dot1dot1dot1.cloudflare-dns.com.;
    proxy_ssl_server_name on;  # SNI is off by default
    proxy_pass https://dev.goout.net:443$request_uri;
}
If you want to specify upstream servers by hostname instead of IP, then you must define a resolver directive for nginx to do DNS lookups.
Is dev.goout.net on a different, remote machine?
Where are your proxy_ssl_certificate and proxy_ssl_certificate_key directives? That connection won't just secure itself.

GCP: Network load balancer changes HTTP version from 1.1 to 1.0

I'm using two types of load balancers: an HTTP LB for the front end and a Network load balancer as an internal LB. I noticed GCP's Network load balancer (an L4 load balancer) changes the HTTP version from 1.1 to 1.0. Is my understanding correct? How can I change the Network LB's behavior? I don't think changing the version is good.
My Environment
User --> HTTP LB --> Server A --> Network LB --> Server B
Server A's log
1xx.xxx.xxx.xxx - - [15/May/2017:15:04:41 +0900] "GET /items HTTP/1.1" 200 260 "-" "-"
Server B's log
1xx.xxx.xxx.xxx - - [15/May/2017:15:04:41 +0900] "GET /items HTTP/1.0" 200 260 "-" "-"
Update 1
It might not be the GCP LB's behavior. I suspected our nginx proxy settings.
I put the following setting into the nginx conf, but it still did not work:
proxy_http_version 1.1;
Problem solved. The cause was our nginx setting.
We use an nginx proxy, and its default HTTP version for proxied requests is 1.0.
We added the following line, which fixed it:
proxy_http_version 1.1;
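For context, a minimal sketch of where that directive sits (the upstream name is a placeholder; the Connection header line is an extra assumption, only needed if you also want keepalive connections to the backend):

location / {
    proxy_http_version 1.1;          # the default for proxied requests is HTTP/1.0
    proxy_set_header Connection "";  # clear "close" so upstream keepalive can work
    proxy_pass http://internal-lb.example.internal;
}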

Error with IP and Nginx as reverse proxy

I configured my nginx as a simple reverse proxy.
I'm just using a basic setting:
location / {
    proxy_pass http://foo.dnsalias.net;
    proxy_pass_header Set-Cookie;
    proxy_pass_header P3P;
}
The problem is that after some time (a few days) the site behind nginx becomes inaccessible. Indeed, nginx tries to call a bad IP (the site behind nginx is at my home behind my box, and I'm using a dyn-dns name because my IP is not fixed). The dyn-dns name is always valid (I can reach my site directly), but for some obscure reason nginx gets stuck with the old address.
So, as said, nginx just gives me a 504 Gateway Time-out after some time. It looks like the error comes when my IP changes at home.
Here is a sample of the error log:
[error] ... upstream timed out (110: Connection timed out) while connecting to upstream, client: my.current.ip, server: myreverse.server.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://my.old.home.ip", host: "myreverse.server.com"
So do you know why nginx is using the IP instead of the domain name?
If the proxy_pass value doesn't contain variables, nginx will resolve domain names to IPs while loading the configuration and cache them until you restart/reload it. This is quite understandable from a performance point of view.
But in the case of a dynamic DNS record change, this may not be desired. So two options are available, depending on whether or not you have a commercial license.
Commercial version (Nginx+)
In this case, use an upstream block and specify which domain name needs to be resolved periodically, using a specific resolver. The records' TTL can be overridden with the valid=time parameter. The resolve parameter of the server directive forces the domain name to be resolved periodically.
http {
    resolver X.X.X.X valid=5s;

    upstream dynamic {
        server foo.dnsalias.net resolve;
    }

    server {
        server_name www.example.com;

        location / {
            proxy_pass http://dynamic;
            ...
        }
    }
}
This feature was added in Nginx+ 1.5.12.
Community version (Nginx)
In that case, you will also need a custom resolver, as in the previous solution. But to work around the lack of upstream re-resolution, you need to use a variable in your proxy_pass directive. That way nginx will use the resolver too, honoring the caching time specified with the valid parameter. For instance, you can use the domain name as a variable:
http {
    resolver X.X.X.X valid=5s;

    server {
        server_name www.example.com;
        set $dn "foo.dnsalias.net";

        location / {
            proxy_pass http://$dn;
            ...
        }
    }
}
Then, you will likely need to add a proxy_redirect directive to handle redirects.
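A hedged sketch of what that could look like (reusing the $dn variable and the www.example.com name from the example above; the exact mapping depends on what the backend sends in its Location headers):

location / {
    proxy_pass http://$dn;
    # rewrite redirects the backend issues against its own name
    # so they point back at this proxy instead
    proxy_redirect http://$dn/ http://www.example.com/;
}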
Maybe check this out http://forum.nginx.org/read.php?2,215830,215832#msg-215832
resolver 127.0.0.1;
set $backend "foo.example.com";
proxy_pass http://$backend;
In such a setup, the IP address of "foo.example.com" will be looked up
dynamically and the result will be cached for 5 minutes.
