How can I ensure nginx resolves variable DNS in proxy_pass?

I pass some requests to this location (the configuration below is simplified for clarity). ONBOARDING_URL is set in the environment and substituted into the template with a URL when the server starts. That URL is resolved to an IP only once; it needs to be re-resolved for each request once the TTL has expired. Initially, the configuration looked like this:
Initial Configuration Template
server {
    listen 443 ssl;
    server_name localhost;
    resolver ${AWS_DNS};

    location /onboarding {
        proxy_cache onboarding;
        proxy_cache_valid 200 10m;
        proxy_cache_key $cacheKey;
        proxy_set_header Host $proxy_host;
        proxy_pass ${ONBOARDING_URL}/api-users;
    }
}
I discovered the name was resolved only once: changes to the IP behind the DNS record had no effect, and requests were still routed to the initially resolved IP.
So, following guidance in the proxy_pass documentation and from other questions and sources (below), we modified the configuration.
https://serverfault.com/a/593003/310436
https://stackoverflow.com/a/26957754/2081835
https://forum.nginx.org/read.php?2,215830,215832#msg-215832
from: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
Parameter value can contain variables. In this case, if an address is specified as a domain name, the name is searched among the described server groups, and, if not found, is determined using a resolver.
The new configuration was intended to replace the template variable, populated from the environment at startup, with an nginx variable that should force resolution at request time. However, it doesn't seem to: once an IP is resolved, it is used for the life of the process. Any ideas how to get around this? It also seems complex to test, which further complicates the fix.
New Configuration Template
server {
    listen 443 ssl;
    server_name localhost;
    resolver ${AWS_DNS};
    set $onboardingUrl ${ONBOARDING_URL};

    location ~ ^/onboarding/?(.*)$ {
        proxy_cache onboarding;
        proxy_cache_valid 200 10m;
        proxy_cache_key $cacheKey;
        proxy_set_header Host $proxy_host;
        proxy_pass $onboardingUrl/api-users/$1;
    }
}
Yet still, for the life of the process only the initially resolved IP is used.
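For what it's worth, two things are worth checking with this kind of setup: the resolver caches answers according to the record's TTL unless its valid= parameter overrides that, and the variable form of proxy_pass only hits the resolver when the hostname does not match any upstream{} group. A minimal sketch combining these points (10.0.0.2 and valid=10s are placeholders, not values from the question):

server {
    listen 443 ssl;
    server_name localhost;

    # Placeholder resolver address; valid=10s forces a re-query every
    # 10 seconds regardless of the DNS record's TTL.
    resolver 10.0.0.2 valid=10s;

    set $onboardingUrl ${ONBOARDING_URL};

    location ~ ^/onboarding/?(.*)$ {
        # proxy_pass contains a variable, so the name is resolved at
        # request time via the resolver above, provided it does not
        # match an upstream{} group defined elsewhere in the config.
        proxy_pass $onboardingUrl/api-users/$1;
    }
}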

Related

Icecast2 running under nginx not able to connect

Let me start by saying that I've looked all over the place for an answer to this problem, and it seems like either nobody else has run into it or nobody is doing this. I recently installed icecast2 on my Debian server. I'm completely able to broadcast to my server from my local network by connecting to its local IP on port 8000, and I can hear the stream over the internet on radio.example.com since I proxy it with nginx; so far, no problems at all. The problem lies in broadcasting to the domain I set up in nginx, stream.example.com.
I have two theories: one is that the proxy is not passing the source IP to icecast, so it thinks the stream is being broadcast from 127.0.0.1; the other is that nginx is doing something strange with the data stream and thus not delivering the correct format to icecast.
Any thoughts? Thanks in advance!
Here is the nginx config
server {
    listen 80;
    listen [::]:80;
    server_name radio.example.com;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_pass http://127.0.0.1:8000/radio;
        subs_filter_types application/xspf+xml audio/x-mpegurl audio/x-vclt text/css text/html text/xml;
        subs_filter ':80/' '/' gi;
        subs_filter '#localhost' '#stream.example.com' gi;
        subs_filter 'localhost' $host gi;
        subs_filter 'Mount Point ' $host gi;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name stream.example.com;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_pass http://localhost:8000/;
        subs_filter_types application/xspf+xml audio/x-mpegurl audio/x-vclt text/css text/html text/xml;
        subs_filter ':8000/' ':80/' gi;
        subs_filter '#localhost' '#stream.example.com' gi;
        subs_filter 'localhost' $host gi;
        subs_filter 'Mount Point ' $host gi;
    }
}
And this is what I get on icecast error.log
[2018-08-10 14:15:45] INFO source/get_next_buffer End of Stream /radio
[2018-08-10 14:15:45] INFO source/source_shutdown Source from 127.0.0.1 at "/radioitavya" exiting
Not sure how much of this is directly relevant to the OP's question, but here are a few snippets from my config.
These are the basics of my block to serve streams to clients over SSL on port 443.
In the first location block, any requests with a URI other than /ogg, /128, /192 or /320 are rewritten, to prevent clients accessing any output from the Icecast server other than the streams themselves.
server {
    listen 443 ssl http2;
    server_name stream.example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        rewrite ~*(ogg) https://stream.example.com/ogg last;
        rewrite ~*([0-1][0-5]\d) https://stream.example.com/128 last;
        rewrite ~*(?|([1][6-9]\d)|([2]\d\d)) https://stream.example.com/192 last;
        rewrite ~*([3-9]\d\d) https://stream.example.com/320 break;
        return https://stream.example.com/320;
    }

    location ~ ^/(ogg|128|192|320)$ {
        proxy_bind $remote_addr transparent;
        set $stream_url http://192.168.100.100:8900/$1;
        types { }
        default_type audio/mpeg;
        proxy_pass_request_headers on;
        proxy_set_header Access-Control-Allow-Origin *;
        proxy_set_header Host $host;
        proxy_set_header Range bytes=0-;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_buffering off;
        tcp_nodelay on;
        proxy_pass $stream_url;
    }
}
Setting proxy_bind with the transparent flag:
allows outgoing connections to a proxied server originate from a
non-local IP address, for example, from a real IP address of a client
This addresses the issue of local IP addresses appearing in your logs/stats instead of client IPs. For this to work, you also need to reconfigure your kernel routing tables to capture the responses sent from the upstream server and route them back to Nginx.
This requires root access and a reasonable understanding of Linux networking configuration, which I appreciate not everyone has. I also appreciate that not everyone who uses Icecast and might want to reverse proxy it will read this. A much better solution would be making Icecast more Nginx friendly, so I had a go.
I cloned Icecast from GitHub and had a look over the code. I may have missed some, but these lines looked relevant to me:
./src/logging.c:159: client->con->ip,
./src/admin.c:700: xmlNewTextChild(node, NULL, XMLSTR(mode == OMODE_LEGACY ? "IP" : "ip"), XMLSTR(client->con->ip));
For servers which do not support the PROXY protocol, the default Nginx method of passing the client IP upstream is the X-Real-IP header. Icecast seems to use the value of client->con->ip for logging listener IPs. Let's change things up a bit. I added this:
const char *realip;

/* Prefer the client IP passed by the reverse proxy, if any */
realip = httpp_getvar (client->parser, "x-real-ip");
if (realip == NULL)
    realip = client->con->ip; /* fall back to the connection's own IP */
And changed the previous lines to this:
./src/logging.c:163: realip,
./src/admin.c:700: xmlNewTextChild(node, NULL, XMLSTR(mode == OMODE_LEGACY ? "IP" : "ip"), XMLSTR(realip));
Then I built Icecast from source as per the docs. The proxy_set_header X-Real-IP $remote_addr; directive in my Nginx conf passes the client IP. If you have additional upstream servers also handling the request, you will need to add some set_real_ip_from directives specifying each server's IP, set real_ip_recursive on;, and use $proxy_add_x_forwarded_for, which captures the IP address of each server that handles the request.
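As a rough illustration of those directives on the nginx side (a sketch only; 10.0.0.5 stands in for a hypothetical intermediate proxy address, not something from my setup):

# Trust the X-Forwarded-For values appended by this intermediate proxy
set_real_ip_from 10.0.0.5;        # hypothetical upstream proxy; list each one
real_ip_header X-Forwarded-For;   # take the client IP from this header
real_ip_recursive on;             # walk past every trusted proxy in the chain

# Pass the (now corrected) client IP on to Icecast
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;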
Fired up my new Icecast build, and this seems to work perfectly. If the X-Real-IP header is set, Icecast logs it as the listener IP; if not, it logs the client request IP, so it should work for both reverse-proxy and normal setups. Seems too simple, maybe I missed something #TBR?
OK, so you should now have working listener streams served over SSL with correct stats/logs. You have done the hard part. Now let's stream something to them!
Since the addition of the stream module to Nginx, handling incoming connections is simple, regardless of whether or not they use PUT/SOURCE.
If you specify a server within a stream directive, Nginx will simply tunnel the incoming stream to the upstream server without inspecting or modifying the packets. Nginx streams config lesson 101 is all you need:
stream {
    server {
        listen pub.lic.ip:port;
        proxy_pass ice.cast.ip:port;
    }
}
I guess one problem unsuspecting people may encounter with SOURCE connections in Nginx is specifying the wrong port in their Nginx config. Don't feel bad, Shoutcast v1 is just weird. The point to remember is:
Instead of the port you specify in the client encoder, it will actually attempt to connect to port+1
So if you were using port 8000 for incoming connections, either set the port to 7999 in client encoders using the Shoutcast v1 protocol, or set up your Nginx stream directives with two blocks, one for port 8000 and one for port 8001, as sketched below.
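If you go the two-block route, a minimal sketch looks like this (ice.cast.ip is the same placeholder as above; the second block exists purely for Shoutcast v1's port+1 behaviour):

stream {
    # Normal source connections (Icecast, Shoutcast v2)
    server {
        listen 8000;
        proxy_pass ice.cast.ip:8000;
    }

    # Shoutcast v1 encoders actually connect to configured port + 1
    server {
        listen 8001;
        proxy_pass ice.cast.ip:8001;
    }
}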
Your Nginx install must be built with the stream module; it's not part of the standard build. Unsure? Run:
nginx -V 2>&1 | grep -qF -- --with-stream && echo ":)" || echo ":("
If you see a smiley face, you are good to go. If not, you'll need to build Nginx with the module included. Many repositories have an nginx-extras package which includes the stream module.
Almost finished; all we need now is access to the admin pages. I serve these from https://example.com/icecast/ but Icecast generates all the URIs in the admin page links from the root path, without the icecast/ prefix, so they won't work. Let's fix that using the Nginx sub filter module to add icecast/ to the links in the returned pages:
location /icecast/ {
    sub_filter_types text/xhtml text/xml text/css;
    sub_filter 'href="/' 'href="/icecast/';
    sub_filter 'url(/' 'url(/icecast/';
    sub_filter_once off;
    sub_filter_last_modified on;
    proxy_set_header Accept-Encoding "";
    proxy_pass http://ice.cast.ip:port/;
}
The trailing slash at the end of proxy_pass http://ice.cast.ip:port/; is vitally important for this to work.
If a proxy_pass directive is specified as just server:port, the full original client request URI is appended and passed to the upstream server. If the proxy_pass has any URI appended (even just /), Nginx replaces the part of the client request URI that matched the location block (in this case /icecast/) with the URI appended to the proxy_pass. So, by appending a slash, a request to https://example.com/icecast/admin/ is proxied to http://ice.cast.ip:port/admin/.
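Side by side, the two behaviours look like this (ice.cast.ip:port kept as placeholders from above; the comments show where a request for /icecast/admin/ ends up):

# No URI part: the location prefix is passed through unchanged.
location /icecast/ {
    proxy_pass http://ice.cast.ip:port;    # /icecast/admin/ -> /icecast/admin/
}

# URI part present (a bare "/"): the matched prefix is replaced by it.
location /icecast/ {
    proxy_pass http://ice.cast.ip:port/;   # /icecast/admin/ -> /admin/
}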
Finally, I don't want my admin pages accessible to the world, just to my IP and the LAN, so I also include these in the location above:
allow 127.0.0.1;
allow 192.168.1.0/24;
allow my.ip.add.ress;
deny all;
That's it.
sudo nginx -s reload
Have fun.
tl;dr - Don't reverse proxy Icecast.
Icecast, for various reasons, is better not reverse proxied. It is a purpose-built HTTP server, and generic HTTP servers tend to have significant issues with the intricacies of continuous HTTP streaming.
This has been repeatedly answered. People like to try anyway and invariably fail in various ways.
If you need it on port 80/443, then run it on those ports directly.
If you already have something running on port 80/443, then use another of the remaining 2^64 IPv6 addresses in your /64; if you are still using legacy IP, get another address, e.g. by spinning up a virtual server in the cloud.
Need HTTPS? Icecast supports TLS (on Debian and Ubuntu, make sure to install the official Xiph.org packages, as the distro packages come without OpenSSL support).
Make sure to put both the private and public key into one file.
This line....
subs_filter '#localhost' '#stream.example.com' gi;
Should probably be....
subs_filter '#localhost' '#example.com' gi;
I am not familiar with nginx, so my best guess is that this line is linking radio.example.com to the main site at example.com. By adding stream.example.com you are confusing it by directing it to a site that doesn't exist.
I got this from a config file posted here:
Anyway, it wouldn't hurt to try it.

Nginx/Pyramid custom SSL port

As a preface: I have been using the following stack for some time with great success:
NGINX - web proxy
SSL - configured in nginx
Pyramid web application, served by gunicorn
The above combo works great; here is a working configuration.
server {
    # listen on port 80
    listen 80;
    server_name portalapi.example.com;
    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}

server {
    # listen on port 80
    listen 80;
    server_name www.portalapi.example.com;
    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}

# ssl server
server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /usr/local/etc/letsencrypt/live/portalapi.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/portalapi.example.com/privkey.pem;
    server_name www.portalapi.example.com;
    client_max_body_size 10M;
    client_body_buffer_size 128k;

    location ~ /.well-known/acme-challenge/ {
        root /usr/local/www/nginx/portalapi;
        allow all;
    }

    location / {
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:8005;
        #proxy_intercept_errors on;
        allow all;
    }

    error_page 404 500 502 503 504 /index.html;

    location = / {
        root /home/luke/ecom2/dist;
    }
}
Now, this is how I serve my public-facing apps, and it works very well. For all my internal applications, I used to simply direct users to an internal domain, for example http://subdomain.company.domain; again, this worked well for a long time.
Now, in the wake of the KRACK attack, although we have some very thorough firewall rules to prevent a lot of attacks, I want to force all internal traffic through SSL. I don't want to use a self-signed certificate; I want to use Let's Encrypt so I can auto-renew certificates, which makes administration much easier (and cheaper).
In order to use Let's Encrypt, I need a public-facing DNS entry and server to perform the ACME challenge (for auto-renewal). Again, this was very easy to set up in nginx, and the config below works perfectly for serving static content.
What it does: if a user from the internet accesses intranet.example.com, they simply see a forbidden message. However, if a local user tries, they get forwarded to intranet.example.com:8002, and port 8002 is only reachable locally, so there is no way external users can access a webpage on this site.
geo $local_user {
    192.168.155.0/24 0;
    172.16.10.0/28 1;
    172.16.155.0/24 1;
}

server {
    listen 80;
    server_name intranet.example.com;
    client_max_body_size 4M;
    client_body_buffer_size 128k;

    # Space for lets encrypt to perform challenges
    location ~ /\.well-known/ {
        root /usr/local/www/nginx/intranet;
    }

    if ($local_user) {
        # If user is local, redirect them to SSL proxy only available locally
        return 301 https://intranet.example.com:8002$request_uri;
    }

    # Default block all non local users see
    location / {
        root /home/luke/forbidden_html;
        index index.html;
    }
}

# This server block is only available to local users inside geo $local_user.
# It listens on an internal port only, so it is never available to
# external networks.
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;
    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;
    client_max_body_size 4M;
    client_body_buffer_size 128k;

    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        root /home/luke/ecom2/dist;
        index index.html;
        deny all;
    }
}
Now, here comes the pyramid/nginx marrying problem: if I use the same configuration as above, but with the below settings for my server on 8002:
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;
    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;
    client_max_body_size 4M;
    client_body_buffer_size 128k;

    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        # Forward all requests to python application server
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:6543;
        proxy_intercept_errors on;
        deny all;
    }
}
I run into all sorts of problems. First off, inside pyramid I was using the following code in my views/templates:
request.route_url # get route url for desired function
Now, using request.route_url with the above settings, https://intranet.example.com:8002/login should route to https://intranet.example.com:8002/welcome, but in reality this setup forwards the user to http://intranet.example.com/welcome. This is not correct.
And if I use route_url with the NGINX proxy setting:
proxy_set_header Host $http_host;
this causes NGINX to return a 400 error:
400: The plain HTTP request was sent to HTTPS port
And a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).
Then I kept the same nginx settings (Host $http_host), but thought I would change to using:
request.route_path
My theory was that this should force everything to stay on the same URL prefix and just forward a user from https://intranet.example.com:8002/login to https://intranet.example.com:8002/welcome, but in reality this setup performed the same way as using route_url. With
proxy_set_header Host $http_host;
I again get an error when navigating to https://intranet.example.com:8002:
400: The plain HTTP request was sent to HTTPS port
And a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).
Can anyone assist with the correct setup so that I can serve my application on https://intranet.example.com:8002?
EDIT:
Have also tried:
location / {
    allow 192.168.155.0/24;
    allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
    allow 172.16.155.0/24;
    # Forward all requests to python application server
    proxy_set_header Host $host:$server_port;
    proxy_pass http://10.1.1.16:8002;
    proxy_intercept_errors on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # root /home/luke/ecom2/dist;
    # index index.html;
    deny all;
}
Which gives the same result.
I've checked a similar configuration, and your last example seems correct, at least for a simplistic gunicorn/pyramid app combination. Seems something is missing in your puzzle.
Here's my code (I'm new to Pyramid, so some things might be done better):
helloworld.py
from pyramid.config import Configurator
from pyramid.renderers import render_to_response

def main(request):
    return render_to_response('templates:test.pt', {}, request=request)

with Configurator() as config:
    config.add_route('main', '/')
    config.add_view(main, route_name='main')
    config.include('pyramid_chameleon')
    app = config.make_wsgi_app()
templates/test.pt
<html>
<body>
    Route url: ${request.route_url('main')}
</body>
</html>
My nginx config
server {
    listen 80;
    server_name pyramid.lan;

    location / {
        return 301 https://$server_name:8002$request_uri;
    }
}

server {
    listen 8002;
    server_name pyramid.lan;
    ssl on;
    ssl_certificate /usr/local/etc/nginx/cert/server.crt;
    ssl_certificate_key /usr/local/etc/nginx/cert/server.key;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://127.0.0.1:5678;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This is how I run gunicorn:
gunicorn -w 1 -b 127.0.0.1:5678 helloworld:app
And yes, it works:
$ curl --insecure https://pyramid.lan:8002/
<html>
<body>
Route url: https://pyramid.lan:8002/
</body>
</html>
$ curl -D - http://pyramid.lan
HTTP/1.1 301 Moved Permanently
Server: nginx/1.12.2
Date: Thu, 02 Nov 2017 20:41:50 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://pyramid.lan:8002/
Let's figure out what might go wrong in your case:
HTTP 400 usually pops up when you speak plain HTTP to a server awaiting HTTPS requests. If there's no typo in the post and it indeed occurs when you navigate to https://intranet.example.com:8002, it would be nice to see a curl request showing this and a tcpdump showing what's happening. Actually, you can easily reproduce it by simply typing http://intranet.example.com:8002.
Another idea is that you're doing a redirect from your app and the link gets broken when the redirect occurs. A better description of how the user navigates from https://intranet.example.com:8002/login to .../welcome would be helpful.
One more idea is that your app is not that simple and you use some middleware/customization that makes the default logic work differently, so your X-Forwarded-Proto header gets ignored; in that case the behavior would be just as you described.
The issue here is, obviously, the missing port within the Location headers that your backend produces.
Now, why is the port missing? Most certainly, because of the following code:
proxy_set_header Host $host;
Note that $host itself does not contain $server_port, unlike $http_host, so your backend has no way of knowing which port you meant if you use $host all by itself.
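To make that concrete (the values assume a client request to https://intranet.example.com:8002/, as in the question):

proxy_set_header Host $host;       # upstream sees "intranet.example.com", no port
proxy_set_header Host $http_host;  # upstream sees "intranet.example.com:8002", the verbatim Host header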
Note that the default proxy_redirect setting of default expects Location to correspond with the value from proxy_pass in order to do its magic (according to the documentation), so your explicit header setting likely interferes with such logic.
As such, from the nginx point of view, I see multiple possible independent solutions:
remove proxy_set_header Host, and let proxy_redirect do its magic;
set proxy_set_header Host appropriately, to include the port number, e.g., using $host:$server_port or $http_host as you see fit (if that doesn't work, then perhaps the deficiency is actually within your upstream app itself, but fear not -- read below);
provide a custom proxy_redirect setting, e.g., proxy_redirect https://pyramid.lan/ / (equivalent to proxy_redirect https://pyramid.lan/ https://pyramid.lan:8002/), which will ensure that all the Location responses will have the proper port; the only way this wouldn't work is if your upstream does non-HTTP redirects with the missing port. A sketch of this option follows below.
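A minimal sketch of that third option, reusing the pyramid.lan test setup from the answer above (certificate directives omitted for brevity; swap in your own names and port):

server {
    listen 8002 ssl;
    server_name pyramid.lan;
    # ssl_certificate / ssl_certificate_key as before

    location / {
        proxy_pass http://127.0.0.1:5678;
        # Rewrite port-less Location headers from the backend so that
        # redirects keep the client on https://pyramid.lan:8002/
        proxy_redirect https://pyramid.lan/ /;
    }
}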

proxy_pass doesn't seem to resolve to upstream

I use the configuration below, but somehow it doesn't proxy to the upstream.
upstream backend {
    ip_hash;
    server s1.example.com;
    server s2.example.com;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;

    location / {
        proxy_pass http://backend;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
However, if I replace it with proxy_pass http://s1.example.com, it proxies successfully to the s1.example.com server.
How do I fix this?
The NGINX documentation says this about proxy_pass:
Parameter value can contain variables. In this case, if an address is specified as a domain name, the name is searched among the described server groups, and, if not found, is determined using a resolver.
It does NOT state how the parameter handles a hostname that is constant/statically defined. According to this NGINX forum post:
domain names statically configured in config are only looked
up once on startup (or configuration reload).
Using an upstream block (as in your example) won't save you from this problem ("a statically defined hostname is only looked up once after start/reload") unless you are using the paid NGINX Plus, in which case you can add the resolve parameter to the end of the nested server directive, as sketched below.
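With NGINX Plus, that looks roughly like this (a sketch, not tested here; the resolver address is a placeholder, and the shared-memory zone directive is required for re-resolution to work):

resolver 10.0.0.2 valid=10s;       # placeholder DNS server address

upstream backend {
    zone backend 64k;              # shared memory zone, required for resolve
    server s1.example.com resolve; # re-resolved as DNS records change
    server s2.example.com resolve;
}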
The good news is that even in free NGINX you can work around the lookup problem by using variables, like so:
upstream backend { ... }

server {
    ...
    set $backend "backend";
    proxy_pass http://$backend;
    ...
}
but this introduces its own caveats. See https://github.com/DmitryFillo/nginx-proxy-pitfalls for a thorough discussion of the workarounds and pitfalls around the "NGINX proxy repeated DNS resolution" problem.
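One of those caveats is worth spelling out: when the variable expands to the name of an upstream{} group (as above), nginx finds the group and no runtime DNS lookup happens. To get per-request, TTL-honouring resolution, the variable has to expand to the domain name itself and a resolver must be defined. A minimal sketch (the DNS server address is a placeholder):

server {
    listen 80;

    # Placeholder DNS server; valid= overrides the record TTL if desired.
    resolver 10.0.0.2 valid=30s;

    location / {
        # No upstream{} group is named s1.example.com, so nginx falls
        # through to the resolver and re-resolves at request time.
        set $backend_host s1.example.com;
        proxy_pass http://$backend_host;
    }
}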

All hosts redirecting to single nginx proxy_pass

I have the following in my .conf file:
server {
    listen 80;
    server_name mydomain.net;
    access_log /var/log/nginx/mydomain.net.access.log main;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Which works just fine... except that everything that hits the server is fed to this server block. My IP, another domain pointing at this server, and the actual mydomain.net all serve what only mydomain.net should be serving.
As the documentation states:
In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port. In the configuration above, the default server is the first one — which is nginx’s standard default behaviour.
This was the case here. I performed the suggested step of adding a catch-all server to drop requests for undefined hosts:
server {
    listen 80 default_server;
    server_name "";
    return 444;
}
Which solved my issue.

Nginx proxy_pass with $remote_addr

I'm trying to include $remote_addr or $http_remote_addr in my proxy_pass, without success.
This rewrite rule works:
location ^~ /freegeoip/ {
    rewrite ^ http://freegeoip.net/json/$remote_addr last;
}
The proxy_pass without $remote_addr works, but freegeoip does not read the X-Real-IP header:
location ^~ /freegeoip/ {
    proxy_pass http://freegeoip.net/json/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
}
Then I tried adding the IP to the end of the request, like this:
location ^~ /freegeoip/ {
    proxy_pass http://freegeoip.net/json/$remote_addr;
}
but nginx reports this error: no resolver defined to resolve freegeoip.net
If the proxy_pass statement has no variables in it, then nginx will resolve the hostname with the system resolver (gethostbyname) during start-up or reload and will cache that value permanently.
If there are any variables, such as in either of the following:
set $originaddr http://origin.example.com;
proxy_pass $originaddr;
# or even
proxy_pass http://origin.example.com$request_uri;
then nginx will use its built-in resolver, and the "resolver" directive must be present. "resolver" is probably a misnomer; think of it as "which DNS server the built-in resolver will use". Since nginx 1.1.9 the built-in resolver honours DNS TTL values; before then it used a fixed value of 5 minutes.
It seems a bit strange that nginx fails to resolve the domain name at runtime rather than at configuration time (since the domain name is hard-coded). Adding a resolver declaration to the location block usually fixes DNS issues experienced at runtime, so your location block might look like:
location ^~ /freegeoip/ {
    # use google as dns
    resolver 8.8.8.8;
    proxy_pass http://freegeoip.net/json/$remote_addr;
}
This solution is based on an article I read a while back - Proxy pass and resolver. Would be worth a read.
If anyone is still experiencing trouble: for me it helped to move the proxy_pass host to a separate upstream, so I came up with something like this:
upstream backend-server {
    server backend.service.consul;
}

server {
    listen 80;
    server_name frontend.test.me;

    location ~ /api(.*)$ {
        proxy_pass http://backend-server$1;
    }

    location / {
        # this works mystically! backend doesn't...
        proxy_pass http://frontend.service.consul/;
    }
}
Another way is to provide your host IP address and port number instead of the host URL in proxy_pass, like below:
Your code:
proxy_pass http://freegeoip.net/json/;
Updated code:
proxy_pass http://10.45.45.10:2290/json/;
So you want this to work:
location ^~ /freegeoip/ {
    proxy_pass http://freegeoip.net/json/$remote_addr;
}
but, as pointed out by Chris Cogdon, it fails because at run time nginx has not resolved the domain name, so proxy_pass requires a resolver (see also the proxy_pass docs).
Something that seems to work (though it may be a hack!) is to do something that triggers nginx to resolve the domain name at start-up and cache it.
So, change the config to include a location (for the same host) without any variables, alongside your location with variables:
location = /freegeoip/this-location-is-just-a-hack {
    # A dummy location. Use any path, but keep the hostname.
    proxy_pass http://freegeoip.net/this-path-need-not-exist;
}

location ^~ /freegeoip/ {
    proxy_pass http://freegeoip.net/json/$remote_addr;
}
The host will now be resolved at start-up, and its cached value will be used in the second location at run time. You can verify this start-up resolution behavior by changing the hostname (freegeoip.net) to something that doesn't exist: when you run nginx -t or nginx -s reload you'll get an emergency error ([emerg] host not found in upstream). And you can verify that the cached value is used at run time by leaving out any resolver, hitting your location block, and observing no errors in the logs.
You could also mention your nginx server port in the proxy_pass uri. This solved the issue for me.
Try using dig/nslookup to make sure this really is an nginx issue.
In my case, the issue was that one of the DNS servers between my nginx and the root DNS was not respecting TTL values as small as 1 second.