Nginx - service crashes if upstream backup server's hostname cannot be resolved

Setup:
Docker compose
Nginx v1.15
Config file:
http {
    upstream backend {
        server my_production_server1:80 fail_timeout=5s max_fails=3;
        server my_production_server2:80 backup;
    }
    ...
}
Problem: if the my_production_server2 container goes down and the nginx container is restarted, the nginx container cannot come up, because "my_production_server2" cannot be resolved.
How can this be fixed?
NOTE: my_production_server1 and my_production_server2 are container / service names, not IP addresses.
It seems that this approach: https://serverfault.com/questions/480241/nginx-failover-without-load-balancing
only works with IP addresses... why can't it work with hostnames too?
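A rough sketch of the variable-based workaround discussed in the answers below, assuming Docker's embedded DNS at 127.0.0.11. Note that dropping the upstream block also drops the backup failover semantics; this only shows how to defer name resolution to request time so nginx can start while a backend name is unresolvable:

http {
    server {
        listen 80;

        location / {
            # Docker's embedded DNS; re-resolve the name every 10s
            resolver 127.0.0.11 valid=10s;

            # A variable forces per-request resolution, so nginx can start
            # even while my_production_server2 is down / unresolvable.
            set $backend http://my_production_server1:80;
            proxy_pass $backend;
        }
    }
}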

Related

Nginx resolver for Kubernetes with skydns

I can't find a way to make an nginx pod resolve another Kubernetes service's URLs.
I am NOT using kube-dns; we are using kube2sky solely and we are not going to implement kube-dns yet, so I need a fix for this scenario.
For example, I want nginx to resolve a service URL such as app.mynamespace.svc.skydns.local; nginx fails to resolve it, yet if I run a ping to that URL it resolves successfully.
My nginx config part is:
location /api/v1/namespaces/mynamespace/services/app/proxy/ {
    resolver 127.0.0.1;
    set $endpoint "http://app.mynamespace.svc.skydns.local/";
    proxy_pass $endpoint;
    proxy_http_version 1.1;
    proxy_set_header Connection "upgrade";
}
I need to specify the target upstream in a variable because I want nginx to start even if the target is not available; if I don't use a variable, nginx crashes on startup because the upstream has to be available and resolvable.
The problem, I think, is the resolver value. I've tried 127.0.0.1, 127.0.0.11, and the skydns IP specified in the configuration, 172.40.0.2:53:
etcdctl get /skydns/config
{"dns_addr":"0.0.0.0:53","ttl":4294967290,"nameservers":["172.40.0.2:53"]}
But nginx still cannot resolve the URL.
What IP should I specify in the resolver field of the nginx config for this Kubernetes and skydns setup?
Remember that we don't have kube-dns.
Thank you.
I don't think resolving app.mynamespace.svc.skydns.local has anything to do with configuring the upstream DNS servers. Generally, those are set to a well-known DNS server like 8.8.8.8 or your cloud infrastructure's DNS server, which here would perhaps be 172.40.0.2. For example, as described in the docs:
$ curl -XPUT http://127.0.0.1:4001/v2/keys/skydns/config \
-d value='{"dns_addr":"127.0.0.1:5354","ttl":3600, "nameservers": ["8.8.8.8:53","8.8.4.4:53"]}'
You might want to check the logs of your kube2sky pod for any hints, and make sure all the config options such as --kube-master-url and --etcd-server are specified. Maybe it can't talk to the Kubernetes master and receive updates of running pods, so the SRV entries don't get updated.
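As a rough way to narrow down the resolver value (a troubleshooting sketch, not part of the original answer): from inside the nginx pod, check which nameserver the pod itself is configured with and test the lookup against candidate addresses before putting one into the resolver directive.

$ cat /etc/resolv.conf | grep nameserver
$ nslookup app.mynamespace.svc.skydns.local 172.40.0.2

Whichever address actually answers that query is the one to use in resolver.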

How to set up nginx to route requests to different servers depending on the requested domain?

I would like to set up nginx to route requests to different servers depending on the requested domain.
The nginx server environment is below.
CentOS Linux release 7.3.1611 (Core)
nginx 1.11.8
* built and installed from source, with the --with-stream parameter passed to configure.
What I have in mind:
server1.testdomain.com ssh request ->(global IP) *nginx server -> (local IP)192.168.1.101 server
server2.testdomain.com ssh request ->(global IP) *nginx server -> (local IP)192.168.1.102 server
Both domains point to the same global IP, i.e. the same nginx server.
nginx.conf is ...
stream {
    error_log /usr/local/nginx/logs/stream.log info;
    upstream server1 {
        server 192.168.1.101:22;
    }
    upstream server2 {
        server 192.168.1.102:22;
    }
    server {
        listen 22 server1.testdomain.com;
        proxy_pass server1;
    }
    server {
        listen 22 server2.testdomain.com;
        proxy_pass server2;
    }
}
But the following error occurred:
nginx: [emerg] the invalid "server1.testdomain.com" parameter in ...
It seems impossible to use something like listen 22 server1.testdomain.com;.
I also tried writing server_name inside the server block:
nginx: [emerg] "server_name" directive is not allowed here in ...
So server_name is not permitted inside a stream server block.
How do I write the config file so that requests for different domains are routed to different servers?
If you have any ideas or information, could you let me know?
It's not possible with nginx, because the stream module is an L4 (TCP) balancer, while the SSH protocol works at L5/7.
It's not possible at all, in fact, because the SSH negotiation does not include the destination host name.
You can only do what you want by using two different IPs or two different ports. In both cases nginx can forward the connection, but it is much better to use iptables in that case.
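If routing by port is acceptable, a minimal sketch of how the stream block could look (ports 2201 and 2202 are made up for illustration):

stream {
    upstream server1 {
        server 192.168.1.101:22;
    }
    upstream server2 {
        server 192.168.1.102:22;
    }
    # One listen port per backend, since SSH carries no destination host name
    server {
        listen 2201;
        proxy_pass server1;
    }
    server {
        listen 2202;
        proxy_pass server2;
    }
}

Clients would then connect with ssh -p 2201 user@testdomain.com for the first server and -p 2202 for the second.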

Docker Network Nginx Resolver

I am trying to get rid of deprecated Docker links in my configuration. What's left is getting rid of those Bad Gateway nginx reverse proxy errors when I recreate a container.
Note: I am using Docker networks in bridge mode. (docker network create nettest)
I am using the following configuration snippet inside nginx:
location / {
    resolver 127.0.0.1 valid=30s;
    set $backend "http://confluence:8090";
    proxy_pass $backend;
}
I started a container with hostname confluence on my Docker network with name nettest.
Then I started the nginx container on network nettest.
I can ping confluence from inside the nginx container
confluence is listed inside the nginx container's /etc/hosts file
nginx log says send() failed (111: Connection refused) while resolving, resolver: 127.0.0.1:53
I tried the Docker network's default DNS resolver 127.0.0.11 from /etc/resolv.conf
nginx log says confluence could not be resolved (3: Host not found)
Does anybody know how to configure the nginx resolver with Docker networks, or an alternative way to force nginx to correctly resolve the Docker network hostname?
First off, you should be using the Docker embedded DNS server at 127.0.0.11.
Your problem could be caused by one of the following:
nginx is trying to use IPv6 (AAAA record) for the DNS queries.
See https://stackoverflow.com/a/35516395/1529493 for the solution.
Basically something like:
http {
    resolver 127.0.0.11 ipv6=off;
}
This is probably no longer a problem with Docker 1.11:
Fix to not forward docker domain IPv6 queries to external servers
(#21396)
Take care that you don't accidentally override the resolver configuration directive. In my case, the server block contained resolver 8.8.8.8 8.8.4.4; from Mozilla's SSL Configuration Generator, which was overriding the resolver 127.0.0.11; in the http block. That had me scratching my head for a long time...
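To illustrate the pitfall, a rough sketch of how such an override can happen (the addresses reuse the ones mentioned above):

http {
    resolver 127.0.0.11;            # Docker embedded DNS, set once at http level

    server {
        listen 443 ssl;
        # resolver 8.8.8.8 8.8.4.4; # if left in, this silently overrides the
        #                           # http-level resolver for this server block
    }
}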
Maybe you should check your container's /etc/resolv.conf.
It shows your container's actual DNS configuration; use that DNS server's IP for the resolver directive.
127.0.0.11 does not work in Rancher
I was running "node:12.18-alpine" with an Angular frontend and hit the same problem with proxy_pass.
Locally it was working with:
resolver 127.0.0.11;
As simple as that! Just execute:
$ cat /etc/resolv.conf | grep nameserver
in your container to get this IP address.
However, when deploying to kubernetes (AWS EKS) I got the very same error:
failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
Solution:
The first solution was to find the IP of the kube-dns service, like below:
$ kubectl get service kube-dns -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 178d
Simply replacing the IP with the CLUSTER-IP worked like a charm.
Later, after some more doc digging, I found out that I could reference the service by name (which is a little more elegant and resilient):
resolver kube-dns.kube-system valid=10s;
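A rough sketch (not the poster's exact config) of how that resolver-by-name variant can be wired into a location block; the backend service name and port are hypothetical placeholders:

location / {
    # resolve upstreams through kube-dns, re-resolving every 10s
    resolver kube-dns.kube-system valid=10s;

    # the variable forces per-request resolution, so nginx starts
    # even if the backend service is not up yet
    set $backend http://my-service.my-namespace.svc.cluster.local:8080;
    proxy_pass $backend;
}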
My problem was the missing $request_uri at the end. Adding it to the end of the URI and changing 127.0.0.1 to 127.0.0.11 solved my issue. I hope it helps people avoid spending hours on this.
location /products {
    resolver 127.0.0.11;
    proxy_pass http://products:3000$request_uri;
}
In several cases where I had this error, adding resolver_timeout 1s; to the Nginx config solved the issue. Most of the time I don't have a resolver entry.
Edit: what also worked for containers where I could explicitly define a nameserver: resolver DNS-IP valid=1s;
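For example, a sketch combining the two (using Docker's 127.0.0.11 as the nameserver and a hypothetical backend name and port; substitute whichever DNS IP applies):

location / {
    resolver 127.0.0.11 valid=1s;   # explicit nameserver, short cache
    resolver_timeout 1s;            # fail fast instead of hanging on DNS
    set $upstream http://backend:8080;
    proxy_pass $upstream;
}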
We hit this with Docker containers on Windows trying to look up host.docker.internal using the Docker internal resolver at 127.0.0.11. All queries would resolve correctly except host.docker.internal. The fix was to add the ipv6=off flag to the resolver line in nginx.conf.
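Concretely, a sketch of what that looks like (the backend port is a made-up example):

location / {
    resolver 127.0.0.11 ipv6=off;   # skip AAAA lookups so host.docker.internal resolves
    set $host_backend http://host.docker.internal:8080;
    proxy_pass $host_backend;
}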
I solved this problem in the following way:
docker run --rm -d --network host --name "my_domain" nginx
https://docs.docker.com/network/network-tutorial-host/
You need a local DNS server like dnsmasq to resolve names via 127.0.0.1. Try installing it using apk add --update dnsmasq and setting it up if you're using an Alpine (nginx:alpine) variant.
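A rough sketch of that setup on an Alpine-based image (forwarding to Docker's embedded DNS at 127.0.0.11 is an assumption; adjust to your environment):

$ apk add --update dnsmasq
$ dnsmasq --listen-address=127.0.0.1 --server=127.0.0.11

With dnsmasq listening locally, resolver 127.0.0.1; in nginx.conf then has something to talk to.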

Restarting Containers When Using Docker and Nginx proxy_pass

I have an nginx docker container and a webapp container successfully running and talking to each other.
The nginx container listens on port 80, and uses proxy_pass to direct traffic to the IP of the webapp container.
upstream app_humansio {
    server humansio:8080 max_fails=3 fail_timeout=30s;
}
"humansio" is set in the /etc/hosts file by docker because I've started nginx with --link humansio:humansio. The webapp container (humansio) is always exposing 8080.
The problem is, when I reload the webapp container, the link to the nginx container breaks and I need to restart that as well. Is there any way I can do this differently so I don't need to restart the nginx container when the webapp container reloads?
--
I've tried to do something like connecting them manually by using a common port (8001 on both), but since they actually reserve that port, the 2nd container cannot use it as well.
Thanks!
I prefer to run the proxy (nginx or haproxy) directly on the host for this reason.
But an option is to "Link via an Ambassador Container" https://docs.docker.com/articles/ambassador_pattern_linking/
https://www.digitalocean.com/community/tutorials/how-to-use-the-ambassador-pattern-to-dynamically-configure-services-on-coreos
If you don't want to restart your proxy container whenever you have to restart one of the proxied ones (e.g. fig), you could take a look at the autoupdated proxy configuration approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
If you use a reasonably modern version of Docker, the links in the nginx container to your web service probably do get updated (you can check with docker exec -ti nginx bash and then cat /etc/hosts). The problem is that nginx doesn't consult /etc/hosts on every request: it caches the IP, and when the IP changes it gets lost. docker kill -s HUP nginx, which makes nginx reload its config without a restart, helps too.
I have the same problem. I used to start my services with systemd unit files, and when you make one service (nginx) dependent on another (the webapp) and then restart the webapp, systemd is smart enough to restart nginx as well. Now I'm trying my luck with docker-compose, and restarting the webapp container confuses nginx.

Configuring Nginx to proxy two virtual private servers

I am setting up an Nginx front-end server to serve two Node apps running on separate ports (e.g. 8080 and 8081). I was following the Nginx docs for configuring multiple virtual servers; however, Nginx throws an error when I include the http {} block.
Restarting nginx nginx
...fail!
Here's the code:
http {
    # Configuration specific to HTTP and affecting all virtual servers
    server {
        # configuration of virtual server 1
        location /one {
            # configuration for processing URIs with '/one'
        }
        location /two {
            # configuration for processing URIs with '/two'
        }
    }
    server {
        # configuration of virtual server 2
    }
}
When the http {} is removed nginx does not produce any errors.
Restarting nginx nginx
...done
I am currently running nginx version: nginx/1.4.6 (Ubuntu). Can someone explain why this occurs?
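Most likely this happens because, on Ubuntu, the site config file under /etc/nginx/sites-enabled/ is already included inside the http context of the main /etc/nginx/nginx.conf, so nesting another http {} block there is invalid. A sketch of the same site config with only the server blocks (the server names and the proxy_pass targets for the two Node apps on 8080 and 8081 are assumptions):

server {
    listen 80;
    server_name one.example.com;    # hypothetical name for virtual server 1

    location /one {
        proxy_pass http://127.0.0.1:8080;   # first node app (assumed port)
    }
    location /two {
        proxy_pass http://127.0.0.1:8081;   # second node app (assumed port)
    }
}

server {
    listen 80;
    server_name two.example.com;    # hypothetical name for virtual server 2
}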
