Nginx resolver for Kubernetes with skydns - nginx

I can't find a way to make an nginx pod resolve other Kubernetes services' URLs.
I am NOT using kube-dns; we are using kube2sky solely and we are not going to implement kube-dns yet, so I need a fix for this scenario.
For example, I want nginx to resolve the service URL app.mynamespace.svc.skydns.local; nginx fails to resolve it, yet if I run a ping against that URL it resolves successfully.
My nginx config part is:
location /api/v1/namespaces/mynamespace/services/app/proxy/ {
    resolver 127.0.0.1;
    set $endpoint "http://app.mynamespace.svc.skydns.local/";
    proxy_pass $endpoint;
    proxy_http_version 1.1;
    proxy_set_header Connection "upgrade";
}
I need to specify the target upstream in a variable because I want nginx to start even if the target is not available; if I don't use a variable, nginx crashes at startup because the upstream then has to be available and resolvable.
The problem, I think, is the resolver value. I've tried 127.0.0.1, 127.0.0.11, and the skydns IP specified in its configuration, 172.40.0.2:53:
etcdctl get /skydns/config
{"dns_addr":"0.0.0.0:53","ttl":4294967290,"nameservers":["172.40.0.2:53"]}
But nginx still cannot resolve the URL.
What IP should I specify in the resolver field in nginx config for kubernetes and skydns config?
Remember that we don't have kube-dns.
Thank you.

I don't think resolving app.mynamespace.svc.skydns.local has anything to do with configuring the upstream DNS servers. Generally, for that, you configure a well-known DNS server like 8.8.8.8 or your cloud infrastructure's DNS server, which would perhaps be 172.40.0.2. For example, as described in the docs:
$ curl -XPUT http://127.0.0.1:4001/v2/keys/skydns/config \
-d value='{"dns_addr":"127.0.0.1:5354","ttl":3600, "nameservers": ["8.8.8.8:53","8.8.4.4:53"]}'
You might want to check the logs of your kube2sky pod for guidance, and verify that all the config options such as --kube-master-url and --etcd-server are specified. Maybe it can't talk to the Kubernetes master and receive updates about running pods, so the SRV entries never get updated.
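For the nginx side, here is a minimal sketch of the location block once you know the address SkyDNS is actually reachable at from the nginx pod (10.0.0.10 below is a hypothetical cluster IP for the SkyDNS service, not a value from this setup; substitute your own):
location /api/v1/namespaces/mynamespace/services/app/proxy/ {
    resolver 10.0.0.10 valid=10s;   # hypothetical SkyDNS address; 127.0.0.1 points at the nginx pod itself
    set $endpoint "http://app.mynamespace.svc.skydns.local/";
    proxy_pass $endpoint;
    proxy_http_version 1.1;
    proxy_set_header Connection "upgrade";
}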

Related

Route external traffic from a standalone nginx service to kubernetes nodeport service

GOAL
I want to get access to kubernetes dashboard with a standalone nginx service and a microk8s nodeport service.
CONTEXT
I have a linux server.
On this server, there are several running services such as:
microk8s
nginx (note: I am not using ingress, nginx service works independently from microk8s).
Here is the workflow that I am looking for:
http:// URL /dashboard
NGINX service (FROM http:// URL /dashboard TO nodeIpAddress:nodeport)
nodePort service
kubernetes dashboard service
ISSUE:
However, each time I request http:// URL /dashboard I receive a 502 (Bad Gateway) answer. What am I missing?
CONFIGURATION
Please find below, nginx configuration, node port service configuration and the status of microk8s cluster:
nginx configuration: /etc/nginx/site-availables/default
node-port-service configuration
node ip address
microk8s namespaces
Thank you very much for your help.
I'll summarize the whole problem and solutions here.
First, the service which needs to expose the Kubernetes Dashboard needs to point at the right target port, and also needs to select the right Pod (the kubernetes-dashboard Pod)
If you check your service with a:
kubectl describe service <service-name>
You can easily see whether it's selecting a Pod (or more than one) or nothing by looking at the Endpoints section. In general, your service should have the same selector, port, targetPort and so on as the standard kubernetes-dashboard service (which exposes the dashboard, but only internally to the cluster).
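For instance (the two output lines to look at are shown as comments; 10.1.23.45 is a hypothetical Pod IP, and the command is the same one used later in this thread):
sudo microk8s kubectl describe service -n kube-system kubernetes-dashboard
# Selector:   k8s-app=kubernetes-dashboard
# Endpoints:  10.1.23.45:8443     (an empty Endpoints field means the selector matches no Pod)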
Second, your NGINX configuration proxies the location /dashboard to the service, but the problem is that the kubernetes-dashboard Pod expects requests to reach / directly, so the path /dashboard means nothing to it.
To solve this second problem there are a few ways, but they all lie in the NGINX configuration. If you read the documentation of the proxy module (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass) you can see that the solution is to add a URI to the configuration, something like this:
proxy_pass https://51.68.123.169:30000/;
Notice the trailing slash, that is the URI, which means that the location matching the proxy rule is rewritten into /. This means that your_url/dashboard will just become your_url/
Without the trailing slash, your location is passed to the target as it is, since the target is only an endpoint.
If you need more complex URI changes, what you're searching for is a rewrite rule (they support regex and a lot more), but adding the trailing slash should solve your second problem.
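A minimal sketch of such a location block, using the node IP and NodePort from this question (https assumes the dashboard serves TLS on that port):
location /dashboard/ {
    # trailing slash in proxy_pass: a request to /dashboard/foo is forwarded upstream as /foo
    proxy_pass https://51.68.123.169:30000/;
}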
Indeed @AndD, you advised me to execute this command:
sudo microk8s kubectl describe service -n kube-system kubernetes-dashboard
in order to get the following information:
Labels: k8s-app=kubernetes-dashboard
TargetPort: 8443/TCP
Thanks to the above information I could fix the nodePort service; you can find a snippet below:
spec:
  type: NodePort
  selector:
    k8s-app: 'kubernetes-dashboard'
  ports:
    - protocol: TCP
      port: 8443
      targetPort: 8443
      nodePort: 30000
Then I changed the nginx configuration to
proxy_pass https://51.68.123.169:30000/;
I now receive a successful response (the HTML), but all the remaining requests (js, css, assets) return a 404 status.
edit
the html file contains a set of dependencies (js/img/css)
<link rel="stylesheet" href="styles.3aaa4ab96be3c2d1171f.css"></head>
...
<script src="runtime.3e2867321ef71252064e.js" defer></script>
So it tries to fetch these assets from these URLs:
https:// URL/styles.3aaa4ab96be3c2d1171f.css
https:// URL/runtime.3e2867321ef71252064e.js
instead of using:
https:// URL/dashboard/styles.3aaa4ab96be3c2d1171f.css
https:// URL/dashboard/runtime.3e2867321ef71252064e.js
edit #2
I just changed the subpath again, from dashboard/ to dash/
new nginx conf
And it works with chromium.
But it doesn't with firefox. (not a big deal)
thank you very much AndD!
Besides, I had a similar issue with Jenkins; however, the Jenkins image accepts a parameter that fixes the issue:
docker run --publish 8080:8080 --env JENKINS_OPTS="--prefix=/subpath" jenkins/jenkins
I was expecting to find something similar for kubernetesui/dashboard but I haven't found anything:
https://hub.docker.com/r/kubernetesui/dashboard
https://github.com/kubernetes/dashboard
Well, I do not know how to configure nginx well enough to display the dashboard correctly under a subpath, and I didn't find any parameter in the kubernetesui/dashboard image to handle the subpath.
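One workaround sometimes used for applications that emit root-relative asset links is nginx's ngx_http_sub_module, which rewrites the HTML body as it passes through. The following is only a rough, untested sketch of that idea for the asset names shown above (it would also mangle absolute URLs, so treat it as a starting point rather than a fix):
location /dashboard/ {
    proxy_pass https://51.68.123.169:30000/;
    proxy_set_header Accept-Encoding "";       # request uncompressed HTML so sub_filter can rewrite it
    sub_filter_once off;
    sub_filter 'href="' 'href="/dashboard/';   # prefix relative stylesheet links
    sub_filter 'src="' 'src="/dashboard/';     # prefix relative script links
}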

How can I dynamically reconfigure upstream servers on nginx OSS?

I have multiple upstream servers behind an nginx load balancer:
upstream app {
# Make each client IP address stick to the same server
# See http://nginx.org/en/docs/http/load_balancing.html
ip_hash;
# Use IP addresses: see recommendation at https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
server 1.1.1.1:6666; # app-server-a
server 2.2.2.2:6666; # app-server-b
}
Right now I use the servers in an active/passive configuration by taking a server down (e.g. systemctl stop myapp) and letting nginx detect that the server is down.
However, I'd like to be able to change the upstream servers dynamically, without having to take either app server or nginx OSS down. I'm aware of the proprietary upstream_conf module for nginx Plus but am using nginx OSS.
How can I dynamically reconfigure the upstream servers on nginx OSS?
You can use:
OpenResty, an OSS nginx bundle with Lua scripting ability (see the sketch at the end of this answer)
nginx with Lua scripting (you can set this up yourself using nginx OSS and LuaJIT) to achieve this
dynx, which can achieve exactly what you are looking for; it's still a work in progress, but the dynamic upstream functionality is there and it's configurable through a REST API.
I'm adding the details on how to deploy and configure dynx:
You need to have a Docker swarm up and running (for testing purposes it can be a single-machine swarm); follow the Docker documentation to do that.
Then you need to deploy the stack, for example with this command (run from the dynx git root):
docker stack deploy -c docker-compose.yml dynx
To check if the application deployed correctly, you can use this command:
docker stack services dynx
To configure a location through the API you can, for instance, do:
curl -v "http://localhost:8888/configure?location=/httpbin&upstream=http://www.httpbin.org/anything&ttl=10"
To test if it works:
curl -v http://localhost:8666/httpbin
Do not hesitate to contact me or open an issue on GitHub if you are not able to get it to work.
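For the OpenResty/Lua route mentioned above, a rough sketch looks like the following. It assumes the lua-resty-core ngx.balancer API and a lua_shared_dict named backends that some other location (e.g. an admin endpoint) updates; the IPs and port are the ones from the question:
http {
    lua_shared_dict backends 1m;
    upstream app {
        server 0.0.0.1;   # placeholder, never actually used
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- read the currently active backend from the shared dict, falling back to app-server-a
            local host = ngx.shared.backends:get("active") or "1.1.1.1"
            local ok, err = balancer.set_current_peer(host, 6666)
            if not ok then
                ngx.log(ngx.ERR, "failed to set peer: ", err)
                return ngx.exit(500)
            end
        }
    }
}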

Docker Network Nginx Resolver

I am trying to get rid of deprecated Docker links in my configuration. What's left is getting rid of those Bad Gateway nginx reverse proxy errors when I recreate a container.
Note: I am using Docker networks in bridge mode. (docker network create nettest)
I am using the following configuration snippet inside nginx:
location / {
resolver 127.0.0.1 valid=30s;
set $backend "http://confluence:8090";
proxy_pass $backend;
I started a container with hostname confluence on my Docker network with name nettest.
Then I started the nginx container on network nettest.
I can ping confluence from inside the nginx container
confluence is listed inside the nginx container's /etc/hosts file
nginx log says send() failed (111: Connection refused) while resolving, resolver: 127.0.0.1:53
I tried the Docker network's default DNS resolver 127.0.0.11 from /etc/resolv.conf
nginx log says confluence could not be resolved (3: Host not found)
Does anybody know how to configure the nginx resolver with Docker networks, or an alternative way to force nginx to correctly resolve the Docker network hostname?
First off, you should be using the Docker embedded DNS server at 127.0.0.11.
Your problem could be caused by 1 of the following:
nginx is trying to use IPv6 (AAAA record) for the DNS queries.
See https://stackoverflow.com/a/35516395/1529493 for the solution.
Basically something like:
http {
resolver 127.0.0.11 ipv6=off;
}
This is probably no longer a problem with Docker 1.11:
Fix to not forward docker domain IPv6 queries to external servers (#21396)
Take care that you don't accidentally override the resolver configuration directive. In my case I had in the server block resolver 8.8.8.8 8.8.4.4; from Mozilla's SSL Configuration Generator, which was overriding the resolver 127.0.0.11; in the http block. That had me scratching my head for a long time...
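To illustrate that pitfall, a sketch only (not a complete configuration):
http {
    resolver 127.0.0.11 ipv6=off;    # Docker's embedded DNS
    server {
        resolver 8.8.8.8 8.8.4.4;    # e.g. pasted from an SSL config generator; this silently
                                     # overrides the http-level resolver for this server block
    }
}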
Maybe you should check your container's /etc/resolv.conf.
It shows your container's actual DNS configuration; then use that DNS server IP for the resolver.
127.0.0.11 does not work in Rancher.
I was running "node:12.18-alpine" with an Angular frontend and hit the same problem with proxy_pass.
Locally it was working with:
resolver 127.0.0.11;
As simple as that! Just execute:
$ cat /etc/resolv.conf | grep nameserver
in your container to get this IP address.
However, when deploying to kubernetes (AWS EKS) I got the very same error:
failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
Solution:
First solution was to find out the IP of the kube-dns service like below:
$ kubectl get service kube-dns -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 178d
Simply replacing the IP with the CLUSTER-IP worked like a charm.
Later, after some more doc digging, I found out that I could reference the service by name (which is a little bit more elegant and resilient):
resolver kube-dns.kube-system valid=10s;
My problem was the missing $request_uri at the end. Adding it at the end of the URI and changing 127.0.0.1 to 127.0.0.11 solved my issue. I hope it helps people avoid spending hours on this.
location /products {
resolver 127.0.0.11;
proxy_pass http://products:3000$request_uri;
}
In several cases where I had this error, adding resolver_timeout 1s; to the Nginx config solved the issue. Most of the time I don't have a resolver entry.
Edit: what also worked for containers where I could explicitly define a nameserver: resolver DNS-IP valid=1s;
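Put together, a hedged example of what that could look like inside a location or http block (127.0.0.11 being Docker's embedded DNS discussed above; replace it with your own nameserver where appropriate):
resolver 127.0.0.11 valid=1s;
resolver_timeout 1s;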
We hit this with docker containers on windows trying to lookup host.docker.internal using the docker internal resolver at 127.0.0.11. All queries would resolve correctly except host.docker.internal. Fix was to add the ipv6=off flag to the resolver line in nginx.conf.
I solved this problem in the following way:
docker run --rm -d --network host --name "my_domain" nginx
https://docs.docker.com/network/network-tutorial-host/
You need a local DNS server like dnsmasq to resolve using 127.0.0.1. Try installing it with apk add --update dnsmasq and setting it up if you're using an alpine (nginx:alpine) variant.
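A rough sketch of that setup inside a running nginx:alpine container (normally you would bake this into the image instead; dnsmasq is left at its default configuration here):
apk add --update dnsmasq
dnsmasq                      # starts and daemonizes with the default config, forwarding via /etc/resolv.conf
# then point nginx at it, e.g.:  resolver 127.0.0.1;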

RabbitMQ connection through Nginx

I am trying to set up RabbitMQ so it can be accessed externally (from non-localhost) through nginx.
nginx-rabbitmq.conf:
server {
listen 5672;
server_name x.x.x.x;
location / {
proxy_pass http://localhost:55672/;
}
}
rabbitmq.conf:
[
{rabbit,
[
{tcp_listeners, [{"127.0.0.1", 55672}]}
]
}
]
By default guest user can only interact from localhost, so we need to create another user with required permissions, like so:
sudo rabbitmqctl add_user my_user my_password
sudo rabbitmqctl set_permissions my_user ".*" ".*" ".*"
However, when I attempt a connection to rabbitmq through pika I get ConnectionClosed exception
import pika
credentials = pika.credentials.PlainCredentials('my_username', 'my_password')
pika.BlockingConnection(
pika.ConnectionParameters(host=ip_address, port=55672, credentials=credentials)
)
--[raises ConnectionClosed exception]--
If I use the same parameters but change the host to localhost and the port to 55672 (connecting to RabbitMQ directly instead of through nginx) then I connect fine:
pika.ConnectionParameters(host='localhost', port=55672, credentials=credentials)
I have opened port 5672 on the GCE web console, and communication through nginx is happening: nginx access.log file shows
[30/Apr/2014:22:59:41 +0000] "AMQP\x00\x00\x09\x01" 400 172 "-" "-" "-"
Which shows a 400 status code response (bad request).
So by the looks of it, the request fails when going through nginx but works when we hit RabbitMQ directly.
Has anyone else had similar problems or got RabbitMQ working for external users through nginx? Is there a RabbitMQ log file where I can see each request, to help further troubleshooting?
Since nginx 1.9 there is the stream module for TCP or UDP proxying (not compiled in by default).
I configured my nginx (1.13.3) with an SSL stream:
stream {
    upstream rabbitmq_backend {
        server rabbitmq.server:5672;
    }
    server {
        listen 5671 ssl;
        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_ciphers RC4:HIGH:!aNULL:!MD5;
        ssl_handshake_timeout 30s;
        ssl_certificate /path/to.crt;
        ssl_certificate_key /path/to.key;
        proxy_connect_timeout 1s;
        proxy_pass rabbitmq_backend;
    }
}
https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
You have configured nginx as an HTTP reverse proxy, however rabbitmq is configured to use the AMQP protocol (see description of tcp_listeners at https://www.rabbitmq.com/configure.html)
In order for nginx to do anything meaningful you will need to reconfigure rabbitmq to use HTTP - for example http://www.rabbitmq.com/web-stomp.html.
Of course, this may have a ripple effect because any clients that are accessing rabbitmq via AMQP must be reconfigured/redesigned to use HTTP.
You can try proxying TCP by installing a TCP proxy module for nginx, so it can work with AMQP.
https://github.com/yaoweibin/nginx_tcp_proxy_module
Give it a go.
Nginx was originally an HTTP-only server. I also suggest looking into the TCP proxy module referred to above, but if you would like a proven load balancer that is a general TCP reverse proxy (not just HTTP; it can handle any protocol), you might consider using HAProxy.
Since AMQP works at the TCP/UDP level, you need to configure nginx for TCP/UDP connections:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer
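A minimal non-SSL sketch using the values from this question (nginx listening on 5672, RabbitMQ on 127.0.0.1:55672); note that a stream block sits at the top level of nginx.conf, not inside the http block:
stream {
    upstream rabbitmq_backend {
        server 127.0.0.1:55672;
    }
    server {
        listen 5672;
        proxy_pass rabbitmq_backend;
    }
}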
I might be late to the party, but I am quite sure this article will help a lot of people in the coming days.
In the article I have explained how to install a Let's Encrypt certificate for the RabbitMQ management GUI with NGINX as a reverse proxy on port 15672, which runs over HTTP.
I have also used the same SSL certificates to power the RabbitMQ server that runs on the AMQP protocol.
Kindly go through the following article for detailed description:
https://stackcoder.in/posts/install-letsencrypt-ssl-certificate-for-rabbitmq-server-and-rabbitmq-management-tool
NOTE: Don't put the RabbitMQ server running on port 5672 behind an HTTP reverse proxy. If you do proxy it, kindly use NGINX streams. But I highly recommend sticking with adding the certificate paths in the rabbitmq.conf file, as RabbitMQ works over TCP/UDP.
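For reference, a hedged sketch of what those certificate paths could look like in the newer rabbitmq.conf format (the Let's Encrypt paths below are hypothetical; adjust them to your domain):
listeners.ssl.default  = 5671
ssl_options.cacertfile = /etc/letsencrypt/live/example.com/fullchain.pem
ssl_options.certfile   = /etc/letsencrypt/live/example.com/cert.pem
ssl_options.keyfile    = /etc/letsencrypt/live/example.com/privkey.pem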

Assigning vhosts to Docker ports

I have a wildcard DNS set up so that all web requests to a custom domain (*.foo) map to the IP address of the Docker host. If I have multiple containers running Apache (or Nginx) instances, each container maps the Apache port (80) to some external inbound port.
What I would like to do is make a request to container-1.foo, which is already mapped to the correct IP address (of the Docker host) via my custom DNS server, but proxy the default port 80 request to the correct Docker external port such that the correct Apache instance from the specified container is able to respond based on the custom domain. Likewise, container-2.foo would proxy to a second container's apache, and so on.
Is there a pre-built solution for this? Is my best bet to run an Nginx proxy on the Docker host, or should I write a node.js proxy with the potential to manage Docker containers (start/stop/rebuild via the web), or...? What options do I have that would make using the Docker containers feel more like a natural event and less like extraneous port and container juggling?
This answer might be a bit late, but what you need is an automatic reverse proxy. I have used two solutions for that:
jwilder/nginx-proxy
Traefik
With time, my preference is to use Traefik. Mostly because it is well documented and maintained, and comes with more features (load balancing with different strategies and priorities, healthchecks, circuit breakers, automatic SSL certificates with ACME/Let's Encrypt, ...).
Using jwilder/nginx-proxy
When running Jason Wilder's nginx-proxy Docker image, you get an nginx server set up as a reverse proxy for your other containers with no config to maintain.
Just run your other containers with the VIRTUAL_HOST environment variable and nginx-proxy will discover their ip:port and update the nginx config for you.
Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:
# start the reverse proxy
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
# start a first container for http://tutum.test.local
docker run -d -e "VIRTUAL_HOST=tutum.test.local" tutum/hello-world
# start a second container for http://deis.test.local
docker run -d -e "VIRTUAL_HOST=deis.test.local" deis/helloworld
Using Traefik
When running a Traefik container, you get a reverse proxy server set up which will reconfigure its forwarding rules given docker labels found on your containers.
Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:
# start the reverse proxy
docker run --rm -it -p 80:80 -v /var/run/docker.sock:/var/run/docker.sock traefik:1.7 --docker
# start a first container for http://tutum.test.local
docker run -d -l "traefik.frontend.rule=Host:tutum.test.local" tutum/hello-world
# start a second container for http://deis.test.local
docker run -d -l "traefik.frontend.rule=Host:deis.test.local" deis/helloworld
Here are two possible answers: (1) set up ports directly with Docker and use Nginx/Apache to proxy the vhosts, or (2) use Dokku to manage ports and vhosts for you (which is how I learned to do Method 1).
Method 1a (directly assign ports with docker)
Step 1: Set up nginx.conf or Apache on the host, with the desired port number assignments. This web server, running on the host, will do the vhost proxying. There's nothing special about this with regard to Docker - it is normal vhost hosting. The special part comes next, in Step 2, to make Docker use the correct host port number.
Step 2: Force port number assignments in Docker with "-p" to set Docker's port mappings, and "-e" to set custom environment variables within Docker, as follows:
port=12345 # <-- the vhost port setting used in nginx/apache
IMAGE=myapps/container-1
id=$(docker run -d -p :$port -e PORT=$port $IMAGE)
# -p :$port will establish a mapping of 12345->12345 from outside docker to
# inside of docker.
# Then, the application must observe the PORT environment variable
# to launch itself on that port; This is set by -e PORT=$port.
# Additional goodies:
echo $id # <-- the running id of your container
echo $id > /app/files/CONTAINER # <-- remember Docker id for this instance
docker ps # <-- check that the app is running
docker logs $id # <-- look at the output of the running instance
docker kill $id # <-- to kill the app
Method 1b Hard-coded application port
...if your application uses a hardcoded port, for example port 5000 (i.e. it cannot be configured via the PORT environment variable as in Method 1a), then it can be hardcoded through Docker like this:
publicPort=12345
id=$(docker run -d -p $publicPort:5000 $IMAGE)
# -p $publicPort:5000 will map port 12345 outside of Docker to port 5000 inside
# of Docker. Therefore, nginx/apache must be configured to vhost proxy to 12345,
# and the application within Docker must be listening on 5000.
Method 2 (let Dokku figure out the ports)
At the moment, a pretty good option for managing Docker vhosts is Dokku. An upcoming option may be to use Flynn, but as of right now Flynn is just getting started and not quite ready. Therefore we go with Dokku for now: After following the Dokku install instructions, for a single domain, enable vhosts by creating the "VHOST" file:
echo yourdomain.com > /home/git/VHOST
# in your case: echo foo > /home/git/VHOST
Now, when an app is pushed via SSH to Dokku (see Dokku docs for how to do this), Dokku will look at the VHOST file and for the particular app pushed (let's say you pushed "container-1"), it will generate the following file:
/home/git/container-1/nginx.conf
And it will have the following contents:
upstream container-1 { server 127.0.0.1:49162; }
server {
listen 80;
server_name container-1.yourdomain.com;
location / {
proxy_pass http://container-1;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
When the server is rebooted, Dokku will ensure that Docker starts the application with the port mapped to its initially deployed port (49162 here), rather than getting assigned randomly another port. To achieve this deterministic assignment, Dokku saves the initially assigned port into /home/git/container-1/PORT and on the next launch it sets the PORT environment to this value, and also maps Docker's port assignments to be this port on both the host-side and the app-side. This is opposed to the first launch, when Dokku will set PORT=5000 and then figure out whatever random port Dokku maps on the VPS side to 5000 on the app side. It's round about (and might even change in the future), but it works!
The way VHOST works, under the hood, is: upon doing a git push of the app via SSH, Dokku will execute hooks that live in /var/lib/dokku/plugins/nginx-vhosts. These hooks are also located in the Dokku source code here and are responsible for writing the nginx.conf files with the correct vhost settings. If you don't have this directory under /var/lib/dokku, then try running dokku plugins-install.
With Docker, you want the internal ports to remain standard (e.g. 80) and figure out how to wire up the randomly assigned external ports.
One way to handle them is with a reverse proxy like hipache. Point your DNS at it, and then you can reconfigure the proxy as your containers come up and down. Take a look at http://txt.fliglio.com/2013/09/protyping-web-stuff-with-docker/ to see how this could work.
If you're looking for something more robust, you may want to take a look at "service discovery." (A look at service discovery with Docker: http://txt.fliglio.com/2013/12/service-discovery-with-docker-docker-links-and-beyond/)
