nginx reverse proxy for docker service - nginx

I have a simple reverse proxy nginx.conf:
events {
    worker_connections 1024;
}

http {
    gzip on;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host:$server_port;

    server {
        listen 80;
        server_name app.local;

        location / {
            proxy_pass http://localhost:3000;
        }
    }
}
localhost:3000 is a Docker Swarm (1.13) service running a Node app. Everything works great initially when I request app.local. However, whenever I update the service (containers are redeployed):
docker service update --force app
nginx temporarily decides something is wrong and doesn't respond to requests to app.local for 30 seconds or so. This is all running on a CentOS 7 server.
I've configured my Docker service to redeploy via rolling updates, so from the outside, port 3000 never appears to go down. I can continually request app.local:3000, bypassing nginx, without any perceived downtime.
Nginx is NOT running in a docker container. I've gotta be missing some sort of configuration option.
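For context, the kind of "configuration option" I have in mind is an explicit upstream pinned to 127.0.0.1 with failure accounting disabled; the snippet below is only a sketch of that idea (the timeout value is a guess, and I haven't verified it helps):
upstream app_backend {
    # pin to the IPv4 loopback so nginx doesn't also try ::1
    server 127.0.0.1:3000 max_fails=0;
}

server {
    listen 80;
    server_name app.local;

    location / {
        # fail fast instead of waiting out the default 60s connect timeout
        proxy_connect_timeout 2s;
        proxy_next_upstream error timeout;
        proxy_pass http://app_backend;
    }
}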

Related

Configuring a Nginx in front of my front and back end on Kubernetes

I have been having problems trying to deploy my web app in Kubernetes.
I wanted to mimic my old deployment, with nginx working as a reverse proxy in front of my back-end and front-end services.
I have 3 pieces in my system: nginx, front end and back end. I built 3 Deployments and 3 Services, and exposed only my nginx Service using nodePort: 30050.
Without further delays, this is my nginx.conf:
upstream my-server {
    server myserver:3000;
}

upstream my-front {
    server myfront:4200;
}

server {
    listen 80;
    server_name my-server.com;

    location /api/v1 {
        proxy_pass http://my-server;
    }

    location / {
        proxy_pass http://my-front;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
I installed curl and nslookup inside one of the pods and made some manual requests against the cluster-internal endpoints... tears came to my eyes, everything was working... I am almost a developer worthy of the cloud.
Everything is working smoothly... everything but nginx DNS resolution.
If I do kubectl exec -it my-nginx-pod -- /bin/bash and curl one of the other two services, e.g. curl myfront:4200, it works properly.
If I nslookup one of them, it works as well.
After this I replaced the service names in nginx.conf with the pod IPs. After restarting the nginx service, everything worked.
Why doesn't nginx resolve the upstream names properly?
I am going nuts over this.
Nginx caches the resolved IPs. To force Nginx to resolve DNS, you can introduce a variable:
location /api/v1 {
    set $url "http://my-server";
    proxy_pass $url;
}
More details can be found in this related answer.
Since what you describe is most likely caching in Nginx, it would also explain why restarting (or reloading) Nginx fixes the problem, at least for a while, until the DNS entry changes again.
I don't think it is related to Kubernetes. I had the same problem a while ago when Nginx cached the DNS entries of AWS ELBs, which frequently change IPs.
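One thing to keep in mind: when proxy_pass takes its target from a variable, Nginx only re-resolves the name at runtime if it does not match an upstream block, and it then needs a resolver directive. A minimal sketch for the Kubernetes case, assuming the cluster DNS is reachable as kube-dns.kube-system.svc.cluster.local (adjust the address and the valid= interval to your cluster):
server {
    listen 80;
    server_name my-server.com;

    # re-resolve service names at runtime instead of only at config load;
    # the kube-dns address is an assumption - point this at your cluster's DNS
    resolver kube-dns.kube-system.svc.cluster.local valid=10s;

    location /api/v1 {
        set $backend "http://myserver:3000";
        proxy_pass $backend;
    }

    location / {
        set $frontend "http://myfront:4200";
        proxy_pass $frontend;
    }
}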

How to create Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listenning both on port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to prove we own the domain name, and this is done by validating that a file is accessible under the server name (basically this consists of Nginx being able to serve a static file over port 80).
So here is my first problem: to serve the static file needed by letsencrypt, I need to have nginx started. But the SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...
So a simple solution could be to have 2 Nginx containers: one listening only on port 80 that is started first, then letsencrypt, then a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
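For reference, the port-80-only container would basically just need to serve the challenge files, something like this (the webroot path is an assumption; it would be a volume shared with the letsencrypt container):
server {
    listen 80;
    server_name docker.thedivernetwork.net example.com;

    # serve the letsencrypt HTTP-01 challenge files;
    # /var/www/letsencrypt is an assumed shared volume
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }
}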
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host;                            # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr;                     # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass that redirects toward the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file in the same folder, django.conf:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under 3 conditions:
secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
registry service is started
django service is started
The problem is that the django deployment pulls its image from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server names, so nginx is able to serve both of them.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration, sketched after the list below:
I start docker registry service
I start Nginx with only the registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
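Concretely, I imagine steps 2 and 4 would rely on an include-based layout roughly like this (the conf.d path is just an assumption of how I'd organize it):
# main nginx.conf (assumed layout)
events {
    worker_connections 1024;
}

http {
    # only the .conf files currently present get loaded:
    # step 2 starts with registry.conf alone, step 4 adds django.conf and runs `nginx -s reload`
    include /etc/nginx/conf.d/*.conf;
}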
If there was a way to make nginx start ignoring failing configuration, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if a Pod is not started yet, nginx will find the Service when looking it up, because the Service already has an IP assigned.
So you start the Services, then start nginx and whatever Pod you want in the order you want.

Websockets on ElasticBeanstalk giving 404

I'm trying to deploy a websocket server to Elastic Beanstalk.
I have a Docker container that contains both nginx and a jar server, with nginx just doing forwarding. The nginx.conf is like this:
server {
    listen 80;

    location /ws/ {                           # <-- this part only works locally
        proxy_pass http://127.0.0.1:8090/;    # jar handles websockets on port 8090
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {                              # <-- this part works locally and on ElasticBeanstalk
        proxy_pass http://127.0.0.1:8080/;    # jar handles http requests on port 8080
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
I can run this Docker container locally and everything works fine: http requests are served, and I can connect websockets using ws://localhost:80/ws/. However, when I deploy to Elastic Beanstalk, http requests are still OK, but trying to connect websockets on ws://myjunk.elasticbeanstalk.com:80/ws/ gives a 404 error. Do I need something else to allow websockets on Elastic Beanstalk?
Ok, got it working. I needed the ElasticBeanstalk load balancer to use TCP instead of HTTP.
To do this from the AWS console (as it's laid out on 5/16/2015), go to your Elastic Beanstalk environment, choose "Configuration" in the left menu, and find the "Load Balancing" pane under "Network Tier". Click its cog wheel, and you can change the load balancer protocol from HTTP to TCP.

Spawning node.js server with nginx

So this is something new to me. I have a node.js application running on my server on port 3000, and I have an nginx proxy in front of it:
upstream websli_nodejs_app_2 {
    server 127.0.0.1:3000;
}

# this is the dynamic websli server
server {
    listen 80;
    server_name test.ch;
    #root /var/www/websli/public;

    # pass the request to the node.js server with the correct headers;
    # much more can be added, see the nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://websli_nodejs_app_2/;
        proxy_redirect off;
    }
}
This works like a charm. Now I'm spawning (at least I believe that's what it's called) more node applications. That's where Wintersmith comes into play: I'm running wintersmith preview.
On my localhost that results in another node.js server on localhost:8000. When I then go to localhost:8000 in my browser I get the expected result, but on my localhost I don't have the nginx proxy set up.
The issue:
Now on my production setup with nginx, I'm a bit stuck, because I obviously cannot access localhost:8000.
I have tried to add another upstream server, but that didn't really work out. I have then also tried to spawn on something like dev.test.ch:8000, but that results in an error like listen EADDRNOTAVAIL.
What I'm looking for
The goal is to start another server from inside my main node.js server and make it accessible from a browser. Any input is highly welcomed.
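To make it concrete, what I picture is a second upstream and server block along these lines, though I'm not sure this is the right approach (dev.test.ch and port 8000 are from my setup above; the rest is guesswork):
upstream websli_wintersmith_preview {
    # wintersmith preview, assumed to be listening on localhost:8000
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name dev.test.ch;    # hypothetical subdomain pointing at the same machine

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://websli_wintersmith_preview/;
    }
}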

How does supervisord and nginx handle what tornado port is used?

I am using supervisord to spin up 2 instances of tornado on different ports, and I use nginx as a reverse proxy to these ports. I have noticed that all traffic is directed to only one port. How does supervisord or nginx decide which instance of tornado is used when a user makes a request to the web service?
nginx config:
http {
    upstream frontends {
        server xx.xxx.x.xxx:8001;
        server xx.xxx.x.xxx:8002;
    }

    server {
        listen 80;
        server_name xx.xxx.x.xxx;

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}
From the nginx docs:
Requests are distributed according to the servers in round-robin manner with respect of the server weight.
By default, servers are given equal weight. Are you sure all requests are going to one port?
Also note that supervisord's role is simply process management - only nginx decides how to distribute traffic to the ports you've configured.
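For illustration, explicit weights would look something like this (the weight value here is made up, not a recommendation):
upstream frontends {
    server xx.xxx.x.xxx:8001 weight=3;    # receives roughly 3 out of every 4 requests
    server xx.xxx.x.xxx:8002;             # default weight=1
}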
