Allow nginx to read docker.sock

In order to monitor my docker containers, I've decided to expose the docker remote API through nginx with the following rule:
server {
    listen 1234;
    server_name xxx.xxx.xxx.xxx;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unix:/var/run/docker.sock;
    }
}
But in the nginx error log, I get the following error:
connect() to unix:/var/run/docker.sock failed (13: Permission denied)
The reason is that docker.sock is owned by the docker group, while nginx runs under the www-data group.
What is the best way to solve this problem?

For this issue you can add the user nginx runs as (typically www-data) to the docker group:
sudo usermod -aG docker www-data
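A quick way to verify the change took effect (assuming the nginx user is www-data):
id www-data                    # "docker" should now appear in the group list
sudo systemctl restart nginx   # new group membership only applies after a restart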

You can open an HTTP socket in the daemon by adding the following to the ExecStart line in the daemon's service file:
-H <ip address>:2375
You can find the location of the configuration file in the output of the command:
systemctl status docker
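On systemd-based installs, a drop-in override is the usual way to change ExecStart. Here is a sketch; the file path and the tcp:// listen address are my assumptions, adjust them to your setup:
# /etc/systemd/system/docker.service.d/override.conf (hypothetical drop-in)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375
Then reload and restart: sudo systemctl daemon-reload && sudo systemctl restart docker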

You can set permissions on the docker socket. Note that connecting to a unix socket requires write permission, so read alone is not enough; also be aware this hands control of the Docker daemon to every local user:
sudo chmod a+rw /var/run/docker.sock
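Either way, you can test access to the socket directly; the /version endpoint below is part of the Docker Engine API:
curl --unix-socket /var/run/docker.sock http://localhost/version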

Related

multiple site on localhost served by nginx without domain name

I'm on Ubuntu 20.04 on a Raspberry Pi 4 and I'd like to write some websites for testing.
It's quite simple to configure nginx using some server blocks, with server_name inside each server block pointing to a virtual domain that doesn't exist, and then setting those domains to point to localhost in /etc/hosts:
# /etc/hosts
127.0.0.1 adminer
127.0.0.1 pippo
127.0.0.1 pluto
so that I have sites like this:
http://adminer
http://pippo
http://pluto
But I'd like to avoid the /etc/hosts setting.
What I'd like is:
http://localhost/adminer
http://localhost/pippo
http://localhost/pluto
...
to point to 3 different sites: adminer, pippo and pluto.
Is it possible?
What configuration do I have to use?
Can I use one server block per site, or do I have to use one server block for all 3 sites?
I'm new to nginx ...
best regards,
Leonardo
I just came across the same issue and I used ports to achieve that.
This solution worked for me on a local machine and home network and probably works on any VPS without domain.
WEB SERVER 1
Open your firewall, example port 81
sudo ufw allow 81
Create your 1st web directory
sudo mkdir -p /var/www/web-folder-name1
Create test content in your web-folder
sudo nano /var/www/web-folder-name1/index.html
and paste any content here to test
Hello World 1!
Create a virtual host file in Nginx
sudo nano /etc/nginx/sites-available/web-folder-name1
and paste the following content
server {
    listen 81;       # the port is important
    server_name _;   # underscore is ok as you don't have a domain
    root /var/www/web-folder-name1;
    index index.html;
}
Enable your web server
sudo ln -s /etc/nginx/sites-available/web-folder-name1 /etc/nginx/sites-enabled/
WEB SERVER 2
Open your firewall, example port 82
sudo ufw allow 82
Create your 2nd web directory
sudo mkdir -p /var/www/web-folder-name2
Create test content in your web-folder
sudo nano /var/www/web-folder-name2/index.html
and paste any content here to test
Hello World 2!
Create a virtual host file in Nginx
sudo nano /etc/nginx/sites-available/web-folder-name2
and paste the following content
server {
    listen 82;
    server_name _;
    root /var/www/web-folder-name2;
    index index.html;
}
Enable your web server
sudo ln -s /etc/nginx/sites-available/web-folder-name2 /etc/nginx/sites-enabled/
Restart Nginx
sudo systemctl restart nginx
Test in your browser
127.0.0.1:81
127.0.0.1:82
# or
localhost:81
localhost:82
# or if you're on a network
static-ip:81
static-ip:82
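If you prefer the command line, a quick sanity check (ports as configured above):
sudo nginx -t                  # validate the configuration
curl -s http://127.0.0.1:81    # expect "Hello World 1!"
curl -s http://127.0.0.1:82    # expect "Hello World 2!"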
Thanks @lotfio.
If the server_name is the same, there cannot be other server blocks using that same server_name, I suppose.
To avoid settings in /etc/hosts we can do this in /etc/nginx/sites-available/default:
server {
    #... normal default stuff conf
    include /etc/nginx/sites-available/localhost_adminer.inc;
    include /etc/nginx/sites-available/localhost_pippo.inc;
    #...
    #... normal default stuff conf
}
If you'd like to do a reverse proxy to apache2 for adminer (like my first try when moving from apache2 to nginx), you have to configure apache2 to Listen on another port (I chose 8181).
In /etc/nginx/sites-available/localhost_adminer.inc:
location /adminer/ {
    index conf.php;
    alias /etc/adminer/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:8181/adminer/;
}
(I'm on Ubuntu, so the adminer package is configured to serve from /etc/adminer/.)
And so on for the pippo and pluto sites, etc.
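For instance, here's a hypothetical /etc/nginx/sites-available/localhost_pippo.inc, assuming pippo is served by a local backend on port 8282 (the port is my invention):
location /pippo/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:8282/;
}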
best regards,
Leonardo

Docker Nginx Proxy: how to route traffic to different container using path and not hostname

Let's say that I now have different apps running on the same server under different paths:
10.200.200.210/app1
10.200.200.210/app2
10.200.200.210/app3
I want to run each app in a different Docker container, using nginx as a proxy.
I tried jwilder/nginx-proxy and it works great if I use different domain names (app1.domain.com, app2.domain.com, etc.), but I can't use domains; I need to use the same IP.
I also can't use different ports like:
10.200.200.210:81/app1
10.200.200.210:82/app2
10.200.200.210:83/app3
Everything must work on port 80.
Is there a way to configure jwilder/nginx-proxy to do this?
Is there another Docker image like jwilder/nginx-proxy that can do it?
Or could you please give me some hints on building an nginx docker container myself?
In case somebody is still looking for the answer:
jwilder/nginx-proxy allows you to use custom Nginx configuration, either proxy-wide or on a per-VIRTUAL_HOST basis.
Here's how you can do it with per-VIRTUAL_HOST location configuration.
Inside your project folder create another folder: "vhost.d".
Create a file "whoami.local" with the custom nginx configuration inside the "vhost.d" folder. This file must have the same name as the VIRTUAL_HOST!
./vhost.d/whoami.local
location /app1 {
    proxy_pass http://app1:8000;
}
location /app2 {
    proxy_pass http://app2:8000;
}
Create docker-compose.yml file.
./docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "8080:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /path/to/vhost.d:/etc/nginx/vhost.d:ro
  gateway:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local
  app1:
    image: jwilder/whoami
  app2:
    image: jwilder/whoami
Run docker-compose up
Check configuration
In bash run:
$ curl -H "Host: whoami.local" localhost:8080
I'm 1ae273bce7a4
$ curl -H "Host: whoami.local" localhost:8080/app1
I'm 52b1a7b1992a
$ curl -H "Host: whoami.local" localhost:8080/app2
I'm 4adbd3f9e7a0
$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                  NAMES
6a659a4d4b0a   jwilder/nginx-proxy   "/app/docker-entrypo…"   54 seconds ago   Up 53 seconds   0.0.0.0:8080->80/tcp   nginxreverseproxy_nginx-proxy_1
4adbd3f9e7a0   jwilder/whoami        "/app/http"              54 seconds ago   Up 53 seconds   8000/tcp               nginxreverseproxy_app2_1
52b1a7b1992a   jwilder/whoami        "/app/http"              54 seconds ago   Up 53 seconds   8000/tcp               nginxreverseproxy_app1_1
1ae273bce7a4   jwilder/whoami        "/app/http"              54 seconds ago   Up 53 seconds   8000/tcp               nginxreverseproxy_gateway_1
You can also add the "whoami.local" domain to your /etc/hosts file and make calls to that domain directly.
/etc/hosts
...
127.0.0.1 whoami.local
...
Result:
$ curl whoami.local:8080
I'm 52ed6da1e86c
$ curl whoami.local:8080/app1
I'm 4116f51020da
$ curl whoami.local:8080/app2
I'm c4db24012582
Just use the nginx image to create the container. Do remember to set --net host, which makes your container share the same address and ports as the host machine. Mount your nginx.conf file and configure the proxy table there. For example:
docker command:
docker run --name http-proxy -v /host/nginx.conf:/etc/nginx/nginx.conf --net host -itd --restart always nginx
nginx.conf:
server {
    listen 80;

    location /app1 {
        proxy_pass YOUR_APP1_URL;
    }

    location /app2 {
        proxy_pass YOUR_APP2_URL;
    }
}
Here is a full nginx.conf.
It proxies everything at the root path to one container, and only /api to a different container.
Source and an example container using it
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://frontend:3000/;
        }

        location /api {
            proxy_pass http://backend/api;
        }
    }
}
Just put this inside /etc/nginx/nginx.conf:
worker_processes 1;
error_log /var/log/nginx/error.log;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location /api {
            proxy_pass http://awesome-api;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
The default bridge network has its gateway at 172.17.0.1. You can use this IP address in your nginx.conf:
server {
    listen 80;
    server_name example.com;

    location /app1 {
        proxy_pass http://172.17.0.1:81;
    }

    location /app2 {
        proxy_pass http://172.17.0.1:82;
    }
}
The apps will be accessible on port 80 from the outside.
You can check your bridge gateway IP address by running the command docker network inspect bridge.
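If you only want the gateway address itself, a --format filter should extract it (a sketch, assuming the default single IPAM config entry):
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'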
My situation is a little different. I'm working on a project where Django and a couple of other apps sit behind nginx (acting as a reverse proxy).
The accepted solution did not work for me, and I think it is because the various apps do not serve files (i.e. /app1-uri/bla/blo/bli/ and /app1-uri/bla/blo/bli are exactly equivalent). All static files are gathered for nginx to serve.
The 'problematic' behavior is explained in the docs here. In essence, nginx picks up the URIs without a trailing slash and tries to resolve them as resources; if it can't, it redirects to /bla/blo/bli/ instead of /app1-uri/bla/blo/bli/.
This is what finally worked for me. In this example app1-uri is repairapp.
server {
    listen 80;
    listen 443 ssl http2;
    ...
    # This is the line that fixes the issue.
    rewrite ^/repairapp/([^static].*[^/])$ /repairapp/$1/ permanent;

    # This is nginx serving my static files
    location /repairapp/static/ {
        alias /var/www/repairapp/static/;
    }

    # This is the uri that maps to my app (no file serving here)
    location /repairapp/ {
        proxy_pass http://repairappcontainer:8000/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
...
Note the line rewrite ^/repairapp/([^static].*[^/])$ /repairapp/$1/ permanent;
It rewrites and adds a trailing slash to any of my app's URIs that lack it, except those that start with /repairapp/static; those URIs map to resources that nginx will serve directly.
To debug, open a shell in the nginx container and run curl -IL http://[server-name]/repairapp/[string-of-uris-without-trailing-slash] to see exactly what happens.

How to resolve Nginx "proxy_pass 502 Bad Gateway" error

I am trying to add proxy_set_header in my nginx.conf file. When I add proxy_pass and invoke the URL, it throws a 502 Bad Gateway nginx/1.11.1 error.
Not sure how to resolve this error:
upstream app-server {
    # connect to this socket
    server unix:///tmp/alpasso-wsgi.sock; # for a file socket
}

server {
    server_name <name>;
    listen 80 default_server;

    # Redirect http to https
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    server_name <name>;
    listen 443 ssl default_server;
    recursive_error_pages on;

    location /azure {
        proxy_pass http://app-server;
    }

    ssl on;
    ssl_certificate /etc/nginx/server.crt;
    ssl_certificate_key /etc/nginx/server.key;
    ssl_client_certificate /etc/nginx/server.crt;
    ssl_verify_client optional;
}
I had a similar problem with proxy_pass. If your Linux server is using SELinux, then you may want to try this:
$ setsebool -P httpd_can_network_connect true
Refer to Warren's answer: https://unix.stackexchange.com/questions/196907/proxy-nginx-shows-a-bad-gateway-error
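To first confirm whether SELinux is enforcing at all (standard SELinux tooling):
getenforce    # prints Enforcing, Permissive, or Disabled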
A 502 is sent when your upstream is not reachable.
Switch on the error log and you might see "failed to connect to upstream".
In that case, check whether your upstream server is running (for example sudo service <your-app> status) and start it if it isn't.
Nginx proxy with unix socket troubleshooting:
Check nginx conf:
nginx -t
Check the socket:
netstat --protocol=unix -nlp | grep alpasso-wsgi.sock
Check whether the app is working:
curl --unix-socket /tmp/alpasso-wsgi.sock http://localhost/your-path-on-app
(You should see HTML output on the screen.)
If not, check your app. If yes:
Check nginx error log
sudo tail -f /var/log/nginx/error.log
In case you get an nginx permissions error, check the nginx user's rights on the socket:
Determine which username nginx uses:
ps aux | grep nginx
If, for example, the nginx user is www-data, give the www-data user the required rights. Add the www-data user to the socket file's group:
sudo usermod -a -G your-socket-file-group www-data
and check the permissions of the socket file,
or use ACL:
sudo setfacl -R -m u:www-data:rwX /path-to-your-unix-socket
sudo setfacl -Rd -m u:www-data:rwX /path-to-your-unix-socket
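You can verify the resulting ACL with getfacl:
getfacl /path-to-your-unix-socket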
In my opinion, ACL is better for security, because you give nginx rights to a single file only, not to every file that belongs to the group.

nginx proxy_pass and docker - I don't want port number in address bar

On mywebsite.com, I have a running docker container with WordPress.
I started it as
docker run -p 8000:80 --name docker-wordpress-nginx -d
and
docker ps
shows
0.0.0.0:8000->80/tcp
and on my host I have nginx running with
server {
    listen 80;
    ...
    server_name mywebsite.com www.mywebsite.com;
    ...
    location / {
        proxy_pass http://localhost:8000/;
        proxy_set_header Host $host;
    }
}
When I go to
mywebsite.com
it brings up the WordPress index page of my site, but the address in the browser is now
mywebsite.com:8000
instead of
mywebsite.com
as I expected.
Everything looks as I wanted, except that I always get that port number in the address:
http://mywebsite.com:8000/2015/08/01/hello-world/
Instead, I wanted
http://mywebsite.com/2015/08/01/hello-world/
I mean, in general, instead of
http://mywebsite.com:8000/some_blog/
I want
http://mywebsite.com/some_blog/
Any ideas?

How to configure Docker port mapping to use Nginx as an upstream proxy?

Update II
It's now July 16th, 2015 and things have changed again. I've discovered this automagical container from Jason Wilder: https://github.com/jwilder/nginx-proxy and it solves this problem in about as long as it takes to docker run the container. This is now the solution I'm using to solve this problem.
Update
It's now July of 2015 and things have changed drastically with regard to networking Docker containers.
You should use this post to gain a basic understanding of the docker --link approach to service discovery, which is about as basic as it gets, works very well, and actually requires less fancy-dancing than most of the other solutions. It is limited in that it's quite difficult to network containers on separate hosts in any given cluster, and containers cannot be restarted once networked, but does offer a quick and relatively easy way to network containers on the same host. It's a good way to get an idea of what the software you'll likely be using to solve this problem is actually doing under the hood.
Additionally, you'll probably want to also check out Docker's nascent network, Hashicorp's consul, Weaveworks weave, Jeff Lindsay's progrium/consul & gliderlabs/registrator, and Google's Kubernetes.
There's also the CoreOS offerings that utilize etcd, fleet, and flannel.
And if you really want to have a party you can spin up a cluster to run Mesosphere, or Deis, or Flynn.
If you're new to networking (like me) then you should get out your reading glasses, pop "Paint The Sky With Stars — The Best of Enya" on the Wi-Hi-Fi, and crack a beer — it's going to be a while before you really understand exactly what it is you're trying to do. Hint: You're trying to implement a Service Discovery Layer in your Cluster Control Plane. It's a very nice way to spend a Saturday night.
It's a lot of fun, but I wish I'd taken the time to educate myself better about networking in general before diving right in. I eventually found a couple posts from the benevolent Digital Ocean Tutorial gods: Introduction to Networking Terminology and Understanding ... Networking. I suggest reading those a few times first before diving in.
Have fun!
Original Post
I can't seem to grasp port mapping for Docker containers. Specifically how to pass requests from Nginx to another container, listening on another port, on the same server.
I've got a Dockerfile for an Nginx container like so:
FROM ubuntu:14.04
MAINTAINER Me <me@myapp.com>
RUN apt-get update && apt-get install -y htop git nginx
ADD sites-enabled/api.myapp.com /etc/nginx/sites-enabled/api.myapp.com
ADD sites-enabled/app.myapp.com /etc/nginx/sites-enabled/app.myapp.com
ADD nginx.conf /etc/nginx/nginx.conf
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80 443
CMD ["service", "nginx", "start"]
And then the api.myapp.com config file looks like so:
upstream api_upstream {
    server 0.0.0.0:3333;
}

server {
    listen 80;
    server_name api.myapp.com;
    return 301 https://api.myapp.com/$request_uri;
}

server {
    listen 443;
    server_name api.myapp.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_pass http://api_upstream;
    }
}
And then another for app.myapp.com as well.
And then I run:
sudo docker run -p 80:80 -p 443:443 -d --name Nginx myusername/nginx
And it all stands up just fine, but requests are not getting passed through to the other containers/ports. When I ssh into the Nginx container and inspect the logs, I see no errors.
Any help?
@T0xicCode's answer is correct, but I thought I would expand on the details since it actually took me about 20 hours to finally get a working solution implemented.
If you're looking to run Nginx in its own container and use it as a reverse proxy to load balance multiple applications on the same server instance then the steps you need to follow are as such:
Link Your Containers
When you docker run your containers, typically by inputting a shell script into User Data, you can declare links to any other running containers. This means that you need to start your containers up in order and only the latter containers can link to the former ones. Like so:
#!/bin/bash
sudo docker run -p 3000:3000 --name API mydockerhub/api
sudo docker run -p 3001:3001 --link API:API --name App mydockerhub/app
sudo docker run -p 80:80 -p 443:443 --link API:API --link App:App --name Nginx mydockerhub/nginx
So in this example, the API container isn't linked to any others, but the
App container is linked to API and Nginx is linked to both API and App.
The result of this is changes to the env vars and the /etc/hosts files that reside within the API and App containers. The results look like so:
/etc/hosts
Running cat /etc/hosts within your Nginx container will produce the following:
172.17.0.5 0fd9a40ab5ec
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 App
172.17.0.2 API
ENV Vars
Running env within your Nginx container will produce the following:
API_PORT=tcp://172.17.0.2:3000
API_PORT_3000_TCP_PROTO=tcp
API_PORT_3000_TCP_PORT=3000
API_PORT_3000_TCP_ADDR=172.17.0.2
APP_PORT=tcp://172.17.0.3:3001
APP_PORT_3001_TCP_PROTO=tcp
APP_PORT_3001_TCP_PORT=3001
APP_PORT_3001_TCP_ADDR=172.17.0.3
I've truncated many of the actual vars, but the above are the key values you need to proxy traffic to your containers.
To obtain a shell to run the above commands within a running container, use the following:
sudo docker exec -i -t Nginx bash
You can see that you now have both /etc/hosts file entries and env vars that contain the local IP address for any of the containers that were linked. So far as I can tell, this is all that happens when you run containers with link options declared. But you can now use this information to configure nginx within your Nginx container.
Configuring Nginx
This is where it gets a little tricky, and there's a couple of options. You can choose to configure your sites to point to an entry in the /etc/hosts file that docker created, or you can utilize the ENV vars and run a string replacement (I used sed) on your nginx.conf and any other conf files that may be in your /etc/nginx/sites-enabled folder to insert the IP values.
OPTION A: Configure Nginx Using ENV Vars
This is the option that I went with because I couldn't get the
/etc/hosts file option to work. I'll be trying Option B soon enough
and update this post with any findings.
The key difference between this option and using the /etc/hosts file option is how you write your Dockerfile to use a shell script as the CMD argument, which in turn handles the string replacement to copy the IP values from ENV to your conf file(s).
Here's the set of configuration files I ended up with:
Dockerfile
FROM ubuntu:14.04
MAINTAINER Your Name <you@myapp.com>

RUN apt-get update && apt-get install -y nano htop git nginx

ADD nginx.conf /etc/nginx/nginx.conf
ADD api.myapp.conf /etc/nginx/sites-enabled/api.myapp.conf
ADD app.myapp.conf /etc/nginx/sites-enabled/app.myapp.conf
ADD Nginx-Startup.sh /etc/nginx/Nginx-Startup.sh

EXPOSE 80 443

CMD ["/bin/bash","/etc/nginx/Nginx-Startup.sh"]
nginx.conf
daemon off;
user www-data;
pid /var/run/nginx.pid;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # Basic Settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 33;
    types_hash_max_size 2048;
    server_tokens off;
    server_names_hash_bucket_size 64;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging Settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip Settings
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 3;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/xml text/css application/x-javascript application/json;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    # Virtual Host Configs
    include /etc/nginx/sites-enabled/*;

    # Error Page Config
    #error_page 403 404 500 502 /srv/Splash;
}
NOTE: It's important to include daemon off; in your nginx.conf file to ensure that your container doesn't exit immediately after launching.
api.myapp.conf
upstream api_upstream {
    server APP_IP:3000;
}

server {
    listen 80;
    server_name api.myapp.com;
    return 301 https://api.myapp.com/$request_uri;
}

server {
    listen 443;
    server_name api.myapp.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_pass http://api_upstream;
    }
}
Nginx-Startup.sh
#!/bin/bash
sed -i 's/APP_IP/'"$API_PORT_3000_TCP_ADDR"'/g' /etc/nginx/sites-enabled/api.myapp.conf
sed -i 's/APP_IP/'"$APP_PORT_3001_TCP_ADDR"'/g' /etc/nginx/sites-enabled/app.myapp.conf
service nginx start
I'll leave it up to you to do your homework about most of the contents of nginx.conf and api.myapp.conf.
The magic happens in Nginx-Startup.sh where we use sed to do string replacement on the APP_IP placeholder that we've written into the upstream block of our api.myapp.conf and app.myapp.conf files.
This askubuntu.com question explains it very nicely:
Find and replace text within a file using commands
GOTCHA
On OSX, sed handles options differently, the -i flag specifically.
On Ubuntu, the -i flag will handle the replacement 'in place'; it
will open the file, change the text, and then 'save over' the same
file.
On OSX, the -i flag requires an argument: the extension to use for the backup file sed creates. If you don't want a backup copy, you must pass '' as the value for the -i flag.
GOTCHA
To use ENV vars within the regex that sed uses to find the string you want to replace you need to wrap the var within double-quotes. So the correct, albeit wonky-looking, syntax is as above.
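A tiny standalone demo of that quoting rule (the file and values here are hypothetical):
#!/bin/bash
ADDR="172.17.0.2"                                 # pretend this came from the environment
printf 'server APP_IP:3000;\n' > /tmp/example.conf
sed -i 's/APP_IP/'"$ADDR"'/g' /tmp/example.conf   # single quotes closed so the shell expands $ADDR
cat /tmp/example.conf                             # -> server 172.17.0.2:3000;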
So docker has launched our container and triggered the Nginx-Startup.sh script to run, which has used sed to change the value APP_IP to the corresponding ENV variable we provided in the sed command. We now have conf files within our /etc/nginx/sites-enabled directory that have the IP addresses from the ENV vars that docker set when starting up the container. Within your api.myapp.conf file you'll see the upstream block has changed to this:
upstream api_upstream {
    server 172.17.0.2:3000;
}
The IP address you see may be different, but I've noticed that it's usually 172.17.0.x.
You should now have everything routing appropriately.
GOTCHA
You cannot restart/rerun any containers once you've run the initial instance launch. Docker provides each container with a new IP upon launch and does not seem to re-use any that it has used before. So api.myapp.com will get 172.17.0.2 the first time, but then get 172.17.0.4 the next time. But Nginx will have already set the first IP into its conf files, or into its /etc/hosts file, so it won't be able to determine the new IP for api.myapp.com. The solution to this is likely to use CoreOS and its etcd service which, in my limited understanding, acts like a shared ENV for all machines registered into the same CoreOS cluster. This is the next toy I'm going to play with setting up.
OPTION B: Use /etc/hosts File Entries
This should be the quicker, easier way of doing this, but I couldn't get it to work. Ostensibly you just input the value of the /etc/hosts entry into your api.myapp.conf and app.myapp.conf files, but I couldn't get this method to work.
UPDATE:
See @Wes Tod's answer for instructions on how to make this method work.
Here's the attempt that I made in api.myapp.conf:
upstream api_upstream {
    server API:3000;
}
Considering that there's an entry in my /etc/hosts file like so: 172.17.0.2 API, I figured it would just pull in the value, but it doesn't seem to.
I also had a couple of ancillary issues with my Elastic Load Balancer sourcing from all AZ's so that may have been the issue when I tried this route. Instead I had to learn how to handle replacing strings in Linux, so that was fun. I'll give this a try in a while and see how it goes.
I tried using the popular Jason Wilder reverse proxy that code-magically works for everyone, and learned that it doesn't work for everyone (ie: me). And I'm brand new to NGINX, and didn't like that I didn't understand the technologies I was trying to use.
I wanted to add my 2 cents, because the discussion above around linking containers is now dated, since linking is a deprecated feature. So here's an explanation of how to do it using networks. This answer is a full example of setting up nginx as a reverse proxy to a statically paged website using Docker Compose and nginx configuration.
TL;DR;
Add the services that need to talk to each other onto a predefined network. For a step-by-step discussion on Docker networks, I learned some things here:
https://technologyconversations.com/2016/04/25/docker-networking-and-dns-the-good-the-bad-and-the-ugly/
Define the Network
First of all, we need a network upon which all your backend services can talk on. I called mine web but it can be whatever you want.
docker network create web
Build the App
We'll just do a simple website app. The website is a simple index.html page served by an nginx container. The content is a volume mounted from the host folder content.
Dockerfile:
FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf
default.conf
server {
    listen 80;
    server_name localhost;

    location / {
        root /var/www/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
docker-compose.yml
version: "2"
networks:
mynetwork:
external:
name: web
services:
nginx:
container_name: sample-site
build: .
expose:
- "80"
volumes:
- "./content/:/var/www/html/"
networks:
default: {}
mynetwork:
aliases:
- sample-site
Note that we no longer need port mapping here. We simply expose port 80. This is handy for avoiding port collisions.
Run the App
Fire this website up with
docker-compose up -d
Some fun checks regarding the DNS mappings for your container:
docker exec -it sample-site bash
ping sample-site
This ping should work, inside your container.
Build the Proxy
Nginx Reverse Proxy:
Dockerfile
FROM nginx
RUN rm /etc/nginx/conf.d/*
We reset all the virtual host config, since we're going to customize it.
docker-compose.yml
version: "2"
networks:
mynetwork:
external:
name: web
services:
nginx:
container_name: nginx-proxy
build: .
ports:
- "80:80"
- "443:443"
volumes:
- ./conf.d/:/etc/nginx/conf.d/:ro
- ./sites/:/var/www/
networks:
default: {}
mynetwork:
aliases:
- nginx-proxy
Run the Proxy
Fire up the proxy using our trusty
docker-compose up -d
Assuming no issues, then you have two containers running that can talk to each other using their names. Let's test it.
docker exec -it nginx-proxy bash
ping sample-site
ping nginx-proxy
Set up Virtual Host
Last detail is to set up the virtual hosting file so the proxy can direct traffic based on however you want to set up your matching:
sample-site.conf for our virtual hosting config:
server {
    listen 80;
    listen [::]:80;
    server_name my.domain.com;

    location / {
        proxy_pass http://sample-site;
    }
}
Based on how the proxy was set up, you'll need this file stored under your local conf.d folder which we mounted via the volumes declaration in the docker-compose file.
Last but not least, tell nginx to reload its config.
docker exec nginx-proxy service nginx reload
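It can also help to validate the config inside the container (a quick sanity check before reloading):
docker exec nginx-proxy nginx -t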
This sequence of steps is the culmination of hours of pounding headaches as I struggled with the ever-painful 502 Bad Gateway error, learning nginx for the first time, since most of my experience was with Apache.
This answer is to demonstrate how to kill the 502 Bad Gateway error that results from containers not being able to talk to one another.
I hope this answer saves someone out there hours of pain, since getting containers to talk to each other was really hard to figure out for some reason, despite it being what I expected to be an obvious use-case. But then again, me dumb. And please let me know how I can improve this approach.
Using docker links, you can link the upstream container to the nginx container. An added feature is that docker manages the hosts file, which means you'll be able to refer to the linked container using a name rather than a potentially random IP.
@gdbj's answer is a great explanation and the most up-to-date answer. Here, however, is a simpler approach.
So if you want to redirect all traffic from nginx listening to 80 to another container exposing 8080, minimum configuration can be as little as:
nginx.conf:
server {
    listen 80;

    location / {
        proxy_pass http://client:8080; # this one here
        proxy_redirect off;
    }
}
docker-compose.yml
version: "2"
services:
entrypoint:
image: some-image-with-nginx
ports:
- "80:80"
links:
- client # will use this one here
client:
image: some-image-with-api
ports:
- "8080:8080"
Docker docs
AJB's "Option B" can be made to work by using the base Ubuntu image and setting up nginx on your own. (It didn't work when I used the Nginx image from Docker Hub.)
Here is the Docker file I used:
FROM ubuntu
RUN apt-get update && apt-get install -y nginx
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
RUN rm -rf /etc/nginx/sites-enabled/default
EXPOSE 80 443
COPY conf/mysite.com /etc/nginx/sites-enabled/mysite.com
CMD ["nginx", "-g", "daemon off;"]
My nginx config (aka: conf/mysite.com):
server {
    listen 80 default;
    server_name mysite.com;

    location / {
        proxy_pass http://website;
    }
}

upstream website {
    server website:3000;
}
And finally, how I start my containers:
$ docker run -dP --name website website
$ docker run -dP --name nginx --link website:website nginx
This got me up and running so my nginx pointed the upstream to the second docker container which exposed port 3000.
I just found an article from Anand Mani Sankar which shows a simple way of using an nginx upstream proxy with Docker Compose.
Basically one must configure the instance linking and ports in the docker-compose file and update the upstream in nginx.conf accordingly.
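A minimal sketch of that combination, with hypothetical image and service names, just to make the idea concrete:
# docker-compose.yml (hypothetical)
version: "2"
services:
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    links:
      - app
  app:
    image: my-app-image # assumed to listen on port 3000
# and in nginx.conf, the upstream refers to the linked name:
upstream app_upstream {
    server app:3000;
}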
