I'm trying to deploy my MERN app on a VPS.
I'd like clients to connect to the frontend, and have the frontend talk to the backend internally on the server, because I think it's unnecessary to expose the backend on a public port: it will be an API server for this site only.
I've tweaked the Nginx and Docker settings below in many different ways and it still doesn't work, and I'm getting frustrated because I can't figure out why the frontend can't connect to the backend container.
1. At the top level, two folders (Frontend / Backend) plus docker-compose.yml:
├── Backend
│   └── Dockerfile
├── Frontend
│   ├── Dockerfile
│   └── frontend.conf
└── docker-compose.yml
2. Frontend - a two-stage Dockerfile: stage 1 builds the React app, stage 2 runs Nginx with that build (and adds the .conf file to Nginx).
/Frontend/Dockerfile
#
# Stage 1: React production build
#
FROM node:16.18.0 as frontend
WORKDIR /frontend
COPY package*.json ./
RUN yarn
COPY . .
ENV NODE_ENV=production
RUN yarn build
# CMD ["yarn","start:prod"]
EXPOSE 5500
#
# Stage 2: Nginx as a proxy & static file server
#
FROM nginx
WORKDIR /usr/share/nginx/html
RUN apt-get update && apt-get install -y certbot python3-certbot-nginx
RUN rm -rf /usr/share/nginx/html/*
COPY --from=frontend /frontend/build .
COPY --from=frontend /frontend/frontend.conf /etc/nginx/conf.d/
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
frontend.conf
upstream server {
server host.docker.internal:5050;
}
server {
listen 80 default_server;
listen [::]:80;
# listen 443 default_server;
# listen [::]:443 default_server;
# actual domain comes here - all set and able to open the static files.
server_name example.com www.example.com;
server_tokens off;
gzip on;
gzip_proxied any;
gzip_comp_level 4;
gzip_types text/css application/javascript image/svg+xml;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
location /api {
proxy_pass http://server;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header Authorization "";
proxy_hide_header Authorization;
}
}
3. Backend - a simple Dockerfile from the Node image
# FROM node:15.4.0
FROM node:16.18.0
WORKDIR /backend
COPY package.json ./
RUN yarn
RUN yarn remove sharp
RUN yarn add sharp
COPY . .
EXPOSE 5050
CMD ["yarn", "run", "start:prod"]
4. docker-compose.yml at the top level creates the containers on a shared network "nw_test".
docker-compose.yml
version: "3.9"
services:
frontend:
image: frontend
container_name: frontend
build:
context: ./frontend
dockerfile: Dockerfile
extra_hosts:
- "host.docker.internal:host-gateway"
ports:
- 80:80
- 443:443
restart: unless-stopped
env_file:
- ./frontend/.env.production
volumes:
- ./frontend:/frontend
- /frontend/node_modules
depends_on:
- backend
backend:
image: backend
container_name: backend
build:
context: ./backend
dockerfile: Dockerfile
extra_hosts:
- "host.docker.internal:host-gateway"
ports:
- "5050"
restart: unless-stopped
env_file:
- ./backend/config/.env.production
volumes:
- ./backend:/backend
- /backend/node_modules
networks:
default:
name: nw_test
Result:
After docker compose up, when I open the domain, the frontend's index.html works, but connecting to the backend fails: the client requests www.example.com/api/..., Nginx attempts to connect to the upstream xxx.xxx.xx.x:5050 (the container's local IP address), and the request fails with a 504.
[logs in the server]
[error] 6#6: *20 upstream timed out (110: Connection timed out) while connecting to upstream, client: ..., server: example.com, request: "GET /api/users/auth HTTP/1.1", upstream: "http://172.17.0.1:5050/api/users/auth", host: "example.com", referrer: "http://example.com/"
"GET /api/users/auth HTTP/1.1" 504 562 "http://example.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
Related
I have a FastAPI API that I want to serve using Gunicorn, Nginx and Docker Compose.
I managed to make FastAPI and Gunicorn work with Docker Compose; now I'm adding nginx, but I cannot get it to work. When I do curl http://localhost:80 I get this message: "If you see this page, the nginx web server is successfully installed and working. Further configuration is required."
So this is my docker compose file:
version: '3.8'
services:
web:
build:
dockerfile: Dockerfile.prod
context: .
command: gunicorn main:app --bind 0.0.0.0:8000 --worker-class uvicorn.workers.UvicornWorker
expose:
- 8000
env_file:
- ./.env.prod
nginx:
build:
dockerfile: Dockerfile.prod
context: ./nginx
ports:
- 1337:80
depends_on:
- web
With this setup, if I set ports to 80:80 I get an error when the stack comes up: Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use, and I don't know why.
If I map [some random number]:80 (e.g. 1337:80) instead, then the docker build works, but I get the "If you see this page, the nginx web server is successfully installed but..." message stated before. I think 1337 is not where nginx is listening, and that's why.
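The "address already in use" part usually means something on the host (often a system-wide nginx/apache, or another container) is already bound to port 80. A few hedged diagnostics, assuming a Linux Docker host:
sudo ss -ltnp 'sport = :80'                      # what process is listening on host port 80?
sudo lsof -i :80                                 # alternative view of the same thing
docker ps --format '{{.Names}}\t{{.Ports}}'      # is another container already publishing :80?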
This is my nginx conf file:
upstream platic_service {
server web:8000;
}
server {
listen 80;
location / {
proxy_pass http://platic_service;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
    }
}
I tried changing it to listen on 8080, but that does not work either.
What am I doing wrong?
I would like to use nginx as an SSL front end to an Apache HTTP website. I've reduced my configuration to the simplest possible:
Nginx default.conf
server {
listen 80;
return 301 https://$host$request_uri;
}
#Work in progress
server {
listen 443 ssl default;
ssl_certificate /etc/ssl/certs/pem-1620200742-1020479.pem;
ssl_certificate_key /etc/ssl/certs/mycomp.com.key;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://wip:80;
}
}
docker-compose.yml:
version: '2'
services:
reverse:
image: nginx
container_name: reverse
ports:
- 80:80
- 443:443 #This addition solves the problem!!!
networks:
- mynet
volumes:
- ./reverse/default.conf:/etc/nginx/conf.d/default.conf
- ./reverse/ssl:/etc/ssl/certs
wip:
image: httpd:2.4
container_name: wip
environment:
TZ: "France/Paris"
ports:
- 8096:80
networks:
- mynet
networks:
mynet:
driver: "bridge"
ipam:
driver: default
everything looks fine in logs:
nginx:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
172.18.0.1 - - [10/Dec/2021:09:15:04 +0000] "GET / HTTP/1.1" 301 169 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.43" "-"
Nginx starts fine and logs connections on port 80, but nothing about 443 connections.
When browsing https://127.0.0.1 I get "connection failure". Same using the following command:
telnet 127.0.0.1 443
It seems that nginx is not listening on 443.
I've checked for network issues using tinyweb; port 443 is not blocked by a firewall or anything like that.
Do you have any clue what's wrong with my nginx configuration?
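As a sanity check (assuming the container is named reverse as in the compose file), it can help to confirm which ports are actually published and whether nginx really listens on 443 inside the container:
docker port reverse                           # should list both 80/tcp and 443/tcp mappings
docker exec reverse nginx -T | grep listen    # dump the loaded config and check the listen directives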
Context
I have had my application running with a global Nginx as a reverse proxy on my private server without issues. However, for my project I need to deploy it on my university's servers, where I'll need to move all of that into my containers, but I cannot make it work.
General Project Setup
Short introduction to the setup: I have my frontend-ui, which is a simple PWA I built with Vue that also uses Firebase Messaging for notifications. Notification tokens are stored via my notification manager - a Spring application - in a database, and it also performs all database queries, such as removing the tokens upon deletion. My third UI is the notification-ui, which provides a simple (Vue) frontend to send out notifications with Firebase; for that it also interacts with the database to retrieve the tokens. All projects are located in one folder with a docker-compose file.
I need both of my Frontends to serve https.
Nginx / Docker Setup
frontend-ui
My frontend-ui has the following Nginx configuration and the certificates are in the folder certificates:
server {
listen 80;
server_name SERVERNAME;
# Redirect all traffic to SSL
rewrite ^ https://$server_name$request_uri? permanent;
}
server {
listen 443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED";
add_header Strict-Transport-Security "max-age=31536000";
server_name SERVERNAME;
## Access and error logs.
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log info;
## Server certificate and key.
ssl on;
ssl_certificate /etc/nginx/ssl/nginx.cert;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
root /usr/share/nginx/html;
location / {
try_files $uri $uri/ =404;
}
location /api {
proxy_pass http://127.0.0.1:42372;
}
}
and this Dockerfile:
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY . .
RUN npm run build
#COPY default.conf /etc/nginx/conf.d/
#COPY certificates/nginx.cert /etc/ssl/
#COPY certificates/nginx.key /etc/ssl/
# production stage
FROM nginx:stable-alpine as production-stage
COPY certificates/nginx.cert /etc/nginx/ssl/
COPY certificates/nginx.key /etc/nginx/ssl/
COPY default.conf /etc/nginx/conf.d/
COPY --from=build-stage /app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
notification-ui
My notification-ui has the following Nginx configuration and the certificates are in the folder certificates:
server {
listen 80;
server_name SERVERNAME;
# Redirect all traffic to SSL
rewrite ^ https://$server_name$request_uri? permanent;
}
server {
listen 443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED";
add_header Strict-Transport-Security "max-age=31536000";
server_name SERVERNAME;
## Access and error logs.
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log info;
## Server certificate and key.
ssl on;
ssl_certificate /etc/nginx/ssl/nginx.cert;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
root /usr/share/nginx/html;
location / {
try_files $uri $uri/ =404;
}
location /api {
proxy_pass http://127.0.0.1:42372;
}
}
and this Dockerfile:
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY . .
RUN npm run build
#COPY default.conf /etc/nginx/conf.d/
#COPY certificates/nginx.cert /etc/ssl/
#COPY certificates/nginx.key /etc/ssl/
# production stage
FROM nginx:stable-alpine as production-stage
COPY certificates/nginx.cert /etc/nginx/ssl/
COPY certificates/nginx.key /etc/nginx/ssl/
COPY default.conf /etc/nginx/conf.d/
COPY --from=build-stage /app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
Notification-backend
My backend doesn't have an Nginx config, as it does not need it per se. The Dockerfile looks like this:
### BUILDER
FROM maven:3.6.3-jdk-11-slim as builder
RUN mkdir -p /build
WORKDIR /build
COPY pom.xml /build
#Download dependencies
#RUN mvn -B dependency:resolve dependency:resolve-plugins
#copy src-code
COPY src /build/src
#Build application
RUN mvn clean install
### RUNTIME
FROM openjdk:11-slim as runtime
ENV APP_HOME /
#Create folders for config and logging
RUN mkdir $APP_HOME/config
RUN mkdir $APP_HOME/log
VOLUME $APP_HOME/log
VOLUME $APP_HOME/config
WORKDIR $APP_HOME
#Copy jar from builder
COPY --from=builder /build/target/*.jar notificationmanager.jar
ENTRYPOINT ["java","-jar","notificationmanager.jar", "de.hsa.frontend.notificationmanager.NotificationmanagerApplication"]
Deployment
I deploy the network using a docker-compose:
version: '3.2'
services:
backend:
image: notificationmanager-be:1
build:
context: ./notificationmanager
dockerfile: ./Dockerfile
ports:
- "42372:8085"
networks:
- notificationmanager
restart: on-failure:5
notification-ui:
image: notificationmanager-ui:1
build:
context: ./notificationmanager-ui
dockerfile: ./Dockerfile
ports:
- "42373:80"
- "42376:443"
networks:
- notificationmanager
db:
image: postgres
ports:
- "42374:5432"
environment:
- POSTGRES_USER=USERNAME
- POSTGRES_PASSWORD=PASSWORD
- POSTGRES_DB=DATABASE
volumes:
- data:/var/lib/postgresql/data/
restart: on-failure:5
frontend-ui:
image: frontend-ui:1
build:
context: ./frontend-ui
dockerfile: ./Dockerfile
ports:
- "42375:80"
- "42377:443"
networks:
- notificationmanager
networks:
notificationmanager:
driver: bridge
volumes:
data:
driver: local
I added the mapping of port 443 as a last idea for why it might not work, so I can also take it out again. I can't really see much of a difference from the online tutorials I have followed, but I still get an SSL error (ERR_SSL_PROTOCOL_ERROR) when trying to open the web pages. The dev tools don't show any errors, and the logs from the frontend-ui look like this (the others are similar):
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/03/29 11:55:04 [warn] 1#1: the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/default.conf:23
nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/conf.d/default.conf:23
10.144.43.100 - - [29/Mar/2021:11:55:53 +0000] "\x16\x03\x01\x02\x00\x01\x00\x01\xFC\x03\x03\xB1\xC7" 400 157 "-" "-" "-"
10.144.43.100 - - [29/Mar/2021:11:55:53 +0000] "\x16\x03\x01\x02\x00\x01\x00\x01\xFC\x03\x03:X\xFB\x83\xAD\x18\x13n^\xF4\x06:\xED\x93~;\xB2%j\xD0\xAC\xDC\xFB#W\xCB)b\x16r\xC9\xCE \xFE\x1Fu\xA3Y;\xB2\xC0\xFB\x11 \x02\xDE\x91=$U" 400 157 "-" "-" "-"
10.144.43.100 - - [29/Mar/2021:11:55:54 +0000] "\x16\x03\x01\x02\x00\x01\x00\x01\xFC\x03\x03Bp\x91\xA8\xC6h)\x81\xA41\x12\xAAl\xF4\xD1q\xA8\xEA\xC6{\xC4\x0B\x83\xA9\xE1\xFCJ#1#\x1F\xB9 ?\xCFV\xA7\x0Fvx\x1C\xF5\xF5\xA4\x0B\xAF\xA2Z>\xB4\xCA\xC4!i;F6\xC0\x1F\xB5H\x94\xC4\xBC\x19\x00\x22::\x13\x01\x13\x02\x13\x03\xC0+\xC0/\xC0,\xC00\xCC\xA9\xCC\xA8\xC0\x13\xC0\x14\x00\x9C\x00\x9D\x00/\x005\x00" 400 157 "-" "-" "-"
10.144.43.100 - - [29/Mar/2021:11:55:54 +0000] "\x16\x03\x01\x02\x00\x01\x00\x01\xFC\x03\x03\x9D#Ju;j24\xC0\xF6\xEA\xDC\xBF\xFA\x0E;\xBDJ\x030\xD4\xF6\xE8V\x88I\xB8/'\xA6Vj \xA1B\x17\x5C$" 400 157 "-" "-" "-"
I did a bit of renaming to improve readability and tried to remove all my (failed) attempts at possible fixes, so I apologize if I missed something. I also had to remove some values (like the database login data) and just wrote placeholders there; the actual files are, of course, filled in completely.
Can anyone point me to my error?
I don't see where you are exposing the ports in the dockerfiles. I think you'll want to add this to your nginx dockerfiles
EXPOSE 80 443
And this to your java dockerfile
EXPOSE 8085
Once you expose those ports, you'll probably run into a problem with the reverse proxy. Each container has its own localhost, so in your nginx configs, this line won't work.
proxy_pass http://127.0.0.1:42372;
You can access the backend container "directly" without going through the docker host. Try changing that line to
proxy_pass http://backend:8085;
Similarly, I suspect you're trying to connect to your db using localhost:42374. You'll probably need to change that to db:5432.
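For example, in a standard Spring Boot setup the datasource would point at the service name rather than localhost (a sketch; the property names are standard Spring Boot, and DATABASE/USERNAME/PASSWORD are the placeholders from the compose file):
# application.properties inside the backend container
spring.datasource.url=jdbc:postgresql://db:5432/DATABASE
spring.datasource.username=USERNAME
spring.datasource.password=PASSWORD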
Short description:
Nginx is running in Docker; how do I configure nginx so that it forwards calls to the host?
Long description:
We have one web application which communicates with a couple of backends (let's say rest1, rest2 and rest3). We are responsible for rest1.
Let's say I started rest1 manually on my PC and it is running on port 2345. I want nginx (which is running in Docker) to redirect all calls to rest1 to my own running instance (note: the instance is running on the host, not in any container and not in Docker), and calls to rest2 and rest3 to some other Docker node or maybe some other server (it doesn't matter which).
What I am looking for is:
docker-compose.yml configurations (if needed).
nginx configuration.
Thanks in advance.
Configure nginx like the following (make sure you replace IP of Docker Host) and save it as default.conf:
server {
listen 80;
server_name _;
location / {
proxy_pass http://<IP of Docker Host>;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
Now bring up the container:
docker run -d --name nginx -p 80:80 -v /path/to/nginx/config/default.conf:/etc/nginx/conf.d/default.conf nginx
If you are using Docker Compose file version 3 you don't need any special config in the docker-compose.yml file at all; just use the special DNS name host.docker.internal to reach a host service, as in the following nginx.conf example:
events {
worker_connections 1024;
}
http {
upstream host_service {
server host.docker.internal:2345;
}
server {
listen 80;
access_log /var/log/nginx/http_access.log combined;
error_log /var/log/nginx/http_error.log;
location / {
proxy_pass http://host_service;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $realip_remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
}
Solution 1
Use network_mode: host; this will bind your nginx instance to the host's network interface.
This can result in conflicts when running multiple nginx containers: every exposed port is bound to the host's interface.
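A minimal sketch of what Solution 1 would look like in a compose file (Linux only; with network_mode: host the ports mapping is ignored and nginx listens on the host's ports directly, so a service on the host is simply reached as 127.0.0.1:<port>):
version: '3'
services:
  proxy:
    image: nginx:alpine
    network_mode: host
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro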
Solution 2
I'm running a separate nginx instance for every service I would like to expose to the outside world.
To keep the nginx configurations simple and avoid binding every nginx to the host, I use the following container structure:
dockerhost - a dummy container with network_mode: host
proxy - an nginx container used as a proxy to the host service
Linking dockerhost to proxy adds an /etc/hosts entry in the proxy container, so we can use 'dockerhost' as a hostname in the nginx configuration.
docker-compose.yaml
version: '3'
services:
dockerhost:
image: alpine
entrypoint: /bin/sh -c "tail -f /dev/null"
network_mode: host
proxy:
image: nginx:alpine
links:
- dockerhost:dockerhost
ports:
- "18080:80"
volumes:
- /share/Container/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
default.conf (relevant part):
server {
    listen 80;
    location / {
        proxy_pass http://dockerhost:8080;
    }
}
This method allows us to have automated Let's Encrypt certificates generated for every service running on my server. If anyone is interested, I can post a gist about the solution.
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://host.docker.internal:3000;
}
}
Docker exposes the host's address as host.docker.internal on macOS.
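On Linux, host.docker.internal is not defined by default; with Docker 20.10+ it can be added to a service via extra_hosts (the same trick the first docker-compose.yml in this thread uses):
extra_hosts:
  - "host.docker.internal:host-gateway"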
There are a couple of things you have to keep in mind:
Docker Compose (from version 3) by default uses the service name as the hostname for inter-container networking.
Nginx needs to know the upstream first.
I strongly recommend mounting the default.conf directly via your docker-compose.yml.
Lastly, you have to dockerize your backend to make use of Docker's internal networking.
An example repo where I use nginx and docker-compose in a full-stack project: https://gitlab.com/datails/api.
The following example has some prerequisites:
you have a folder structure like:
- backend/
- frontend/
- default.conf
- docker-compose.yml
Secondly, the backend and front-end each have a Dockerfile that exposes an application on port 3000.
Example default.conf:
upstream backend {
server backend:3000;
}
upstream frontend {
server frontend:3000;
}
server {
listen 80;
location /api {
proxy_pass http://backend;
}
location / {
proxy_pass http://frontend/;
}
}
Example docker-compose.yml:
version: '3.8'
services:
nginx:
image: nginx:1.19.4
depends_on:
- backend
- frontend
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
ports:
- '8080:80'
Then make sure your backend is dockerized and defined as a service called (in this case) backend, and your front-end (if needed) as a service called frontend, in your docker-compose file:
version: '3.8'
services:
nginx:
image: nginx:1.19.4
depends_on:
- backend
- frontend
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
ports:
- '8080:80'
frontend:
build: ./frontend
backend:
build: ./backend
This is a bare minimum example to get started. Hope this will help future developers.
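Assuming the Dockerfiles exist as described, a quick way to try it out (the routes are illustrative):
docker-compose up --build -d
curl http://localhost:8080/        # served by the frontend service via nginx
curl http://localhost:8080/api     # proxied to the backend service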
I've attempted to migrate my stack to use version 2 docker-compose.yml and have run into a problem with network hostnames not being resolved by nginx.
My stack involves an nginx reverse proxy (on debian:wheezy) that serves secure content via several other software components, which I won't go into detail about (see config below).
In the version 1 YAML, I used environment variables from docker links along with a Lua script to insert them into the nginx.conf (using nginx-extras). This worked perfectly as a reverse proxy in front of the docker containers.
In the version 2 YAML I am using the hostnames as generated by docker networking. I am able to successfully ping these hostnames from within the container; however, nginx is unable to resolve them.
2016/05/04 01:23:44 [error] 5#0: *3 no resolver defined to resolve ui, client: 10.0.2.2, server: , request: "GET / HTTP/1.1", host: "localhost"
Here is my current config:
docker-compose.yml:
version: '2'
services:
# back-end
api:
build: .
depends_on:
- db
- redis
- worker
environment:
RAILS_ENV: development
ports:
- "3000:3000"
volumes:
- ./:/mmaps
- /var/log/mmaps/api:/mmaps/log
volumes_from:
- apidata
command: sh -c 'rm -rf /mmaps/tmp/pids/server.pid; rails server thin -b 0.0.0.0 -p 3000'
# background process workers
worker:
build: .
environment:
RAILS_ENV: development
QUEUE: "*"
TERM_CHILD: "1"
volumes:
- ./:/mmaps
- /var/log/mmaps/worker:/mmaps/log
volumes_from:
- apidata
command: rake resque:work
# front-end
ui:
image: magiandev/mmaps-ui:develop
depends_on:
- api
ports:
- "8080:80"
volumes:
- /var/log/mmaps/ui:/var/log/nginx
# database
db:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: pewpewpew
volumes_from:
- mysqldata
volumes:
- /var/log/mmaps/db:/var/log/mysql
# key store
redis:
image: redis:2.8.13
user: root
command: ["redis-server", "--appendonly yes"]
volumes_from:
- redisdata
volumes:
- /var/log/mmaps/redis:/var/log/redis
# websocket server
monitor:
image: magiandev/mmaps-monitor:develop
depends_on:
- api
environment:
NODE_ENV: development
ports:
- "8888:8888"
# media server
media:
image: nginx:1.7.1
volumes_from:
- apidata
ports:
- "3080:80"
volumes:
- ./docker/media/nginx.conf:/etc/nginx/nginx.conf:ro
- /srv/mmaps/public:/usr/local/nginx/html:ro
- /var/log/mmaps/mediapool:/usr/local/nginx/logs
# reverse proxy
proxy:
build: docker/proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/log/mmaps/proxy:/var/log/nginx
apidata:
image: busybox:ubuntu-14.04
volumes:
- /srv/mmaps/public:/mmaps/public
command: echo api data
mysqldata:
image: busybox:ubuntu-14.04
volumes:
- /srv/mmaps/db:/var/lib/mysql
command: echo mysql data
redisdata:
image: busybox:ubuntu-14.04
volumes:
- /srv/mmaps/redis:/data
command: echo redis data
# master data
# convenience container for backups
data:
image: busybox:ubuntu-14.04
volumes_from:
- apidata
- mysqldata
- redisdata
command: echo mmaps data
nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
# permanent redirect to https
server {
listen 80;
rewrite ^ https://$host$request_uri? permanent;
}
server {
listen 443 ssl;
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
location / {
proxy_pass http://ui:80$request_uri;
}
location /monitor/ {
proxy_pass http://monitor:8888$request_uri;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /api/ {
client_max_body_size 0;
proxy_pass http://api:3000$request_uri;
}
location /files/ {
client_max_body_size 0;
proxy_pass http://media:80$request_uri;
}
location /mediapool/ {
proxy_pass http://media:80$request_uri;
add_header X-Upstream $upstream_addr;
if ($request_uri ~ "^.*\/(.*\..*)\?download=true.*$"){
set $fname $1;
add_header Content-Disposition 'attachment; filename="$fname"';
}
proxy_pass_request_headers on;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/www;
}
}
}
# stay in the foreground so Docker has a process to track
daemon off;
After some reading I tried using 'dnsmasq' and setting resolver 127.0.0.1 within the nginx.conf, but I cannot get it to work:
2016/05/04 01:54:26 [error] 6#0: recv() failed (111: Connection refused) while resolving, resolver: 127.0.0.1:53
Is there a better way to configure nginx to proxy pass to my containers that works with V2?
You can rename your containers and resolve them by name.
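Another pattern commonly used for this (a sketch, not tested against this exact stack): because proxy_pass here contains a variable ($request_uri), nginx defers name resolution to request time and therefore needs a resolver. Docker's embedded DNS on user-defined networks lives at 127.0.0.11, so inside the http {} or server {} block:
resolver 127.0.0.11 valid=30s;   # Docker's embedded DNS

location / {
    proxy_pass http://ui:80$request_uri;   # now resolved at request time via the resolver above
}
Alternatively, declaring upstream blocks (e.g. upstream ui { server ui:80; }) makes nginx resolve the names once at startup through the container's normal DNS, which also works on a compose network as long as the services are up before nginx starts.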