Configure nginx as a reverse proxy for MQTT communication

I have an MQTT (EMQX) server running on an IP and port, and my service communicates directly with that port using the Node.js MQTT library.
I want to put a reverse proxy (nginx) in front of it so that I can address the broker by a DNS name instead of the raw IP and port.
At this moment my nginx is configured like this:
events { worker_connections 1024; }

stream {
    upstream websocket {
        server ******:7053;
    }

    server {
        listen 8888;
        proxy_pass websocket;
    }
}

http {
    server {
        listen 884;
        server_name *******;

        error_log /var/log/errors.log;

        location / {
            proxy_pass *******;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }
}
So when I try to connect through port 8888, nginx always times out:
2020/12/03 16:23:48 [error] 22#22: *31 upstream timed out (110: Connection timed out) while connecting to upstream, client: 89.155.0.10, server: 0.0.0.0:8888, upstream: "192.16.102.26:7053", bytes from/to client:0/0, bytes from/to upstream:0/0
Both services run in Docker containers and are started by Docker Compose.
The compose file for the MQTT service is:
version: "2.1"
services:
mqtt-broker:
build:
context: .
dockerfile: Dockerfile
container_name: evio_mqtt_broker
environment:
- EMQX_LISTENER__SSL__EXTERNAL=8883
- EMQX_DASHBOARD__LISTENER__HTTP=18083
- EMQX_LOADED_PLUGINS="emqx_auth_username,emqx_recon,emqx_retainer,emqx_management,emqx_dashboard"
- EMQX_LISTENER__SSL__EXTERNAL__TLS_VERSIONS=tlsv1.2
#- EMQX_LISTENER__SSL__EXTERNAL__KEYFILE=etc/certs/key.pem
#- EMQX_LISTENER__SSL__EXTERNAL__CERTFILE=etc/certs/cert.pem
#- EMQX_LISTENER__SSL__EXTERNAL__CACERTFILE=etc/certs/cacert.pem
- EMQX_LISTENER__SSL__EXTERNAL__VERIFY=verify_peer
#- EMQX_LISTENER__SSL__EXTERNAL__FAIL_IF_NO_PEER_CERT=true
- EMQX_LISTENER__SSL__EXTERNAL__REUSE_SESSIONS=on
- EMQX_LISTENER__SSL__EXTERNAL__HONOR_CIPHER_ORDER=on
- EMQX_ALLOW_ANONYMOUS=false
- EMQX_AUTH__USER__1__USERNAME=****
- EMQX_AUTH__USER__1__PASSWORD=****
#- EMQX_AUTH__USER__2__USERNAME=umdc
#- EMQX_AUTH__USER__2__PASSWORD=umdc_buddy
- EMQX_DASHBOARD__DEFAULT_USER__PASSWORD=****
ports:
- "7053:1883" # MQTT Port
- "8883:8883" # MQTT SSL Port
#- "8083:8083" # MQTT WebSocket Port
#- "8084:8084" # MQTT WebSocket SSL Port
#- "8080:8080" # HTPP Management Port
- "1884:18083" # Web Dashboard Port
logging:
driver: "json-file"
options:
max-size: "50m"
max-file: "3"
networks:
- evio_network
stop_signal: SIGKILL
networks:
evio_network:
and the one for nginx is:
version: "2.0"
networks:
evio_network:
services:
reverse_proxy:
container_name: reverse_proxy
image: nginx
networks:
- evio_network
ports:
- 8888:8888
- 8843:8843
- 1883:1883
- 8883:8883
volumes:
- /home/evio/src/evio_nginx_reverse_proxy/config/nginxDEV.conf:/etc/nginx/nginx.conf
restart: always
Do I have to change anything in mqtt or is something wrong with my reverse proxy?

As hashed out in the comments.
The problem here was that the two services were being started from separate docker-compose files. While they were both binding to networks with the same name, those networks were actually separate because they were prefixed with different project names.
There are two solutions to this problem:
1. Combine the two docker-compose files; they will then be in the same namespace and will share the common named network.
2. Create an "external" network and reference it from both files (see the snippets below).
For the second option, create the network with the docker network command, e.g. docker network create evio_network, and then include the following at the end of each compose file:
networks:
  evio_network:
    external:
      name: "evio_network"

Related

FastAPI served through NGINX with gunicorn and docker compose

I have a FastAPI API that I want to serve using gunicorn, nginx and docker compose.
I managed to make FastAPI and Gunicorn work with docker compose; now I am adding nginx, but I cannot get it to work. When I do curl http://localhost:80 I get this message: If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
So this is my docker compose file:
version: '3.8'

services:
  web:
    build:
      dockerfile: Dockerfile.prod
      context: .
    command: gunicorn main:app --bind 0.0.0.0:8000 --worker-class uvicorn.workers.UvicornWorker
    expose:
      - 8000
    env_file:
      - ./.env.prod

  nginx:
    build:
      dockerfile: Dockerfile.prod
      context: ./nginx
    ports:
      - 1337:80
    depends_on:
      - web
On this one, if I set ports to 80:80 I get an error when the stack comes up: Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use, and I don't know why.
If I put [some random number]:80 (e.g. 1337:80) then the docker build works, but I get the "If you see this page, the nginx web server is successfully installed but..." message stated before. I think 1337 is not where nginx is listening, and that's why.
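For reference, the left-hand side of a ports mapping is the host port, so with 1337:80 the proxy should be reachable from the host like this (assuming the stack is up):
# host port 1337 forwards to the container's port 80, where nginx listens
curl http://localhost:1337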
This is my nginx conf file:
upstream platic_service {
    server web:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://platic_service;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
I tried to change it to listen on 8080, but that does not work.
What am I doing wrong?

Setting up NGINX as reverse proxy for multiple containerized services

I developed a web app with Vue and Django; however, I'm having problems deploying it.
I added another container to serve as a reverse proxy so only port 80 would be exposed and, when I finish struggling with this, also port 443. I could not find an exact answer on how to do it, so I hope someone here will be kind enough to give me some clues.
Here is the conf for nginx.
The error I'm getting is on the ui container:
2022/07/14 09:09:00 [emerg] 1#1: bind() to 0.0.0.0:8080 failed (98: Address already in use)
I looked it up, of course, but it was always some different scenario.
BR and thanks in advance
server {
    listen 0.0.0.0:80;
    listen [::]:80;

    location / {
        proxy_pass http://0.0.0.0:3000;
    }
    location /predict {
        proxy_pass http://0.0.0.0:5000/predict;
    }
    location /aggregate {
        proxy_pass http://0.0.0.0:5000/aggregate;
    }
    location /media/pictures {
        proxy_pass http://0.0.0.0:5000/media/pictures;
    }

    access_log /opt/bitnami/nginx/logs/anomaly_access.log;
    error_log /opt/bitnami/nginx/logs/anomaly_error.log;
}
My docker-compose looks as follows.
version: '3.2'

services:
  se-kpi-sim:
    image: test-app:0.0.1
    network_mode: "host"
    restart: unless-stopped
    environment:
      MODEL_NAME: "model_final.pickle.dat"

  se-kpi-sim-ui:
    image: test-ui:0.0.3
    network_mode: "host"
    restart: unless-stopped

  reverse-proxy:
    image: test-proxy:0.0.7
    network_mode: "host"
    restart: unless-stopped

  database:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: kpi_sim_user
      POSTGRES_DB: kpi_sim
      POSTGRES_HOST_AUTH_METHOD: trust
    ports:
      - 5432:5432
    volumes:
      - database:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  database:
You can run containers on Docker's internal network; docker-compose by default creates a network for inter-container communication, and you can change the port mapping to expose an application to the host. Because you are running most of the apps on the host network, two applications are probably trying to use the same port (port 8080 in this case), and one port can only be used by one application in an OS. Please look at the snippet below for more information on how to solve this issue.
Port mappings are <port on HOST>:<container port where the app is exposed inside the container>.
version: '3.2'

services:
  se-kpi-sim:
    image: test-app:0.0.1
    ports:
      - 5000:8080
    restart: unless-stopped
    environment:
      MODEL_NAME: "model_final.pickle.dat"

  se-kpi-sim-ui:
    image: test-ui:0.0.3
    ports:
      - 3000:8080
    restart: unless-stopped

  reverse-proxy:
    image: test-proxy:0.0.7
    ports:
      - 80:80
    # this volume mount is for the bitnami/nginx image
    volumes:
      - /path/to/my_server_block.conf:/opt/bitnami/nginx/conf/server_blocks/my_server_block.conf:ro
    restart: unless-stopped

  database:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: kpi_sim_user
      POSTGRES_DB: kpi_sim
      POSTGRES_HOST_AUTH_METHOD: trust
    ports:
      - 5432:5432
    volumes:
      - database:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  database:
One has to specify either the IP address or the DNS name of the application in order to forward traffic to that specific application; docker-compose creates domain names for all the services defined in the docker-compose.yaml file.
server {
    listen 0.0.0.0:80;
    listen [::]:80;

    location / {
        proxy_pass http://se-kpi-sim-ui:8080;
    }
    location /predict {
        proxy_pass http://se-kpi-sim:8080/predict;
    }
    location /aggregate {
        proxy_pass http://se-kpi-sim:8080/aggregate;
    }
    location /media/pictures {
        proxy_pass http://se-kpi-sim:8080/media/pictures;
    }

    access_log /opt/bitnami/nginx/logs/anomaly_access.log;
    error_log /opt/bitnami/nginx/logs/anomaly_error.log;
}
One can mount the nginx.conf like this (in the bitnami/nginx image):
...
volumes:
  - /path/to/my_server_block.conf:/opt/bitnami/nginx/conf/server_blocks/my_server_block.conf:ro
...
Note: all of the above is an example for reference to solve the problem; the entrypoints for the containers might change according to one's requirements.
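A quick smoke test of the proxy once the stack is up, assuming the server block above is loaded and the services respond on those routes:
# the proxy listens on host port 80 and forwards by path
curl http://localhost/            # -> se-kpi-sim-ui:8080
curl http://localhost/predict     # -> se-kpi-sim:8080/predict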

Unable to load balance using Docker, Consul and nginx

What I want to achieve is load balancing using this stack: Docker, Docker Compose, Registrator, Consul, Consul Template, NGINX and, finally, a tiny service that prints out "Hello world" in the browser. So, at this moment I have a docker-compose.yml file. It looks like so:
version: '2'

services:
  accent:
    build:
      context: ./accent
    image: accent
    container_name: accent
    restart: always
    ports:
      - 80

  consul:
    image: gliderlabs/consul-server:latest
    container_name: consul
    hostname: ${MYHOST}
    restart: always
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:53/udp
    command: -advertise ${MYHOST} -data-dir /tmp/consul -bootstrap -client 0.0.0.0

  registrator:
    image: gliderlabs/registrator:latest
    container_name: registrator
    hostname: ${MYHOST}
    network_mode: host
    restart: always
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    command: -ip ${MYHOST} consul://${MYHOST}:8500

  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    volumes:
      - /etc/nginx
    ports:
      - 8181:80

  consul-template:
    container_name: consul-template
    build:
      context: ./consul-template
    network_mode: host
    restart: always
    volumes_from:
      - nginx
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    command: -consul=${MYHOST}:8500 -wait=5s -template="/etc/ctmpl/nginx.ctmpl:/etc/nginx/nginx.conf:docker kill -s HUP nginx"
The first service - accent - is my web service that I need to load balance. When I run this command:
$ docker-compose up
I see that all services start to run and I see no error messages. It looks as if everything is just perfect. When I run
$ docker ps
I see this in the console:
... NAMES         STATUS          PORTS
consul-template   Up 45 seconds
consul            Up 56 seconds   0.0.0.0:8300->8300/tcp, 0.0.0.0:8400->8400/tcp, 8301-8302/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp, 8600/tcp, 8600/udp, 0.0.0.0:8600->53/udp
nginx             Up 41 seconds   0.0.0.0:8181->80/tcp
registrator       Up 56 seconds
accent            Up 56 seconds   0.0.0.0:32792->80/tcp
Please pay attention to the last row, especially the PORTS column. As you can see, this service publishes port 32792. To check that my web service is reachable I go to 127.0.0.1:32792 on my host machine (the machine where I run docker-compose up) and see this in the browser:
Hello World
This is exactly what I wanted to see. However, it is not what I finally want. Have another look at the output of the docker ps command and you will see that my nginx service publishes port 8181. So my expectation is that when I go to 127.0.0.1:8181 I will see exactly the same "Hello World" page. However, I don't. In the browser I see a Bad Gateway error message, and in the nginx logs I see this error:
nginx | 2017/01/18 06:16:45 [error] 5#5: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:32792/index.php", host: "127.0.0.1:8181"
It is really interesting, because nginx does what I expect it to do - it upstreams to "http://127.0.0.1:32792/index.php" - but I'm not sure why it fails. By the way, this is what the nginx.conf (created automatically by Consul Template) looks like:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    sendfile on;

    upstream app_servers {
        server 127.0.0.1:32792;
    }

    server {
        listen 80;
        root /code;
        index index.php index.html;

        location / {
            try_files $uri/ $uri/ /index.php;
        }

        location ~ \.php$ {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location ~ /\.ht {
            deny all;
        }
    }
}
I wouldn't change anything, since this nginx.conf looks good to me. Trying to understand why it does not work, I shelled into the nginx container and ran a couple of commands:
$ curl accent
Hello World
$ curl 127.0.0.1:32972
curl: (7) Failed to connect to 127.0.0.1 port 32972: Connection refused
$ curl accent:32972
curl: (7) Failed to connect to accent port 32972: Connection refused
Again, it is interesting, because the nginx container sees my web service on port 80 and not on its published 32972 port. Anyway, at this stage I do not know why it does not work or how to fix it. My only guess is that it is somehow connected to the way the network is configured in docker-compose.yml. I tried various combinations of network_mode: host on the accent and nginx services, but to no avail - either accent stops working, or nginx, or both. So, I need some help.
When you do a port binding, Docker publishes a port from the container (e.g. 80 in accent) onto some port on your host (e.g. the random 32792). Containers in the same network as your accent container can reach its port 80 at accent (same as accent:80) thanks to docker-compose service-name resolution; from your host, you reach that same port at 127.0.0.1:32792. When you request 127.0.0.1:32792 from inside your nginx container, you are hitting the nginx container's own port 32792, not accent's. accent:32792 is not a correct URL from anywhere (port 80 is open on accent, 32792 on the host). But 127.0.0.1:32792 should work once you add the nginx container to the host network. I also noticed that you use the wrong port in your curl calls: accent's port 80 is published to the host as 32792, but you request 32972.
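To make the addressing concrete, a pair of illustrative checks (ports taken from the docker ps output above):
# from inside the nginx container: service name + container port
curl http://accent:80/

# from the host: loopback + host-published port (note 32792, not 32972)
curl http://127.0.0.1:32792/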

Docker Compose networking: hostnames in nginx not resolving

I've attempted to migrate my stack to use version 2 docker-compose.yml and have run into a problem with network hostnames not being resolved by nginx.
My stack involves an nginx reverse proxy (on debian:wheezy) that serves secure content via several other software components of which I won't go into detail (see config below).
In the version 1 yaml, I used environment variables from docker links along with a Lua script to insert them into the nginx.conf (using nginx-extras). This worked perfectly as a reverse proxy in front of the docker containers.
In the version 2 yaml I am using the hostnames as generated by docker networking. I am able to successfully ping these hostnames from within the container; however, nginx is unable to resolve them.
2016/05/04 01:23:44 [error] 5#0: *3 no resolver defined to resolve ui, client: 10.0.2.2, server: , request: "GET / HTTP/1.1", host: "localhost"
Here is my current config:
docker-compose.yml:
version: '2'

services:
  # back-end
  api:
    build: .
    depends_on:
      - db
      - redis
      - worker
    environment:
      RAILS_ENV: development
    ports:
      - "3000:3000"
    volumes:
      - ./:/mmaps
      - /var/log/mmaps/api:/mmaps/log
    volumes_from:
      - apidata
    command: sh -c 'rm -rf /mmaps/tmp/pids/server.pid; rails server thin -b 0.0.0.0 -p 3000'

  # background process workers
  worker:
    build: .
    environment:
      RAILS_ENV: development
      QUEUE: "*"
      TERM_CHILD: "1"
    volumes:
      - ./:/mmaps
      - /var/log/mmaps/worker:/mmaps/log
    volumes_from:
      - apidata
    command: rake resque:work

  # front-end
  ui:
    image: magiandev/mmaps-ui:develop
    depends_on:
      - api
    ports:
      - "8080:80"
    volumes:
      - /var/log/mmaps/ui:/var/log/nginx

  # database
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: pewpewpew
    volumes_from:
      - mysqldata
    volumes:
      - /var/log/mmaps/db:/var/log/mysql

  # key store
  redis:
    image: redis:2.8.13
    user: root
    command: ["redis-server", "--appendonly yes"]
    volumes_from:
      - redisdata
    volumes:
      - /var/log/mmaps/redis:/var/log/redis

  # websocket server
  monitor:
    image: magiandev/mmaps-monitor:develop
    depends_on:
      - api
    environment:
      NODE_ENV: development
    ports:
      - "8888:8888"

  # media server
  media:
    image: nginx:1.7.1
    volumes_from:
      - apidata
    ports:
      - "3080:80"
    volumes:
      - ./docker/media/nginx.conf:/etc/nginx/nginx.conf:ro
      - /srv/mmaps/public:/usr/local/nginx/html:ro
      - /var/log/mmaps/mediapool:/usr/local/nginx/logs

  # reverse proxy
  proxy:
    build: docker/proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/log/mmaps/proxy:/var/log/nginx

  apidata:
    image: busybox:ubuntu-14.04
    volumes:
      - /srv/mmaps/public:/mmaps/public
    command: echo api data

  mysqldata:
    image: busybox:ubuntu-14.04
    volumes:
      - /srv/mmaps/db:/var/lib/mysql
    command: echo mysql data

  redisdata:
    image: busybox:ubuntu-14.04
    volumes:
      - /srv/mmaps/redis:/data
    command: echo redis data

  # master data
  # convenience container for backups
  data:
    image: busybox:ubuntu-14.04
    volumes_from:
      - apidata
      - mysqldata
      - redisdata
    command: echo mmaps data
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # permanent redirect to https
    server {
        listen 80;
        rewrite ^ https://$host$request_uri? permanent;
    }

    server {
        listen 443 ssl;
        ssl on;
        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        location / {
            proxy_pass http://ui:80$request_uri;
        }

        location /monitor/ {
            proxy_pass http://monitor:8888$request_uri;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        location /api/ {
            client_max_body_size 0;
            proxy_pass http://api:3000$request_uri;
        }

        location /files/ {
            client_max_body_size 0;
            proxy_pass http://media:80$request_uri;
        }

        location /mediapool/ {
            proxy_pass http://media:80$request_uri;
            add_header X-Upstream $upstream_addr;
            if ($request_uri ~ "^.*\/(.*\..*)\?download=true.*$"){
                set $fname $1;
                add_header Content-Disposition 'attachment; filename="$fname"';
            }
            proxy_pass_request_headers on;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /var/www;
        }
    }
}

# stay in the foreground so Docker has a process to track
daemon off;
After some reading I have tried to use dnsmasq and set resolver 127.0.0.1 within the nginx.conf, but I cannot get this to work:
2016/05/04 01:54:26 [error] 6#0: recv() failed (111: Connection refused) while resolving, resolver: 127.0.0.1:53
Is there a better way to configure nginx to proxy pass to my containers that works with V2?
You can name your containers and resolve them by those names.
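One detail worth noting: because the proxy_pass directives above use the $request_uri variable, nginx resolves the hostnames at request time and therefore needs a resolver directive; on a user-defined Docker network, the embedded DNS server listens at 127.0.0.11. A minimal sketch of what that could look like:
server {
    listen 443 ssl;
    # Docker's embedded DNS, available on user-defined networks
    resolver 127.0.0.11 valid=10s;

    location / {
        proxy_pass http://ui:80$request_uri;
    }
}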

Docker - nginx proxy - access hosts between containers

I have a web application:
Public web app (app1)
API web app (app2)
I made a Docker configuration for these apps, each application in its own container. To access the applications from the web, I configured a container with nginx, where nginx proxies all requests.
So I can open http://app1.dev/ and http://app2.dev/.
But I need access from app1 to http://app2.dev/ (i.e. to reach the app2.dev host from the app1 container).
Ping (from app1 container):
PING app2.dev (127.0.53.53) 56(84) bytes of data.
64 bytes from 127.0.53.53: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 127.0.53.53: icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from 127.0.53.53: icmp_seq=3 ttl=64 time=0.038 ms
What else should I configure to have access to the http://app2.dev/ host from the app1 container?
Nginx proxy config
upstream app1_upstream {
    server app1;
}
upstream app2_upstream {
    server app2;
}

server {
    listen 80;
    server_name app1.dev
                app2.dev;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;

        if ($host = "app1.dev") {
            proxy_pass http://app1;
        }
        if ($host = "app2.dev") {
            proxy_pass http://app2;
        }
    }

    error_log /var/log/nginx/proxy_error.log;
    access_log /var/log/nginx/proxy_access.log;
}
Docker compose
version: '2'

services:
  proxy:
    build: ./proxy/
    ports:
      - "80:80"
      - "443:443"
    links:
      - app1
      - app2
      - app1:app1
      - app2:app2
    hostname: proxy

  app1:
    build: ./app1/
    volumes:
      - ../app1/:/var/www/app1
    hostname: app1

  app2:
    build: ./app2/
    volumes:
      - ../app2/:/var/www/app2
    hostname: app2
docker-compose ps
app1      /sbin/my_init   Up   80/tcp
app2      /sbin/my_init   Up   80/tcp
proxy_1   /sbin/my_init   Up   0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
Not sure what version of Docker you're running, but if you are on (or are able to run) 1.10, you should use a docker network instead of using "link".
If you run all three containers on the same docker network, they will have access to one another through their container names.
That will allow you to make the call from app1 to app2 without going back through your proxy (although I would call that an anti-pattern: if you were to change the interface to app2, you would have to update both app1 and the proxy. I would have app1 call app2 through your proxy so you maintain one interface).
For more info on Docker networks: https://docs.docker.com/engine/userguide/networking/dockernetworks/
TLDR:
# create a bridge network (for a single host)
docker network create my-network
then change your compose to:
version: '2'

services:
  proxy:
    build: ./proxy/
    ports:
      - "80:80"
      - "443:443"
    networks:
      - my-network
    hostname: proxy

  app1:
    build: ./app1/
    volumes:
      - ../app1/:/var/www/app1
    networks:
      - my-network
    hostname: app1

  app2:
    build: ./app2/
    volumes:
      - ../app2/:/var/www/app2
    networks:
      - my-network
    hostname: app2

networks:
  my-network:
    external: true
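A quick way to confirm the shared network works (the container names here are illustrative; check docker ps for the actual generated names, and note that ping must be installed in the image):
# from inside app1, sibling services resolve by name on the shared network
docker exec -it app1 ping -c 1 app2
docker exec -it app1 ping -c 1 proxy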
ports:
  - "80:80"
  - "443:443"
exposes ports to the host machine. When you do
docker ps -a
you will see these ports listed.
However, to expose ports between containers you need to use the EXPOSE instruction in your Dockerfile:
https://docs.docker.com/engine/reference/builder/#expose
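For illustration, a hypothetical Dockerfile using that instruction:
# Dockerfile (illustrative): document the ports the app listens on
FROM nginx
EXPOSE 80 443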
"What else should I configure to have access to the http://app2.dev/ host from the app1 container?"
You must EXPOSE ports in the Dockerfile!
Also, if you do a
docker exec -it containerName bash
you will be able to explore.
View the hosts file inside the container:
cat /etc/hosts
You will see an entry for the other container in the hosts file if you have --link'ed the containers correctly, and you can ping it using the domain name from that file.
