Properly mount Flask app at location using Nginx reverse proxy

I have a Flask app, which runs inside a Docker container and should be exposed under a specific URL: myserver.com/mylocation. I want to use another container running Nginx as a reverse proxy to achieve the routing. I am following an awesome tutorial that got me quite far already.
My Nginx-config (the relevant part) reads:
server {
    server_name myserver.com;

    location /mylocation {
        proxy_pass http://myapp:5000;
        proxy_set_header Host $host;
        rewrite ^/mylocation(.*)$ $1 break;
    }
}
My docker-compose.yml reads:
version: '2'
services:
  nginx:
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
      - 443:443
  myapp:
    build: .
    image: app_image
    container_name: app_container
    expose:
      - "5000"
Now, when I run this, I successfully get my application's index.html from myserver.com/mylocation, but the subsequent requests (the CSS, JS, etc.) are fired at myserver.com without the location part (/mylocation), so Nginx does not route them to the container and they 404. The references to CSS, JS and such are all relative; they do not (and should not) contain the server name and location.
How can I achieve this? Am I missing something in my Nginx config that would let the app know it should run at /mylocation?
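For context, one common pattern (a sketch of an alternative setup, not necessarily what the tutorial intends) is to proxy with trailing slashes, so relative asset URLs resolve below /mylocation/, and to forward the prefix so the app can generate prefix-aware links:

    # Sketch only: force /mylocation to /mylocation/ so relative URLs in the
    # served HTML resolve under the prefix.
    location = /mylocation {
        return 301 /mylocation/;
    }

    location /mylocation/ {
        proxy_pass http://myapp:5000/;                     # trailing slash strips /mylocation/
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Prefix /mylocation;   # optional hint for the app
    }

On the Flask side the prefix still has to be honoured when links are generated, for example via a WSGI middleware that sets SCRIPT_NAME from such a header (Werkzeug's ProxyFix can do this); whether that is needed depends on how the app builds its URLs.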

Related

FastAPI served through NGINX with gunicorn and docker compose

I have a FastAPI API that I want to serve using gunicorn, nginx and docker compose.
I managed to make FastAPI and Gunicorn work with Docker Compose; now I am adding nginx, but I cannot manage to make it work. When I do curl http://localhost:80 I get this message: If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
So this is my docker compose file:
version: '3.8'
services:
  web:
    build:
      dockerfile: Dockerfile.prod
      context: .
    command: gunicorn main:app --bind 0.0.0.0:8000 --worker-class uvicorn.workers.UvicornWorker
    expose:
      - 8000
    env_file:
      - ./.env.prod
  nginx:
    build:
      dockerfile: Dockerfile.prod
      context: ./nginx
    ports:
      - 1337:80
    depends_on:
      - web
With this file, if I set ports to 80:80 I get an error when the stack is brought up: Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use, and I don't know why.
If I use [some random number]:80 (e.g. 1337:80) then the build works, but I get the "If you see this page, the nginx web server is successfully installed..." message stated before. I suspect this is because 1337 is not where nginx is listening.
This is my nginx conf file:
upstream platic_service {
    server web:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://platic_service;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
I tried changing it to listen on 8080, but that does not work either.
What am I doing wrong?
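For what it's worth, a frequent cause of the "successfully installed" page with a custom nginx image is that the stock /etc/nginx/conf.d/default.conf is still the active config. A minimal sketch of an nginx/Dockerfile.prod that installs the custom file instead (the file names here are assumptions based on the compose file above):

    FROM nginx:alpine

    # Drop the stock welcome-page config and install the custom one instead.
    RUN rm /etc/nginx/conf.d/default.conf
    COPY nginx.conf /etc/nginx/conf.d/nginx.conf

With the 1337:80 mapping the stack would then be reachable on the host at http://localhost:1337; the bind error on 80:80 simply means something else on the host is already listening on port 80.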

Nginx proxy is not finding the backend services

I have an installation with two backend services that should be proxied by a third Nginx service.
I've deployed the three services successfully, but for some reason I can't get nginx to see the other two services; it gives the error:
GET / HTTP/1.1" 502 560
and
[error] 8#8: *1 no live upstreams while connecting to upstream
I have tried moving all the services onto their own network, but that did not solve the issue.
Adding my docker-compose.yml:
version: "3"
services:
nginx_web_1:
image: nginx:1.17
volumes:
- "./files_1:/usr/share/nginx/html:ro"
nginx_web_2:
image: nginx:1.17
volumes:
- "./files_2:/usr/share/nginx/html:ro"
nginx_balancer:
build: ./balancer
ports:
- 5000:80
depends_on:
- nginx_web_1
- nginx_web_2
and this is how I configured the proxy:
The file is placed at /etc/nginx/conf.d/default.conf:
upstream backend_hosts {
    server nginx_web_1;
    server nginx_web_2;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://backend_hosts;
    }
}
After some investigation, the issue turned out to be that I was only running docker-compose up/down, which didn't rebuild my Nginx proxy image.
After cleaning up and running a docker build, the proxy was configured properly and now runs fine.
That also means the config listed in the question is valid.
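For reference, a sketch of the commands that force the proxy image to be rebuilt (equivalent to the clean-up described above):

    # rebuild images before starting, instead of reusing cached ones
    docker-compose up -d --build

    # or rebuild just the proxy explicitly, then start the stack
    docker-compose build nginx_balancer
    docker-compose up -d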

docker compose nginx reverse proxy mount config

I'm trying to use nginx as a reverse proxy using the below docker-compose file
version: '3'
services:
  nginx:
    image: nginx
    volumes:
      - "nginx-conf:/etc/nginx/conf.d"
    ports:
      - 80:80
    depends_on:
      - nginxtest
  nginxtest:
    image: nginx

volumes:
  nginx-conf:
Inside ${PWD}/nginx-conf I have the default.conf file, like so:
http {
    server {
        listen 80;

        location / {
            proxy_pass http://nginxtext;
        }
    }
}
The nginx container doesn't load my reverse proxy config; instead it loads the default config.
It depends on what you are trying to achieve, as per the documentation.
These lines are of interest in this particular case:
# Path on the host, relative to the Compose file
- ./volume_name:/some/docker/path
# Named volume
- volume_name:/some/docker/path
If you are trying to mount a folder from the host into the nginx configuration folder, update the volumes part to the following:

volumes:
  - "./nginx-conf:/etc/nginx/conf.d"

Docker nginx proxy to host

Short description:
Nginx is running in Docker; how do I configure nginx so that it forwards calls to the host?
Long description:
We have one web application which communicates with a couple of backends (let's say rest1, rest2 and rest3). We are responsible for rest1.
Let's say I started rest1 manually on my PC and it is running on port 2345. I want nginx (which is running in Docker) to redirect all calls to rest1 to my own running instance (note: the instance is running on the host, not in any container and not in Docker), and calls to rest2 and rest3 to some other Docker node or maybe some other server (who cares).
What I am looking for is:
docker-compose.yml configurations (if needed).
nginx configuration.
Thanks in advance.
Configure nginx like the following (make sure you replace IP of Docker Host) and save it as default.conf:
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://<IP of Docker Host>;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Now bring up the container:
docker run -d --name nginx -p 80:80 -v /path/to/nginx/config/default.conf:/etc/nginx/conf.d/default.conf nginx
If you are using Docker Compose file version 3 you don't need any special config in the docker-compose.yml file at all; just use the special DNS name host.docker.internal to reach a host service, as in the following nginx.conf example:
events {
    worker_connections 1024;
}

http {
    upstream host_service {
        server host.docker.internal:2345;
    }

    server {
        listen 80;

        access_log /var/log/nginx/http_access.log combined;
        error_log /var/log/nginx/http_error.log;

        location / {
            proxy_pass http://host_service;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $realip_remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
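Note that host.docker.internal resolves out of the box on Docker Desktop (macOS/Windows); on Linux it usually has to be added explicitly. A sketch of the compose entry that does this (requires Docker 20.10 or newer):

    services:
      nginx:
        image: nginx
        extra_hosts:
          - "host.docker.internal:host-gateway"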
Solution 1
Use network_mode: host; this will bind your nginx instance to the host's network interface.
This could result in conflicts when running multiple nginx containers: every exposed port is bound to the host's interface.
Solution 2
I'm running a separate nginx instance for every service I would like to expose to the outside world.
To keep the nginx configurations simple and avoid binding every nginx to the host, use the following container structure:
dockerhost - a dummy container with network_mode: host
proxy - an nginx container used as a proxy to the host service
Link dockerhost to proxy; this adds an /etc/hosts entry in the proxy container, so we can use 'dockerhost' as a hostname in the nginx configuration.
docker-compose.yaml
version: '3'
services:
  dockerhost:
    image: alpine
    entrypoint: /bin/sh -c "tail -f /dev/null"
    network_mode: host
  proxy:
    image: nginx:alpine
    links:
      - dockerhost:dockerhost
    ports:
      - "18080:80"
    volumes:
      - /share/Container/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
default.conf
location / {
    proxy_pass http://dockerhost:8080;
}

This method allows us to have automated Let's Encrypt certificates generated for every service running on my server. If interested, I can post a gist about the solution.
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://host.docker.internal:3000;
    }
}
On macOS, the Docker host is exposed to containers at the address host.docker.internal.
There are a couple of things you have to keep in mind:
Docker Compose (from version 3) by default uses the service name as the hostname for inter-container networking
Nginx needs to know the upstream first
I strongly recommend mounting the default.conf directly via your docker-compose.yml.
Lastly, you have to dockerize your backend to make use of Docker's internal networking.
An example repo where I use nginx and docker-compose in a full-stack project: https://gitlab.com/datails/api.
The following example has some prerequisites. First, you have a folder structure like:
- backend/
- frontend/
- default.conf
- docker-compose.yml
Secondly, the backend and front-end each have a Dockerfile that exposes an application on port 3000.
Example default.conf:
upstream backend {
    server backend:3000;
}

upstream frontend {
    server frontend:3000;
}

server {
    listen 80;

    location /api {
        proxy_pass http://backend;
    }

    location / {
        proxy_pass http://frontend/;
    }
}
Example docker-compose.yml:
version: '3.8'
services:
  nginx:
    image: nginx:1.19.4
    depends_on:
      - backend
      - frontend
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - '8080:80'
Then make sure your backend is dockerized and registered in your docker-compose as a service called (in this case) backend, with the front-end (if needed) as a service called frontend:
version: '3.8'
services:
  nginx:
    image: nginx:1.19.4
    depends_on:
      - backend
      - frontend
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - '8080:80'
  frontend:
    build: ./frontend
  backend:
    build: ./backend
This is a bare minimum example to get started. Hope this will help future developers.
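A quick way to check the routing once the stack is up (assuming the 8080:80 mapping above):

    docker-compose up -d --build
    curl http://localhost:8080/        # should be answered by the frontend service
    curl http://localhost:8080/api     # should be answered by the backend service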

Running multiple dev projects in docker containers with nginx-proxy

As I understand it, a docker-compose file used with the docker-compose up command loads the images and starts the containers, whereas a Dockerfile used with the docker build command only creates the image. I think I am missing something here, as things aren't working as I'd like.
Following the bitnami/wordpress instructions I got an install running fine using docker-compose up -d, and I can then access it via localhost:81:
version: '2'
services:
  mariadb:
    image: bitnami/mariadb:latest
    volumes:
      - /path/to/mariadb-persistence:/bitnami/mariadb
  wordpress:
    image: bitnami/wordpress:latest
    depends_on:
      - mariadb
    ports:
      - '81:80'
      - '443:443'
    volumes:
      - ./wordpress-persistence:/bitnami/wordpress
      - ./apache-persistence:/bitnami/apache
      - ./php-persistence:/bitnami/php
Because I want to be able to access this as domain.com.dev, I looked at implementing nginx-proxy. Following the instructions there, and with some inspiration from Docker nginx-proxy : proxy between containers, I came up with the following:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    restart: always
    ports:
      - "88:80"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
  mariadb:
    image: bitnami/mariadb:latest
    volumes:
      - //c/websites/domain_com/mariadb-persistence:/bitnami/mariadb
  domain.com.dev:
    image: bitnami/wordpress:latest
    depends_on:
      - mariadb
    ports:
      - '81:80'
    environment:
      - VIRTUAL_HOST=domain.com.dev
    volumes:
      - //c/websites/domain_com/wordpress-persistence:/bitnami/wordpress
      - //c/websites/domain_com/apache-persistence:/bitnami/apache
      - //c/websites/domain_com/php-persistence:/bitnami/php
Running docker-compose up -d with this appears to complete without error. However, when I access domain.com.dev in a browser, I get a default "Index of /" page, which suggests I somehow got partway there but not all the way. Looking at the local folders, they get created, but it seems like wordpress-persistence does not get populated, which could explain the default view in the browser.
Any thoughts on why this isn't coming up as expected? Something obvious I missed?
1) For the first approach, you need "to finish" the configuration.
If you don't have a web server (nginx, apache, etc.) already running on port 80, just change the port from 81 to 80:

ports:
  - '80:80'
  - '443:443'

and add the record "127.0.0.1 domain.com.dev" to your hosts file (/etc/hosts on Linux).
P.S. You may change the port from 88 to 80 in the second approach; it will work without changing the hosts file.
If you do have a web server running on port 80, then you need to add proxy directives to your virtual host config file. Here is an example:
server {
    listen 80 default_server;
    server_name _;
    include expires.conf;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://172.17.0.1:81;
        proxy_http_version 1.1;
    }
}
2) The second approach is usually used together with a dnsmasq configuration.
See these links for more detailed information and configuration examples.
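As an illustration of that setup (an assumption about the intended dnsmasq usage, not taken from the linked articles), dnsmasq can resolve an entire development TLD to the local machine, so every VIRTUAL_HOST ending in .dev reaches nginx-proxy without individual hosts-file entries:

    # /etc/dnsmasq.conf (or a drop-in file under /etc/dnsmasq.d/)
    # Resolve every *.dev name to the machine running nginx-proxy.
    address=/dev/127.0.0.1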
