How to use Nginx to connect to my app in a Docker container? - nginx

My Nginx is not in a Docker container, but my app is. They both live on the same server.
I don't want Nginx in a Docker container, since that looks awfully complex for me to configure. But my app is running in a Docker container.
How do I configure Nginx to reach the Docker container my app is running in?
Here is my Nginx config file:
server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name www.nicolasxu.space nicolasxu.space;

    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000";

    ssl_certificate /root/.ssh/nicolasxu.space.cert;
    ssl_certificate_key /root/nicolasxu.space.key;
    [....]
}

To easily set up nginx (running on the docker host) as a reverse proxy in front of a dockerized webapp, you can simply --publish the port of your webapp and route the traffic to that port:
Run your docker container with the --publish argument to bind a host port to the container's webapp port. For instance, with a jenkins container I would do:
docker run --publish 127.0.0.1:8080:8080 --name jenkins jenkins
This binds port 8080 of the container to port 8080 on 127.0.0.1 of the host machine (binding to 127.0.0.1 keeps port 8080 from being opened to anyone if you don't use any firewall). The Docker User Guide explains in detail how to manipulate ports in Docker.
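To quickly check that the published port is reachable from the host, you could inspect the binding and hit it locally; a minimal sketch, reusing the jenkins container name from above:
# show the host ports Docker bound for this container
docker port jenkins
# request the app through the published port
curl -I http://127.0.0.1:8080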
Then forward all incoming traffic as a reverse proxy to the container's published port (8080 in my example):
server {
    ...
    listen 443 ssl;
    server_name www.nicolasxu.space nicolasxu.space;
    ...
    ssl_certificate ...

    location / {
        # forward all the traffic to the docker container's published port
        proxy_pass http://localhost:8080;
    }
}
Terminating SSL on nginx and routing the traffic as plain HTTP to the dockerized webapp is good practice and will work like a charm.
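If the webapp needs to know the original client address or scheme, it is common to also forward a few headers in the same location block. A sketch using standard nginx directives (the specific header set is an assumption, not part of the original answer):
location / {
    proxy_pass http://localhost:8080;
    # pass the original host, client IP and scheme on to the app
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}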
Edit
For maximum performance, you can also use:
docker run --network=host ...
When using --network=host, docker instructs the container to use the host's networking stack. You won't have to --publish ports on the host, since it is the same network stack, and the web application will be available on its native port.
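For the jenkins example above, that would look roughly like this (no --publish needed, and nginx proxies to the app's native port on the host):
# container shares the host's network stack, so no port mapping is required
docker run --network=host --name jenkins jenkins
# nginx then proxies to the native port, e.g.:
# proxy_pass http://127.0.0.1:8080;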

Related

wordpress website reachable from internal network but not from public network (nginx, docker)

I have set up a WordPress website inside a docker container which works perfectly fine inside the local network.
To access the webserver from outside the internal network I am using an nginx reverse proxy (running in docker as well), but from the public network the site is not reachable, and unfortunately I have no clue why that is. I have a bunch of other services hosted this way with no problem.
When I change the internal IP to another IP address where a server is running, it works, so I guess the problem is related to wordpress.
Here my nginx-config-file:
server {
    set $forward_scheme http;
    set $server "192.168.2.2";
    set $port 8000;

    listen 80;
    server_name mydomain.com;

    access_log /data/logs/proxy-host-12_access.log proxy;
    error_log /data/logs/proxy-host-12_error.log warn;

    location / {
        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
I am thankful for any advice!
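For context, this looks like an nginx-proxy-manager style config, where the included conf.d/include/proxy.conf usually wires the $forward_scheme/$server/$port variables into the actual proxy_pass. A sketch of what such an include typically contains (not the asker's actual file):
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass $forward_scheme://$server:$port$request_uri;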

Nginx connectivity with vultr loadbalancer

We have many applications on a single Vultr cloud instance, but it has only one default health check for a single HTTPS load balancer with an SSL certificate.
So we used nginx to configure multiple backend URLs over plain HTTP, running with docker-compose so that the applications share a single network.
server {
    listen 80;
    listen [::]:80;
    server_name *.example.com;

    access_log /var/log/nginx/host.access.log main;

    location / {
        proxy_pass http://strapi-container:1337/;
    }

    location /chat {
        proxy_pass http://rocketchat-container:3000;
    }

    location /auth {
        proxy_pass http://keycloak-container:8080;
        proxy_set_header Host $host;
    }
}
The backend URLs are http://instance-ip/, http://instance-ip/chat, and http://instance-ip/auth respectively.
nginx:
  image: nginx:1.20
  container_name: nginx
  ports:
    - 80:80
  restart: unless-stopped
  volumes:
    - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
  depends_on:
    - strapi-cms
    - rocketchat
    - keycloak
  networks:
    - test-network
Everything works fine and we are able to access the applications through nginx's default port 80 with the above backend URLs.
But our intention is to somehow connect nginx to the HTTPS load balancer in Vultr, so that it works as, for example: https://qa.example.com/, https://qa.example.com/chat, https://qa.example.com/auth
What you will want to do is set up a single forwarding rule on the Vultr Load Balancer:
Forwarding rule 443->443 if TLS terminates on the instance, or
Forwarding rule 443->80 if TLS terminates on the LB.
This will have the LB forward all incoming traffic on the defined LB port to your nginx-defined port. Then your nginx instance should route the location to the appropriate proxy_pass you have defined.
As for the health check... Vultr Load Balancers only have a single health check, as they were designed to work with a single application behind the LB. However, you could have a /health endpoint that, when hit, checks the status of all of your other applications and returns 200 OK if they are all running.
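A minimal placeholder for such an endpoint in the nginx config could look like this (the /health path and static response are assumptions; a real check would have to probe the backends):
location = /health {
    # placeholder: always reports OK, does not actually probe the backends
    access_log off;
    default_type text/plain;
    return 200 "ok\n";
}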
We have some detailed docs available at https://www.vultr.com/docs/vultr-load-balancers
Full disclosure: I am the Technical Lead on load balancers for Vultr.

Nginx Gunicorn socket issue? Unresponsive

I'm trying to deploy a Django project to an AWS Lightsail server.
I mostly followed this tutorial. I added some SSL protocols for additional security.
This project runs perfectly on my Ubuntu 18.04 VirtualBox with the exact same setup, the exact same components, and the same SSL protocols. However, on Lightsail it doesn't respond to the browser request: it redirects me to https but then dies... I wasn't able to identify any errors in any of the logs, which leaves me guessing.
/etc/systemd/system/webrock.socket:
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/webrock.sock
[Install]
WantedBy=sockets.target
/etc/systemd/system/webrock.service:
[Unit]
Description=gunicorn daemon
Requires=webrock.socket
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/django/webrock
ExecStart=/home/ubuntu/django/webrock/venv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/webrock.sock \
core.wsgi:application
[Install]
WantedBy=multi-user.target
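For debugging, the socket unit is typically enabled with systemctl, and curl can talk to gunicorn directly over the unix socket to rule nginx out; a sketch assuming the unit and socket names above:
sudo systemctl daemon-reload
sudo systemctl enable --now webrock.socket
# bypass nginx and ask gunicorn directly over the unix socket
curl --unix-socket /run/webrock.sock http://localhost/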
/etc/nginx/sites-available/webrock:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2 ipv6only=on;

    include snippets/signed.conf; # path to certs
    include snippets/params.conf; # cert related params

    index index.html index.htm index.nginx-debian.html;
    server_name mydomain.com www.mydomain.com; # changed this line by replacing domain name with dummy

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/ubuntu/django/webrock;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/webrock.sock;
        try_files $uri $uri/ =404;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com www.mydomain.com; # changed this line by replacing domain name with dummy
    return 302 https://$server_name$request_uri;
}
I left the nginx default file alone. Now, every time I visit the page by punching in the server IP, I see the nginx default page. When I use the domain name, I get redirected to HTTPS, but then... nothing. I assume that there is some disruption between gunicorn and nginx, but I'm not experienced enough to troubleshoot or solve it.
As I mentioned above, exactly the same setup runs flawlessly on a similar system in my VirtualBox.
I'm very thankful for suggestions and hints.
Update:
I disabled the redirect portion in nginx and made it listen on port 80. It worked. Now I'm trying to figure out how to reintroduce HTTP/2 and port 443 into the setup. BTW, my ufw looks like this:
After two days of banging my head against this issue, here is the solution.
Amazon Lightsail has an additional firewall in front of the UFW on the actual server.
You can access the Lightsail firewall by clicking on:
Menu of your instance > Manage > Networking
You will see a networking summary for your instance: IP addresses, firewall, load balancer. In that firewall you need to add an additional port (in my case HTTPS).
Why they would put an additional firewall in front of UFW beats me.

nginx + cloudflare + digitalocean = 521

I'm trying to host a website with multiple subdomains (created with Cloudflare, which also provides SSL), hosted on DigitalOcean with Nginx serving as a reverse proxy.
My Cloudflare Configs
DNS setup:
Type ~ Name ~ Value
A ~ api ~ MyDigitalOceanIPv4
A ~ example.com ~ MyDigitalOceanIPv4
A ~ www ~ MyDigitalOceanIPv4
Crypto setup:
SSL: Full (strict)
Always use HTTPS: On
Automatic HTTPS Rewrites: On
I've also used Cloudflare to Create Certificate (and followed their instructions to set it up with Nginx)
My Nginx config:
server {
    listen 443;
    server_name example.com www.example.com;

    ssl on;
    ssl_certificate /srv/example.com/cloudflare.pem;
    ssl_certificate_key /srv/example.com/cloudflare.key;

    location / {
        proxy_pass http://localhost:8000;
    }
}

server {
    listen 443;
    server_name api.example.com;

    ssl on;
    ssl_certificate /srv/example.com/cloudflare.pem;
    ssl_certificate_key /srv/example.com/cloudflare.key;

    location / {
        proxy_pass http://localhost:8080;
    }
}
I have opened all TCP ports on DigitalOcean, and if I open MyDigitalOceanIPv4:8000 in my browser, my website (hosted in a Docker container) loads successfully. However, if I try to open my website "example.com", I get Cloudflare's 521 "web server is down" message.
I have also verified that the Cloudflare SSL key paths and content are correct, nginx -t shows no errors, and I've made sure to restart nginx after making changes.
I have also tried to whitelist Cloudflare's IPs using my Nginx config file but it didn't work.
If I try to telnet MyDigitalOceanIPv4 443 or 80 then I get telnet: Unable to connect to remote host: Connection refused.
Inside my DigitalOcean instance I have tried to curl http://localhost:8000 which successfully prints my website content.
I suspect there's some DigitalOcean setting I need to configure, or something wrong with my Nginx file (even though I've successfully used the same Nginx config on a different cloud provider), but I feel like I've tried everything.
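Given that telnet to 80/443 is refused while curl on localhost:8000 works, one quick check is whether nginx is actually bound to those ports on the droplet; a sketch using standard tooling:
# list listening TCP sockets and the owning process
sudo ss -tlnp | grep -E ':(80|443) '
# dump the configuration nginx actually loads, with its listen directives
sudo nginx -T | grep -n 'listen'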

Multiple Docker Containers on Port 80 with Same Domain

My question is similar to this question but with only one domain.
Is it possible to run multiple docker containers on the same server, all of them on port 80, but with different URL paths?
For example:
Internally, all applications are hosted on the same docker server.
172.17.0.1:8080 => app1
172.17.0.2:8080 => app2
172.17.0.3:8080 => app3
Externally, users will access the applications with the following URLs:
www.mydomain.com (app1)
www.mydomain.com/app/app2 (app2)
www.mydomain.com/app/app3 (app3)
I solved this issue with an nginx reverse proxy.
Here's the Dockerfile for the nginx container:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
And this is the nginx.conf (note that a full /etc/nginx/nginx.conf also needs an events block, even an empty one, or nginx will refuse to start):
events {}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://app1:5001/;
        }

        location /api/ {
            proxy_pass http://app2:5000/api/;
        }
    }
}
I then stood up the nginx, app1, and app2 containers inside the same docker network.
Make sure to include the trailing / in the location and proxy paths, otherwise nginx will return a '502: Bad Gateway'.
All requests go through the docker host on port 80, which hands them off to the nginx container, which then forwards them onto the app containers based on the url path.
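A minimal docker-compose sketch of that layout, assuming the Dockerfile above lives in ./nginx and using placeholder image names for the apps; on the default compose network, nginx can resolve app1 and app2 by service name:
version: "3"
services:
  nginx:
    build: ./nginx          # builds the nginx Dockerfile shown above
    ports:
      - "80:80"
  app1:
    image: my-app1-image    # placeholder image for app1
  app2:
    image: my-app2-image    # placeholder image for app2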
