Reverse proxy to two separate nginx instances - nginx

I have several repositories that I need to be able to run individually, or together on the same host. In this case, I have two applications: A and B. Both are run using docker compose.
Each one has:
API (Django): API for application A runs on port 5000; API for application B runs on port 5001 (through channels socket)
its own database: Database A runs on 5432; Database B runs on 5433
its own nginx reverse proxy: Application A listens on port 8001; Application B listens on port 8002
Both are meant to be reached through a reverse proxy listening on ports 80 and 443. This is the config for the "main" nginx instance:
ssl_password_file /etc/nginx/certificates/global.pass;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_protocols TLSv1.2 TLSv1.1;
server {
listen 80;
server_name _;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
ssl_certificate /etc/nginx/certificates/certificate.crt;
ssl_certificate_key /etc/nginx/certificates/privatekey.key;
proxy_set_header X-Forwarded-Proto $scheme;
server_name a.my.domain.com;
location / {
proxy_redirect off;
proxy_pass http://a.my.domain.com:8001;
}
}
server {
listen 443 ssl;
ssl_certificate /etc/nginx/certificates/certificate.crt;
ssl_certificate_key /etc/nginx/certificates/privatekey.key;
proxy_set_header X-Forwarded-Proto $scheme;
server_name b.my.domain.com;
location / {
proxy_redirect off;
proxy_pass http://b.my.domain.com:8002;
}
}
This is the config for Application A:
upstream channels-backend {
server api:5000;
}
server {
listen 8001 default_server;
server_name a.my.domain.com [local IP address];
access_log /var/log/nginx/access.log;
underscores_in_headers on;
location /static {
alias /home/docker/code/static;
}
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_read_timeout 30;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $server_name;
proxy_redirect off;
proxy_pass http://channels-backend;
}
}
This is the nearly identical config for Application B:
upstream channels-backend {
server api:5001;
}
server {
listen 8002 default_server;
server_name b.my.domain.com [same local IP address];
keepalive_timeout 70;
access_log /var/log/nginx/access.log;
underscores_in_headers on;
location /static {
alias /home/docker/code/static;
}
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_read_timeout 30;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $server_name;
proxy_redirect off;
proxy_pass http://channels-backend;
}
}
When I run all three applications using docker-compose up --build, starting with Application A, then Application B, then the "main" reverse proxy, I can open a web browser, go to b.my.domain.com, and use Application B just fine. If I try a.my.domain.com, however, I get 502 Bad Gateway. Nginx shows:
[error] 27#27: *10 connect() failed (111: Connection refused) while connecting to upstream, client: [my IP address], server: a.my.domain.com, request: "GET / HTTP/1.1", upstream: "http://[local IP address]:8001/", host: "a.my.domain.com"
So I'm assuming there's some sort of conflict, because if I run Application A in isolation and access it directly through http://a.my.domain.com:8001, it works fine.
Any ideas? Suggestions on a better setup are also welcome, though I vastly prefer ease of maintenance over performance. I don't want to keep both applications in the same repository. I don't want to rely on the third ("main") reverse proxy; I just want to be able to quickly add more applications on the same server if need be and proxy to one or the other depending on the subdomain of the request.
Edit: If I switch the order in which the applications are built and run, Application B will return 502 Bad Gateway instead of Application A, so the issue is not with either of the applications.

There were a couple of problems: the container names were the same, and the Django Channels configuration was outdated. This was a very specific case, so I doubt this will be helpful to anyone, but I gave each service in each compose file a unique name and made sure that there were no port conflicts. I also changed the compose files so that, for example, host port 8001 maps to container port 80, so the nginx configuration doesn't need to be aware of any unusual port numbers. I updated the Channels configuration to reflect the new container names, and now it's working.
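For anyone who runs into the same thing, here is a minimal sketch of the kind of compose-level change described above. The service names, image, and paths are hypothetical, not taken from the original compose files; the point is the unique per-app service names and the host-port-to-container-port-80 mapping:

# docker-compose.yml for Application A (hypothetical sketch)
services:
  api_a:                 # unique name, so it no longer clashes with Application B's "api"
    build: ./api
    expose:
      - "5000"           # only reachable on the compose network
  nginx_a:
    image: nginx:alpine
    volumes:
      - ./nginx/a.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "8001:80"        # host port 8001 maps to container port 80, so a.conf can simply "listen 80;"
    depends_on:
      - api_a

With that layout, Application A's nginx upstream would point at api_a:5000 instead of api:5000, Application B would follow the same pattern with its own names and host port 8002, and the "main" reverse proxy keeps proxying to host ports 8001 and 8002.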

Related

Can't get Pocketbase running behind Nginx as a reverse proxy

I want to use Pocketbase behind Nginx as a reverse proxy on my Ubuntu VPS. I followed the documentation at https://pocketbase.io/docs/going-to-production/.
I wanted to put Pocketbase under /api/. When I try to connect to the Pocketbase admin panel, the browser shows a 404 and a Content-Security-Policy error.
It also seems that some HTML is loaded from Pocketbase.
This is my current Nginx config (I replaced my domain with test.com):
server {
listen 80;
listen 443 ssl;
server_name test.com;
ssl_certificate /etc/letsencrypt/live/test.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/test.com/privkey.pem;
location / {
try_files $uri $uri/ /index.html;
root /var/www/html;
index index.html;
}
location /api/ {
# check http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
proxy_set_header Connection '';
proxy_http_version 1.1;
proxy_read_timeout 360s;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8090;
}
}
Pocketbase is started with the default localhost settings on the VPS.
I can even access Pocketbase over http://127.0.0.1:8090/api/ when I'm connected via SSH in VS Code, and I see the requests in the log. (I am surprised that this is even possible. At first I thought I had Pocketbase running on my local machine, but when I killed the backend on my VPS I couldn't access it anymore.)
I hope that somebody can help me out, as I can't find much about this on the internet.
Problem solved. It works when a / is appended to the address in the proxy_pass directive:
proxy_pass http://127.0.0.1:8090/;
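For context, the trailing slash changes how nginx maps the matched location onto the upstream URI. A minimal illustration (the /api/health path is only an example, not from the original setup), showing the two variants as alternatives, not to be combined in one server block:

# Variant 1: no URI part on proxy_pass - the request URI is passed through unchanged,
# so GET /api/health is forwarded upstream as /api/health
location /api/ {
    proxy_pass http://127.0.0.1:8090;
}

# Variant 2: URI part "/" - the matched prefix /api/ is replaced with /,
# so GET /api/health is forwarded upstream as /health
location /api/ {
    proxy_pass http://127.0.0.1:8090/;
}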

nginx reverse proxy for application

I use nginx as a reverse proxy with a domain name. I have some applications published on IIS, and I want to proxy a different location name for each application.
For example:
Domain name on nginx:
example.com.tr
Application endpoints for the app:
1.1.1.1:10
1.1.1.2:10
Upstream for the app in nginx.conf:
upstream app_1 {
least_conn;
server 1.1.1.1:10;
server 1.1.1.2:10;
}
server {
listen 443 ssl;
server_name example.com.tr;
proxy_set_header X-Forwarded-Port 443;
ssl_certificate /etc/cert.crt;
ssl_certificate_key /etc/cert.key;
location /app_1/ {
proxy_pass http://app_1/;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-REAL-SCHEME $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
access_log /etc/nginx/log/access.log;
error_log /etc/nginx/log/error.log;
}
}
When I try to access example.com.tr/app_1/, I can access the application, but not all of its data.
I inspected the site, and many of the application's requests were failing.
All requests were sent to example.com.tr/uri instead of example.com.tr/app_1/uri. How can I fix this?
Thanks.
You need a transparent path proxy setup, meaning NGINX should use the requested URI without removing the matched location from it.
proxy_pass http://app_1;
Remove the trailing slash to tell NGINX not to do so. Using an upstream definition is great, but make sure you also apply keepalive.
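A minimal sketch of what applying keepalive to that upstream could look like; the connection count of 16 is arbitrary, and the proxy_http_version and Connection header lines are what the nginx upstream keepalive documentation calls for:

upstream app_1 {
    least_conn;
    server 1.1.1.1:10;
    server 1.1.1.2:10;
    keepalive 16;                        # keep up to 16 idle connections per worker process
}

server {
    listen 443 ssl;
    server_name example.com.tr;
    location /app_1/ {
        proxy_pass http://app_1;         # no trailing slash: /app_1/... reaches the backend unmodified
        proxy_http_version 1.1;          # keepalive to upstreams requires HTTP/1.1
        proxy_set_header Connection "";  # clear the Connection header so connections can be reused
        proxy_set_header Host $host;
    }
}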

Nginx returns 502 to browsers but works fine with curl

I have a MediaWiki running in a kubernetes cluster. The kubernetes cluster is behind an nginx proxy with the following config:
worker_processes 4;
worker_rlimit_nofile 40000;
events {
worker_connections 1024;
}
http {
upstream rancher {
server 192.168.122.90:80;
}
map $http_upgrade $connection_upgrade {
default Upgrade;
'' close;
}
server {
listen 443 ssl http2;
server_name .domain;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://rancher;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# This allows the ability for the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and will automatically close.
proxy_read_timeout 900s;
proxy_connect_timeout 75s;
}
}
server {
listen 80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
}
I can get to the main page of the wiki, but I have to log in before using it. When I click to log in using OAuth2, I get a 502 status from the nginx proxy server (nginx reports that the upstream ended the connection prematurely). If I do the same request with curl, I get a 302 with the location of the authorization endpoint, as expected. I really don't understand why that is. Not using the proxy and directly accessing the cluster (from the VM host) works just fine, but that isn't what I want.
So the issue was related to neither nginx nor Kubernetes. It was an issue with MediaWiki, where compression had some funny behaviour. See more here, if anyone encounters anything similar. :)
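One detail that can explain a curl-works-but-browsers-fail symptom in setups like this: browsers send Accept-Encoding: gzip, while curl does not by default, so a backend that misbehaves when compressing fails only for browsers. A quick way to test that theory from the nginx side (an assumption on my part, not the fix the poster actually applied) is to strip the header before proxying:

location / {
    proxy_set_header Accept-Encoding "";   # the upstream now returns plain responses, the same as it does for curl
    proxy_pass http://rancher;
}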

Two locations not working in nginx for MERN application

I have nginx configuration like this:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /var/www/html/ericwu-trademarket/frontend/build;
location /backend/ {
proxy_pass http://localhost:8000; #backend in node js
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'Upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location / {
try_files $uri /index.html; #front end in react js
}
}
The front end is running properly, but when I hit the backend like this, http://server-ip-address/backend, it shows "Cannot GET /backend/".
Where might I be mistaken?
Check that UFW allows the port on the server.
Check the status of UFW:
sudo ufw status verbose
If it doesn't show 8000/tcp as allowed, then allow it:
sudo ufw allow 8000
Obviously you are trying to use WebSockets.
When it comes to best practices, it is better to have the backend services defined inside an upstream definition. You are trying to proxy requests to "localhost:8000", but localhost translates to the IP 127.0.0.1. If that is not the IP address of the Node.js app, then it is pretty normal that your config won't work.
Nginx expects a fully qualified domain name (FQDN) or a list of IP addresses of backend servers to work properly.
That being said, your config should be:
http {
upstream backend_server {
#least_conn; #Loadbalancing method in case you want to use multiple backends
#ip_hash;
server backend1.example.com:8000; #or IP address
}
server {
server_name _;
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html/ericwu-trademarket/frontend/build;
location / {
try_files $uri /index.html;
}
location /backend {
proxy_pass http://backend_server;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket specific
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# For long running HTTP requests, don't buffer up the
# response from origin servers but send them directly to the client.
proxy_buffering off;
}
}
}
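After switching to the upstream block, a quick way to verify the proxy path from the server itself (assuming the Node.js app actually answers on /backend routes):

sudo nginx -t && sudo systemctl reload nginx   # validate and apply the new config
curl -i http://127.0.0.1/backend/              # should return the Node.js response instead of "Cannot GET /backend/"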

Nginx - Is it possible to use load balancer with external urls?

My problem is the following:
I have 2 web applications, a "Normal" one and an "Expensive" one. The "Normal" one communicates with the "Expensive" one for expensive tasks. In order to improve speeds and reduce bottlenecks, the plan is to deploy at least a couple of instances of the "Expensive" app on 2 different machines and use a load balancer to split the requests (instead of having a NASA PC, having 2 or more regular PCs).
The apps are built with Gunicorn + Django and served through sockets with Nginx. (No Docker or weird stuff; at most a Supervisor to keep things alive.)
The current system works perfectly, but it could go faster for certain tasks; that's why the load balancer. However, I'm incapable of making the load balancer work using server addresses which are not on the same machine (no localhost:port, x.x.x.x, x.x.x.x:port, or URLs included in /etc/hosts).
This is a balancer.conf that worked locally using local apps:
upstream balancer {
# least_conn;
server 192.168.22.200:8000;
server 192.168.22.200:8001;
}
server {
listen 80;
server_name localhost;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 120;
proxy_redirect off;
proxy_pass http://balancer;
}
}
And this is my last attempt to make it work with remote servers (I need the SSL stuff because it is forced on them)
upstream balancer {
# least_conn;
server external.machine.com;
}
server {
listen 80;
server_name test.url.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name test.url.com;
# Turn on SSL
ssl on;
<exactly the same stuff I have in the others .conf for the ssl>
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
location / {
# proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Protocol $scheme;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_read_timeout 120;
# proxy_redirect off;
proxy_pass http://balancer;
}
}
To clarify and remember: external.machine.com and test.url.com are not on the same machine. They have different public IPs. And on external.machine.com, I have configured an Nginx that serves the "Expensive" app correctly.
I'm unable to find anything related, or people who have tried this; every single post or piece of documentation I found deals with local IPs instead of regular URLs for external IPs.
So the question is now whether it is possible to use the Nginx load balancer with remote IPs, or only with local ones.
Yes, you can use external URLs, BUT you need to specify the port. Or at least that's how I made it work.
That said, the nginx configuration file will look something like this:
upstream balancer {
# least_conn;
server external.machine.com:<CUSTOM_PORT>;
}
server {
listen 80;
server_name test.url.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name test.url.com;
# Turn on SSL
ssl on;
<exactly the same stuff I have in the others .conf for the ssl>
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 120;
proxy_redirect off;
proxy_pass http://balancer;
}
}
Obviously you need to open that port on the machine.
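For example, with UFW (which another answer above also uses; substitute whatever you choose for <CUSTOM_PORT>):

sudo ufw allow <CUSTOM_PORT>/tcp   # open the custom port the balancer connects to
sudo ufw status verbose            # confirm the rule is active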
And on the target machine your nginx config must look like this:
upstream wsgi_socket {
server unix:/tmp/socket.sock fail_timeout=0;
}
server {
# listen [::]:80 ipv6only=on;
listen 80;
server_name test.url.com; # same server name as is the balancer.conf
return 301 https://$server_name$request_uri;
}
server {
listen <CUSTOM_PORT> ssl http2;
server_name test.url.com; # same server name as is the balancer.conf
root <path to your project root>;
client_max_body_size 15M;
# You can configure access_log and error_log too
# Turn on SSL
ssl on;
<all the ssl stuff>
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
location /static {
alias <path to your static if you have statics>;
}
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_read_timeout 120;
proxy_redirect off;
proxy_pass http://unix:/tmp/socket.sock;
}
}
