Docker + Nginx: Getting proxy_pass to work

I'm having a problem trying to get Nginx to proxy a path to another server that is also running in Docker.
To illustrate, I'm using Nexus server as an example.
This is my first attempt...
docker-compose.yml:-
version: '2'
services:
  nexus:
    image: "sonatype/nexus3"
    ports:
      - "8081:8081"
    volumes:
      - ./nexus:/nexus-data
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
nginx.conf:-
worker_processes 4;

events { worker_connections 1024; }

http {
    server {
        listen 80;

        location /nexus/ {
            proxy_pass http://localhost:8081/;
        }
    }
}
When I hit http://localhost/nexus/, I get 502 Bad Gateway with the following log:-
nginx_1 | 2017/05/29 02:20:50 [error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /nexus/ HTTP/1.1", upstream: "http://[::1]:8081/", host: "localhost"
nginx_1 | 2017/05/29 02:20:50 [error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /nexus/ HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "localhost"
nginx_1 | 172.18.0.1 - - [29/May/2017:02:20:50 +0000] "GET /nexus/ HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
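A note on why this first attempt fails: each container runs in its own network namespace, so localhost inside the nginx container is the nginx container itself, not the host and not the Nexus container; the "8081:8081" mapping only publishes Nexus on the host. You can see the difference from inside the container (a quick sketch, assuming the stock Debian-based nginx image, which ships getent):
# "localhost" resolves to the nginx container's own loopback, where
# nothing listens on 8081, hence the "connection refused" in the logs
docker-compose exec nginx getent hosts localhost
# the Compose service name resolves to the nexus container on the
# shared network, which is what the second attempt below relies on
docker-compose exec nginx getent hosts nexus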
In my second attempt...
docker-compose.yml - I added a links entry to the nginx service:-
version: '2'
services:
  nexus:
    image: "sonatype/nexus3"
    ports:
      - "8081:8081"
    volumes:
      - ./nexus:/nexus-data
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    links:
      - nexus:nexus
nginx.conf... Instead of using http://localhost:8081/, I use http://nexus:8081/:-
worker_processes 4;

events { worker_connections 1024; }

http {
    server {
        listen 80;

        location /nexus/ {
            proxy_pass http://nexus:8081/;
        }
    }
}
Now, when I hit http://localhost/nexus/, the request gets proxied properly, but the web content is only partially rendered. Inspecting the HTML source of that page shows the JavaScript, stylesheet, and image links pointing to http://nexus:8081/[path]... hence, 404.
What should I change to get this to work properly?
Thank you very much.

The following additional options are what I have used:
http {
    server {
        listen 80;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            server_name_in_redirect on;
            proxy_pass http://nexus:8081;
        }

        location /nexus/ {
            proxy_pass http://nexus:8081/;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            server_name_in_redirect on;
        }
    }
}
My solution is to also proxy the '/' path in the nginx config, because the Nexus app requests its resources from '/', which would otherwise not work.
However, this is not ideal and will not work with an Nginx configuration serving multiple apps.
The docs cover this configuration and indicate that you need to configure Nexus to serve on /nexus. This would enable you to configure Nginx as follows (from the docs), minus the hack above.
location /nexus {
    proxy_pass http://localhost:8081/nexus;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
I would recommend using that configuration.
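If you go that route with this Docker setup, the context path can be set on the Nexus side; to my knowledge the sonatype/nexus3 image reads a NEXUS_CONTEXT environment variable for this (an assumption to verify against your image version's docs). A sketch of the nexus service with that change:
nexus:
  image: "sonatype/nexus3"
  environment:
    # assumption: NEXUS_CONTEXT sets the context path this image serves under
    - NEXUS_CONTEXT=nexus
  ports:
    - "8081:8081"
  volumes:
    - ./nexus:/nexus-data
Note that from inside the Compose network the proxy_pass target would then be http://nexus:8081/nexus rather than http://localhost:8081/nexus.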

Related

Unable to proxy Shiny using Django and Nginx (HTTP Response Code 101)

I am trying to use Nginx and Django to help me serve Shiny applications within my company's internal servers. I am testing everything locally first to make sure all is working properly.
I am following two tutorials:
https://pawamoy.github.io/posts/django-auth-server-for-shiny/#proxying-shiny-requests-to-the-shiny-app
https://testdriven.io/dockerizing-django-with-postgres-gunicorn-and-nginx
I started with the testdriven.io post to set up Django and Nginx and then combined the ideas with pawamoy's blog post to set up the Shiny part.
The final setup:
Django app is listening on 8000
Shiny on 8100
Nginx is on 1337 (as per the testdriven.io tutorial).
My final nginx conf file looks like this:
upstream django_app {
    server web:8000;
}

upstream shinyapp_server {
    server shiny:8100;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    server_name localhost;
    client_max_body_size 100M;

    location / {
        proxy_pass http://django_app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        # proxy_pass http://django_app;
        if (!-f $request_filename) {
            proxy_pass http://django_app;
            break;
        }
    }

    location /static/ {
        alias /home/app/web/staticfiles/;
    }

    location /media/ {
        alias /home/app/web/mediafiles/;
    }

    location ~ /shiny/.+ {
        # auth_request /auth;
        rewrite ^/shiny/(.*)$ /$1 break;
        proxy_pass http://shinyapp_server;
        proxy_redirect http://shinyapp_server/ $scheme://$host/shiny/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 20d;
        proxy_buffering off;
    }

    # location = /auth {
    #     internal;
    #     proxy_pass http://django_app/shiny_auth/;
    #     proxy_pass_request_body off;
    #     proxy_set_header Content-Length "";
    #     proxy_set_header X-Original-URI $request_uri;
    # }
}
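To spell out the sub-path plumbing in the /shiny location (a worked example with hypothetical paths): the rewrite strips the /shiny/ prefix from request URIs before they reach Shiny, and proxy_redirect re-adds it to Location headers on the way back.
# inbound:  GET /shiny/app1/style.css
#           rewrite ^/shiny/(.*)$ /$1 break  gives  GET /app1/style.css,
#           which is then proxied to the shinyapp_server upstream
# outbound: Location: http://shinyapp_server/app1/login
#           proxy_redirect maps it to  Location: http://localhost/shiny/app1/login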
I am building the images with the following compose file:
version: '3.8'

services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn django_app.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
      - shiny
    links:
      - web
      - shiny
  shiny:
    build:
      context: ./shinyapp
      dockerfile: Dockerfile.shiny
    expose:
      - 8100

volumes:
  postgres_data:
  static_volume:
  media_volume:
My views.py file:
import requests
from bs4 import BeautifulSoup
from django.http import JsonResponse
from django.shortcuts import render

def shiny(request):
    return render(request, 'django_app/shiny.html')

def shiny_contents(request):
    response = requests.get('http://shiny:8100')
    soup = BeautifulSoup(response.content, 'html.parser')
    return JsonResponse({'html_contents': str(soup)})
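For completeness, these views need routes; a hypothetical urls.py wiring (the paths and names below are my assumptions, not taken from the tutorials):
# hypothetical routing for the two views above; adapt to your project
from django.urls import path

from . import views

urlpatterns = [
    path('shiny/', views.shiny, name='shiny'),
    path('shiny_contents/', views.shiny_contents, name='shiny_contents'),
]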
All works well up to the point of serving the Shiny contents at localhost:1337/shiny. However, when trying to use the proxied requests at localhost/shiny, I get the following message (an HTTP 101, Switching Protocols, on the WebSocket request) in the logs:
172.20.0.1 - - [08/Feb/2023:15:42:52 +0000] "GET /shiny/websocket/ HTTP/1.1" 101 31668 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15" "-"
This is where I am stuck:
The Shiny contents are rendering nicely and properly, but I can't seem to get the proxying set up at the correct address, i.e. I am not sure whether the proxy is working as expected.
Am I mapping ports and servers correctly?
Do I need to do something with /etc/nginx/sites-available?
How can I tell if this is a firewall issue?
If I manage to set this up correctly, will putting this on a remote server let me produce an internal link for people to interact with the apps?
Thanks a lot for any insights or comments.
Ilan
There are other posts with similar issues which I tried replicating on my end without any luck.

connect() failed (111: Connection refused) while connecting to upstream in Kubernetes service

In the web browser I get the error "502 Bad Gateway Nginx",
and the client gets the error [connect() failed (111: Connection refused) while connecting to upstream, client: 206.189.90.189, server: abc.xxx.xyz, request: "POST / HTTP/1.1", upstream: "http://10.245.21.96:244/", host: "188.166.204.10"].
Can you tell me how to solve it? I have spent a month on this but can't fix it. :(((
1/ My nginx config:
server {
    listen 80;
    listen [::]:80;
    server_name abc.xxx.xyz;
    sendfile off;

    # Add stdout logging
    error_log /dev/stdout info;
    access_log /dev/stdout;

    error_page 404 /404.html;

    location / {
        proxy_pass http://abc-service.default:244/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
2/ My abc-service config (kubectl describe output):
Name: abc-service
Namespace: default
Labels: io.kompose.service=abc
Annotations:
Selector: io.kompose.service=abc
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.245.21.96
IPs: 10.245.21.96
Port: 244-tcp 244/TCP
TargetPort: 244/TCP
Endpoints: 10.244.0.211:244
Port: 18081-tcp 18081/TCP
TargetPort: 18081/TCP
Endpoints: 10.244.0.211:18081
Session Affinity: None
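When nginx reports connection refused against a ClusterIP service like this, a useful first check is whether the endpoint pod is actually listening on the targetPort; a minimal sketch, assuming kubectl access to the cluster:
# does the service have live endpoints matching the pod IP and port?
kubectl get endpoints abc-service -n default
# try the service DNS name and port from inside the cluster
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
    curl -v http://abc-service.default:244/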

Nginx forward proxy based on header value

I want to use nginx as a forward proxy, but rewrite the URL (including the host part) based on a header value.
Suppose the browser connects to nginx on port 8888 with a regular HTTP request, and the request carries the header pair:
X-myattribute: https://somehost.com
nginx should then proxy_pass to https://somehost.com.
My nginx.conf is now:
server {
    listen 8888;
    proxy_connect;
    proxy_max_temp_file_size 0;
    resolver 8.8.8.8;

    location / {
        proxy_pass https://$http_myattribute;
        # proxy_pass http://$http_host$uri$is_args$args;
        proxy_set_header Host $http_host;
    }
}
but I get:
2018/08/16 19:44:08 [error] 9#0: *1 invalid port in upstream "https://somehost.com:443", client: 172.17.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8888"
2018/08/16 19:47:25 [error] 9#0: *1 invalid URL prefix in "https://somehost.com:443", client: 172.17.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8888"
(The two error lines correspond to whether I set proxy_pass http://$http_myattribute, proxy_pass https://$http_myattribute, or proxy_pass $http_myattribute on its own. Assume X-myattribute always has an http:// or https:// prefix.)
Any suggestion?
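Two details that may explain those errors (a sketch, not tested against this exact setup): nginx exposes an X-myattribute request header as $http_x_myattribute, since header names are lowercased and dashes become underscores; and because the header value already carries the scheme, prefixing it again in proxy_pass yields an invalid upstream URL:
server {
    listen 8888;
    resolver 8.8.8.8;

    location / {
        # the header value already starts with http:// or https://,
        # so pass the variable alone; writing https://$http_x_myattribute
        # doubles the scheme and triggers the "invalid URL prefix" error
        proxy_pass $http_x_myattribute;
    }
}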

upstream server temporarily disabled while connecting to upstream

I have 2 Harbor servers running behind the nginx server below (acting as load balancer and reverse proxy), grouped in an upstream named harbor.
Load-balancer nginx config:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    upstream harbor {
        ip_hash;
        server 10.57.18.120;
        server 10.57.18.236;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://harbor;
        }
    }
}
nginx config in harbor:
worker_processes auto;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    tcp_nodelay on;

    # this is necessary for us to be able to disable request buffering in all cases
    proxy_http_version 1.1;

    upstream registry {
        server registry:5000;
    }

    upstream ui {
        server ui:80;
    }

    server {
        listen 80;

        # disable any limits to avoid HTTP 413 for large image uploads
        client_max_body_size 0;

        location / {
            proxy_pass http://ui/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # When setting up Harbor behind another proxy, such as an Nginx instance,
            # remove the line below if the proxy already has similar settings.
            # proxy_set_header X-Forwarded-Proto $scheme;
            proxy_buffering off;
            proxy_request_buffering off;
        }

        location /v1/ {
            return 404;
        }

        location /v2/ {
            proxy_pass http://registry/v2/;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # When setting up Harbor behind another proxy, such as an Nginx instance,
            # remove the line below if the proxy already has similar settings.
            # proxy_set_header X-Forwarded-Proto $scheme;
            proxy_buffering off;
            proxy_request_buffering off;
        }

        location /service/ {
            proxy_pass http://ui/service/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # When setting up Harbor behind another proxy, such as an Nginx instance,
            # remove the line below if the proxy already has similar settings.
            # proxy_set_header X-Forwarded-Proto $scheme;
            proxy_buffering off;
            proxy_request_buffering off;
        }
    }
}
When both upstream servers are up, everything is OK, but if one upstream goes down, nginx can't route requests to the remaining server. Here are the logs:
2016/11/17 09:05:28 [error] 6#6: *1 connect() failed (113: No route to host) while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://10.57.18.236:80/", host: "10.57.18.236:2000"
2016/11/17 09:05:28 [warn] 6#6: *1 upstream server temporarily disabled while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://10.57.18.236:80/", host: "10.57.18.236:2000"
2016/11/17 09:05:28 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://10.57.18.120:80/", host: "10.57.18.236:2000"
2016/11/17 09:05:28 [warn] 6#6: *1 upstream server temporarily disabled while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://10.57.18.120:80/", host: "10.57.18.236:2000"
10.57.2.138 - - [17/Nov/2016:09:05:28 +0000] "GET / HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36" "-"
2016/11/17 09:05:28 [error] 6#6: *1 no live upstreams while connecting to upstream, client: 10.57.2.138, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://apps/favicon.ico", host: "10.57.18.236:2000", referrer: "http://10.57.18.236:2000/"
10.57.2.138 - - [17/Nov/2016:09:05:28 +0000] "GET /favicon.ico HTTP/1.1" 502 575 "http://10.57.18.236:2000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36" "-"
2016/11/17 09:05:34 [error] 6#6: *7 no live upstreams while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://apps/", host: "10.57.18.236:2000"
10.57.2.138 - - [17/Nov/2016:09:05:34 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/601.6.17 (KHTML, like Gecko) Version/9.1.1 Safari/601.6.17" "-"
It shows "upstream server temporarily disabled while connecting to upstream" and "no live upstreams while connecting to upstream" when upstream1 is down, even though upstream2 is still up.
I still get a "502 Bad Gateway" when I use the domain URL; at the same time, visiting upstream2 directly by IP in the browser works fine.
I tried adding "proxy_next_upstream" in http, in server, and in the location / block: same problem.
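One commonly suggested direction, sketched against the upstream above (values are starting points to tune, not tested here): adjust the failure accounting and retry behaviour so a single failed connect does not leave the group without live peers.
upstream harbor {
    ip_hash;
    # mark a peer as failed only after 3 errors within 15s,
    # and put it back in rotation once those 15s pass
    server 10.57.18.120 max_fails=3 fail_timeout=15s;
    server 10.57.18.236 max_fails=3 fail_timeout=15s;
}

server {
    listen 80;

    location / {
        proxy_pass http://harbor;
        # retry the next peer on connect errors and timeouts,
        # and give up on an unreachable peer quickly
        proxy_next_upstream error timeout;
        proxy_connect_timeout 3s;
    }
}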

Nginx bypassing proxy_pass

In my conf file I have
upstream backend {
    server xx.xx.xx.xx:8080;
    server xx.xx.xx.xx:8080;
}
and then
location /adcode/adcode {
    proxy_set_header HOST $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X_IP $remote_addr;
    proxy_pass http://backend;
}
But sometimes, instead of the request going to the upstream servers, it goes to http://backend/adcode/adcode:
2016/01/10 14:14:46 [error] 18474#0: *149951 no live upstreams while connecting to upstream, client: 208.107.89.45, server: _, request: "GET /adcode/adcode?crid=1744&refUrl=&cbrs=51487486&zz=51 HTTP/1.1", upstream: "http://backend/adcode/adcode?crid=1744&refUrl=&cbrs=51487486&zz=51", host: "show.*****.com", referrer: "http://show.****.com/adcode/adcode?crid=1744&cbrs=50633123&zz=11"
I have no idea why it's doing this. Any suggestions?
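One reading of that log line: nothing is bypassing proxy_pass. When nginx decides there are no live upstreams, the error line prints the upstream group name itself in the URL position, so "http://backend/adcode/adcode" is how the failure is reported, not where the request went. A hedged sketch of relaxing the failure accounting so transient errors do not empty the pool (the addresses stay placeholders):
upstream backend {
    # require more failures inside the window before a peer is
    # marked dead, so one hiccup does not disable both servers
    server xx.xx.xx.xx:8080 max_fails=5 fail_timeout=10s;
    server xx.xx.xx.xx:8080 max_fails=5 fail_timeout=10s;
}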
