Unable to proxy Shiny using Django and Nginx (HTTP Response Code 101)

I am trying to use Nginx and Django to help me serve Shiny applications within my company's internal servers. I am testing everything locally first to make sure all is working properly.
I am following two tutorials:
https://pawamoy.github.io/posts/django-auth-server-for-shiny/#proxying-shiny-requests-to-the-shiny-app
https://testdriven.io/dockerizing-django-with-postgres-gunicorn-and-nginx
I started with the testdriven.io post to set up Django and Nginx, and then combined the ideas with pawamoy's blog post to set up the Shiny part.
The final setup:
Django app is listening on 8000
Shiny on 8100
Nginx on 1337 (as per the testdriven.io tutorial).
My final nginx conf file looks like this:
upstream django_app {
    server web:8000;
}

upstream shinyapp_server {
    server shiny:8100;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name localhost;
    client_max_body_size 100M;

    location / {
        proxy_pass http://django_app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        # proxy_pass http://django_app;
        if (!-f $request_filename) {
            proxy_pass http://django_app;
            break;
        }
    }

    location /static/ {
        alias /home/app/web/staticfiles/;
    }

    location /media/ {
        alias /home/app/web/mediafiles/;
    }

    location ~ /shiny/.+ {
        # auth_request /auth;
        rewrite ^/shiny/(.*)$ /$1 break;
        proxy_pass http://shinyapp_server;
        proxy_redirect http://shinyapp_server/ $scheme://$host/shiny/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 20d;
        proxy_buffering off;
    }

    # location = /auth {
    #     internal;
    #     proxy_pass http://django_app/shiny_auth/;
    #     proxy_pass_request_body off;
    #     proxy_set_header Content-Length "";
    #     proxy_set_header X-Original-URI $request_uri;
    # }
}
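For reference, the commented-out /auth and /shiny_auth/ blocks follow nginx's auth_request pattern: the subrequest allows access on any 2xx response and denies it on 401/403. A minimal sketch of what the Django side of that could look like (the view name comes from the proxied /shiny_auth/ path; the rest is an assumption, not code from the post):

# views.py (sketch) -- endpoint the commented-out auth_request block would call.
# auth_request only looks at the status code: 2xx lets the request through,
# 401/403 blocks it, so the view just reports the session state.
from django.http import HttpResponse

def shiny_auth(request):
    if request.user.is_authenticated:
        return HttpResponse(status=200)
    return HttpResponse(status=403)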
I am building the images with the following compose file:
version: '3.8'

services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn django_app.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
      - shiny
    links:
      - web
      - shiny
  shiny:
    build:
      context: ./shinyapp
      dockerfile: Dockerfile.shiny
    expose:
      - 8100

volumes:
  postgres_data:
  static_volume:
  media_volume:
My views.py file:
import requests
from bs4 import BeautifulSoup
from django.http import JsonResponse
from django.shortcuts import render

def shiny(request):
    return render(request, 'django_app/shiny.html')

def shiny_contents(request):
    response = requests.get('http://shiny:8100')
    soup = BeautifulSoup(response.content, 'html.parser')
    return JsonResponse({'html_contents': str(soup)})
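For completeness, these two views need URL routes; a minimal sketch of the wiring (the paths and names here are assumptions, not taken from the post):

# urls.py (sketch) -- hypothetical routing for the two views above.
from django.urls import path
from . import views

urlpatterns = [
    path('shiny/', views.shiny, name='shiny'),
    path('shiny_contents/', views.shiny_contents, name='shiny_contents'),
]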
All works well up to the point of serving the Shiny contents at localhost:1337/shiny; however, when trying to use the proxied requests at localhost/shiny, I get the following message in the logs:
172.20.0.1 - - [08/Feb/2023:15:42:52 +0000] "GET /shiny/websocket/ HTTP/1.1" 101 31668 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15" "-"
This is where I am stuck:
The Shiny contents render nicely, but I can't seem to get the proxying set up at the correct address, i.e. I am not sure whether the proxy is working as expected.
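If it helps, here is a rough way to check from the client side whether the websocket upgrade actually completes through nginx; it assumes the third-party websocket-client package and uses the /shiny/websocket/ path from the access log above:

# check_ws.py (sketch) -- attempt a websocket handshake through the nginx proxy.
# A successful create_connection() means the 101 upgrade completed end to end.
from websocket import create_connection  # pip install websocket-client

ws = create_connection("ws://localhost:1337/shiny/websocket/", timeout=5)
print("connected:", ws.connected)
ws.close()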
Am I mapping ports and servers correctly?
Do I need to do something with /etc/nginx/sites-available?
How can I tell whether this is a firewall issue?
If I manage to set this up correctly, will putting it on a remote server let me produce an internal link for people to interact with the apps?
Thanks a lot for any insights or comments.
Ilan
There are other posts with similar issues, whose solutions I tried replicating on my end without any luck.

Related

failed: Connection closed before receiving a handshake response

That's a very weird error, because I have tried configuring it multiple times and still can't get my websocket to work properly, and I don't think the problem is on the client side.
So I have two guesses:
I might have improperly configured the nginx.conf file.
I might have improperly configured something related to Docker, for example entrypoint.sh.
So far I have tried editing both files and also tried different variations of configuring the routes. It could be some dumb mistake, but I have spent a long time on this, so I really appreciate any help or advice.
Here is asgi.py:
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

from django.core.asgi import get_asgi_application
from channels.routing import ProtocolTypeRouter, URLRouter
from service import routing
from channels.auth import AuthMiddlewareStack
from django.core.asgi import get_asgi_application

application_asgi = ProtocolTypeRouter({
    'http': get_asgi_application(),
    'websocket': AuthMiddlewareStack(
        URLRouter(
            routing.websocket_urlpatterns
        )
    ),
})

application = get_asgi_application()
routing.py:
from django.urls import re_path, path
from .consumers import EventConsumer

websocket_urlpatterns = [
    path('^api/wsEvents/', EventConsumer.as_asgi())
]
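(The post does not show consumers.py; for context, a minimal EventConsumer might look like the sketch below. This is an assumption, not the actual consumer.)

# consumers.py (sketch) -- a bare-bones consumer so the routing above has context.
from channels.generic.websocket import AsyncWebsocketConsumer

class EventConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # Accepting here is what lets the client see the 101 upgrade.
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        # Echo incoming messages back, just to prove the socket works.
        await self.send(text_data=text_data)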
my nginx.conf:
daemon off;

upstream django {
    server django_gunicorn:8000;
}

upstream websocket {
    server django_asgi:8080;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

http {
    listen 8000;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /static/ {
        autoindex on;
        alias ./backend/service/static:/backend/static; # and here also was just /static?
    }

    location /api/wsEvents/ {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
docker-compose:
services:
  django_asgi:
    build:
      context: .
    command: daphne config.asgi:application --port 8080 --bind 0.0.0.0
    volumes:
      - .:/app/backend
    environment:
      - .env
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
  redis:
    restart: always
    image: redis
    ports:
      - 6379:6379
    volumes:
      - redisdata:/data
  django_gunicorn:
    volumes:
      - static:/app/static  ## here was just /static/ | also in default.conf same
    env_file:
      - .env
    build:
      context: .
    ports:
      - 8000:8000
    links:
      - redis
  nginx:
    build: ./nginx
    volumes:
      - static:/app/static/  # here was just static:/static/
    depends_on:
      - django_gunicorn
      - django_asgi
    ports:
      - "80:80"
Any advice or help is appreciated
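One way to narrow down where the handshake dies is to attempt the upgrade from a small client script; this sketch assumes the third-party websocket-client package and the /api/wsEvents/ path from routing.py, with nginx published on port 80:

# ws_check.py (sketch) -- try the websocket handshake through nginx on port 80.
from websocket import create_connection  # pip install websocket-client

try:
    ws = create_connection("ws://localhost/api/wsEvents/", timeout=5)
    print("handshake OK:", ws.connected)
    ws.close()
except Exception as err:
    # "Connection closed before receiving a handshake response" shows up here.
    print("handshake failed:", err)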

docker-nginx with docker-gen doesn't catch any of the declared subdomains

I set up docker-nginx with docker-gen in a docker-compose file:
version: '2'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
      - ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
      - ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
      - ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:ro
  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: unless-stopped
    volumes:
      - ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
      - ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
      - ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
      - ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: unless-stopped
    volumes:
      - ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
      - ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
      - ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
      - ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
networks:
  default:
    external:
      name: nginx-proxy
Everything works fine, and a default.conf file is generated based on my other containers; here it is:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    ''      $scheme;
}

# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    ''      $server_port;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    ''      close;
}

# Apply fix for very long server names
server_names_hash_bucket_size 128;

# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;

# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https   on;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';

access_log off;

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;

# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";

server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}

# bnbkeeper.thibautduchene.fr
upstream bnbkeeper.thibautduchene.fr {
    ## Can be connect with "nginx-proxy" network
    # bnbkeeper
    server 172.20.0.12:8080;
}
server {
    server_name bnbkeeper.thibautduchene.fr;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://bnbkeeper.thibautduchene.fr;
    }
}

# gags.thibautduchene.fr
upstream gags.thibautduchene.fr {
    ## Can be connect with "nginx-proxy" network
    # gogs
    server 172.20.0.7:3000;
}
server {
    server_name gags.thibautduchene.fr;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://gags.thibautduchene.fr;
    }
}

# portainer.thibautduchene.fr
upstream portainer.thibautduchene.fr {
    ## Can be connect with "nginx-proxy" network
    # portainer
    server 172.20.0.9:9000;
}
server {
    server_name portainer.thibautduchene.fr;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://portainer.thibautduchene.fr;
    }
}
However, when I reach any of these proxied addresses, the server doesn't exist and nginx doesn't even catch the request...
It looks like nginx is not even aware of my subdomains.
OK, for those who are as silly as me: don't forget to add the subdomain records at your DNS provider; nginx doesn't handle that by itself.
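A quick sanity check for this kind of problem is to confirm the subdomains resolve at all before blaming nginx; the hostnames below are simply the ones from the generated config:

# dns_check.py (sketch) -- verify that the vhost names actually resolve.
import socket

hosts = (
    "bnbkeeper.thibautduchene.fr",
    "gags.thibautduchene.fr",
    "portainer.thibautduchene.fr",
)
for host in hosts:
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as err:
        print(host, "-> does not resolve:", err)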

Docker + Nginx: Getting proxy_pass to work

I'm having a problem trying to get Nginx to proxy a path to another server that is also running in Docker.
To illustrate, I'm using Nexus server as an example.
This is my first attempt...
docker-compose.yml:-
version: '2'
services:
  nexus:
    image: "sonatype/nexus3"
    ports:
      - "8081:8081"
    volumes:
      - ./nexus:/nexus-data
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
nginx.conf:-
worker_processes 4;

events { worker_connections 1024; }

http {
    server {
        listen 80;

        location /nexus/ {
            proxy_pass http://localhost:8081/;
        }
    }
}
When I hit http://localhost/nexus/, I get 502 Bad Gateway with the following log:-
nginx_1 | 2017/05/29 02:20:50 [error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /nexus/ HTTP/1.1", upstream: "http://[::1]:8081/", host: "localhost"
nginx_1 | 2017/05/29 02:20:50 [error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /nexus/ HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "localhost"
nginx_1 | 172.18.0.1 - - [29/May/2017:02:20:50 +0000] "GET /nexus/ HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
In my second attempt...
docker-compose.yml - I added links to the nginx service:-
version: '2'
services:
  nexus:
    image: "sonatype/nexus3"
    ports:
      - "8081:8081"
    volumes:
      - ./nexus:/nexus-data
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    links:
      - nexus:nexus
nginx.conf... Instead of using http://localhost:8081/, I use http://nexus:8081/:-
worker_processes 4;

events { worker_connections 1024; }

http {
    server {
        listen 80;

        location /nexus/ {
            proxy_pass http://nexus:8081/;
        }
    }
}
Now, when I hit http://localhost/nexus/, it gets proxied properly but the web content is partially rendered. When inspecting the HTML source code of that page, the javascript, stylesheet and image links are pointing to http://nexus:8081/[path]... hence, 404.
What should I change to get this to work properly?
Thank you very much.
The following additional options are what I have used
http {
    server {
        listen 80;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            server_name_in_redirect on;
            proxy_pass http://nexus:8081;
        }

        location /nexus/ {
            proxy_pass http://nexus:8081/;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            server_name_in_redirect on;
        }
    }
}
My solution is to include the extra handler for the '/' path in the nginx config, because the Nexus app makes requests to '/' for its resources, which would otherwise not work.
However, this is not ideal and will not work with an nginx configuration serving multiple apps.
The docs cover this configuration and indicate that you need to configure Nexus to serve on /nexus. That would let you configure nginx as follows (from the docs), minus the hack above.
location /nexus {
    proxy_pass http://localhost:8081/nexus;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
I would recommend using that configuration.
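As a rough smoke test of either variant, you could fetch the proxied page and look for asset URLs that still point at the internal hostname, which is the symptom described in the question; this sketch assumes the requests package and the hostnames used above:

# nexus_proxy_check.py (sketch) -- fetch the proxied page and inspect asset URLs.
import requests

resp = requests.get("http://localhost/nexus/")
resp.raise_for_status()
if "http://nexus:8081" in resp.text:
    print("Assets still reference the internal upstream; the context path is not set.")
else:
    print("Assets appear to be served relative to the proxied path.")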

serving multiple docker microservices behind nginx proxy

I'm trying to figure out how to dynamically proxy several microservices behind a single nginx proxy via Docker. I have been able to pull it off with a single app, but I would like to dynamically add microservices, and I'd like to do this without restarting nginx and disrupting users.
Is this possible, or should I create a config file for each microservice? I've included samples below:
localhost = simple welcome page
localhost/service1 = microservice
localhost/service2 = microservice
localhost/serviceN = microservice
docker-compose.yml
---
version: '2'
services:
  app:
    build: app
  microservice1:
    image: registry.local:4567/microservice1:latest
  microservice2:
    image: registry.local:4567/microservice2:latest
  proxy:
    build: proxy
    ports:
      - "80:80"
proxy.conf
server {
    listen 80;
    resolver 127.0.0.11 valid=5s ipv6=off;
    set $upstream "http://app";

    location / {
        proxy_pass $upstream$request_uri;
    }
}
I was also facing the same issue: I had microservices written in Flask, and I had to deploy them on a single EC2 instance as a staging environment.
I had the directory structure as below:
SampleProject
 |_ microservices
 |    |_ A
 |    |   |- docker-compose.yml
 |    |   |- Dockerfile
 |    |_ B
 |        |- docker-compose.yml
 |        |- Dockerfile
 |_ docker
 |    |_ web
 |        |- Dockerfile
 |        |_ nginx
 |            |- nginx.conf
 |- docker-compose.yml (Nginx)
For Nginx the docker-compose.yml is given below:
version: '3.7'
services:
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    ports:
      - "80:80"
networks:
  default:
    external:
      name: microservices
And the configuration for Nginx is given below:
upstream files_to_text {
    server microserviceA:5000;
}

upstream text_cleaning {
    server microserviceB:5050;
}

server {
    listen 80;

    location /microserviceA {
        proxy_pass http://files_to_text;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /microserviceB {
        proxy_pass http://text_cleaning;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
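Because the proxy_pass directives above have no URI part, nginx forwards the original request path unchanged, so each Flask service has to serve routes under its own /microserviceA or /microserviceB prefix. A minimal sketch of one such service (the route and port are assumptions based on the upstream definitions):

# microservice_a.py (sketch) -- a Flask service that expects the proxied prefix.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/microserviceA/health")
def health():
    # Reached as http://<proxy-host>/microserviceA/health through nginx.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)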
To enforce SSL I used AWS Certificate Manager along with an Application Load Balancer.
There are three steps:
Create an Application Load Balancer with default settings; under "Register targets", create a target by picking your EC2 instance with the HTTP protocol.
Monitor the health of the target group. If it is healthy, edit the listeners of the Application Load Balancer: remove the default HTTP listener and add an HTTPS listener. While adding the HTTPS listener, set the default action to "Forward to", select your target group, and under "Default SSL certificate" select the certificate you created with AWS Certificate Manager.
The final step is to add the DNS name of the Application Load Balancer to the DNS settings of the registrar where you purchased the domains.
Create a config file for each microservice in /etc/nginx/sites-available/ with a symlink in /etc/nginx/sites-enabled/.
A sample proxy.conf for each, where you substitute app/microservice1/microservice2 for $MICRO_SERVICE:
upstream REPLACEME_SERVICENAME {
    server $MICRO_SERVICE:PORT fail_timeout=0;
}

server {
    listen 80;
    server_name REPLACEME_SITENAME.REPLACEME_DOMAIN;

    location / {
        proxy_pass http://REPLACEME_SERVICENAME;
    }
}
Force-SSL:
upstream REPLACEME_SITENAME.REPLACEME_DOMAIN {
    server $MICRO_SERVICE fail_timeout=0;
}

server {
    # We only redirect from port 80 to 443
    # to enforce encryption
    listen 80;
    server_name REPLACEME_SITENAME.REPLACEME_DOMAIN;
    return 301 https://REPLACEME_SITENAME.REPLACEME_DOMAIN$request_uri;
}

server {
    listen 443 ssl http2;
    server_name REPLACEME_SITENAME.REPLACEME_DOMAIN;

    # If you require basic auth you can use these lines as an example
    #auth_basic "Restricted!";
    #auth_basic_user_file /etc/nginx/private/httplock;

    # SSL
    ssl_certificate /etc/letsencrypt/live/REPLACEME_SITENAME.REPLACEME_DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/REPLACEME_SITENAME.REPLACEME_DOMAIN/privkey.pem;

    proxy_connect_timeout 75s;
    proxy_send_timeout 75s;
    proxy_read_timeout 75s;
    proxy_http_version 1.1;
    send_timeout 75s;

    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH";
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Host $remote_addr;
        proxy_pass http://REPLACEME_SITENAME.REPLACEME_DOMAIN;
    }
}
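If you end up with many near-identical files, a small script can stamp them out from the plain-HTTP template above; the service list, ports, domain, and output path here are purely illustrative assumptions:

# gen_vhosts.py (sketch) -- fill the REPLACEME_* placeholders per microservice.
from pathlib import Path

TEMPLATE = """\
upstream {name} {{
    server {name}:{port} fail_timeout=0;
}}

server {{
    listen 80;
    server_name {name}.example.com;

    location / {{
        proxy_pass http://{name};
    }}
}}
"""

SERVICES = {"microservice1": 5000, "microservice2": 5050}  # hypothetical ports

for name, port in SERVICES.items():
    conf = TEMPLATE.format(name=name, port=port)
    Path(f"/etc/nginx/sites-available/{name}.conf").write_text(conf)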
I also have a repo where I build a tiny nginx service for a Raspberry Pi in my closet that serves everything in my house to the WAN:
https://github.com/joshuacox/local-nginx/
There's a Makefile to help with creating new services as well.

nginx chaining proxy_pass

I am trying to build a reverse proxy service with nginx that would first validate a session cookie and proxy to the requested backend resource if validation was successful.
I have this config ...
# upstream services
upstream backend_production {
    #ip_hash;
    server cumulonimbus.foo.com:81;
}

map $mycookie $mybackend {
    _SESSION_COOKIE http://backend_production/session.php;
}

# session server
server {
    listen *:80;
    server_name devcumulonimbus cumulonimbus.foo.com;

    error_log /var/log/nginx/session.error.log;
    access_log /var/log/nginx/session.access.log;
    #access_log off; # turn access_log off for speed

    root /var/www/;

    location = /favicon.ico {
        return 204;
        access_log off;
        log_not_found off;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_cache off;

        # does cookie exist?
        if ($http_cookie ~* "_SESSION_COOKIE") {
            set $mycookie '_SESSION_COOKIE';
            proxy_pass $mybackend;
            error_page 400 403 404 500 = /denied;
            post_action /reader.php;
            break;
        }

        # no cookie
        rewrite ^(.*)$ http://cumulonimbus.foo.com:81/login.php;
    }

    location = /reader.php {
        internal;
        proxy_pass http://backend_production/reader.php;
    }

    location /denied/ {
        internal;
        rewrite ^(.*)$ http://cumulonimbus.foo.com:81/login.php;
    }
}
I set _SESSION_COOKIE in login.php, update the cookie value in session.php, and pop a page in reader.php. The problem is that I do not see the page emitted by reader.php even though syslog tells me it was hit (only the session.php page is displayed). nginx is the front end, Apache is the backend running the PHP services (this environment is used for prototyping only).
==> /var/log/apache2/error.log <==
session_manager[4350]: CONNECTED TO SESSION MANAGER
==> /var/log/syslog <==
Jul 12 19:41:06 devcumulonimbus session_manager[4350]: CONNECTED TO SESSION MANAGER
==> /var/log/apache2/access.log <==
10.10.11.113 - - [12/Jul/2011:19:41:06 -0700] "GET /session.php HTTP/1.0" 200 454 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:5.0) Gecko/20100101 Firefox/5.0"
==> /var/log/apache2/error.log <==
web_reader[4352]: CONNECTED TO WEB READER
==> /var/log/syslog <==
Jul 12 19:41:06 devcumulonimbus web_reader[4352]: CONNECTED TO WEB READER
If I hit memcache first I can see the page.
I would also like to be able to capture the response of the first request (the first proxy_pass). Should I use the nginx Lua module?
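To see which backend page actually comes back to the browser, you could replay the cookie flow from a small client; this sketch assumes the requests package and reuses the hostnames from the config above:

# cookie_flow_check.py (sketch) -- log in on the backend, then go through nginx.
import requests

s = requests.Session()
s.get("http://cumulonimbus.foo.com:81/login.php")  # backend sets _SESSION_COOKIE
resp = s.get("http://cumulonimbus.foo.com/")       # proxied through nginx on port 80
print(resp.status_code)
print(resp.text[:200])  # shows whether session.php or reader.php content came back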
