Can Shiny determine the user who logged in to the nginx reverse proxy?

I've successfully implemented an nginx reverse proxy for my Shiny Server in order to have SSL and user authentication. However, there is still a gap I can't figure out: is there a way for my Shiny app to determine which user is actually logged in?
Here's my /etc/nginx/sites-available/default
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name myserver.com;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/shiny.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://localhost:3838;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:3838 https://myserver.com;

        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;
    }
}
With the last two lines of my location block I expect the app to receive a header with the user name. I found that tip here. I found this, which allows me to see my header information, but none of the headers I see contains my user name.
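For reference, after auth_basic succeeds nginx itself knows the authenticated user as the $remote_user variable, so one hedged variation is to forward it as an explicit custom header (X-Remote-User is an assumed name, not anything Shiny looks for by default); as the edit below explains, open-source Shiny Server still may not pass it through to the app:

    # Sketch: forward the basic-auth username upstream as a custom header.
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_set_header X-Remote-User $remote_user;  # $remote_user is set once auth_basic succeeds
        proxy_pass http://localhost:3838;
    }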
Edit:
With Bert Neef's quotation, I see why the above didn't work. However, the server does have access to the HTTP_SEC_WEBSOCKET_KEY header, which is unique across sessions. It seems that if we can get nginx to log that value, then the server can look it up to match a session to an actual user. That being said, I don't know if that is feasible, and I don't know how to get nginx to log that value.
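For what it's worth, the usual nginx mechanism here is that any request header is available as an $http_<name> variable (lowercased, with dashes replaced by underscores), so Sec-WebSocket-Key becomes $http_sec_websocket_key and could in principle be written into a custom log format (a sketch only; as Update 2 below notes, this did not yield the value in practice here):

    # Sketch: attempt to log the Sec-WebSocket-Key request header.
    log_format shiny_ws '$remote_addr - $remote_user [$time_local] '
                        '"$request" ws_key=$http_sec_websocket_key';

    server {
        # ...
        access_log /var/log/nginx/shiny.log shiny_ws;
    }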

Based on the Shiny Docs, this is a Shiny Server Professional-only feature, and you need to use the whitelist_headers directive to get those headers:
4.9 Proxied Headers
Typically, HTTP headers sent to Shiny Server will not be forwarded to the underlying Shiny application. However, Shiny Server Professional is able to forward specified headers into the Shiny application using the whitelist_headers configuration directive, which can be set globally or for a particular server or location.
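Assuming a Shiny Server Professional install, usage would presumably look something like the following in /etc/shiny-server/shiny-server.conf (a sketch based only on the quoted docs; check the Pro documentation for the exact argument syntax):

    server {
      listen 3838;
      location / {
        site_dir /srv/shiny-server;
        # Forward this request header into the Shiny app (Pro-only directive)
        whitelist_headers Authorization;
      }
    }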
Update: I just tested the whitelist_headers option on a non-Pro Shiny Server install, and I can't get the custom headers to show. I did verify the headers were sent on by nginx, by using netcat to show me the incoming data (nc -l 8080 and a quick change to proxy_pass in the nginx.conf file).
Update 2: I can't get nginx to log the HTTP_SEC_WEBSOCKET_KEY header (the Authorization header is logged after specifying it in the log specification), and I can't see it in the traffic between nginx and Shiny Server. I think it boils down to either getting Shiny Server Professional or modifying the Shiny Server source code to pass the Authorization header to the application.

Related

How to block nginx web site access if browsers have no SSL certificate

I am a newbie and I installed JupyterHub with an nginx reverse proxy on my Ubuntu 18.04 server. I built my own root CA and self-signed certificate with OpenSSL. HTTPS connections work very well when my rootCA is installed on my other computers. I want to block access for computers that don't have my rootCA.
The file /etc/nginx/nginx.conf is untouched, and my config file /etc/nginx/sites-available/jupyter.conf is:
# Top-level http config for websocket headers:
# if Upgrade is defined, Connection = upgrade; if Upgrade is empty, Connection = close.
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
    listen 80;
    server_name 192.168.4.70 mlserver.net localhost;

    # Tell all requests to port 80 to be 302 redirected to HTTPS
    return 302 https://$host$request_uri;
}

# HTTPS server to handle JupyterHub
server {
    listen 443;
    ssl on;
    server_name 192.168.4.70 mlserver.net localhost;

    ssl_certificate /etc/ssl/certs/mlserver.net.crt;
    ssl_certificate_key /etc/ssl/private/mlserver.net.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    #ssl_stapling on;

    # Managing literal requests to the JupyterHub front end
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;

        proxy_buffering off;
    }
}
How can I edit this file to block access for computers that don't have the certificate? Which nginx directive should I add?
Thanks.
I want to block access for computers that don't have my rootCA.
This is not possible. The server has no information about whether the client has successfully validated the server certificate (i.e. clients which have the rootCA) or whether a client simply skipped certificate validation (clients which don't have the rootCA).
One could try to add an HSTS header so that browsers will not simply allow users to ignore certificate problems. But this can also be bypassed on the client side without the server noticing; it just makes it a bit harder.
If you want to control who can access the notebook, you would need proper authentication of the clients instead. Knowledge of the rootCA is not client authentication.
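If authenticating clients at the nginx layer is acceptable, one standard option is TLS client certificates, where each allowed computer gets a certificate signed by a CA you control (a sketch; the CA file path is an assumption):

    server {
        listen 443 ssl;
        # ... existing ssl_certificate / proxy settings ...

        # CA used to sign the client certificates (hypothetical path)
        ssl_client_certificate /etc/ssl/certs/client-ca.crt;
        ssl_verify_client on;  # the TLS handshake fails for clients without a valid cert
    }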

Can't Connect to Meteor Web Socket through React Native

I am currently hosting a bundled Meteor app on DigitalOcean with nginx, using this tutorial.
I am using the react-native-meteor package in React Native to connect to this server. When the server is hosted on localhost, Meteor.connect('ws://192.168.0.2:3000/websocket') works.
Also, when the app is running on DigitalOcean, I am able to connect to the Meteor server's webpage at https://XXX.XXX.X.XX after bypassing the security warning, and to the websocket at wss://XXX.XXX.X.XX/websocket.
However, running Meteor.connect('wss://XXX.XXX.X.XX/websocket') or Meteor.connect('ws://XXX.XXX.X.XX/websocket') does not work.
Here is the nginx config:
server_tokens off; # for security-by-obscurity: stop displaying nginx version

# this section is needed to proxy web-socket connections
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

# HTTP
server {
    listen 80 default_server; # if this is not a default server, remove "default_server"
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html; # root is irrelevant
    index index.html index.htm; # this is also irrelevant

    server_name XXX.XXX.X.X; # the domain on which we want to host the application. Since we set "default_server" previously, nginx will answer all hosts anyway.

    # redirect non-SSL to SSL
    location / {
        rewrite ^ https://$server_name$request_uri? permanent;
    }
}

# HTTPS server
server {
    listen 443 ssl spdy; # we enable SPDY here
    server_name XXX.XXX.X.X; # this domain must match Common Name (CN) in the SSL certificate

    root html; # irrelevant
    index index.html; # irrelevant

    ssl_certificate /etc/nginx/ssl/budget.pem; # full path to SSL certificate and CA certificate concatenated together
    ssl_certificate_key /etc/nginx/ssl/budget.key; # full path to SSL key

    # performance enhancement for SSL
    ssl_stapling on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;

    # safety enhancement to SSL: make sure we actually use a safe cipher
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';

    # config to enable HSTS (HTTP Strict Transport Security)
    # https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security
    # to avoid SSL stripping https://en.wikipedia.org/wiki/SSL_stripping#SSL_stripping
    add_header Strict-Transport-Security "max-age=31536000;";

    # If your application is not compatible with IE <= 10, this will redirect visitors to a page advising a browser update.
    # This works because IE 11 does not present itself as MSIE anymore.
    if ($http_user_agent ~ "MSIE" ) {
        return 303 https://browser-update.org/update.html;
    }

    # pass all requests to Meteor
    location / {
        proxy_pass http://0.0.0.0:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # allow websockets
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr; # preserve client IP

        # This setting allows the browser to cache the application in a way compatible with Meteor:
        # on every application update the name of each CSS and JS file is different, so they can be cached indefinitely (here: 30 days).
        # The root path (/) MUST NOT be cached.
        if ($uri != '/') {
            expires 30d;
        }
    }
}
Any help is appreciated!
You should update your question to show the error message (open your browser's JavaScript console, then refresh your link and recreate the error condition).
... your nginx config must include these settings:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
as in:
location / {
    proxy_pass http://GKE_NGINX_NODEJS_ENDUSER_SERVER_IP:3000/;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;

    # Include support for web sockets:
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
In addition to the above, make sure your server block uses your domain name:
server {
    server_name example.com;
and not the IP of the server, as in:
    server_name XXX.XXX.X.X; # this domain must match Common Name (CN) in the SSL certificate
There are many moving parts ... make sure you have defined the environment variable METEOR_SETTINGS prior to launching your app when you execute node:
METEOR_SETTINGS={
    "public": {
        "rootUrl": "https://example.com",
        < ... more here ... >
    },
    "cordova": {
        "localhost": "http://localhost:12416"
    },
    < ... more here ... >
}

WSO2 APIM: Subdomains for different contexts

We have WSO2 API Manager 1.10.0 deployed and working. However, we are trying to figure out whether it is possible to have multiple subdomains for it.
For example:
store.domain.com
publisher.domain.com
carbon.domain.com
Is this at all possible? We've seen https://docs.wso2.com/display/Carbon442/Adding+a+Custom+Proxy+Path, but that is for different applications; we want to do this only with the API Manager.
In front of the API Manager, we are using nginx as a reverse proxy. Below is a snippet from the nginx config to help illustrate the problem.
server {
    listen 80;
    server_name store.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    ssl on;
    ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH";#:AES128+EDH";
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    add_header Strict-Transport-Security "max-age=63072000";

    server_name store.domain.com;
    ssl_certificate /etc/nginx/ssl/domain.com/self-ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/domain.com/self-ssl.key;

    access_log /var/log/nginx/store.log;
    underscores_in_headers on;

    location / {
        proxy_pass http://wso2server:9443/store/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
When attempting to access with HTTP (for the store context), all works fine, but as soon as we switch this over to HTTPS it fails with the following error in nginx: upstream prematurely closed connection while reading response header from upstream. However, we see nothing in the API Manager logs.
Thanks in advance!
Best Regards
You can solve your issue by one of the following methods.
Add proxy_redirect configuration to nginx, so that nginx rewrites all the URLs to the proper URL. Refer to the following config segment:
proxy_redirect http://wso2server/ http://store.domain.com/;
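For context, a sketch of how that directive might sit inside the question's existing location block, replacing the proxy_redirect off; line (server and domain names are taken from the question; placement is a suggestion, not tested against this exact setup):

    location / {
        proxy_pass http://wso2server:9443/store/;
        # Rewrite Location/Refresh response headers from the backend to the public domain
        proxy_redirect http://wso2server/ http://store.domain.com/;
        proxy_set_header Host $host;
    }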
You can also achieve the same by adding reverse proxy configuration in the API Manager store. To do this, open "repository/deployment/server/jaggeryapps/store/site/conf/site.json" and see the following config section:
"reverseProxy" : {
"enabled" : false, // values true , false , "auto" - will look for X-Forwarded-* headers
"host" : "sample.proxydomain.com", // If reverse proxy do not have a domain name use IP
"context":"",
//"regContext":"" // Use only if different path is used for registry
},

How to create a Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with the following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several Django applications served with Gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we are the owner of the domain name, and this is done by validating that a file is accessible from the server name (basically this consists of nginx being able to serve a static file over port 80).
So here occurs my first problem: to serve the static file needed by letsencrypt, I need to have nginx started. The SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...
So, a simple solution could be to have two nginx containers: one listening only on port 80 that will be started first, then letsencrypt, then we start a second nginx container listening on port 443 (the port-80 piece is sketched below).
-> This kind of looks like a waste of resources in my opinion, but why not.
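For the port-80 container, the challenge-serving piece might look like this (a sketch: /.well-known/acme-challenge/ is the standard HTTP-01 path, but the webroot and server_name here are assumptions for illustration):

    # Minimal port-80 server for ACME HTTP-01 validation
    server {
        listen 80;
        server_name docker.thedivernetwork.net;

        location /.well-known/acme-challenge/ {
            root /var/www/letsencrypt;  # hypothetical webroot the letsencrypt container writes into
        }
    }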
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host; # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass that redirects toward the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file in the same folder, django.conf:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under 3 conditions:
secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
registry service is started
django service is started
The problem is that the django image is pulled from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server names, so nginx is able to serve them both.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start docker registry service
I start Nginx with only the registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there were a way to make nginx start while ignoring failing configuration, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if a Pod is not started, nginx will find the Service when looking it up, as long as the Service is started, because the Service has an IP assigned.
So you start the Services, then start nginx, and then whatever Pods you want, in the order you want.
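A minimal sketch of such a Service (the name, selector label, and port are assumptions chosen to match the django.conf upstream above, so that nginx's upstream host django resolves on port 5000):

    apiVersion: v1
    kind: Service
    metadata:
      name: django            # nginx's "server django:5000" resolves via this name
    spec:
      selector:
        app: django           # hypothetical label; must match the Django Pods
      ports:
        - port: 5000          # port the Service exposes to nginx
          targetPort: 5000    # port the Gunicorn container listens on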

Deploying Meteor to production with Meteor-Up, SSL and NGINX

I'm having difficulty deploying my Meteor app ("myApp" below) into production using meteor-up with HTTPS and nginx as a proxy. In particular, I think I am having trouble configuring the correct ports and/or paths.
The deployment has worked in most respects. It is running on a DigitalOcean droplet with a MongoHQ (now Compose.io) database. My mup setup, mup reconfig (run now many times on my mup.json file) and mup deploy commands with meteor-up all report no errors. If I ssh into my Ubuntu environment on DigitalOcean and run status myApp, it reports myApp start/running, process 10049, and when I check my MongoHQ database, I can see the expected collections for myApp were created and seeded. I think on this basis that the app is running properly.
My problem is that I cannot reach it when visiting the site, and having no experience with nginx servers, I cannot tell if I am doing something very basic and wrong in setting up the ports and forwarding.
I have reproduced the relevant parts of my NGINX config file and mup.json file below.
The behavior I expected with the setup below is that if my Meteor app listens on port 3000 in mup.json, the app should appear when I visit the site. In fact, if I set mup.json's env.PORT to 3000, when visiting the site my browser tells me there is a redirect loop. If I change mup's env.PORT to 80, or leave env.PORT out entirely, I receive a 502 Bad Gateway message. This last part is to be expected, because myApp should be listening on localhost:3000, and I wouldn't expect to find anything anywhere else.
All help is MUCH appreciated.
mup.json (in relevant part; let me know if more needs to be shown)
"env": {
"PORT": 3000,
"NODE_ENV": "production",
"ROOT_URL": "http://myApp.com",
"MONGO_URL": // working ok, not reproduced here,
"MONGO_OPLOG_URL": // working ok I think,
"MAIL_URL": // working ok
}
NGINX
server_tokens off;

# According to a DigitalOcean guide I followed
# (https://www.digitalocean.com/community/tutorials/how-to-deploy-a-meteor-js-application-on-ubuntu-14-04-with-nginx),
# this section is needed to proxy web-socket connections
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

# HTTP
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name myApp.com;

    # redirect non-SSL to SSL
    location / {
        rewrite ^ https://$server_name$request_uri? permanent;
    }
}

# HTTPS
server {
    listen 443 ssl spdy;

    # this domain must match Common Name (CN) in the SSL certificate
    server_name myApp.com;

    root html;
    index index.html index.htm;

    ssl_certificate /etc/nginx/ssl/tempcert.crt;
    ssl_certificate_key /etc/nginx/ssl/tempcert.key;
    ssl_stapling on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'long string I didn't reproduce here';

    add_header Strict-Transport-Security "max-age=31536000;";

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Also note that the SSL certificates are configured and work fine, so I think it is something with how the ports, paths and forwarding are configured. I don't know where the redirect loop is coming from.
For anyone coming across this in the future, I was able to solve things by removing the force-ssl package from my bundled Meteor app. Apparently force-ssl and an nginx proxy are either redundant or, if used together, can cause too many redirects. This was not well documented in the materials I was able to locate.
If there is a configuration that supports using force-ssl together with a proxy that serves some purpose and is preferable to removing the package altogether, please post as I would be interested to know. Thanks.
I believe you can keep the force-ssl package as long as you add the X-Forwarded-Proto header to your nginx config.
Example:
proxy_set_header X-Forwarded-Proto https;
Additionally, make sure you have X-Forwarded-For set as well, though that's already in the example you posted.
Source
As the documentation of the force-ssl package says, you have to set the x-forwarded-proto header to https.
So your location block in the nginx configuration will look like:
location / {
    # your own config...
    proxy_set_header X-Forwarded-Proto https;
}
I'm running Meteor behind an nginx proxy. I got the error about too many redirects after installing force-ssl.
What worked was to remove force-ssl and then add the following lines to the location block in my nginx config:
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Nginx-Proxy true;
Works perfectly now.
