Reverse Proxy to VPC-Based AWS Elasticsearch Domain Without Bypassing AWS Cognito - nginx

I apologize in advance - I'm extremely new to Nginx.
I have two VPC-based AWS Elasticsearch domains, which we'll call dev and prod. I want both domains to be inaccessible from the open internet, but available to certain networks outside the VPC. To that end, I set them up as VPC-based Elasticsearch domains and planned to use a reverse proxy accessible only from the networks I choose. I've set up the dev cluster, which has no authentication, using an NGINX reverse proxy with the following config:
events {
}

http {
    server {
        listen 80;
        server_name kibana-dev.[domain name];

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
            proxy_pass https://[vpc id].[vpc region].es.amazonaws.com/_plugin/kibana/;
            proxy_redirect https://[vpc id].[vpc region].es.amazonaws.com/_plugin/kibana/ https://kibana-dev.[domain name]/;
        }

        location ~ (/app/kibana|/app/timelion|/bundles|/es_admin|/plugins|/api|/ui|/elasticsearch|/app/opendistro-alerting) {
            proxy_pass https://[vpc id].[vpc region].es.amazonaws.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $http_host;
        }
    }
}
This works fine.
For the prod domain, however, I'm running into an issue. I want all users, even those that use the proxy, to have to authenticate with AWS Cognito (so I don't just want to, for example, create an access policy with an IP exception for the proxy's IP address, as that bypasses Cognito).
I have used a similar NGINX config for my "prod" Elasticsearch instance, but with no luck. The Cognito login page redirects to the VPC-based URL after authentication. I've tried manually adding my proxy's URL to the Cognito app's Callback URLs, but it still redirects by default to the VPC-based URL. I've also tried manually changing the redirect URI in the Cognito URL to refer to my proxy, but I've found that after authenticating I'm redirected to the Cognito login page again - perhaps a header or something isn't getting through?
How can I (if at all) get this running in Nginx, so that users can access the "prod" Elasticsearch domain while still being required to authenticate with AWS Cognito?
Thank you!

Doh! I should have read the documentation more carefully. AWS provides an example Nginx conf file for a Kibana proxy with Cognito:
server {
    listen 443;
    server_name $host;
    rewrite ^/$ https://$host/_plugin/kibana redirect;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;

    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location /_plugin/kibana {
        # Forward requests to Kibana
        proxy_pass https://$kibana_host/_plugin/kibana;

        # Handle redirects to Cognito
        proxy_redirect https://$cognito_host https://$host;

        # Update cookie domain and path
        proxy_cookie_domain $kibana_host $host;
        proxy_cookie_path / /_plugin/kibana/;

        # Response buffer settings
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location ~ \/(log|sign|fav|forgot|change|saml|oauth2) {
        # Forward requests to Cognito
        proxy_pass https://$cognito_host;

        # Handle redirects to Kibana
        proxy_redirect https://$kibana_host https://$host;

        # Update cookie domain
        proxy_cookie_domain $cognito_host $host;
    }
}
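Note that $kibana_host and $cognito_host in the AWS example are placeholders rather than variables nginx defines for you: they need to be replaced with (or explicitly set to) the Elasticsearch VPC endpoint and the Cognito user pool domain. A minimal sketch of defining them as variables at the top of the server block - the host names and resolver address below are assumptions, substitute your own:

    # Assumed endpoints - substitute your own
    set $kibana_host vpc-my-domain-abc123.us-east-1.es.amazonaws.com;
    set $cognito_host my-user-pool.auth.us-east-1.amazoncognito.com;

    # When proxy_pass uses variables, nginx needs a resolver to look them up at runtime
    resolver 10.0.0.2 valid=30s;  # assumed VPC DNS resolver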

Related

http proxy-pass to https S3 bucket

I'm trying to deploy an application in Kubernetes which connects to an S3 object bucket. The S3 bucket is exposed by a storage API secured with a self-signed certificate.
The application that is supposed to connect to this bucket is already in a container, and it's not that easy to edit it to include the CA of the S3 bucket. I can only manipulate the endpoint of the S3 bucket.
Since all this is deployed in Kubernetes, I thought it would be possible to deploy an nginx pod to act as a proxy that negotiates SSL with the bucket, and to expose it via a Kubernetes service.
I searched Google and found an article that explains how to use nginx as a proxy.
This is my nginx configuration.
http {
    default_type text/html;
    #access_log /;
    sendfile on;
    keepalive_timeout 65;

    proxy_cache_path /tmp/ levels=1:2 keys_zone=s3_cache:10m max_size=500m
                     inactive=60m use_temp_path=off;

    server {
        listen 80;

        # Configure your domain name here:
        server_name _;

        # Configure your Object Storage bucket URL here:
        set $bucket "myobjectstoragebucket.int.company.net";

        # This configuration provides direct access to the Object Storage bucket:
        location / {
            resolver 1.1.1.1;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header Connection "";
            proxy_set_header Authorization '';
            proxy_set_header Host $bucket;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_hide_header x-amz-id-2;
            proxy_hide_header x-amz-request-id;
            proxy_hide_header x-amz-meta-server-side-encryption;
            proxy_hide_header x-amz-server-side-encryption;
            proxy_hide_header Set-Cookie;
            proxy_ignore_headers Set-Cookie;
            proxy_intercept_errors on;
            add_header Cache-Control max-age=31536000;
            proxy_ssl_verify off;
            proxy_pass http://$bucket;
        }
    }
}
It doesn't seem to be working. The client logs show that it is trying to connect to the upstream endpoint (myobjectstoragebucket.int.company.net) and fails SSL verification.
On the client I've configured the Kubernetes service for this nginx proxy as the endpoint, and that part seems to be working, since requests reach the S3 bucket.
Is the idea even possible? Sorry if this is nonsense - I don't know much about NGINX or S3.
Thanks for the help.
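For what it's worth, a minimal sketch of the piece this setup needs: in the config above the proxy_pass scheme is http://, so nginx never negotiates TLS with the bucket at all. For nginx to accept plain HTTP from the application and speak HTTPS to the bucket, the upstream scheme would have to be https://, roughly like this (the trusted-certificate path is an assumption):

        location / {
            resolver 1.1.1.1;
            proxy_http_version 1.1;
            proxy_set_header Host $bucket;

            # The https scheme is what makes nginx negotiate TLS with the bucket
            proxy_pass https://$bucket;

            # Send the bucket host name as SNI
            proxy_ssl_server_name on;

            # Either skip verification of the self-signed certificate...
            proxy_ssl_verify off;
            # ...or, preferably, trust the bucket's CA explicitly:
            # proxy_ssl_trusted_certificate /etc/nginx/s3-ca.crt;  # assumed path
            # proxy_ssl_verify on;
        }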

Firebase custom domain 'Needs Setup' with nginx proxy server

I have a Frontend app hosted on Firebase hosting. I also have a backend API running on a Digital Ocean droplet. I have nginx installed on the droplet which will either redirect to the frontend app or to the backend API. My nginx configuration file looks like the following:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name myapp.com *.myapp.com;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;

    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://150.101.64.193:80;
        proxy_read_timeout 90;
        proxy_redirect http://150.101.64.193:80 https://myapp.com;
    }

    location /api/ {
        proxy_pass http://localhost:5000;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:5000 https://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Firebase tells me to copy two TXT records into my DNS settings, but I could only get that to work if I mapped the domain name to the frontend app. Instead, my DNS settings map my domain name to the IP address of the droplet. The proxy on the droplet should then forward the request to either the frontend or the backend depending on the route, e.g.
www.myapp.com/blah redirects to Firebase app
www.myapp.com/api/blah redirects to API
Currently Firebase reports that my custom domain needs setup because there are no corresponding TXT records. This is the first time I have tried to deploy a web app, so I am unsure if this setup will work.
If you are using NGINX to proxy traffic to a Firebase Hosting site, you likely want to proxy that traffic to the shared domain e.g. <site>.web.app instead of re-proxying back to the same domain that is serving traffic.
We don't recommend putting proxies in front of Firebase Hosting as that defeats the purpose of Firebase Hosting's global CDN, but it should work.
You could also use Cloud Functions or Cloud Run to build your API surface directly instead of proxying to an NGINX backend.
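A minimal sketch of the first suggestion in the droplet's nginx config, assuming the Hosting site is named myapp (so the shared domain would be myapp.web.app - both names are assumptions here):

    location / {
        # Proxy to the Firebase Hosting shared domain instead of back to this host
        proxy_pass https://myapp.web.app;

        # Hosting expects the Host header and SNI of the shared domain
        proxy_set_header Host myapp.web.app;
        proxy_ssl_server_name on;
        proxy_ssl_name myapp.web.app;
    }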

Howto block nginx web site access if browsers have no ssl certificate

I am a newbie and I installed JupyterHub with an nginx reverse proxy on my Ubuntu 18.04 server. I built my own root CA and a self-signed certificate with OpenSSL. HTTPS connections work very well when my root CA is installed on my other computers. I want to block access for computers that don't have my root CA.
The file /etc/nginx/nginx.conf is untouched, and my config file /etc/nginx/sites-available/jupyter.conf is:
# top-level http config for websocket headers: if Upgrade is defined,
# Connection = upgrade; if Upgrade is empty, Connection = close
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# HTTP server to redirect all port 80 traffic to SSL/HTTPS
server {
    listen 80;
    server_name 192.168.4.70 mlserver.net localhost;

    # Tell all requests to port 80 to be 302 redirected to HTTPS
    return 302 https://$host$request_uri;
}

# HTTPS server to handle JupyterHub
server {
    listen 443;
    ssl on;
    server_name 192.168.4.70 mlserver.net localhost;

    ssl_certificate /etc/ssl/certs/mlserver.net.crt;
    ssl_certificate_key /etc/ssl/private/mlserver.net.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    #ssl_stapling on;

    # Managing literal requests to the JupyterHub front end
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;

        proxy_buffering off;
    }
}
How can I edit this file to block access for computers that don't have the certificate? Which nginx directive should I add?
Thanks.
I want to block access for the computers who don't have my rootCA.
This is not possible. The server has no information about whether the client has successfully validated the server certificate (i.e. clients which have the root CA) or simply skipped certificate validation (clients which don't have the root CA).
One could try to add an HSTS header so that browsers will not simply allow users to ignore certificate problems. But this can also be bypassed on the client side without the server noticing; it just makes it a bit harder.
If you want to control who can access the notebook you would need proper authentication of the clients instead. Knowledge of the rootCA is not client authentication.
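One way to get real client authentication at the nginx layer is mutual TLS: issue client certificates from a CA you control and require them in the HTTPS server block. A sketch, assuming the client CA certificate lives at /etc/ssl/certs/client-ca.crt (that path is an assumption):

server {
    listen 443 ssl;
    server_name 192.168.4.70 mlserver.net localhost;

    ssl_certificate /etc/ssl/certs/mlserver.net.crt;
    ssl_certificate_key /etc/ssl/private/mlserver.net.key;

    # Require a client certificate signed by your own CA
    ssl_client_certificate /etc/ssl/certs/client-ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8000;
        # ... same proxy/websocket headers as above ...
    }
}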

WSO2 APIM: Subdomains for different contexts

We have WSO2 API Manager 1.10.0 deployed and working, but we are trying to figure out whether it is possible to have multiple subdomains for it.
For example:
store.domain.com
publisher.domain.com
carbon.domain.com
Is this at all possible? We've seen https://docs.wso2.com/display/Carbon442/Adding+a+Custom+Proxy+Path, but that is for different applications; we want to do this only with the API Manager.
In front of the API Manager, we are using nginx with reverse proxy. Below, you can find a snippet from nginx to help while understanding the problem.
server {
    listen 80;
    server_name store.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    ssl on;
    ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH";#:AES128+EDH";
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    add_header Strict-Transport-Security "max-age=63072000";

    server_name store.domain.com;

    ssl_certificate /etc/nginx/ssl/domain.com/self-ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/domain.com/self-ssl.key;

    access_log /var/log/nginx/store.log;
    underscores_in_headers on;

    location / {
        proxy_pass http://wso2server:9443/store/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
When attempting to access with HTTP (for the store context) all works fine, but as soon as we switch this over to HTTPS it fails with the following error in nginx: "upstream prematurely closed connection while reading response header from upstream". However, we see nothing in the API Manager logs.
Thanks in advance!
Best Regards
You can solve your issue with one of the following methods.
Add proxy_redirect configuration to nginx, so that nginx rewrites all the URLs to the proper URL. Refer to the following config segment.
proxy_redirect http://wso2server/ http://store.domain.com/;
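In context, that directive would replace the proxy_redirect off; in the existing location / block for the store virtual host - a sketch using the host names from the question's config:

    location / {
        proxy_pass http://wso2server:9443/store/;
        proxy_redirect http://wso2server/ http://store.domain.com/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }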
You can also achieve the same result by adding reverse proxy configuration to the API Manager store. To do this, open "repository/deployment/server/jaggeryapps/store/site/conf/site.json" and see the following config section:
"reverseProxy" : {
"enabled" : false, // values true , false , "auto" - will look for X-Forwarded-* headers
"host" : "sample.proxydomain.com", // If reverse proxy do not have a domain name use IP
"context":"",
//"regContext":"" // Use only if different path is used for registry
},

How to create Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show that we own the domain name, and this is done by validating that a file is accessible from the server name (basically this consists of Nginx being able to serve a static file over port 80).
So here is my first problem: to serve the static file needed by letsencrypt, I need to have nginx started. The SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when Let's Encrypt succeeds...
So a simple solution could be to have 2 Nginx containers: one listening only on port 80 that will be started first, then letsencrypt runs, then we start a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host;           # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr;    # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass that redirects toward the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file, django.conf, in the same folder:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under 3 conditions:
secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
registry service is started
django service is started
The problem is that the django service pulls its image from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server names, so nginx is able to serve both of them.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start docker registry service
I start Nginx with only the registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there was a way to make nginx start ignoring failing configuration, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if the Pod is not started, as long as the Service exists nginx will find it when looking it up, since the Service has an IP assigned.
So you start the Services, then start nginx and whatever Pod you want in the order you want.
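A sketch of what the nginx side could look like once the Services exist, assuming Services named registry and django in the default namespace (the fully qualified DNS names below are assumptions based on standard cluster DNS):

upstream docker-registry {
    # Resolves to the registry Service's ClusterIP, even before its Pods are ready
    server registry.default.svc.cluster.local:5000;
}

upstream django {
    server django.default.svc.cluster.local:5000;
}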
