NGINX proxy to a Zeit Now deployment

I have several application servers running several Node applications (via PM2).
I have one NGINX server which has the SSL certificate for the domain and reverse-proxies to the Node applications.
Within the NGINX configuration file I set up the domains with their location blocks like this:
server {
    listen 443 ssl;
    server_name geolytix.xyz;

    ssl_certificate /etc/letsencrypt/live/geolytix.xyz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/geolytix.xyz/privkey.pem;

    location /demo {
        proxy_pass http://159.65.61.61:3000/demo;
        proxy_set_header HOST $host;
        proxy_buffering off;
    }

    location /now {
        proxy_pass https://xyz-heigvbokgr.now.sh/now;
        proxy_set_header HOST $host;
        proxy_buffering off;
    }
}
This only works for the application server. The proxy to the Zeit Now deployment yields a bad gateway. The application itself works as expected if I go to the Zeit Now address of my deployment.
Does anybody know whether I might be missing some settings to proxy to Zeit Now?

Now servers require the use of SNI for HTTPS connections, like almost all modern web servers.
You need to add
proxy_ssl_server_name on;
to your configuration.
The smallest location block would be the following:
location / {
    proxy_set_header host my-app.now.sh;
    proxy_ssl_server_name on;
    proxy_pass https://alias.zeit.co;
}
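Applied to the /now location from the question, it would look roughly like the following sketch (untested; it keeps the deployment URL from the question and sets the Host header to the deployment's hostname, following the pattern in the answer above):

location /now {
    # Send SNI so the Now edge can pick the right certificate
    proxy_ssl_server_name on;
    proxy_set_header Host xyz-heigvbokgr.now.sh;
    proxy_buffering off;
    proxy_pass https://xyz-heigvbokgr.now.sh/now;
}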

Related

Nginx: How to deploy front end & backend apps on same machine with same domain but different ports?

I have two apps, one for the frontend built using ReactJS and one for the backend built using FastAPI. I have a server machine where I have deployed both apps. Now I want to use Nginx (because of SSL) to host both applications on the same machine with the same domain name but different ports. I know how to do it for different domains or subdomains, but I don't have another domain/subdomain with me right now. So I want to ask how I can achieve this in Nginx.
For example, my FE is using port 5000 and my BE is using port 8000. I am able to configure Nginx to serve my FE, but I am getting this error,
Blocked loading mixed active content
because my FE, which is served over https, is trying to connect to the backend on port 8000, which is not https.
Here is my nginx config file,
server {
    listen 443 ssl;
    ssl_certificate /opt/ssl/bundle.crt;
    ssl_certificate_key /opt/ssl/custom.key;

    # add here the ip address of your server
    # or a domain pointing to that ip (like example.com or www.example.com)
    server_name something-c11.main0.auto.qa.use1.mydomain.net;

    keepalive_timeout 5;
    client_max_body_size 100M;

    access_log /opt/MY_FE/nginx-access.log;
    error_log /opt/MY_FE/nginx-error.log;

    # checks for static file, if not found proxy to app
    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:5000;
        proxy_redirect off;
    }
}

server {
    if ($host = something-c11.main0.auto.qa.use1.mydomain.net) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name something-c11.main0.auto.qa.use1.mydomain.net;
    return 404;
}
Any help would be appreciated....
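One common way around the mixed-content error is to route the backend through the same HTTPS server block under a path prefix, so the frontend calls it over https on port 443 instead of http on port 8000. A minimal sketch, assuming the FastAPI backend listens on localhost:8000 and the frontend can be pointed at a /api prefix (both assumptions, adjust to your setup):

# Added inside the existing "listen 443 ssl" server block:
location /api/ {
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # With trailing slashes on both the location and the proxy_pass URL,
    # nginx strips the /api/ prefix before handing the request to FastAPI
    proxy_pass http://localhost:8000/;
}

With that in place the frontend would request https://<domain>/api/... and nginx forwards /api/users as /users to the backend; if the FastAPI routes are already mounted under /api, drop the trailing slash on proxy_pass so the prefix is kept.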

Reverse Proxy to VPC-Based AWS Elasticsearch Domain Without Bypassing AWS Cognito

I apologize, in advance - I'm extremely new to Nginx.
I have two VPC-based AWS Elasticsearch Domains, which we'll call dev and prod. I want both domains to be inaccessible to the open internet, but available in some networks outside the VPC. To that end, I set them up as VPC-based Elasticsearch domains and planned to use a reverse proxy accessible only from the networks I wish. I've set up the dev cluster, which has no authentication, using an NGINX reverse proxy with the following config:
events {
}

http {
    server {
        listen 80;
        server_name kibana-dev.[domain name];

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
            proxy_pass https://[vpc id].[vpc region].es.amazonaws.com/_plugin/kibana/;
            proxy_redirect https://[vpc id].[vpc region].es.amazonaws.com/_plugin/kibana/ https://kibana-dev.[domain name]/;
        }

        location ~ (/app/kibana|/app/timelion|/bundles|/es_admin|/plugins|/api|/ui|/elasticsearch|/app/opendistro-alerting) {
            proxy_pass https://[vpc id].[vpc region].es.amazonaws.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $http_host;
        }
    }
}
This works fine.
For the prod domain, however, I'm running into an issue. I want all users, even those that use the proxy, to have to authenticate with AWS Cognito (so I don't just want to, for example, create an access policy with an IP exception for the proxy's IP address, as that bypasses Cognito).
I have used a similar NGINX config for my "prod" Elasticsearch instance, but with no luck. The Cognito login page redirects to the VPC-based URL after authentication. I've tried manually adding my proxy's URL to the Cognito app's Callback URLs, but it still redirects by default to the VPC-based URL. I've also tried manually changing the redirect URI in the Cognito URL to refer to my proxy, but I've found that after authenticating I'm redirected to the Cognito login page again - perhaps a header or something isn't getting through?
How (or can) I get this running in Nginx, so that users can access the "prod" Elasticsearch domain while still being required to authenticate with AWS Cognito?
Thank you!
Doh! I should have read the documentation more carefully. AWS provides an example Nginx conf file for a Kibana proxy with Cognito:
server {
    listen 443;
    server_name $host;
    rewrite ^/$ https://$host/_plugin/kibana redirect;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;

    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location /_plugin/kibana {
        # Forward requests to Kibana
        proxy_pass https://$kibana_host/_plugin/kibana;

        # Handle redirects to Cognito
        proxy_redirect https://$cognito_host https://$host;

        # Update cookie domain and path
        proxy_cookie_domain $kibana_host $host;
        proxy_cookie_path / /_plugin/kibana/;

        # Response buffer settings
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location ~ \/(log|sign|fav|forgot|change|saml|oauth2) {
        # Forward requests to Cognito
        proxy_pass https://$cognito_host;

        # Handle redirects to Kibana
        proxy_redirect https://$kibana_host https://$host;

        # Update cookie domain
        proxy_cookie_domain $cognito_host $host;
    }
}
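Note that $kibana_host and $cognito_host in the AWS sample are placeholders, not variables nginx defines for you. One way to fill them in while keeping the config readable is to set them near the top of the server block; a sketch, with made-up endpoint names that need to be replaced by your VPC endpoint and Cognito domain:

    # Hypothetical endpoints -- substitute your own VPC endpoint and Cognito domain
    set $kibana_host vpc-my-domain-abc123.us-east-1.es.amazonaws.com;
    set $cognito_host my-user-pool-domain.auth.us-east-1.amazoncognito.com;

    # Because proxy_pass now uses variables, nginx resolves these names at
    # request time and needs a resolver, e.g. the VPC DNS server (the VPC
    # CIDR base address plus two, such as 10.0.0.2 for a 10.0.0.0/16 VPC).
    resolver 10.0.0.2 valid=30s;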

NGINX requires a restart when a service is reset

We use NGINX in docker swarm, as a reverse proxy. NGINX sits within the overlay network and relays external requests on to the relevant swarm service.
However we have an issue, where every time we restart, update, or otherwise take down a swarm service, NGINX returns 502 Bad Gateway. NGINX then continues to serve a 502 even after the service is restarted, and this is not corrected until we restart the NGINX service, which obviously defeats the whole point of having a load balancer and services running in multiple places.
Here is our NGINX CONF:
events {}

http {
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    client_max_body_size 20M;
    large_client_header_buffers 8 256k;
    client_header_buffer_size 256k;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    map $host $client {
        default clientname;
    }

    # Healthcheck
    server {
        listen 443;
        listen 444;

        location /is-healthy {
            access_log off;
            return 200;
        }
    }

    # Example service:
    server {
        listen 443;
        server_name scheduler.clientname.com;

        location / {
            resolver 127.0.0.11 ipv6=off;
            proxy_pass http://$client-scheduler:60911;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    # Catch-all
    server {
        listen 443;
        listen 444;
        server_name _;

        location / {
            return 404 'Page not found';
        }
    }
}
We use the $client placeholder as otherwise we can't even start nginx when one of the services is down.
The other alternative is to use an upstream directive with health checks, which can work well. The issue with this is that if any of the services is unavailable, NGINX won't even start!
What are we doing wrong?
UPDATE
It appears what we want here is impossible (please prove me wrong though!). Seems crazy to miss such a feature in the world of docker and micro-services!
We are currently looking at HAPROXY as an alternative, as it can be set up with default-server init-addr none to stop failure on startup.
Here is how I do it: create an upstream with max_fails=0
upstream docker-api {
    server docker.api:80 max_fails=0;
}

# load configs
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    location /api {
        proxy_pass http://docker-api;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        # Others config...
    }
}
I had the same problem using docker-compose. The Nginx container could not connect to the web service after a docker-compose restart.
Finally I figured out two circumstances that cause this glitch. First, docker-compose restart does not honour depends_on, which should restart nginx after the web container is restarted. Second, docker-compose restart assigns a new internal IP address to the containers, and nginx does not refresh the web container's IP address after it starts up.
My solution is to define a variable to force nginx to resolve the IP every time:
location /api {
    # Docker's embedded DNS; resolving through a variable forces a fresh lookup at request time
    resolver 127.0.0.11 valid=10s;
    set $web_service http://web_container_name:13579;
    proxy_pass $web_service;
}

How to create Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with the following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we are the owner of the domain name, and this is done by validating that a file is accessible from the server name (basically this consists of Nginx being able to serve a static file over port 80).
So here occurs my first problem: to serve the static file needed by letsencrypt, I need to have nginx started. The SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...
So, a simple solution could be to have 2 Nginx containers: one listening only on port 80 that will be started first, then letsencrypt, then we start a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
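For reference, the port-80-only container really only needs to answer the ACME HTTP-01 challenge. A minimal sketch (the server_name and the /var/www/certbot webroot are placeholders; use whatever path your letsencrypt container writes the challenge files to):

server {
    listen 80;
    server_name example.com;

    # Serve the HTTP-01 challenge files written by the letsencrypt container
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # Everything else can be redirected to the HTTPS container once it exists
    location / {
        return 301 https://$host$request_uri;
    }
}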
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host;        # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass that redirects toward the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file, django.conf, in the same folder:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under 3 conditions:
secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
registry service is started
django service is started
The problem is that the django image is pulled from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server names, so nginx is able to serve them both.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start docker registry service
I start Nginx with only the registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there was a way to make nginx start ignoring failing configuration, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if the Pod is not started, nginx will find the Service when looking it up, as long as the Service is started, because the Service has an IP assigned.
So you start the Services, then start nginx and whatever Pod you want in the order you want.
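In practice that means the upstream blocks above can point at the Services' stable DNS names rather than at individual containers. A sketch, assuming Services named registry and django in the default namespace and the ports from the question (adjust names, namespace, and ports to your manifests):

upstream docker-registry {
    # The Service's ClusterIP and DNS entry exist even before any Pod is ready,
    # so nginx can resolve this name at startup.
    server registry.default.svc.cluster.local:5000;
}

upstream django {
    server django.default.svc.cluster.local:5000;
}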

How do I serve Jira's static files with nginx?

My current nginx configuration is like this:
server {
    listen 443;
    server_name jira.domain.com;

    ssl_certificate /etc/ssl/certs/jira.crt;
    ssl_certificate_key /etc/ssl/private/jira.key;
    ssl on;

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
    }
}
How do I serve static files with the Jira standalone edition?
nginx has to be used as a proxy server with the Jira standalone edition. There is a similar question here. Alternatively, you can deploy the Jira jar edition in a container (JBoss or Tomcat) and couple nginx with it.
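If the goal is simply to take static asset traffic off Tomcat, one option is to let nginx cache those responses at the proxy. A sketch built on the proxy_pass setup from the question; the /s/ prefix and the cache path are assumptions, so check which paths your Jira version actually uses for static resources:

# In the http block: a small cache for static assets (path and sizes are arbitrary)
proxy_cache_path /var/cache/nginx/jira levels=1:2 keys_zone=jira_static:10m max_size=1g inactive=60m;

# Inside the server block from the question; /s/ is an assumed static prefix
location /s/ {
    proxy_pass http://127.0.0.1:8080/s/;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $http_host;
    proxy_cache jira_static;
    proxy_cache_valid 200 301 302 60m;
}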
