Redirect HTTPS requests to HTTP (Varnish) and then to the backend server over HTTPS - nginx

My current configuration is like this:
1. Nginx listening on ports 8080 and 443
2. Varnish listening on port 80
Currently, when requests are made over HTTP they are delivered through Varnish, but when requests are made over HTTPS, Varnish doesn't deliver them.
My goal is to put Varnish between the client and the Nginx web server (or make Varnish work with port 443).
Reading through articles and answers on Stack Overflow, I tried to set up a reverse proxy from 443 to 80 (or 8080, maybe?).
I followed these articles:
https://www.smashingmagazine.com/2015/09/https-everywhere-with-nginx-varnish-apache/
https://serverfault.com/questions/835887/redirect-http-to-https-using-varnish-4-1
The problem is that when I try to set this up, I get a 502 Bad Gateway error, and sometimes the default Nginx page.
PS: I'm trying to set this up using a virtual server block, not the default server.
PS2: I also need to deliver the final web page over HTTPS whether the request is made over HTTP or HTTPS (but I get a "too many redirects" error).
PS3: I'm using Cloudflare

The basic concept is to sandwich varnish between an entity handling SSL and a back-end server working on port 8080 or whatever you choose.
Here's the traffic flow:
user 443 > front-end proxy for SSL offloading 443 > Varnish 80 > nginx 8080.
Your options for the front-end proxy are:
1. A load balancer supporting SSL termination / offloading.
2. Nginx or Apache working as a proxy, receiving traffic on 443 and forwarding it on port 80 to Varnish.
Error 502 means Varnish is having trouble connecting to its backend, so check your varnish.vcl.
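For reference, here is a minimal sketch of the nginx side of that flow (the server name, certificate paths and document root are placeholders, and the 8080 block must match the backend declared in varnish.vcl):
# HTTPS front end: terminates SSL and hands plain HTTP to Varnish on port 80
server {
    listen 443 ssl;
    server_name example.com;                          # placeholder
    ssl_certificate     /etc/ssl/example.com.crt;     # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
    location / {
        proxy_pass http://127.0.0.1:80;               # Varnish
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
# Content server: the backend Varnish talks to
server {
    listen 127.0.0.1:8080;
    server_name example.com;                          # placeholder
    root /var/www/example.com;                        # placeholder
}
If you also force HTTPS, base that redirect on the X-Forwarded-Proto header rather than doing it unconditionally in the 8080 block; otherwise every request Varnish forwards over plain HTTP gets redirected again, which is the "too many redirects" loop from PS2.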

Related

Enabling proxy protocol in Nginx for just one vhost without breaking the others?

I just set up HAProxy on a server by itself to act as a reverse proxy. This will send the traffic to a main server that's running Nginx.
I got it almost working, aside from being able to pass the original IP through from the proxy server to the web server.
I'm using mode tcp in haproxy since a lot of the traffic coming in will already be on port 443 using SSL. I read that I can't use the option forwardfor in tcp mode, and I can't send SSL traffic using mode http. So I added send-proxy to the server lines, and then tried to enable the proxy protocol on Nginx.
That actually worked for the domain I'm running, but we have about 10 other virtualhost domains being hosted on that same machine, and as soon as I enabled proxy protocol on one vhost separately, it broke ALL of our other domains pointing to that server, as they all started timing out.
Is there a way around this? Can I enable proxy protocol for just one virtualhost on Nginx without breaking the rest of them? Or is there a way to just use http mode with the forwardfor option, even if it's sending SSL traffic?
Below is my haproxy config, with the IPs redacted:
global
    maxconn 10000
    user haproxy
    group haproxy

defaults
    retries 3
    timeout client 30s
    timeout server 30s
    timeout connect 30s
    mode tcp

frontend incoming
    bind *:80
    bind *:443
    option tcplog
    default_backend client_proxy

backend client_proxy
    use-server ps_proxy_http if { dst_port 80 }
    use-server ps_proxy_https if { dst_port 443 }
    server ps_proxy_http XXX.XXX.XXX.XXX:80 send-proxy
    server ps_proxy_https XXX.XXX.XXX.XXX:443 send-proxy
This is my first time using HAProxy as a reverse proxy, so any insight would be much appreciated.
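Not a full answer, but a minimal sketch of what accepting the PROXY protocol on the nginx side usually looks like (example.com, the certificate paths and the redacted HAProxy IP are placeholders; real_ip_header needs the standard realip module). Note that proxy_protocol is a parameter of the listen socket, so every server block sharing that address:port has to speak it too, which is consistent with the other vhosts timing out once one of them was switched over:
server {
    listen 443 ssl proxy_protocol;        # PROXY protocol applies to the whole listen socket
    server_name example.com;              # placeholder
    ssl_certificate     /etc/ssl/example.com.crt;     # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
    # Trust HAProxy's address and recover the client IP from the PROXY header
    set_real_ip_from XXX.XXX.XXX.XXX;     # HAProxy's IP (redacted placeholder)
    real_ip_header proxy_protocol;
    root /var/www/example.com;            # placeholder
}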

How can I redirect non-HTTP/non-HTTPS traffic to a specified IP with Nginx?

I have a website and a game server.
I have a domain which is connected to Cloudflare.
I want to redirect non-HTTP/HTTPS traffic to my server's IP, because when I try to connect to the game server using the domain I can't, due to the Cloudflare proxy.
Maybe it can be done differently?
I use Nginx.
Cloudflare has its own SSL configuration.
There are 4 options for you:
Off: disables HTTPS completely.
Flexible: Cloudflare will automatically serve client requests over HTTPS, but it still connects to port 80 on your nginx server, so you should not configure SSL on nginx in this case.
So the only options for you are Full or Full (Strict) (the latter is stricter about the certificate configured on nginx: it must be a valid certificate).
With Full you can configure your nginx with a self-signed certificate and leave it at that; Cloudflare will handle the leg between the client and its proxy servers.
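For Full mode, a minimal sketch of the nginx side (example.com, the self-signed certificate paths and the document root are placeholders):
server {
    listen 443 ssl;
    server_name example.com;                          # placeholder
    # A self-signed certificate is enough for Cloudflare's Full mode;
    # Full (Strict) would require a certificate Cloudflare can validate.
    ssl_certificate     /etc/ssl/self-signed.crt;     # placeholder paths
    ssl_certificate_key /etc/ssl/self-signed.key;
    root /var/www/example.com;                        # placeholder
}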

How to call HTTP services from an HTTPS server

I have an Apache server (frontend code) running on port 80 with HTTPS (SSL is configured). The backend server is Node.js, configured on port 3000. When I try to call services from HTTPS to HTTP, i.e. from Apache (port 80, SSL configured) to Node.js (port 3000, non-SSL), the requests fail with "net::ERR_SSL_PROTOCOL_ERROR".
This may have something to do with the Same-Origin Policy.
Please see this answer: HTTP Ajax Request via HTTPS Page.
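One common workaround, not from the linked answer and sketched here with nginx (the front end used elsewhere in this thread) even though the question uses Apache: expose the Node.js backend under the same HTTPS origin through the front-end proxy, so the browser never issues a plain-HTTP call. The /api/ path, example.com and the certificate paths are assumptions:
server {
    listen 443 ssl;
    server_name example.com;                          # placeholder
    ssl_certificate     /etc/ssl/example.com.crt;     # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
    root /var/www/frontend;                           # the static frontend
    # Same-origin path proxied to the non-SSL Node.js backend
    location /api/ {
        proxy_pass http://127.0.0.1:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}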

Why run Varnish on port 80 for an HTTPS only setup?

In nearly every example I've seen for setting up Varnish with nginx and SSL support, the setup is Varnish running on port 80, nginx on port 443 for SSL termination and nginx running on another port doing the actual work communicating with the backend.
Given most websites now redirect port 80 to 443, what advantage is there in having Varnish running on port 80?
Why wouldn't you have nginx running on port 80, doing the 301 to the HTTPS version, nginx running on port 443 doing the SSL termination and proxying to Varnish, which is running on a different port, with nginx again running on another port doing the actual work?
HTTP: nginx [80] (301)
HTTPS: nginx [443] <> Varnish [6081] <> nginx [8080] <> backend
I really can't see any merit in having Varnish on port 80 front of house just to do a redirect. Unless there's some problem with redirects and the unwanted addition of port numbers to URLs? Maybe adding three nginx server blocks adds "more" work to the setup, but then having to configure Varnish to redirect port 80, unless it's internal, seems like "more" work too.
Bonus question: why is Apache added to the mix in most of these setups when nginx is already in use, and vice versa? They can both handle SSL termination and proxying.
I agree with "why not":
HTTP: nginx [80] (301)
HTTPS: nginx [443] <> Varnish [6081] <> nginx [8080] <> backend
As to why:
HTTP: Varnish [80] (conditional 301, using VCL)
HTTPS: nginx [443] <> Varnish [80] <> nginx [8080] <> backend
The answer is:
Legacy reasons. This was simply the way to go in the "conditional HTTPS" world (where it was acceptable for a website to work over both HTTP and HTTPS, or to have no HTTPS at all), which was the norm until just a few years ago, before Google, as the web's dominant search engine, began insisting that all websites use HTTPS or risk poorer rankings. Only relatively recently did Let's Encrypt let everyone get free certificates, and that push from Google made many websites adopt them. Tutorials for Varnish setups simply never adjusted their port layouts, because it never struck anyone as something that needed adjusting.
Expandability. Think outside the single-server setup. When you decide to build a stack of Varnishes (a CDN), it makes much more sense to keep the "main" Varnish on port 80. (Outside/edge Varnish instances will talk to the main Varnish, as opposed to talking to the main backend, for a "cache of caches" sort of thing.) The traffic between edge and main wouldn't be encrypted, but it also carries no encryption performance penalty.
I think we can simplify a bit:
HTTPS: nginx [443] <> Varnish [6081] <> backend
Let Varnish do the caching and avoid the extra Nginx layer.
More simplification:
hitch [443] <> Varnish [6081] <> backend
Hitch: https://hitch-tls.org/
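For concreteness, a minimal sketch of the nginx side of the "nginx [80] (301)" / "nginx [443] <> Varnish [6081]" layout (example.com, the certificate paths and the Varnish port are placeholders/assumptions):
# Port 80: nothing but the redirect
server {
    listen 80;
    server_name example.com;                          # placeholder
    return 301 https://$host$request_uri;
}
# Port 443: SSL termination, then hand off to Varnish
server {
    listen 443 ssl;
    server_name example.com;                          # placeholder
    ssl_certificate     /etc/ssl/example.com.crt;     # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
    location / {
        proxy_pass http://127.0.0.1:6081;             # Varnish
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}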

SSL redirection in a Docker container on AWS ECS

I have a frontend Angular application running in an nginx Docker container on AWS ECS (EC2). This is a SaaS product, and other third-party domain names will be pointed at this frontend container. I have set the default rule to that target group, but I wonder how to set up SSL for each domain. An ALB currently supports only 100 listener rules, i.e. in effect each listener will have only 50 rules (considering both 80 and 443).
30 rules are already taken by the backend APIs.
If I have 150 domains that need to be pointed at this frontend, how can I set up SSL? If I set a 301 redirect in the port 80 vhost of nginx like
return 301 https://$host$request_uri;
the request will again pass through port 443 of the Application Load Balancer, which will serve the default SSL certificate and may cause an SSL error. Is there any way to make the nginx HTTPS redirect without going back through port 443 of the Application Load Balancer, or any other method? I think a multi-domain SSL certificate is an option here, making it the default certificate on the load balancer.
Do you have access to SSL certificates for all these domains? If yes, you can configure them in the nginx container. Use a Network Load Balancer instead of the ALB and add a TCP listener on port 443, which will not terminate SSL, and forward the traffic to the nginx container, which will terminate TLS.
You can also reload the nginx configuration dynamically to set up certificates on the fly.
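A minimal sketch of what those per-domain certificates could look like in the nginx container behind a TCP (pass-through) listener; nginx picks the right certificate via SNI, and the domain names and paths are placeholders:
server {
    listen 443 ssl;
    server_name customer-one.example;                 # placeholder
    ssl_certificate     /etc/nginx/certs/customer-one.example.crt;
    ssl_certificate_key /etc/nginx/certs/customer-one.example.key;
    # ...application/proxy config for this domain...
}
server {
    listen 443 ssl;
    server_name customer-two.example;                 # placeholder
    ssl_certificate     /etc/nginx/certs/customer-two.example.crt;
    ssl_certificate_key /etc/nginx/certs/customer-two.example.key;
    # ...application/proxy config for this domain...
}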
AWS load balancers now support SSL redirection so you don't have to do it on your containers.
In addition, your 443 listener can have multiple certificates added to it. So just add all your certs to the 443 listener on your load balancer.
Then in your port 80 (HTTP) listener rules, just have a single rule with:
IF: Requests otherwise not routed
THEN:
HTTPS, Port 443
Redirect to 'Original host, path, query'
'301 - permanently moved' as the status
Now all your HTTP requests will be answered with a redirect back to HTTPS without ever hitting your container or nginx. When they come back over HTTPS, the ALB has all the certificates it needs.
If you run up against limits on the load balancer, you may have to 'chunk' the domains across 2 or 3 ALBs, but I find this easier to manage, especially when certificate renewal time comes around.
