SSL redirection in Docker container on AWS ECS - nginx

I have a frontend Angular application running in an nginx Docker container on AWS ECS (EC2 launch type). This is a SaaS product, and third-party domain names will be pointed at this frontend container. I have set the default listener rule to forward to that target group, but I am not sure how to set up SSL for each domain. An ALB currently supports only 100 listener rules, so in effect each listener gets only 50 rules (considering both 80 and 443).
30 of those rules are already used by the backend APIs.
If 150 domains need to be pointed at this frontend, how can I set up SSL? If I set a 301 redirect in the port 80 vhost of nginx like
return 301 https://$host$request_uri;
the request will again pass through port 443 of the Application Load Balancer, which will serve the default SSL certificate and may cause an SSL error. Is there any way to make the nginx HTTPS redirect without going back through port 443 of the Application Load Balancer? Or is there any other method? I think a multi-domain SSL certificate is an option here, making it the default certificate on the load balancer.
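The port 80 vhost I have in mind looks roughly like this (a minimal sketch; the catch-all server_name is illustrative):

server {
    listen 80 default_server;
    server_name _;  # catch-all for every customer domain

    # Redirect everything to HTTPS, preserving host and URI
    return 301 https://$host$request_uri;
}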

Do you have access to the SSL certificates for all of these domains? If yes, you can configure them in the nginx container. Use a Network Load Balancer instead of an ALB and add a TCP listener on port 443, which will not terminate SSL and will forward traffic to the nginx container, which then terminates SSL itself.
You can also reload the nginx configuration dynamically to add certificates as new domains are onboarded.
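A minimal sketch of what that looks like inside the nginx container, assuming the NLB passes TCP 443 straight through and the certificate files are mounted into the container (all names and paths below are illustrative):

server {
    listen 443 ssl;
    server_name customer-one.example;

    ssl_certificate     /etc/nginx/certs/customer-one.example/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/customer-one.example/privkey.pem;

    # Serve the Angular build shipped in the container
    root  /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}

One such server block per customer domain; when a new domain is onboarded, drop in its certificate and run nginx -s reload without restarting the container.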

AWS load balancers now support HTTP-to-HTTPS redirects natively, so you don't have to do it in your containers.
In addition, your 443 listener can have multiple certificates added to it (served via SNI). So just add all your certificates to the 443 listener on your load balancer.
Then, on your port 80 (HTTP) listener, have a single rule with:
IF: Requests otherwise not routed
THEN: Redirect to HTTPS, port 443, 'Original host, path, query', with '301 - Permanently moved' as the status.
Now all your HTTP requests are sent back to the client with a redirect to HTTPS without ever hitting your container or nginx. When they come back as HTTPS, the ALB has all the certificates it needs.
If you run up against limits on the load balancer, you may have to chunk the domains across 2 or 3 ALBs, but I find this easier to manage, especially when certificate renewal time comes around.

Related

How can I redirect NON HTTP/NON HTTPS traffic to a specified IP with Nginx?

I have a website and a game server.
I have a domain which I have connected to Cloudflare.
I want to redirect non-HTTP/HTTPS traffic to my server IP, because when I try to connect to the game server using the domain I can't, due to the Cloudflare proxy.
Maybe it can be done differently?
I use nginx.
Cloudflare has its own SSL configuration.
There are four options for you:
Off: disables HTTPS completely.
Flexible: Cloudflare will automatically serve client requests over HTTPS, but it still connects to port 80 on your nginx server, so you should not configure SSL on nginx in this case.
So the only options for you are Full or Full (Strict) (the latter is stricter about the certificate configured on nginx; it must be a valid certificate).
With Full, you can configure nginx with a self-signed certificate and leave it at that; Cloudflare will handle the leg between the client and its proxy servers.
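For the Full mode, a minimal sketch of the nginx side, assuming a self-signed certificate has already been generated (server_name and paths are illustrative):

# Self-signed certificate, e.g. generated with:
#   openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
#     -keyout /etc/nginx/ssl/selfsigned.key -out /etc/nginx/ssl/selfsigned.crt
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/selfsigned.key;

    root /var/www/html;
}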

Mixed content issue in using Application Load Balancer (ALB) in AWS

I have an ASP.NET web application hosted on IIS. The web application (an Umbraco site) is configured with an HTTP binding in IIS, and an SSL certificate is bound to an Application Load Balancer (ALB) in AWS, which handles user requests via HTTPS. This means that when a user requests a resource, the ALB redirects any HTTP traffic to HTTPS and then forwards the request to IIS on port 80 (internal traffic within the VPC).
For most resources this is absolutely fine, but there are a handful of resources (fonts and images) which seem to be requested over HTTP, which causes a mixed content warning in the browser. I have tried HTTP -> HTTPS rewrite rules in IIS and outbound rules to rewrite the response, but this does not seem to resolve the issue.
Can anyone help?
The solution to the problem was to run the web app locally over HTTPS rather than HTTP and update the load balancer to forward requests to the web server on port 443 rather than port 80.
To do so:
Create a development SSL certificate in IIS. Rather than creating a self-signed certificate, I used mkcert (https://github.com/FiloSottile/mkcert) so that the certificate was trusted.
In AWS, update the target group that the ALB listener uses so that it forwards requests to the IIS server on port 443 rather than port 80.

Redirect HTTPS request to HTTP (varnish) and then backend server HTTPS

My current configuration is like this:
1. nginx listening on ports 8080 and 443
2. Varnish listening on port 80
Currently, requests made over HTTP are delivered through Varnish, but requests made over HTTPS are not.
My goal is to put Varnish between the client and the nginx web server (or make Varnish work with port 443).
Reading through articles and answers on Stack Overflow, I tried to set up a reverse proxy from 443 to 80 (or 8080, maybe?).
I followed these article(s) :
https://www.smashingmagazine.com/2015/09/https-everywhere-with-nginx-varnish-apache/
https://serverfault.com/questions/835887/redirect-http-to-https-using-varnish-4-1
The problem is that when I try to set this up, I get a 502 Bad Gateway error, and sometimes the default nginx page.
PS: I'm trying to set this up using a virtual server block, not the default server.
PS2: I also need to deliver the final web page over HTTPS whether the request was made over HTTP or HTTPS (but I get a "too many redirects" error).
PS3: I'm using Cloudflare.
The basic concept is to sandwich Varnish between an entity handling SSL and a backend server working on port 8080 (or whatever you choose).
Here's the traffic flow:
user (443) > front-end proxy for SSL offloading (443) > Varnish (80) > nginx (8080).
Your options for the front-end proxy are:
1. A load balancer supporting SSL termination/offloading.
2. nginx or Apache working as a proxy, receiving traffic on 443 and forwarding it on port 80 to Varnish (see the sketch below).
A 502 error means Varnish is having trouble connecting to its backend; check the backend definition in your VCL.
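A minimal sketch of option 2 on the nginx side, assuming Varnish listens on 127.0.0.1:80 and the content-serving vhost on 8080 (certificate paths and server_name are illustrative). Passing X-Forwarded-Proto lets the backend know the original request was HTTPS, which also helps avoid the "too many redirects" loop:

# SSL-offloading front end: terminate TLS, then hand off to Varnish
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:80;            # Varnish
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;  # tell the backend the original scheme
    }
}

# Backend vhost that Varnish fetches from
server {
    listen 8080;
    server_name example.com;
    root /var/www/html;
}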

Configuring SSL connection between ELB and Client only (not between EC2 and ELB)

I have a specific use case.
I have a WordPress site on an EC2 instance.
There is a Classic ELB in front of this instance.
My EC2 instance is using SSL (Let's Encrypt). Now I want to use AWS Certificate Manager instead.
I don't want to communicate over SSL between the ELB and the EC2 instance; I only need SSL between the ELB and the client.
How can I accomplish this?
I tried setting the instance protocol and instance port of the HTTPS listener on the ELB to HTTP and 80, but no luck so far.
Is there anything that needs to be done on the WordPress config side?
First of all, you need three components from your Let's Encrypt SSL setup:
Certificate body
Private key (PEM)
Certificate chain
Get these three items and import the certificate using ACM.
Once you have your certificate, enable an HTTPS listener on your ELB on port 443, with the instance protocol set to HTTP and the instance port to 80.
After this, remove HTTPS from your WordPress configuration and accept only HTTP on port 80. You can modify the EC2 security group to accept inbound connections only from the ELB.
Hope this helps.

Failed redirect from naked domain on SSL

I have recently installed SSL on my AWS-hosted WordPress site and my naked domain is no longer working.
https://example.com, https://www.example.com and www.example.com are all working as expected.
example.com (plain HTTP) is not working; it throws a connection refused error.
The Setup:
WordPress is hosted on a single AWS EC2 instance built from the Bitnami AMI. The EC2 instance sits behind a Classic Load Balancer.
The SSL certificate is managed in AWS Certificate Manager and was issued for *.example.com, example.com and www.example.com.
DNS uses Route 53: www.example.com and example.com have alias A records that point to the same load balancer.
.htaccess has been modified with RewriteRule ^(.*)$ https://example.com/$1 [R,L]
What do I do to get this working?
HTTPS does work, so the issue is not DNS. You mention a load balancer; the connection refused error indicates that your request is not reaching the load balancer, or is not being accepted by it.
Check the security groups for the load balancer and ensure port 80 inbound is allowed.
Check that your load balancer has a listener on port 80.
If you have modified the NACLs (Network Access Control Lists) on the public subnets of the load balancer, you will need to allow port 80 inbound and everything outbound. The default NACL rules already allow this.
As an aside, I note that you are terminating SSL on the load balancer (because you are using an ACM certificate). Depending on your configuration, this may mean that you are forwarding requests to your web server unencrypted on port 80. If so, your rewrite rules will not correctly detect the use of HTTPS; they need to look at the X-Forwarded-Proto header that the load balancer sets instead. AWS has documentation explaining this in more detail.
