Where should SSL be installed - nginx

I have a setup like this:
Load balancers
Machine 1 - haproxy load balancer
Machine 2 - haproxy load balancer
Web servers
Machine 1 - nginx with app
Machine 2 - nginx with app
Where should I set up the SSL certificate: on the load balancers, on the web servers, or on both?
What is the correct way of doing it?

The "correct way" to do this depends on your setup. If your load balancers are on the same machines as your webservers, it doesn't matter which you choose to put the cert on. If they are on different servers, encryption depends on how important security is for these particular web apps. If you put the certs on the load balancers you will have unencrypted traffic visible to anyone in your network (as it goes from load balancer to server). If you put certs on your nginx server you will have encryption all the way through to the local server, but you will have to change your haproxy a little to have it route encrypted traffic properly. You also will not be able to route off the url path. You can also put certs on both to be able to route off the url path, but that is a little more to manage (two certs vs one). Overall it's probably best to put the cert on nginx server, assuming your don't need to do any routing in the load balancer off of the url. Also definitely do your own research.

Related

How to have the same SSL certificate installed on 2 servers?

I'm trying to migrate a WordPress website from AWS to GCP.
I need a load balancer on GCP's side, and it will take a while for it to provision.
How can I get the SSL certificate installed on GCP's side without any disruption on AWS's side?
Is there a recommended way to migrate with minimum downtime?
With respect, don't do this.
Instead, wait until your new vendor has provisioned the load balancer. Then put the TLS certificate into the load balancer. Then, and only then, switch your DNS records to point to the new load balancer.
The stopgap step of putting your TLS certs on the servers behind the load balancer is a mess of trouble, and unnecessary if you can delay your DNS cutover for a day or so, running your old system for that time.
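On GCP, the middle step could look roughly like the following sketch, assuming a load balancer fronted by a target HTTPS proxy; all resource names and file paths are placeholders:

    # Upload the certificate, then attach it to the new load balancer's
    # HTTPS proxy. Names and paths are placeholders.
    gcloud compute ssl-certificates create my-cert \
        --certificate=fullchain.pem \
        --private-key=privkey.pem

    gcloud compute target-https-proxies update my-https-proxy \
        --ssl-certificates=my-cert

Only after the new load balancer serves HTTPS correctly should the DNS records be switched over.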

Best practice for a website hosted on Kubernetes (DigitalOcean)

I followed this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes on how to set up an Nginx Ingress with Cert Manager on Kubernetes, with DigitalOcean as the cloud provider.
The tutorial worked fine; I was able to set everything up as written. However (as the tutorial itself states), following it you end up with three pods of which only one is "Running 1/1", while the other two are "Down". Judging by the comments section, this is quite a common problem. If all the traffic gets routed to only 1 pod, it is not really scalable. Or am I missing something? Quoting from their tutorial:
Note: By default the Nginx Ingress LoadBalancer Service has service.spec.externalTrafficPolicy set to the value Local, which routes all load balancer traffic to nodes running Nginx Ingress Pods. The other nodes will deliberately fail load balancer health checks so that Ingress traffic does not get routed to them.
Mainly my question is: is there a best practice that I am missing in order to host my website on Kubernetes? It seems I have to choose between scalability (having all the pods healthy and running) and getting the IP of the client visitor.
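For reference, the setting quoted above is a single field on the ingress controller's Service. A minimal sketch, with illustrative names:

    # "Local" preserves the client source IP, but only nodes running an
    # ingress pod pass the load balancer health check; "Cluster" passes
    # the check on every node, but SNAT hides the client IP.
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx              # illustrative name
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Cluster   # or Local, per the tradeoff above
      selector:
        app: ingress-nginx             # illustrative selector
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443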
And for whoever ever finds himself/herself in my situation, this is the reply I got from DigitalOcean Support:
Unfortunately with that Kubernetes setup it would show those other nodes as down without additional traffic configuration. It is possible to skip the nginx ingress part and just use a DigitalOcean load balancer, but this again does require a good deal of setup and can be more difficult than easy.
Their suggestion for having a website that is both scalable and keeps client IPs for analytics was to set up a Droplet with Nginx and put a LoadBalancer in front of it. More specifically:
As for using a droplet, this would be a normal website configuration with Nginx as your webserver configured to serve content to your app. You would have full access to your application and the Nginx logs on the droplet itself. Putting a load balancer in front of this would require additional configuration, as load balancers do not pass the x-forward header, so the IP addresses of clients would not show up in the logs by default. You would need to configure proxy protocol on the load balancer and in your nginx configuration to be able to obtain those IPs.
https://www.digitalocean.com/blog/load-balancers-now-support-proxy-protocol/
This is also a bit more complex unfortunately.
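To make the proxy protocol part concrete, the nginx side might look roughly like this sketch, assuming PROXY protocol is already enabled on the DigitalOcean load balancer; the trusted address range and the upstream are placeholders:

    server {
        listen 80 proxy_protocol;      # accept PROXY protocol from the LB

        # Trust only the load balancer's addresses (placeholder range),
        # and recover the real client IP from the PROXY protocol header.
        set_real_ip_from 10.0.0.0/8;
        real_ip_header proxy_protocol;

        access_log /var/log/nginx/access.log;  # now logs real client IPs

        location / {
            proxy_pass http://127.0.0.1:8080;  # hypothetical app upstream
        }
    }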
Hope this saves someone some time.

Clarification of nginx and load balancing needed

I'm currently reading about the design of Instagram, and I found this description of their load balancing system.
Every request to Instagram servers goes through load balancing machines; we used to run 2 nginx machines and DNS Round-Robin between them. The downside of this approach is the time it takes for DNS to update in case one of the machines needs to get decommissioned. Recently, we moved to using Amazon's Elastic Load Balancer, with 3 NGINX instances behind it that can be swapped in and out (and are automatically taken out of rotation if they fail a health check). We also terminate our SSL at the ELB level, which lessens the CPU load on nginx. We use Amazon's Route53 for DNS, which they've recently added a pretty good GUI tool for in the AWS console.
The question is: am I right that they currently have a DNS server which uses RR to decide which nginx server to send the request to, and that each of these nginx servers in turn forwards the request to a cluster?
And the second question is: what is the difference between nginx and a load balancer? Why can't we just use nginx?
For your first question: Instagram now uses Route53 to map DNS to an Elastic Load Balancer, which does two things: it routes traffic fairly evenly to three NGINX instances, and it terminates SSL for all traffic. Those NGINX servers then act as load balancers to content/application servers further down the stack. Using an ELB instead of round-robin DNS means they can add, remove, or update instances attached to the ELB without ever having to worry about DNS updates or TTLs.
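A minimal sketch of that second tier, with nginx itself balancing across application servers (addresses are hypothetical; SSL has already been terminated at the ELB):

    upstream app_servers {
        # Hypothetical application servers further down the stack.
        server 10.0.1.10:8000;
        server 10.0.1.11:8000;
    }

    server {
        listen 80;   # plain HTTP: the ELB already terminated SSL
        location / {
            proxy_pass http://app_servers;
        }
    }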
As for the second question: you can use NGINX just as easily as HAProxy or other services to do load balancing. I am sure part of the appeal of NGINX to Instagram is its speed and the fact that it is asynchronous and event-driven rather than threaded like Apache2. When set up properly, that can mean fewer headaches under heavy load.

Load balancing and sessions

What is the better approach for load balancing web servers? My services run on .NET and Mono, so they could be hosted on IIS or Apache2, and they will have to provide SSL connections.
I've read about two main approaches: storing the state on a common server, and using sticky sessions. Are there any others?
I've read 3 different things about sticky sessions:
1) the load balancing device will know which server you started the connection with, and all further connections from that host will be routed to the same server.
2) the load balancing device reads a cookie named JSESSIONID.
3) the load balancing device reads a cookie named ASPSESSIONID.
I'm a little bit confused: what happens exactly? As the connections will be over SSL, the load balancing device has no chance to read the cookies, so what then?
About storing the state on a common server: what solutions do you know of? I've read memcached is a good solution, but are there any others?
Cheers.
When using SSL with a load balancer, it is common to put the SSL certificate on the load balancing server, and not on the back end servers. In this way you only need 1 certificate on 1 server. The load balancer then talks to the back end servers using plain HTTP. This obviously requires that your back end servers are not directly accessible from the internet.
So, if the load balancer is responsible for decrypting the request, it will also be able to inspect the request for a jsessionid.
Sticky sessions work well with Apache as load balancer. You should check out the Apache modules mod_proxy and mod_proxy_balancer.
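A minimal sketch of that setup (assuming mod_proxy, mod_proxy_http and mod_proxy_balancer are loaded; member addresses and route labels are placeholders):

    # Sticky sessions keyed on the JSESSIONID cookie/path parameter.
    <Proxy "balancer://appcluster">
        BalancerMember "http://10.0.2.10:8080" route=web1
        BalancerMember "http://10.0.2.11:8080" route=web2
        ProxySet stickysession=JSESSIONID|jsessionid
    </Proxy>

    ProxyPass        "/" "balancer://appcluster/"
    ProxyPassReverse "/" "balancer://appcluster/"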
Generally SSL load balancing means that the client is talking to the load balancer over HTTPS, and the load balancer is talking to the web server via HTTP.
Some load balancers are smart enough to establish an SSL session with the web server (so it can read cookies) and maintain a separate SSL session with the client.
And, some load balancers can maintain stickiness without using web server cookies. My load balancers are able to send their own cookies to the client (they have a bunch of other stickiness settings as well).
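For example, this is how haproxy can insert its own stickiness cookie (shown only as an illustration of the technique; server names and addresses are placeholders):

    backend app
        balance roundrobin
        # haproxy sets a SERVERID cookie itself; no web server cookie needed.
        cookie SERVERID insert indirect nocache
        server web1 10.0.3.10:80 check cookie web1
        server web2 10.0.3.11:80 check cookie web2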

How to set up SSL in a load balanced environment?

Here is our current infrastructure:
2 web servers behind a shared load balancer
dns is pointing to the load balancer
web app is done in asp.net, with wcf services
My question is how to set up the SSL certificate to support HTTPS connections.
Here are 2 ideas that I have:
SSL certificate terminates at the load balancer. Secure/unsecure communication behind the load balancer is forwarded to 2 different ports.
pro: only need 1 certificate as I scale horizontally
cons: I have to tell secure from unsecure by checking which port the request came in on, which doesn't quite feel right to me. Also, WCF by design will not work when IIS is bound to 2 different ports (according to this).
SSL certificate terminates on each of the servers.
cons: need to add more certificates to scale horizontally
thanks
Definitely terminate SSL at the load balancer! Anything behind it should NOT be visible from outside. Why wouldn't two ports for secure/insecure work just fine?
You don't actually need more certificates at all. Because the externally seen FQDN is the same, you can use the same certificate on each machine.
This means that WCF (if you're using it) will work. WCF with the SSL terminating on the external load balancer is painful if you're signing/encrypting at a message level rather than a transport level.
You don't need two ports, most likely. Just have the SSL virtual server on the load balancer add an HTTP header to the request and check for that. It's what we do with our Zeus ZXTM 5.1.
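The same idea expressed in haproxy terms, since Zeus ZXTM syntax is vendor-specific; this is a sketch with placeholder names, where the app simply checks the X-Forwarded-Proto header to know whether the original request was secure:

    frontend https_in
        mode http
        bind *:443 ssl crt /etc/ssl/site.pem   # placeholder cert path
        http-request set-header X-Forwarded-Proto https
        default_backend web

    frontend http_in
        mode http
        bind *:80
        http-request set-header X-Forwarded-Proto http
        default_backend web

    backend web
        mode http
        server web1 10.0.4.10:80 check
        server web2 10.0.4.11:80 check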
You don't have to get a cert for every site; there are such things as wildcard certs. But a wildcard cert would have to be installed on every server. (That assumes you are using subdomains; if not, you can reuse the same cert across machines.)
But I would probably put the cert on the load balancer, if only for the sake of easy configuration.
