I've started using kube-lego to manage load balancing through an nginx ingress controller instead of the GCP load balancer controller.
I am currently being charged for "Network Load Balancing: Forwarding Rule Minimum Service Charge in EMEA", at about $21 per month.
Isn't the load balancing supposed to be done by nginx?
I have a TCP load balancer in my GCP network resources.
Isn't it possible to send the traffic to a node port and do the load balancing on the nginx controller instead?
I thought the advantage of having an nginx load balancer was that it would avoid creating a load balancer on GCP and thus avoid paying for expensive network resources.
What is the purpose of the nginx controller then, besides maybe automatic certificate renewal with Let's Encrypt?
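This is roughly what I had in mind, as a minimal sketch: expose the nginx ingress controller with a NodePort service instead of a LoadBalancer one (the names, namespace and ports below are placeholders, not my actual configuration):

```yaml
# Hypothetical sketch: expose the nginx ingress controller on node ports
# instead of letting GCP create a forwarding rule / TCP load balancer.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # placeholder name
  namespace: kube-system
spec:
  type: NodePort                   # type: LoadBalancer is what triggers the GCP forwarding rule
  selector:
    app: nginx-ingress-controller
  ports:
    - name: http
      port: 80
      nodePort: 30080              # traffic would be sent to <node-ip>:30080
    - name: https
      port: 443
      nodePort: 30443
```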
I followed this guide on how to set up an nginx Ingress with cert-manager on Kubernetes, with DigitalOcean as the cloud provider: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
The tutorial worked fine and I was able to set everything up as written. However (as the tutorial itself states), you end up with three pods of which only one is "Running 1/1", while the other two are "Down". Judging by the comments section, this seems to be a common concern: if all the traffic gets routed to only one pod, it is not really scalable. Or am I missing something? Quoting from their tutorial:
Note: By default the Nginx Ingress LoadBalancer Service has
service.spec.externalTrafficPolicy set to the value Local, which
routes all load balancer traffic to nodes running Nginx Ingress Pods.
The other nodes will deliberately fail load balancer health checks so
that Ingress traffic does not get routed to them.
My main question is: is there a best practice I am missing for hosting my website on Kubernetes? It seems I have to choose between scalability (having all the pods healthy and running) and getting the client visitor's IP.
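For reference, this is the field that note is talking about; a minimal sketch of the ingress-nginx Service, with placeholder names rather than the tutorial's exact ones:

```yaml
# The setting the tutorial's note refers to (placeholder names).
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserves the client IP, but only nodes running
                                   # an ingress pod pass the load balancer health check
  # externalTrafficPolicy: Cluster would keep every node "healthy" and spread
  # the traffic, but the client source IP is lost to SNAT
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
```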
And for whoever ends up in my situation, this is the reply I got from DigitalOcean support:
Unfortunately with that Kubernetes setup it would show those other
nodes as down without additional traffic configuration. It is possible
to skip the nginx ingress part and just use a DigitalOcean load
balancer but this again does require a good deal of setup and can be
more difficult than easy.
Their suggestion for having a website that is both scalable and has client IP analytics was to set up a droplet with nginx and put a load balancer in front of it. More specifically:
As for using a droplet this would be a normal website configuration
with Nginx as your webserver configured to serve content to your app.
You would have full access to your application and the Nginx logs on
the droplet itself. Putting a load balancer in front of this would
require additional configuration as load balancers do not pass the
X-Forwarded-For header, so the IP addresses of clients would not show up in
the logs by default. You would need to configure proxy protocol on
the load balancer and in your nginx configuration to be able to obtain
those IPs.
https://www.digitalocean.com/blog/load-balancers-now-support-proxy-protocol/
This is also a bit more complex unfortunately.
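For what it's worth, if you stay on Kubernetes instead of moving to a droplet, my understanding from the DigitalOcean and ingress-nginx documentation is that the equivalent is to enable proxy protocol on both the load balancer and nginx. A rough sketch (verify the annotation and ConfigMap key names against the current docs):

```yaml
# Rough sketch only: ask the DigitalOcean load balancer to speak proxy protocol...
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # all nodes stay healthy; the client IP now
                                   # arrives via proxy protocol instead
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
---
# ...and tell nginx to expect the proxy protocol header from the load balancer.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # placeholder; use your controller's ConfigMap
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```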
Hope this saves someone some time.
In my project we already have an external load balancer. However, several teams within the organisation use our internal load balancer. I want to know: why do we need an internal load balancer if we already have a public-facing external load balancer? Please elaborate.
I'm answering your question from the comments here because it's too long for a comment.
Some things are internal, others are external. For example:
You have an external TCP/UDP load balancer
Your external load balancer accepts connections on port 443 and forwards them to your backend, which has NGINX installed on it
Your backend needs a MongoDB database. You install the database on a Compute Engine instance and you choose to abstract the VM IP behind your load balancer
You define a new backend on your external load balancer on port 27017
RESULT: Because the load balancer is external, your MongoDB is publicly exposed on port 27017.
If you use an internal load balancer, that's not the case, and you increase security. Only the web-facing port (443) is open; the rest is not accessible from the internet, only from within your project.
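As an illustration on GKE (the MongoDB service below is a hypothetical example; the annotation is the one documented for GKE, with older clusters using cloud.google.com/load-balancer-type: "Internal" instead), the same separation can be expressed directly on a Service:

```yaml
# Illustration only: an internal TCP load balancer for the database tier,
# reachable from within the VPC but not from the internet.
apiVersion: v1
kind: Service
metadata:
  name: mongodb-internal
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
```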
You should check the documentation and then decide whether your use case requires an internal load balancer or not. Below you can find links to the Google Cloud documentation and an example.
First, have a look at the documentation Choosing a load balancer:
To decide which load balancer best suits your implementation of Google
Cloud, consider the following aspects of Cloud Load Balancing:
Global versus regional load balancing
External versus internal load balancing
Traffic type
After that, have a look at the documentation Cloud Load Balancing overview section Types of Cloud Load Balancing:
External load balancers distribute traffic coming from the Internet to your Google Cloud Virtual Private Cloud (VPC) network.
Global load balancing requires that you use the Premium Tier of
Network Service Tiers. For regional load balancing, you can use
Standard Tier.
Internal load balancers distribute traffic to instances inside of Google Cloud.
and
The following diagram illustrates a common use case: how to use
external and internal load balancing together. In the illustration,
traffic from users in San Francisco, Iowa, and Singapore is directed
to an external load balancer, which distributes that traffic to
different regions in a Google Cloud network. An internal load balancer
then distributes traffic between the us-central1-a and us-central1-b
zones.
More information can be found in the documentation.
UPDATE: Have a look at the possible use cases for the internal HTTP(S) load balancer and for the internal TCP/UDP load balancer, and check whether they're suitable for you and whether using them could improve your service.
You are not required to use an internal load balancer if you don't need one.
I am trying to set up an HTTP load balancer for my Meteor app on Google Cloud. I know the application is set up correctly because I can visit the IP given by the network load balancer.
However, when I try to set up an HTTP load balancer, the health checks always report the instances as unhealthy (even though I know they are not). I even tried adding a route to my application that returns status 200 and pointing the health check at that route.
Here is exactly what I did, step by step:
Create a new instance template/group for the app.
Upload the image to Google Cloud.
Create a replication controller and service for the app.
The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs.
Then I try to create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the Meteor app, and then a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error".
In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing.
I suspect you created your service in step 3 with type: LoadBalancer. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address.
What will work, however, is using type: NodePort, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go!
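To make that concrete, here's a minimal sketch of such a service, with hypothetical names and ports that you'd adjust to your app:

```yaml
# Hypothetical example: exposing the Meteor pods via a NodePort service so the
# HTTP load balancer's health checks can reach every node on its external IP.
apiVersion: v1
kind: Service
metadata:
  name: meteor-app
spec:
  type: NodePort          # Kubernetes picks a port in the 30000-32767 range;
                          # open that port in your firewall for the load balancer
  selector:
    app: meteor-app
  ports:
    - port: 80
      targetPort: 3000    # Meteor commonly listens on 3000; adjust to your setup
```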
If you want more concrete steps, a walkthrough of how to use HTTP load balancers with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes.
As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!
I'm currently reading about the design of Instagram, and I found this description of their load-balancing system:
Every request to Instagram servers goes through load balancing machines; we used to run 2 nginx machines and DNS Round-Robin between them. The downside of this approach is the time it takes for DNS to update in case one of the machines needs to get decommissioned. Recently, we moved to using Amazon’s Elastic Load Balancer, with 3 NGINX instances behind it that can be swapped in and out (and are automatically taken out of rotation if they fail a health check). We also terminate our SSL at the ELB level, which lessens the CPU load on nginx. We use Amazon’s Route53 for DNS, which they’ve recently added a pretty good GUI tool for in the AWS console.
The question is: am I right that, for now, they have a DNS server which uses round-robin to decide which nginx server to send each request to, and that each of these nginx servers in turn forwards the request to a cluster?
And the second question is: what is the difference between nginx and a load balancer? Why can't we use nginx instead?
For your first question, I believe the answer is that Instagram now uses Route53 to map DNS to an Elastic Load Balancer, which does two things: it routes traffic fairly evenly to three NGINX load balancers, and it terminates SSL for all traffic. The NGINX servers then act as load balancers to content/application servers further down the stack. Using an ELB instead of round-robin DNS means they can add/remove/update instances attached to the ELB without ever having to worry about DNS updates or TTLs.
As for the second question, you can use NGINX just as easily as HAProxy or other services to do load balancing. I am sure that part of the appeal of NGINX for Instagram is its incredible speed and the fact that it's asynchronous and "event-driven" instead of threaded like Apache2. When set up properly, that can mean fewer headaches under heavy load.
I am a beginner trying to send HTTP requests through an Elastic Load Balancer. Could anybody briefly explain the steps I need?
set up Elastic Load Balancer A
get the DNS name of Elastic Load Balancer A
register EC2 instances with Elastic Load Balancer A
send traffic to the DNS name of Elastic Load Balancer A
But I have no idea what kind of configuration or setup I need on the EC2 instances that are to be attached to this Elastic Load Balancer A. Do I need to set up a listener? If so, how do I do that?
I just want to send HTTP requests via the IPs of the EC2 instances and Elastic Load Balancer A so that I get different IPs assigned to each request.
Thanks a lot!
By default, Amazon EC2 instances behind an Elastic Load Balancer serve traffic on port 80 (HTTP). When creating the load balancer, you can configure which ports should receive traffic (80, 443, 1024+).
Think of it this way: the load balancer simply sits "in front" of the EC2 instances. If a user were to go directly to your EC2 instance (e.g. enter its IP address in a web browser), they should see a website. Going to the load balancer does the same thing, but it distributes the requests amongst multiple EC2 instances.
So, in most cases, it's just a matter of running a web server or app on port 80 on your EC2 instance.
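If it helps to see it spelled out, a listener is just a mapping from a load balancer port to an instance port. Here's an illustrative sketch in CloudFormation form for a classic ELB (every name, zone, and instance ID below is a placeholder):

```yaml
# Illustrative sketch of a classic Elastic Load Balancer: the listener forwards
# port 80 on the load balancer to port 80 on the registered instances.
Resources:
  MyLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones:
        - us-east-1a            # placeholder availability zone
      Instances:
        - i-0123456789abcdef0   # placeholder EC2 instance ID
      Listeners:
        - LoadBalancerPort: "80"
          InstancePort: "80"
          Protocol: HTTP
      HealthCheck:
        Target: HTTP:80/        # the health check hits the web server on port 80
        Interval: "30"
        Timeout: "5"
        HealthyThreshold: "3"
        UnhealthyThreshold: "5"
```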