target pools vs backend services vs regional backend service difference? - networking

While exploring Google Cloud Platform's load balancer options, the Advanced menu shows multiple choices which are a bit confusing.
There are multiple backend types:
backend service -> HTTP(S) LB
backend bucket -> HTTP(S) LB
regional backend service -> internal LB
target pools -> TCP LB
Just going through the documentation for target pools and backend services, it looks to me like they have similar parameters to configure, and in the basic menu both are listed as backends.
I understand that target pools are used by TCP forwarding rules, whereas a backend service is used by a URL map (HTTP(S) load balancer).
But are there any other differences between these, or is it just the names?

A backend bucket allows you to use a Google Cloud Storage bucket with HTTP(S) load balancing. It can handle requests for static content. This option is useful for a website serving static content, since it avoids the cost of the resources an instance would need.
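For illustration, a backend bucket can be created and pointed at an existing Cloud Storage bucket with a couple of gcloud commands; the bucket and resource names below are made-up placeholders:
# Create a backend bucket backed by an existing Cloud Storage bucket (names are hypothetical)
gcloud compute backend-buckets create static-assets-backend \
    --gcs-bucket-name=my-static-assets-bucket
# The backend bucket can then be referenced from the HTTP(S) load balancer's URL map, e.g. for /static/* paths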
The Backend Service is a centralized service that manages backends, which in turn manage an indeterminate number of instances that handle user requests.
The Target Pools resource defines a group of instances that should receive incoming traffic from forwarding rules. When a forwarding rule directs traffic to a target pool, Google Compute Engine picks an instance from these target pools based on a hash of the source IP and port and the destination IP and port.
This is why they are both listed as backends: in the end they do the same job, but each is specific to a different kind of load balancer. The backend service works with the HTTP(S) load balancer, and target pools are used by network (TCP/UDP) forwarding rules.
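As a rough sketch of the target pool path (region, health check, and resource names below are placeholders), a target pool is filled with instances and then referenced directly by a regional forwarding rule:
# Network (TCP) load balancing: forwarding rule -> target pool -> instances
gcloud compute target-pools create web-pool \
    --region=us-central1 \
    --http-health-check=basic-check
gcloud compute target-pools add-instances web-pool \
    --instances=web-1,web-2 \
    --instances-zone=us-central1-a
gcloud compute forwarding-rules create web-rule \
    --region=us-central1 \
    --ports=80 \
    --target-pool=web-pool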

"A Network load balancer (unlike HTTP(s) load balancer) is a pass-through load balancer. It does not proxy connections from clients." On same note, TargetPools use forwarding rules, backend services use target proxies. Request is sent to instance in target pool "based on a hash of the source IP and port, destination IP and port, and protocol". Backend service has different mechanism to choose an instance group for e.g URL maps.

Related

GCP Proxy to insecure endpoint

I would like to send a request (HTTPS) to GCP and have GCP route that request to another (HTTP) on-site endpoint.
For example:
Request initiator: https://some.google.domain.com/some-path (contains body, headers, etc.)
GCP receives this request and forwards it to
http://myonsitedomain.com/some-path (contains body, headers, etc.)
Is there a solution for this or do I have to create a cloud function for this?
External HTTP(S) Load Balancing is a proxy-based Layer 7 load balancer that enables you to run and scale your services behind a single external IP address. It distributes HTTP and HTTPS traffic to backends hosted on a variety of Google Cloud platforms (such as Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, and so on), as well as to external backends connected over the internet or via hybrid connectivity. You can use this external load balancer to solve your issue; follow the source link for more information (source: GCP documentation).
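One way to sketch this is with an internet network endpoint group (NEG) that points at the on-site hostname and is attached to a global backend service behind the HTTPS load balancer's URL map. The resource names below are placeholders and exact flags may vary by gcloud version, so treat this as an outline rather than a recipe:
# Hypothetical sketch: an internet NEG pointing at the on-site HTTP endpoint
gcloud compute network-endpoint-groups create onsite-neg \
    --network-endpoint-type=internet-fqdn-port \
    --global
gcloud compute network-endpoint-groups update onsite-neg \
    --add-endpoint="fqdn=myonsitedomain.com,port=80" \
    --global
# Attach the NEG to a backend service referenced by the load balancer's URL map
gcloud compute backend-services create onsite-backend \
    --protocol=HTTP \
    --global
gcloud compute backend-services add-backend onsite-backend \
    --network-endpoint-group=onsite-neg \
    --global-network-endpoint-group \
    --global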

NGINX - HTTPS Load Balancer Configuration

I have created 2 CentOS servers in different zones in the same region and installed NGINX on them.
Created instance groups ig1 and ig2 and added those servers to them.
Created the external load balancer.
I'm able to load the web page using the public static IP, but the result is not as expected.
Is there any round-robin method in the LB config? If yes, how do we achieve that?
I have set the max RPS to 1 on both instance groups and the health check interval to 1 second.
The requirement is that whenever I refresh the load balancer IP, it should load the page from a different instance. But I have to refresh the page a number of times before it loads from a different instance.
I'm not sure what configuration is missing. Can someone help me with this?
Most load balancers use round-robin distribution.
In GCP, the HTTP(S) LB has two methods of determining instance load. Within the backend service resource, the balancingMode property selects between the requests per second (RPS) and CPU utilization modes.
You can override round-robin distribution by configuring session affinity. However, note that session affinity works best if you also set the balancing mode to requests per second (RPS).
Session affinity sends all requests from the same client to the same virtual machine instance as long as the instance stays healthy and has capacity.
================
Now, GCP HTTP(S) LB offers two types of session affinity:
a) Client IP affinity: forwards all requests from the same client IP address to the same instance.
Client IP affinity directs requests from the same client IP address to the same backend instance based on a hash of the client's IP address. Client IP affinity is an option for every GCP load balancer that uses backend services.
But when using client IP affinity, keep the following in mind:
The client IP address as seen by the load balancer might not be the originating client's if it is behind NAT or makes requests through a proxy. Requests made through NAT or a proxy use the IP address of the NAT router or proxy as the client IP address. This can cause incoming traffic to clump unnecessarily onto the same backend instances.
If a client moves from one network to another, its IP address changes, resulting in broken affinity.
b) Generated cookie affinity: sets a client cookie, then sends all requests with that cookie to the same instance.
When generated cookie affinity is set, the load balancer issues a cookie named GCLB on the first request and then directs each subsequent request that has the same cookie to the same instance. Cookie-based affinity allows the load balancer to distinguish different clients using the same IP address so it can spread those clients across the instances more evenly. Cookie-based affinity also allows the load balancer to maintain instance affinity even when the client's IP address changes.
The path of the cookie is always /, so if there are two backend services on the same hostname that enable cookie-based affinity, the two services are balanced by the same cookie.
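As a sketch, both the balancing mode and session affinity are set on the backend service with gcloud; the backend service, instance group, and zone names below are placeholders:
# Use RATE (requests per second) balancing mode for the instance group backend
gcloud compute backend-services add-backend web-backend \
    --instance-group=ig1 \
    --instance-group-zone=us-central1-a \
    --balancing-mode=RATE \
    --max-rate-per-instance=100 \
    --global
# Enable cookie-based session affinity (the GCLB cookie) on the backend service
gcloud compute backend-services update web-backend \
    --session-affinity=GENERATED_COOKIE \
    --global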
===========================
Main source:
Load distribution algorithm
Requests per second

Can AWS Load Balancer be configured to filter out requests?

I have a Django app deployed on AWS Elastic Beanstalk. Django is configured to only serve requests that come for a specific hostname (ALLOWED_HOSTS). If the host information in the request doesn't match, it returns a 500 response code, which is fine.
But I have noticed that I get quite a lot of those, sending requests either via the IP address or via other domain names. So I would like to configure the setup so that the load balancer rejects a request if it doesn't have the proper hostname in the header.
Is this possible to do? I have been trying to go over the settings in the AWS Console, but cannot find any information on how to do this. I could patch the EC2 instances to reject those requests so they don't reach Django at all, but I would like to stop them as early as possible.
Flow now:
Client -> Load Balancer -> EC2 instance -> Nginx -> Django
<-500 error- Django
What I want:
Client -> Load Balancer
<-reject- Load Balancer
Elastic Load Balancer cannot be configured to filter out requests.
If your allowed connections are based on IP address, then you can use VPC ACLs to allow only connections from certain IP addresses. All others will receive failed connections at the ELB level.
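For the IP-based case, a network ACL entry can be added with the AWS CLI along these lines; the ACL ID and CIDR below are placeholders:
# Allow inbound HTTPS only from a trusted CIDR; everything else hits the ACL's default deny rule
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --rule-number 100 \
    --protocol tcp \
    --port-range From=443,To=443 \
    --cidr-block 203.0.113.0/24 \
    --rule-action allow \
    --ingress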
If your allowed connections are not based on IP address you can take a look at CloudFront in combination with Amazon Web Application Firewall (WAF).
WAF can be configured to filter at the web request level by IP address, URL, query string, headers, etc.

Google cloud HTTP load balancer always returns unhealthy instance for meteor app

I am trying to set up an HTTP load balancer for my Meteor app on Google Cloud. I have the application set up correctly, and I know this because I can visit the IP given by the network load balancer.
However, when I try to set up an HTTP load balancer, the health checks always say that the instances are unhealthy (even though I know they are not). I tried including a route in my application that returns a status 200, and pointing the health check towards that route.
Here is exactly what I did, step by step:
Create new instance template/group for the app.
Upload image to google cloud.
Create replication controller and service for the app.
The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs.
Then I try and create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the meteor app. Then I create a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error".
In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing.
I suspect you created your service in step 3 with type: LoadBalancer. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address.
What will work, however, is using type: NodePort, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go!
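For example, assuming the Meteor pods are managed by a replication controller named meteor-app (a hypothetical name), the service and firewall rule could look roughly like this:
# Expose the replication controller as a NodePort service instead of type LoadBalancer
kubectl expose rc meteor-app --name=meteor-svc --type=NodePort --port=80
# Find the node port that was assigned to the service
kubectl describe svc meteor-svc
# Open the node port range so the HTTP load balancer and its health checks can reach the nodes
gcloud compute firewall-rules create allow-meteor-nodeport \
    --allow=tcp:30000-32767 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16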
If you want more concrete steps, a walkthrough of how to use HTTP load balancers with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes.
As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!

Kubernetes Service Deployment

I have recently started exploring Kubernetes and have done practical implementations of pods, services, and replication controllers on Google Cloud. I have some doubts about services and network access.
First, where is the service deployed that works as a load balancer for a group of pods?
Second, does a request to access an application running in a pod via a service load balancer go through the master, or go directly to the minion (worker) nodes?
A service proxy runs on each node on the cluster. From inside the cluster, when you make a request to a service IP, it is intercepted by the service proxy and routed to a pod matching the label selector for the service. If you have specified an external load balancer for your service, the load balancer will pick a node to send the request to, at which point it will be captured by the proxy and directed to an appropriate pod. If you are using public IPs, then your router will send the request to the node with the public IP where it will be captured by the proxy and directed to an appropriate pod.
If you follow that description, you can see that service requests do not go through the master. They bounce through a proxy running on the nodes.
As an aside, there is also a proxy running on the master, which you can use to reach nodes, services, pods, but this proxy isn't in the packet path for services that you create within the cluster.
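A small way to see this in practice, with made-up names:
# Create a service backed by the pods of a replication controller; kube-proxy on each node routes traffic to matching pods
kubectl expose rc my-app --name=my-app-svc --port=80 --type=LoadBalancer
# The endpoints list shows the pod IPs the node-local proxy will route to
kubectl get endpoints my-app-svc
# The service's cluster IP (and external IP, once provisioned) never points at the master
kubectl get svc my-app-svc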
