GCP Proxy to insecure endpoint - http

I would like to send a request (HTTPS) to GCP and have GCP route that request to another (HTTP) on-site endpoint.
For example:
Request initiator: https://some.google.domain.com/some-path (contains body, headers, etc.)
GCP receives this request and forwards it to
http://myonsitedomain.com/some-path (contains body, headers, etc.)
Is there a solution for this, or do I have to create a Cloud Function for it?

External HTTP(S) Load Balancing is a proxy-based Layer 7 load balancer that enables you to run and scale your services behind a single external IP address. It distributes HTTP and HTTPS traffic to backends hosted on a variety of Google Cloud platforms (such as Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, and so on), as well as to external backends connected over the internet or via hybrid connectivity (source: GCP documentation). This external load balancer can solve your problem: it terminates HTTPS from the client and can forward the request over HTTP to an external backend registered through an internet network endpoint group (NEG).
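A minimal sketch of that setup with gcloud, assuming an internet NEG works for your on-site endpoint; every resource and certificate name below is a placeholder, and the FQDN/port come from the question:

```
# Internet NEG pointing at the on-site HTTP endpoint.
gcloud compute network-endpoint-groups create onsite-neg \
    --global \
    --network-endpoint-type=internet-fqdn-port

gcloud compute network-endpoint-groups update onsite-neg \
    --global \
    --add-endpoint="fqdn=myonsitedomain.com,port=80"

# Backend service that talks plain HTTP to the NEG.
gcloud compute backend-services create onsite-backend \
    --global \
    --protocol=HTTP \
    --load-balancing-scheme=EXTERNAL

gcloud compute backend-services add-backend onsite-backend \
    --global \
    --network-endpoint-group=onsite-neg \
    --global-network-endpoint-group

# Terminate TLS at the edge: URL map -> HTTPS proxy -> forwarding rule.
# my-cert is a placeholder for an existing SSL certificate resource.
gcloud compute url-maps create onsite-map --default-service=onsite-backend
gcloud compute target-https-proxies create onsite-proxy \
    --url-map=onsite-map \
    --ssl-certificates=my-cert
gcloud compute forwarding-rules create onsite-rule \
    --global \
    --target-https-proxy=onsite-proxy \
    --ports=443
```

Keep in mind that the hop from Google's edge to http://myonsitedomain.com is plain HTTP over the public internet, so the body and headers travel unencrypted on that leg.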

Related

Fargate errors when frontend service tries to communicate with backend service via Service Discovery

I have a frontend app in Fargate (ECS) in a private subnet, exposed to the internet through an Application Load Balancer. My frontend makes API calls to my backend apps, also in Fargate, in the same VPC.
Users' calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). Because of this, the user's browser shows the error "blocked: mixed content", since half of the communication is made via HTTPS and the other half uses HTTP.
(architecture diagram linked in the original question)
As far as I know and have been able to find, it is not possible to use an SSL/TLS certificate with Service Discovery.
I've done a lot of research and couldn't find anything really useful. I also tried to create an internal load balancer for each backend service, but the communication times out; it only works when I have a VPN connected.
What am I missing here? Do I need an internal load balancer in front of each backend service to attach a certificate between frontend and backend? What is the best approach to solve this?
Users' calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). This way, the user's browser shows the error "blocked: mixed content" since half of the communication is made via HTTPS and the other half uses HTTP.
The user's browser wouldn't know anything about this if the communication was happening between the front-end server and the back-end server. Apparently you have front-end client JavaScript code running in the user's web browser trying to access the backend server directly.
If you want to access the backend server directly from the user's web browser, then service discovery won't work, because service discovery is only for traffic that is inside the VPC. And of course by trying to use service discovery in this way you are also causing a security issue which the browser is correctly blocking you from doing. You will need to add another load balancer, or another listener on your current load balancer, that exposes the backend API to the Internet.
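As a rough sketch of the extra-listener option with the AWS CLI (every ARN, ID, name, port, and path below is a placeholder, not something from the question):

```
# Target group for the backend Fargate tasks
# (target-type ip is required for Fargate).
aws elbv2 create-target-group \
    --name backend-tg \
    --protocol HTTP --port 8080 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type ip

# Rule on the existing HTTPS listener: send /api/* to the backend
# target group. The backend ECS service must then be (re)created with
# this target group so its tasks register automatically.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def \
    --priority 10 \
    --conditions Field=path-pattern,Values='/api/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/backend-tg/123abc
```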
Alternatively you could use a reverse proxy like Nginx on your front-end server to send backend API requests to the backend service, and then have your client-side JavaScript code send all requests to the front-end server.
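And a minimal sketch of the reverse-proxy option, assuming the front-end task runs Nginx and the backend is registered in Cloud Map under a hypothetical name like backend.local:

```
# Write a proxy config into the front-end container's Nginx
# (backend.local:8080 is a placeholder Cloud Map DNS name and port).
cat > /etc/nginx/conf.d/app.conf <<'EOF'
server {
    listen 80;

    # Serve the front-end app itself.
    location / {
        root /usr/share/nginx/html;
    }

    # Browser JS calls https://frontend/api/...; Nginx forwards it to
    # the backend over plain HTTP inside the VPC, so the browser never
    # sees a mixed-content request.
    location /api/ {
        proxy_pass http://backend.local:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -s reload
```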

Reusing custom domain between GCP and Firebase

I am planning to host 2 webapps using Firebase Hosting: example.com and dev.example.com. For corresponding APIs, I have 2 projects on GCP (using managed instance groups and a load balancer) with custom domains: api.example.com and dev-api.example.com.
Is it possible to have a setup where subdomains of the custom domain example.com can be split/used across Firebase and GCP load balancer? I thought this is a popular setup but can't find any documentation/howto around this. I am using Google Domains as the domain provider for example.com and using Google Managed SSL certificates as well. All the projects belong to one account.
Assuming that you are using a Classic HTTPS Load Balancer with your GCP project, you can link your Firebase Hosting to your LB as an additional backend through an Internet Network Endpoint Group, so that all of them can be reached through the same Load Balancer IP.
To do this,
Edit the current Load Balancer and go to Backend configuration
Create a Backend Service; under Backend type, select Internet Network Endpoint Group
Under Backends > New Backend, create an Internet Network Endpoint Group. This will take you to Network endpoint groups under Compute Engine
Under New network endpoint > Add through, you may select IP and port or Fully qualified domain name and port. Just supply the correct FQDN or IP of your Firebase Hosting and the port it listens on, then click Create
Finish creating the backend service using the Internet network endpoint group that you created as the Backend Type
Under Host and Path rules, click +Add Host and Path Rule and fill out the Host field with the domain of your Firebase Hosting. For Path, just put /*. Then select the Internet network endpoint group that you created as the Backend.
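The same console steps can be sketched with gcloud; every name below is a placeholder, and example.web.app stands in for whatever default domain your Firebase Hosting site actually serves:

```
# Internet NEG pointing at the Firebase Hosting site (HTTPS on 443).
gcloud compute network-endpoint-groups create firebase-neg \
    --global \
    --network-endpoint-type=internet-fqdn-port

gcloud compute network-endpoint-groups update firebase-neg \
    --global \
    --add-endpoint="fqdn=example.web.app,port=443"

# Backend service of type internet NEG, speaking HTTPS to Firebase.
gcloud compute backend-services create firebase-backend \
    --global \
    --protocol=HTTPS \
    --load-balancing-scheme=EXTERNAL

gcloud compute backend-services add-backend firebase-backend \
    --global \
    --network-endpoint-group=firebase-neg \
    --global-network-endpoint-group

# Host rule on the existing URL map: Firebase domains -> new backend.
gcloud compute url-maps add-path-matcher my-lb-url-map \
    --path-matcher-name=firebase-matcher \
    --default-service=firebase-backend \
    --new-hosts="example.com,dev.example.com"
```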
I am also assuming that your Google-managed certificate is deployed on the Load Balancer. If this is the case, you may provision another Google-managed SSL certificate that includes all 4 domains (see the gcloud example after this list):
example.com
dev.example.com
api.example.com
dev-api.example.com
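A one-line sketch of that certificate (the name lb-cert is a placeholder):

```
gcloud compute ssl-certificates create lb-cert \
    --global \
    --domains=example.com,dev.example.com,api.example.com,dev-api.example.com
```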
Once done, you may create A records with the Load Balancer's IP address for each domain. This is to ensure that requests are forwarded to the correct backend, as opposed to just creating CNAMEs, which would always forward the request to the root domain (example.com) rather than to the intended backends. The LB will then forward requests based on the domain being accessed.
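If the zone happens to be hosted in Cloud DNS (the question uses Google Domains, where the same A records are created in its console instead), the records could be sketched like this; my-zone and 203.0.113.10 are placeholders for your zone and the LB IP:

```
for host in example.com. dev.example.com. api.example.com. dev-api.example.com.; do
  gcloud dns record-sets create "$host" \
      --zone=my-zone --type=A --ttl=300 --rrdatas=203.0.113.10
done
```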

Why use an internal load balancer if we already have an external load balancer?

In my project, we already have an external load balancer. However, several teams within the organisation use our internal load balancer. I want to know why we need an internal load balancer if we already have a public-facing external load balancer. Please elaborate.
I'm answering your comment question here because it's too long for a comment.
Some things are internal, others are external. For example:
You have an external TCP/UDP load balancer.
Your external load balancer accepts connections on port 443 and redirects them to your backend, which has NGINX installed on it.
Your backend needs a MongoDB database. You install the database on a Compute Engine VM and choose to abstract the VM IP behind your load balancer.
You define a new backend on your external load balancer on port 27017.
RESULT: Because the load balancer is external, your MongoDB is publicly exposed on port 27017.
If you use an internal load balancer instead, that's not the case, and you increase security. Only the web-facing port (443) is open; the rest is not accessible from the internet, only from inside your project.
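A rough gcloud sketch of the safer variant, with MongoDB behind an internal TCP load balancer (region, network, and all names are placeholders; the instance group running mongod would still need to be attached with add-backend):

```
# TCP health check on the MongoDB port.
gcloud compute health-checks create tcp mongo-hc --port=27017

# Regional internal backend service for MongoDB.
gcloud compute backend-services create mongo-backend \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --health-checks=mongo-hc

# Internal forwarding rule: port 27017 is reachable only inside the VPC.
gcloud compute forwarding-rules create mongo-ilb \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL \
    --network=default \
    --subnet=default \
    --ports=27017 \
    --backend-service=mongo-backend
```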
You should check the documentation and then decide whether your use case requires an internal load balancer or not. Below you can find links to the Google Cloud documentation and an example.
First, have a look at the documentation Choosing a load balancer:
To decide which load balancer best suits your implementation of Google Cloud, consider the following aspects of Cloud Load Balancing:
Global versus regional load balancing
External versus internal load balancing
Traffic type
After that, have a look at the documentation Cloud Load Balancing overview, section Types of Cloud Load Balancing:
External load balancers distribute traffic coming from the internet to your Google Cloud Virtual Private Cloud (VPC) network.
Global load balancing requires that you use the Premium Tier of Network Service Tiers. For regional load balancing, you can use Standard Tier.
Internal load balancers distribute traffic to instances inside of Google Cloud.
and
The following diagram illustrates a common use case: how to use external and internal load balancing together. In the illustration, traffic from users in San Francisco, Iowa, and Singapore is directed to an external load balancer, which distributes that traffic to different regions in a Google Cloud network. An internal load balancer then distributes traffic between the us-central-1a and us-central-1b zones.
You can find more information in the documentation.
UPDATE: Have a look at the possible use cases for the internal HTTP(S) load balancer and for the internal TCP/UDP load balancer and check whether they're suitable for you and whether using them could improve your service.
It's not required to use an internal load balancer if you don't need it.

target pools vs backend services vs regional backend service difference?

While exploring Google Cloud Platform's load balancer options, the Advanced menu shows multiple options which are a bit confusing.
There are multiple backends:
backend service -> HTTP(S) LB
backend bucket -> HTTP(S) LB
regional backend service -> internal LB
target pools -> TCP LB
Just going through the documentation for target pools and backend services, it looks to me like they have similar parameters to configure, and in the basic menu both are listed as backends.
I understand that target pools are used by TCP forwarding rules, whereas backend services are used by URL maps (HTTP(S) load balancer).
But are there any other differences between these, or is it just the names?
A Backend Bucket allows you to use a Google Cloud Storage bucket with HTTP(S) load balancing. It can handle requests for static content. This option is useful for a website with static content, and it avoids the cost of the resources an instance would need.
The Backend Service is a centralized service that manages backends, which in turn manage an indeterminate number of instances that handle user requests.
The Target Pools resource defines a group of instances that should receive incoming traffic from forwarding rules. When a forwarding rule directs traffic to a target pool, Google Compute Engine picks an instance from these target pools based on a hash of the source IP and port and the destination IP and port.
This is why they are both listed as backends: in the end they do the same thing, but each is specific to a different kind of load balancer. The backend service works with the HTTP(S) load balancer, and target pools are used by forwarding rules.
"A Network load balancer (unlike the HTTP(S) load balancer) is a pass-through load balancer. It does not proxy connections from clients." On the same note, target pools use forwarding rules, while backend services use target proxies. A request is sent to an instance in a target pool "based on a hash of the source IP and port, destination IP and port, and protocol". A backend service has a different mechanism for choosing an instance group, e.g. URL maps.

Google Cloud HTTP load balancer always reports instances as unhealthy for Meteor app

I am trying to set up an HTTP load balancer for my Meteor app on Google Cloud. I have the application set up correctly, and I know this because I can visit the IP given by the network load balancer.
However, when I try to set up an HTTP load balancer, the health checks always say that the instances are unhealthy (even though I know they are not). I tried including a route in my application that returns a status 200 and pointing the health check at that route.
Here is exactly what I did, step by step:
Create new instance template/group for the app.
Upload image to google cloud.
Create replication controller and service for the app.
The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs.
Then I try and create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the meteor app. Then I create a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error".
In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing.
I suspect you created your service in step 3 with type: LoadBalancer. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address.
What will work, however, is using type: NodePort, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go!
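A short sketch of that change, assuming the service was created from a replication controller as in the question; the names are placeholders, and the source range is GCE's documented load-balancer range at the time:

```
# Re-expose the app as a NodePort service instead of type LoadBalancer.
kubectl expose rc meteor-app --name=meteor-svc --type=NodePort --port=80

# Find the node port Kubernetes assigned.
kubectl describe svc meteor-svc | grep NodePort

# Open the firewall so the HTTP load balancer's traffic and health
# checks can reach the node port range on every node.
gcloud compute firewall-rules create allow-lb-nodeports \
    --allow=tcp:30000-32767 \
    --source-ranges=130.211.0.0/22
```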
If you want more concrete steps, a walkthrough of how to use HTTP load balancers with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes.
As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!
