Fargate errors when frontend service tries to communicate with backend service via Service Discovery - http

I have a frontend app in Fargate (ECS) in a private subnet, exposed to the internet through an Application Load Balancer. My frontend makes API calls to my backend apps, which also run in Fargate in the same VPC.
User calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). Because of this, the user's browser shows the error "blocked: mixed content", since half of the communication is made via HTTPS and the other half via HTTP.
As far as I know and have been able to find, it is not possible to use an SSL/TLS certificate with Service Discovery.
I've done a lot of research and couldn't find anything really useful. I also tried creating an internal load balancer for each backend service, but the communication times out; it only works when I have a VPN connected.
What am I missing here? Do I need an internal load balancer in front of each backend service to attach a certificate between frontend and backend? What is the best approach to solve this?

User calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). Because of this, the user's browser shows the error "blocked: mixed content", since half of the communication is made via HTTPS and the other half via HTTP.
The user's browser wouldn't know anything about this if the communication were happening between the front-end server and the back-end server. Apparently you have front-end JavaScript code running in the user's web browser trying to access the backend service directly.
If you want to access the backend service directly from the user's web browser, then service discovery won't work, because service discovery only applies to traffic inside the VPC. And by trying to use service discovery this way you are also creating a security issue, which the browser is correctly blocking. You will need to add another load balancer, or another listener on your current load balancer, that exposes the backend API to the Internet.
Alternatively you could use a reverse proxy like Nginx on your front-end server to send backend API requests to the backend service, and then have your client-side JavaScript code send all requests to the front-end server.
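A minimal sketch of that reverse-proxy approach, assuming TLS terminates at the ALB, the frontend task serves plain HTTP behind it, and backend.internal stands in for the Cloud Map name of a backend service (all names here are placeholders):

```nginx
server {
    listen 80;  # TLS is terminated at the ALB in front of this task

    # Re-resolve the Cloud Map name instead of caching it forever;
    # 169.254.169.253 is the VPC-provided DNS resolver.
    resolver 169.254.169.253 valid=10s;

    # Serve the frontend bundle
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # The browser calls https://<frontend>/api/...; nginx forwards the
    # request (original /api/... URI preserved) to the backend over HTTP
    # inside the VPC, so the browser never sees a mixed-content URL.
    location /api/ {
        set $backend http://backend.internal:8080;
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The key point is that the browser only ever talks to a single HTTPS origin; the plain-HTTP hop stays inside the VPC.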

Related

GCP Proxy to insecure endpoint

I would like to send a request (HTTPS) to GCP and have GCP route that request to another (HTTP) on-site endpoint.
For example:
request initiator: https://some.google.domain.com/some-path (contains body, headers, etc.)
GCP receives this request and forwards it to
http://myonsitedomain.com/some-path (contains body, headers, etc.)
Is there a solution for this or do I have to create a cloud function for this?
External HTTP(S) Load Balancing is a proxy-based Layer 7 load balancer that enables you to run and scale your services behind a single external IP address. It distributes HTTP and HTTPS traffic to backends hosted on a variety of Google Cloud platforms (such as Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, and so on), as well as to external backends connected over the internet or via hybrid connectivity. Using this external load balancer you can resolve your issue; follow the source link for more information (source: GCP documentation).
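As a hedged sketch of the moving parts (all names are placeholders; the URL map, target HTTPS proxy, forwarding rule, and certificate are omitted here and covered in the linked docs), an internet NEG is what lets the external load balancer forward to an endpoint outside Google Cloud:

```bash
# Internet NEG pointing at the on-site HTTP endpoint
gcloud compute network-endpoint-groups create onsite-neg \
    --global \
    --network-endpoint-type=internet-fqdn-port
gcloud compute network-endpoint-groups update onsite-neg \
    --global \
    --add-endpoint="fqdn=myonsitedomain.com,port=80"

# Backend service that talks plain HTTP to that NEG
gcloud compute backend-services create onsite-backend \
    --global \
    --protocol=HTTP
gcloud compute backend-services add-backend onsite-backend \
    --global \
    --network-endpoint-group=onsite-neg \
    --global-network-endpoint-group
```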

Authenticating NGINX forwarded requests to upstream servers using Cognito

I have the following setup:
{Client App} --[HTTP]--> {NGINX} --[HTTPS]--> {API Gateway}
I need such a setup since API Gateway does not support HTTP (it only supports HTTPS) and my client apps are old/obsolete and do not speak HTTPS. So far it works perfectly fine. Now I need to make sure that not just any request is accepted by my API Gateways. I know how to protect API Gateways using Cognito (if I leave the NGINX out of the equation). The process plays out as follows:
1.
{Consumer Server} --[Credentials]--> {Cognito}
<--[JWT Token]--
2.
{Consumer Server} --[JWT Token]--> {API Gateway}
...
To make sure there's no misunderstanding: the credential sent in step 1 is NOT an email. This is not for authenticating human users, but rather for machine-to-machine communication. And in my case, the client machine is an NGINX instance.
Having set the scene, this is what I'm trying to achieve. I want my client app to communicate with my API Gateway over HTTP. For that, I have to introduce an NGINX in between. But now I want to make sure that only my designated NGINX instances can do so, so I need to authenticate the requests coming in from NGINX. That means NGINX needs to follow the second diagram and ask Cognito for JWT tokens. And this is not a one-time process: tokens expire, and once they do, NGINX has to refresh them by sending another request to Cognito.
Does anyone know if there's a ready-made solution for this? Or an easy way to implement it?
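For reference, the token request in step 1 is a standard OAuth2 client-credentials call to the Cognito token endpoint. A minimal sketch (domain, region, credentials, and scope are placeholders) of what a helper script or sidecar could run whenever the token nears expiry:

```bash
# Exchange client credentials for a JWT access token.
# Response: {"access_token":"...","expires_in":3600,"token_type":"Bearer"}
curl -s -X POST \
  "https://your-domain.auth.us-east-1.amazoncognito.com/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -u "$CLIENT_ID:$CLIENT_SECRET" \
  -d "grant_type=client_credentials&scope=your-api/read"
```

NGINX itself would then attach the current token to forwarded requests (e.g. via a proxy_set_header Authorization directive sourced from a file the helper rewrites), since stock NGINX has no built-in OAuth2 client.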

Mirror requests from one Cloud Run service to another Cloud Run service

I'm currently working on a project where we are using Google Cloud. Within the cloud we are using Cloud Run to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results, and also to evaluate the quality of changes to the service, I would like to proceed as follows:
- In addition to the existing service, I deploy another instance of the service which contains the changes.
- I mirror all incoming requests and let both services process them; only the responses from the initial service are returned, but the responses from both services are stored.
This allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.
For the implementation I have set up an NGINX instance which mirrors the requests. This is also deployed as a Cloud Run service. It accepts all requests and takes care of the authentication. The original service and the mirrored version are configured so that they can only be accessed internally and should therefore be reached via a VPC network.
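(For illustration, a minimal sketch of that kind of mirror configuration, with placeholder Cloud Run hostnames; authentication headers and upstream TLS/SNI details are omitted:)

```nginx
# Every request is proxied to the original service; a copy is sent to
# the candidate service, whose response nginx silently discards.
location / {
    mirror /shadow;
    proxy_pass https://original-service.a.run.app;
}

location = /shadow {
    internal;
    proxy_pass https://candidate-service.a.run.app$request_uri;
}
```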
I have tried all possible combinations for the configuration of these parts, but I always get 403 or 502 errors.
I have tried pointing the NGINX service at both the HTTP and HTTPS routes of the service, and I have tried all the VPC connector settings. When I set the service's ingress to ALL, it works perfectly if I configure the service with HTTPS and port 443 in NGINX. As soon as I set the ingress to Internal, I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? Would be very grateful for any help.
If your Cloud Run services are internally accessible (ingress control set to internal only), you need to perform your requests from within your VPC.
Therefore, as you correctly did, you plugged a serverless VPC connector into your NGINX service.
The setup is correct. Now, why does it only work when you route ALL the egress traffic through your VPC connector, and not only the private traffic?
In fact, Cloud Run is a public resource with a public URL, even when you set the ingress to internal. That parameter says "the traffic must come from the VPC"; it does not say "I'm plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic through your VPC connector, even the public traffic.
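In gcloud terms, the fix is the egress setting on the NGINX service's connector; roughly (service, region, and connector names are placeholders):

```bash
# Send ALL egress (public IPs included) through the VPC connector, so
# calls to the internal-ingress services arrive "from the VPC".
gcloud run services update nginx-mirror \
    --region=europe-west1 \
    --vpc-connector=my-connector \
    --vpc-egress=all-traffic
```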

Putting "staging" instance behind VPN - Google Cloud + Firebase

I have a web app with a React frontend on Firebase that connects to a Django backend running on Google App Engine.
I have this setup duplicated for a "staging" environment. The problem is that anyone can access this staging environment.
I'd like to set this up so that you need to be on our VPN to access it.
Can someone point me in the right direction to setup this VPN and move the staging environment behind it?
If you are using Firebase Hosting, I believe there's no other way to restrict access, since it does not have a firewall feature. You should use an authentication method to limit who can access your web app.
In App Engine, you can restrict access to your web server/application using the following:
- App Engine Firewall - see #JohnHanley's answer; control which IP addresses can connect to the app.
- Identity-Aware Proxy - without using a VPN, you can limit who can access your App Engine app based on their user account. IAP is free, but when used with Compute Engine, the required load balancing and firewall configuration may incur additional costs.
- App Engine with Load Balancer - to secure your App Engine app (Standard & Flexible) so that it receives only internal and Cloud Load Balancing traffic.
I'd like to set this up so that you need to be on our VPN to access it.
You cannot limit access to just your VPN. App Engine is in Google's network and you cannot limit access based upon a VPN.
You can use App Engine firewall rules to control which IP addresses can connect to the service. Firebase, however, does not have firewall rules.
If the public side of your Internet router has a static IP address, then this is simple to setup.
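For example, assuming a hypothetical static egress address of 203.0.113.10 on the router, the firewall setup would look roughly like this:

```bash
# Allow the VPN/router's public IP, then flip the default rule to deny.
gcloud app firewall-rules create 100 \
    --action=allow \
    --source-range="203.0.113.10/32" \
    --description="Office VPN egress IP"

# The default rule always has the lowest priority, 2147483647.
gcloud app firewall-rules update 2147483647 --action=deny
```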
I recommend using authorization to limit who can access your services.

K8s Inter-service communication via FQDN

I have two services deployed to a single cluster in K8s. One is IS4, the other is a client application.
According to leastprivilege, the internal service must also use the FQDN.
The issue I'm having when developing locally (via skaffold & Docker) is that the internal service resolves the FQDN to 127.0.0.1 (the cluster). Is there any way to ensure that it resolves correctly and routes to the correct service?
Another issue is that internally the services communicate over HTTP, while publicly they expose HTTPS. With a URL rewrite I'm able to resolve the DNS part, but I'm unable to change the HTTPS calls to HTTP, because NGINX isn't involved; the call goes directly to the service. If there were some inter-service ruleset I could hook into (similar to ingress), I believe I could use it to terminate TLS and things would work.
Edit for clarification:
This only happens locally; I'm not deploying to AKS. When deployed to AKS this isn't an issue.
HTTPS is exposed via the NGINX ingress, which terminates TLS.
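For the local DNS half of this, one approach (a sketch, assuming the local cluster runs CoreDNS, with auth.example.com and is4.default as hypothetical names) is a CoreDNS rewrite so the public FQDN resolves to the in-cluster service instead of 127.0.0.1; the HTTPS-vs-HTTP mismatch still has to be handled separately:

```yaml
# Excerpt of the coredns ConfigMap in kube-system; the rewrite line maps
# the public FQDN onto the in-cluster service name.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name exact auth.example.com is4.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
```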
