How to restrict Kubernetes Engine HTTP access to only Firebase apps

I currently have services running on the Google App Engine platform which use the X-Appengine-Inbound-Appid header to limit HTTP requests to our apps only.
I recently found out that some of my services require a static IP, and I would therefore like to move some of them to Kubernetes Engine.
Is there a way for Kubernetes Engine to secure requests using a similar header approach? The requests should only be allowed from our own Firebase apps.
Ideally I would keep things as simple as possible for the clients using the services.
Possibly I could generate a specific API key for each user, which could be blacklisted on abuse, but that already adds quite a bit of complexity.

You can use the NGINX ingress controller as the entry point for your cluster, and add whatever rules you need to NGINX.
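If you would rather keep a header-based check like the X-Appengine-Inbound-Appid approach instead of (or in addition to) NGINX rules, you can validate a shared-secret header in the service itself. This is only a sketch under assumptions: a Flask service and a hypothetical X-App-Token header that your Firebase apps attach to every request.

```python
# Minimal sketch (assumptions: Flask service, hypothetical X-App-Token header,
# shared secret injected via an environment variable).
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
EXPECTED_TOKEN = os.environ["APP_SHARED_TOKEN"]

@app.before_request
def require_app_token():
    token = request.headers.get("X-App-Token", "")
    # Constant-time comparison to avoid leaking the secret via timing.
    if not hmac.compare_digest(token, EXPECTED_TOKEN):
        abort(403)

@app.route("/ping")
def ping():
    return "ok"
```

Rotating the secret, or moving the same check into an NGINX ingress rule, then becomes an operational detail rather than a client-facing change.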

Related

Fargate errors when frontend service tries to communicate with backend service via Service Discovery

I have a frontend app in Fargate (ECS) in a private subnet, exposed to the internet through an Application Load Balancer. My frontend makes API calls to my backend apps, also in Fargate, in the same VPC.
User calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). Because of this, the user's browser shows the "blocked: mixed content" error, since half of the communication is made via HTTPS and the other half uses HTTP.
As far as I know and have been able to find, it is not possible to use an SSL/TLS certificate with Service Discovery.
I've done a lot of research and couldn't find anything really useful. I also tried creating an internal load balancer for each backend service, but the communication times out; it only works when I have a VPN connected.
What am I missing here? Do I need an internal load balancer in front of each backend service to attach a certificate between frontend and backend? What is the best approach to solve this?
User calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). Because of this, the user's browser shows the "blocked: mixed content" error, since half of the communication is made via HTTPS and the other half uses HTTP.
The user's browser wouldn't know anything about this if the communication was happening between the front-end server and the back-end server. Apparently you have front-end client JavaScript code running in the user's web browser trying to access the backend server directly.
If you want to access the backend server directly from the user's web browser, then service discovery won't work, because service discovery is only for traffic inside the VPC. And by trying to use service discovery in this way you are also creating a security issue, which the browser is correctly blocking. You will need to add another load balancer, or another listener on your current load balancer, that exposes the backend API to the Internet.
Alternatively you could use a reverse proxy like Nginx on your front-end server to send backend API requests to the backend service, and then have your client-side JavaScript code send all requests to the front-end server.
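The usual tool for that last option is nginx, but purely to illustrate the idea, the pass-through could also live in the front-end application itself. In this sketch the backend hostname is a hypothetical Cloud Map service-discovery name; the browser only ever talks HTTPS to the front end, so no mixed content arises.

```python
# Illustration only: proxy /api/* requests from the front end to the backend
# over plain HTTP inside the VPC. The backend hostname is a placeholder.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
BACKEND = "http://backend.local:8080"  # assumed service-discovery name

@app.route("/api/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    resp = requests.request(
        method=request.method,
        url=f"{BACKEND}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        params=request.args,
        data=request.get_data(),
        timeout=10,
    )
    return Response(resp.content, status=resp.status_code,
                    content_type=resp.headers.get("Content-Type"))
```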

Mirror requests from one Cloud Run service to another Cloud Run service

I'm currently working on a project where we are using Google Cloud. Within Google Cloud we are using Cloud Run to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results, and also to evaluate the quality of changes to the service, I would like to proceed as follows:
In addition to the existing service, I deploy another instance of the service which contains the changes.
I mirror all incoming requests and let both services process them; only the responses from the initial service are returned, but the responses from both services are stored.
This allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.
For the implementation I have set up an NGINX instance which mirrors the requests. This is also deployed as a Cloud Run service. It accepts all requests and takes care of the authentication. The original service and the mirrored version have been configured so that they can only be accessed internally and should therefore be reached via a VPC network.
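Conceptually, what the NGINX mirror does here amounts to the following (a rough Python sketch only, with placeholder service URLs): forward each request to the original service, send a copy to the changed service without waiting for it, return only the original response, and store both for comparison.

```python
# Rough sketch of the mirroring idea (placeholder URLs, persistence stubbed out).
import concurrent.futures
import requests

PRIMARY = "https://original-service.example.internal"
CANDIDATE = "https://changed-service.example.internal"
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=8)

def store_pair(path: str, primary_body: str, candidate_body: str) -> None:
    # Placeholder persistence; in reality this would write to a bucket or database.
    print(path, len(primary_body), len(candidate_body))

def _mirror(path: str, payload: dict, primary_body: str) -> None:
    try:
        candidate = requests.post(f"{CANDIDATE}{path}", json=payload, timeout=30)
        store_pair(path, primary_body, candidate.text)
    except requests.RequestException:
        pass  # a failed mirror must never affect the real request

def handle(path: str, payload: dict) -> dict:
    primary = requests.post(f"{PRIMARY}{path}", json=payload, timeout=30)
    _pool.submit(_mirror, path, payload, primary.text)  # fire-and-forget copy
    return primary.json()
```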
I have tried all possible combinations for the configuration of these parts but I always get 403 or 502 errors.
I have tried pointing the NGINX service at both the HTTP and HTTPS routes of the service, and I have tried all the VPC connector settings. When I set the service's ingress to 'All', it works perfectly if I configure NGINX with HTTPS and port 443. As soon as I set the ingress to 'Internal', I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? Would be very grateful for any help.
If your Cloud Run services are only internally accessible (ingress control set to 'internal'), you need to perform your requests from inside your VPC.
Therefore, as you correctly did, you attached a serverless VPC connector to your NGINX service.
The setup is correct. So why does it work when you route ALL the egress traffic, and not only the private traffic, through your VPC connector?
Cloud Run is a public resource with a public URL, even if you set the ingress to 'internal'. That parameter says "the traffic must come from the VPC"; it does not say "I'm plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic through your VPC connector, even the public traffic.

Putting "staging" instance behind VPN - Google Cloud + Firebase

I have a web app with a React frontend on Firebase that connects to a Django backend running on Google App Engine.
I have this setup duplicated for a "staging" environment. The problem is that anyone can access this staging environment.
I'd like to set this up so that you need to be on our VPN to access it.
Can someone point me in the right direction to setup this VPN and move the staging environment behind it?
If you are using Firebase Hosting, I believe there's no way to restrict access, since it does not have a firewall feature. You should use an authentication method to limit who can access your web app.
In App Engine, you can restrict access to your web server/application by using the following:
App Engine Firewall - see #JohnHanley's answer; controls which IP addresses can connect to the app.
Identity-Aware Proxy - without using a VPN, you can limit who can access your App Engine app based on their user account. IAP is free, but when used with Compute Engine, the required load balancing and firewall configuration may incur additional costs.
App Engine behind a Load Balancer - to secure your App Engine app (Standard & Flexible) so that it receives only internal and Cloud Load Balancing traffic.
I'd like to set this up so that you need to be on our VPN to access it.
You cannot limit access to just your VPN. App Engine is in Google's network and you cannot limit access based upon a VPN.
You can use App Engine firewall rules to control which IP addresses can connect to the service. Firebase however does not have firewall rules.
If the public side of your Internet router has a static IP address, then this is simple to set up.
I recommend using authorization to limit who can access your services.
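For example, since the frontend is a Firebase app, one option is to require a Firebase ID token on every staging request and verify it in the Django backend with the firebase_admin SDK. A minimal sketch; the domain restriction is an assumption for illustration, not part of the original setup:

```python
# Sketch: verify a Firebase ID token before serving staging traffic.
# The @example.com domain check is an assumption for illustration.
import firebase_admin
from firebase_admin import auth

firebase_admin.initialize_app()  # uses application default credentials

def is_allowed(request) -> bool:
    header = request.headers.get("Authorization", "")
    if not header.startswith("Bearer "):
        return False
    try:
        decoded = auth.verify_id_token(header.removeprefix("Bearer "))
    except Exception:
        return False
    return decoded.get("email", "").endswith("@example.com")
```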

API gateway vs. reverse proxy

To deal with a microservice architecture, it is often used alongside a reverse proxy (such as nginx or Apache httpd), and the API gateway pattern is used to implement cross-cutting concerns. Sometimes the reverse proxy does the work of an API gateway.
It would be good to see the clear differences between these two approaches.
It looks like the potential benefit of using an API gateway is invoking multiple microservices and aggregating the results. All other responsibilities of an API gateway can be implemented using a reverse proxy, such as:
Authentication (it can be done using nginx Lua scripts);
Transport security (it is itself a reverse proxy task);
Load balancing
...
So based on this there are several questions:
Does it make sense to use an API gateway and a reverse proxy simultaneously (for example: request -> API gateway -> reverse proxy (nginx) -> concrete microservice)? In what cases?
What other differences can be implemented by an API gateway but not by a reverse proxy, and vice versa?
It is easier to think about them if you realize they aren't mutually exclusive. Think of an API gateway as a specific type of reverse proxy implementation.
In regards to your questions, it is not uncommon to see both used in conjunction, where the API gateway is treated as an application tier that sits behind a reverse proxy for load balancing and health checking. An example would be something like a WAF sandwich architecture, in which your Web Application Firewall/API Gateway is sandwiched between reverse proxy tiers, one for the WAF itself and the other for the individual microservices it talks to.
Regarding the differences, they are very similar. It's just nomenclature. As you take a basic reverse proxy setup and start bolting on more pieces like authentication, rate limiting, dynamic config updates, and service discovery, people are more likely to call that an API gateway.
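As a toy example of one of those bolted-on pieces, per-client rate limiting in a gateway tier boils down to something like a token bucket (a sketch only; a real gateway would keep this state in shared storage such as Redis rather than process memory):

```python
# Toy token-bucket rate limiter, the kind of feature that turns a plain
# reverse proxy into something people start calling an API gateway.
import time
from collections import defaultdict

RATE = 5    # tokens refilled per second
BURST = 10  # bucket capacity
_buckets = defaultdict(lambda: {"tokens": float(BURST), "ts": time.monotonic()})

def allow(client_id: str) -> bool:
    bucket = _buckets[client_id]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["ts"]) * RATE)
    bucket["ts"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False
```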
I believe an API Gateway is a reverse proxy that can be configured dynamically via an API and potentially via a UI, while a traditional reverse proxy (like Nginx, HAProxy or Apache) is configured via a config file and has to be restarted when the configuration changes. Thus, an API Gateway should be used when routing rules or other configuration change often. To your questions:
It makes sense as long as every component in this sequence serves its purpose.
Differences are not in the feature list but in the way configuration changes are applied.
Additionally, an API Gateway is often provided in the form of SaaS, like Apigee or Tyk for example.
Also, here's my tutorial on how to create a simple API Gateway with Node.js https://memz.co/api-gateway-microservices-docker-node-js/
Hope it helps.
An API gateway acts as a reverse proxy that accepts all application programming interface (API) calls, aggregates the various services required to fulfill them, and returns the appropriate result.
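As a toy illustration of that aggregation role (the service URLs are made up), a gateway endpoint might fan out to several microservices and merge their responses into a single payload:

```python
# Toy aggregation example: call several microservices in parallel and
# combine the results into one response. URLs are placeholders.
import concurrent.futures
import requests

SERVICES = {
    "profile": "http://user-service.local/profile/42",
    "orders": "http://order-service.local/orders?user=42",
}

def aggregate() -> dict:
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(requests.get, url, timeout=5)
                   for name, url in SERVICES.items()}
        return {name: fut.result().json() for name, fut in futures.items()}
```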
An API gateway has a more robust set of features, especially around security and monitoring, than an API proxy. I would say the API gateway pattern, also called Backend for Frontend (BFF), is widely used in microservices development.
On the other hand, an API proxy is basically a lightweight API gateway. It includes some basic security and monitoring capabilities. So, if you already have an API and your needs are simple, an API proxy will work fine.
API Gateways usually operate as an L7 construct.
API Gateways provide additional functionality compared to a plain reverse proxy. If you consider some of the portals out there, they can provide:
full API Lifecycle Management including documentation
a portal which can be used as the source of truth for various client applications and where you can provide client governance, rate limiting etc.
routing to different versions of the API including canary/beta versions
detecting usage patterns, registering apps, retrieving client credentials, etc.
However, with the advent of service meshes like Istio and Consul, a lot of the functionality of API Gateways will be subsumed by meshes.
From HTTP: The Definitive Guide:
Strictly speaking, proxies connect two or more applications that speak the same protocol, while gateways hook up two or more parties that speak different protocols. A gateway acts as a "protocol converter," allowing a client to complete a transaction with a server, even when the client and server speak different protocols.
In practice, the difference between proxies and gateways is blurry. Because browsers and servers implement different versions of HTTP, proxies often do some amount of protocol conversion. And commercial proxy servers implement gateway functionality to support SSL security protocols, SOCKS firewalls, FTP access, and web-based applications.
Reverse proxies, such as Nginx and Apache, do not deal with observability, authentication, authorization, service orchestration, etc.; they only do load balancing and forward traffic upstream.
An API Gateway is close to the user's business scenario and helps users solve security and observability issues across their APIs and microservices.
Different positioning leads to different technical emphases for reverse proxies and API gateways. API gateways, such as Apache APISIX, have nearly 100 plugins and support multiple programming languages for plugin development.
If you already have a good API gateway, there is no need to use a reverse proxy.
Regarding Andrey Chausenko's answer that
I believe an API Gateway is a reverse proxy that can be configured dynamically via an API and potentially via a UI, while a traditional reverse proxy (like Nginx, HAProxy or Apache) is configured via a config file and has to be restarted when the configuration changes.
I think this is no longer true nowadays, as modern reverse proxies like Envoy can be dynamically configured by a control plane via xDS.

Azure: How to connect one cloud service to another in one virtual network

I want to deploy a backend WCF service in a WebRole in Cloud Service 1 with only an Internal Endpoint.
And deploy an ASP.NET MVC frontend in a WebRole in Cloud Service 2.
Is it possible to use an Azure Virtual Network to call the backend from the frontend via the Internal Endpoint?
UPDATED: I am just trying to build a simple SOA architecture like this:
Yes and No.
An internal endpoint essentially means that the role instance has been configured to accept traffic on a given port, but that port can NOT receive traffic from outside of the cloud service (hence it being "internal" to the cloud service). Internal endpoints are also not load balanced so you're going to need to "juggle" traffic management from the callers yourself.
Now here is where the issues arise: a virtual network allows you to securely traverse cloud service boundaries, letting a role instance in cloud service 1 call a role instance in cloud service 2. However, to do this, the calling role instance needs to know how to address the receiving instance. If they were in the same cloud service, you could crawl the cloud service topology via the RoleEnvironment class. But this class only works for the cloud service it exists in; it is not aware of a virtual network.
Now you could have the receiving role instance publish its FQDN to a shared area (say Azure table storage). However, a cloud service will only use its own internal DNS resolution (which only allows you to resolve short names in the same cloud service) unless you have configured the virtual network with a self-hosted DNS server.
So yes, you can do what you're trying to accomplish, but it does present some challenges. Given this, I'd have to ask whether the convenience of separate deployments is enough to justify the additional complexity of the solution. If so, then I'd also look and see if perhaps there's a better way to interconnect the two services than direct calls (like a queue-based pattern).
#BrentDaCodeMonkey makes some very valid points in his answer, so read that first.
I, personally, would not want to give up automatic discovery and scale via load balancing. My suggestion would be that you expose the WCF endpoint via an Azure Service Bus Relay endpoint. This will give you a "fixed" endpoint with which to communicate (solving the discovery issue) and infinite scalability because multiple servers can register and listen on the same Service Bus relay address. Additionally it introduces some basic security into the mix via shared key authentication when your web application(s) connect to your WCF services.
If you co-locate the Service Bus instance with your Cloud Services the overhead of the relay in the middle is totally negligible and, IMHO, worth it for the benefits explained above.
