Authenticating NGINX-forwarded requests to upstream servers using Cognito

I have the following setup:
{Client App} --[HTTP]--> {NGINX} --[HTTPS]--> {API Gateway}
I need such a setup because API Gateway does not support HTTP (it only supports HTTPS), and my client apps are old/obsolete and do not talk HTTPS. So far it works perfectly fine. Now I need to make sure that my API Gateways don't accept just any request. I know how to protect an API Gateway with Cognito (if I leave NGINX out of the equation). The process works as follows:
1. {Consumer Server} --[Credentials]--> {Cognito}
   {Consumer Server} <--[JWT Token]---- {Cognito}
2. {Consumer Server} --[JWT Token]----> {API Gateway}
   ...
To make sure there's no misunderstanding: the credential sent in step 1 is NOT an email. This is not for authenticating human users, but rather for machine-to-machine communication. And in my case, the client machine is an NGINX instance.
Having set the scene, this is what I'm trying to achieve. I want my client app to communicate with my API Gateway over HTTP. For that, I have to introduce NGINX in between. But now I want to make sure that only my designated NGINX instances can do so, which means I need to authenticate the requests coming in from NGINX. In other words, NGINX needs to follow the second diagram and request JWT tokens from Cognito. And this is not a one-time process: tokens expire, and once they do, NGINX has to refresh them by sending another request to Cognito.
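To make it concrete, this is roughly what I imagine NGINX would have to do on every request. A rough sketch, assuming OpenResty with the lua-resty-http library; the Cognito domain, client id/secret, and API Gateway URL below are placeholders:

```nginx
http {
    resolver 8.8.8.8;                       # any resolver reachable from the proxy
    lua_shared_dict token_cache 1m;         # cached access token, shared by workers
    lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;

    server {
        listen 80;                          # plain HTTP for the legacy clients

        location / {
            access_by_lua_block {
                local http  = require "resty.http"
                local cache = ngx.shared.token_cache

                local token = cache:get("access_token")
                if not token then
                    -- Standard client_credentials call to the Cognito token endpoint
                    local res, err = http.new():request_uri(
                        "https://YOUR-DOMAIN.auth.eu-west-1.amazoncognito.com/oauth2/token", {
                            method  = "POST",
                            body    = "grant_type=client_credentials",
                            headers = {
                                ["Content-Type"]  = "application/x-www-form-urlencoded",
                                ["Authorization"] = "Basic "
                                    .. ngx.encode_base64("CLIENT_ID:CLIENT_SECRET"),
                            },
                        })
                    if not res or res.status ~= 200 then
                        ngx.log(ngx.ERR, "Cognito token request failed: ", err or res.status)
                        return ngx.exit(502)
                    end
                    local data = require("cjson").decode(res.body)
                    token = data.access_token
                    -- Re-fetch one minute before Cognito expires the token
                    cache:set("access_token", token, data.expires_in - 60)
                end
                ngx.req.set_header("Authorization", "Bearer " .. token)
            }

            proxy_pass https://YOUR-API-ID.execute-api.eu-west-1.amazonaws.com;
            proxy_ssl_server_name on;
        }
    }
}
```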
Does anyone know if there's a ready-made solution for this? Or an easy way to implement it?

Related

Fargate errors when frontend service tries to communicate with backend service via Service Discovery

I have a frontend app in Fargate (ECS) in a private subnet, exposed to the internet through an Application Load Balancer. My frontend makes API calls to my backend apps, also in Fargate, in the same VPC.
Users' calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). As a result, the user's browser shows the error "blocked: mixed content", since half of the communication is made via HTTPS and the other half via HTTP.
As far as I know and have been able to find, it is not possible to use an SSL/TLS certificate with Service Discovery.
I've done a lot of research and couldn't find anything really useful. I also tried to create an internal load balancer for each backend service, but the communication times out; it only works when I have a VPN connected.
What am I missing here? Do I need an internal load balancer in front of each backend service to attach a certificate between frontend and backend? What is the best approach to solve this?
The user's browser wouldn't know anything about this if the communication was happening between the front-end server and the back-end server. Apparently you have front-end client JavaScript code running in the user's web browser trying to access the backend server directly.
If you want to access the backend server directly from the user's web browser, then service discovery won't work, because service discovery is only for traffic inside the VPC. And by trying to use service discovery this way, you are also creating the mixed-content security issue that the browser is correctly blocking. You will need to add another load balancer, or another listener on your current load balancer, that exposes the backend API to the Internet.
Alternatively you could use a reverse proxy like Nginx on your front-end server to send backend API requests to the backend service, and then have your client-side JavaScript code send all requests to the front-end server.
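For illustration, a minimal sketch of that reverse-proxy option on the front-end container; the Cloud Map name backend.internal and port 3000 are placeholders:

```nginx
# The browser talks HTTPS to the ALB/front-end only; NGINX forwards /api/
# calls over the VPC-internal network to the backend's Cloud Map DNS name.
server {
    listen 8080;

    location / {
        root /usr/share/nginx/html;       # the front-end app itself
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://backend.internal:3000/;   # placeholder Cloud Map name
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```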

What will happen if an SSL-configured Nginx reverse proxy passes to a web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but are all accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but the original servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of serving sites over HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified in transit, as in a man-in-the-middle attack, or to be stolen and used for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services do run on just HTTP on private networks just fine. However, there are a couple points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080, and the NGINX reverse proxy forwards certain traffic to localhost:8080. Can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on; you can always unblock them later if you add a service that requires a port to be opened.
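For illustration, a common pattern is to make the proxy the only externally reachable entry point, pair it with firewall rules that only allow 80/443 in, and keep the backend bound to loopback. Hostnames, ports, and certificate paths below are placeholders:

```nginx
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;    # no content served over plain HTTP
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # If the backend listens on 127.0.0.1:8080 only, example.com:8080
        # is not reachable from outside even before the firewall rules apply.
        proxy_pass http://127.0.0.1:8080;
    }
}
```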
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in. Then that service can capture everything passing between the two endpoints.
One strategy to mitigate the dangers created by spyware is to either use virtualisation to separate a single server into logical servers, or use different hardware for things that are not related. This at least keeps things separate so that the people responsible for application A don't think that service X might be something the team running application B is using. Anything out of place will more likely stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow good security practices on the server. For instance, don't run as root; use a non-root user for administrative tasks. For any long-lived services, don't run them as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups and assign the different users to them and then modify the file ownership and permissions, using chown and chmod, to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to logs. It really should, in theory, only need write access to them. If this service were to somehow get compromised, the worst it could do is write a bunch of garbage to the logs, but an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self-signed certificate. The other uses a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is only recommended for development, not production. I've yet to find a good solution for localhost in production. I don't think it exists, and in my experience, I've never seen anyone worry about it.

Mirror requests from cloudrun service to other cloudrun service

I'm currently working on a project where we are using Google Cloud. Within the Cloud we are using Cloud Run to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results, and also to evaluate the quality of changes to the service, I would like to proceed as follows:
in addition to the existing service I deploy another instance of the service which contains the changes
I mirror all incoming requests and let both services process them; only the responses from the initial service are returned, but the responses from both services are stored
This allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.
For the implementation I have set up an NGINX instance which mirrors the requests. This is also deployed as a Cloud Run service. It accepts all requests and takes care of the authentication. The original service and the mirrored version have been configured so that they can only be accessed internally and should therefore be reached via a VPC network.
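Roughly, the mirroring part of the NGINX config looks like this (the service URLs and the resolver are placeholders):

```nginx
server {
    listen 8080;
    resolver 8.8.8.8;                   # placeholder; use the environment's resolver

    location / {
        mirror /mirror;                 # a copy of every request goes to the mirror location
        mirror_request_body on;
        proxy_pass https://original-service-xyz-ew.a.run.app;   # this response is returned
        proxy_ssl_server_name on;
    }

    location = /mirror {
        internal;
        proxy_pass https://candidate-service-xyz-ew.a.run.app$request_uri;
        proxy_ssl_server_name on;
        # NGINX discards the mirrored response; each service stores its own results.
    }
}
```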
I have tried all possible combinations for the configuration of these parts but I always get 403 or 502 errors.
I have tried pointing the NGINX service at both the HTTP and HTTPS routes of the target service, and I have tried all the VPC Connector settings. When I set the ingress of the service to ALL, it works perfectly if I configure the service with HTTPS and port 443 in NGINX. As soon as I set the ingress to Internal, I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? Would be very grateful for any help.
If your Cloud Run services are only internally accessible (ingress control set to internal), you need to perform your requests from within your VPC.
Therefore, as you correctly did, you attached a serverless VPC connector to your NGINX service.
The setup is correct. Now, why does it only work when you route ALL of the egress traffic to your VPC connector, and not only the private traffic?
Because Cloud Run is a public resource with a public URL, even if you set the ingress to internal. That parameter says "the traffic must come from the VPC"; it does not say "this service is plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic to your VPC connector, even the public traffic.

Dynamically configurable proxy

We have a fleet of IoT devices, and we want to proxy a port on each device to an end user for remote diagnostics sessions. We want to use a proxy so that we avoid exposing the IP addresses of our IoT devices. However, these IPs can be dynamic, and the proxy will of course need to be authenticated.
So the flow will be like this:
The user requests a remote diagnostic session;
Backend sends request to IoT device to check if the diagnostic service is running, and otherwise starts it;
IoT device starts the diagnostic service and replies with the status;
Backend creates a new secure proxy which proxies the IoT device to the end user with authentication;
Backend replies to the user with the ip and authorization tokens to connect to the proxy;
User connects to the diagnostic session through the proxy;
Now, I have found only one solution so far, which is Ceryx; however, it has no authentication. NGINX Plus doesn't seem like an option, due to the significant license costs, but also because it doesn't seem able to handle this.
Are there any solutions besides adjusting Ceryx to support authentication?
With OpenResty you can set up your proxy using:
access_by_lua in the access request phase to authenticate your request
balancer_by_lua to handle the dynamic proxying
This can be easily achieved, but will require you to write some code.
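A minimal sketch of that approach, assuming the backend registers token -> "ip:port" mappings in a shared dict (e.g. via a small internal endpoint); the listen port and certificate paths are placeholders:

```nginx
http {
    lua_shared_dict sessions 10m;       # backend registers token -> "ip:port" here

    upstream iot_device {
        server 0.0.0.1;                 # placeholder, replaced at runtime below
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- Peer was resolved during the access phase for this request.
            local ok, err = balancer.set_current_peer(ngx.ctx.device_ip, ngx.ctx.device_port)
            if not ok then
                ngx.log(ngx.ERR, "failed to set peer: ", err)
                return ngx.exit(500)
            end
        }
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/ssl/certs/proxy.crt;       # placeholder paths
        ssl_certificate_key /etc/ssl/private/proxy.key;

        location / {
            access_by_lua_block {
                -- Authenticate the token issued by the backend, then look up
                -- the device's current address.
                local token = (ngx.var.http_authorization or ""):gsub("^Bearer%s+", "")
                local peer = ngx.shared.sessions:get(token)  -- e.g. "10.1.2.3:9000"
                if not peer then
                    return ngx.exit(ngx.HTTP_UNAUTHORIZED)
                end
                local ip, port = peer:match("^(.+):(%d+)$")
                ngx.ctx.device_ip, ngx.ctx.device_port = ip, tonumber(port)
            }
            proxy_pass http://iot_device;
        }
    }
}
```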

How to restrict Kubernetes Engine HTTP access to only Firebase apps

I currently have services running on the Google App Engine platform which use the X-Appengine-Inbound-Appid header to limit HTTP requests to our apps only.
I recently found out that some of my services require a static IP, and therefore I would like to move some of the services to the Kubernetes Engine.
Is there a way for Kubernetes Engine to secure requests using a similar header approach? The requests should only be allowed from our own Firebase apps.
Ideally I would keep things as simple as possible for the clients using the services.
Possibly I could generate a specific API key for each user which can be blacklisted on abuse, but that already adds quite a bit of complexity.
You can use the NGINX ingress controller as an entry point for your cluster, and add whatever NGINX rules you need.
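For illustration, here is the kind of rule this could be, expressed as a plain NGINX sketch (the header name, secret, and backend service name are placeholders). Note that, unlike X-Appengine-Inbound-Appid, which Google's infrastructure sets for you, a static shared-secret header can be copied, so per-client API keys or signed tokens would be a stronger follow-up:

```nginx
server {
    listen 80;

    location / {
        # Reject anything that doesn't carry the expected header value.
        if ($http_x_app_token != "REPLACE_WITH_SHARED_SECRET") {
            return 403;
        }
        proxy_pass http://backend-service.default.svc.cluster.local;
    }
}
```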
