filter requests on nginx server when it serves as both http server and reverse proxy - nginx

I have an nginx server serving both as an http server (frontend) on / and reverse proxy on /api, where it forwards the requests to a node server (backend) on the same machine (localhost).
I am using authentication only on the frontend. The backend accepts all incoming requests and does not authenticate them again.
I would like to allow all requests coming from the frontend to the backend, and to block all requests reaching the backend from any other source (via the reverse proxy, of course).
For example, the frontend will try to log in the user using the user's third-party auth id via https://domain/api/login/id. The reverse proxy will forward it to the backend. This is fine.
I can also access the backend directly from any other machine via https://domain/api/login/id.
My question is the following. I would like to block all requests not originating from the frontend. I have tried to figure out how to find that information, but it seems that the X-Real-IP header always refers to the browser originating the request (whether it goes through the frontend or hits /api directly). I am wondering if there is any header I can set in the reverse proxy that will tell the backend that this is an allowed call, or whether I can use nginx's own allow/deny rules. Right now, I cannot distinguish between the two types of requests.
Thank you very much!
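
A minimal sketch of one common approach, assuming the backend is a Node server on 127.0.0.1:3000 and using a made-up header name and secret (both placeholders): nginx overwrites any client-supplied value for a marker header and sets its own secret value on every proxied request, so the backend can reject anything that did not pass through this proxy.

    server {
        listen 443 ssl;
        server_name domain;

        location / {
            # frontend: static files protected by your existing authentication
            root /var/www/frontend;
        }

        location /api/ {
            # overwrite whatever the client sent for this header and set a value
            # only nginx knows; the backend rejects requests without it
            proxy_set_header X-Proxy-Secret "some-long-random-string";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://127.0.0.1:3000;
        }
    }

Note that this only proves a request went through nginx: a browser calling https://domain/api/... directly also hits the same location block, so truly distinguishing frontend calls from direct calls still requires checking the frontend's session cookie or auth token, either in the backend or at the proxy (for example with auth_request). Keeping the Node server bound to 127.0.0.1 additionally blocks anyone from bypassing nginx altogether.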

Related

NGINX - Return backend server name in the response header

I am running an nginx server and using it as a reverse proxy (using proxy_pass). See the image below for my setup.
In this setup, the user hits NGINX, and NGINX, based on proxy_pass, hits a backend server to get the response. This backend can be load-balanced in some cases; in a load-balanced scenario, the system will hit one of the backend servers to get the response.
What I want: when the response is returned from the backend server, I want to include the backend server name in the response header. How can I get the desired output? Do I also need to make changes on the backend servers/applications? The applications on the backend servers are running on different web servers (IIS, Tomcat, NGINX, etc.).
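
A hedged sketch of how this is often done with nginx alone, assuming a made-up upstream pool and header name: nginx exposes the address of the upstream that answered via the $upstream_addr variable, which can be attached to the response with add_header.

    upstream backend_pool {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend_pool;
            # report which upstream served this request (IP:port)
            add_header X-Upstream-Addr $upstream_addr always;
        }
    }

This gives an IP:port rather than a friendly name; if you want the actual machine or application name, each backend (IIS, Tomcat, NGINX, ...) would have to add its own header, which nginx then simply passes through.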

How to configure nginx to only allow requests from cloudfront client?

I have a server behind nginx, and I have a frontend distributed on AWS CloudFront using AWS Amplify. I'd like requests that do not come from my client to be denied at the reverse proxy level. If others think I should do this at the app level, please lmk.
What I've tried so far is to allow all the IPs of AWS's CloudFront system (https://ip-ranges.amazonaws.com/ip-ranges.json) and to deny all after that. However, my requests from the correct client get blocked.
My other alternative is to do a lookup by IP of the domain for every request, and check against that - but I'd rather not do a DNS lookup every time.
I can also include some kind of token with every request, but come on - there's gotta be some easier way to get this done.
Any ideas?
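
One commonly suggested version of the token idea, sketched here with a made-up header name and secret: configure a custom origin header on the CloudFront distribution (CloudFront adds it to every request it forwards to the origin), and have nginx reject requests that don't carry it.

    server {
        listen 443 ssl;
        server_name api.example.com;

        # requests that did not pass through the CloudFront distribution
        # will not carry the secret header and get rejected
        if ($http_x_origin_secret != "some-long-random-string") {
            return 403;
        }

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }

Compared to maintaining the published IP ranges (which change over time and are shared by all CloudFront customers), this keeps the check to a single string comparison, at the cost of treating the header value as a shared secret.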

Visit Kubernetes dashboard through nginx proxy

I wanted to visit my dashboard on a local Kubernetes installation (using Docker for Mac). I was 'blocked': I have to provide a token or my config file, which is normal since the RBAC updates.
Now I don't want to run kubectl proxy or enable port forwarding every time I want to visit my dashboard, so I installed an nginx proxy with an ingress (TLS) which redirects me to https://kubernetes-dashboard.kube-system.svc.cluster.local:443.
This works fine but now I'm a bit confused because I can see the dashboard now, without facing the RBAC issue.
I read this here:
To make Dashboard use authorization header you simply need to pass Authorization: Bearer <token> in every request to Dashboard. This can be achieved i.e. by configuring reverse proxy in front of Dashboard. Proxy will be responsible for authentication with identity provider and will pass generated token in request header to Dashboard. Note that Kubernetes API server needs to be configured properly to accept these tokens.
But it's still not very clear to me. Can someone explain to me why I can see the dashboard when I create a proxy in front of it?
Proxy is usually needed to transfer data between different segments of the network without connecting them directly.
Each segment of the network is "talking" to proxy host without any knowledge of the existence of the other network segment.
The proxy server is responsible for all negotiations and operations concerning request and response packets. So, to enable authentication, authorization, SSL termination and many other things, you need to configure your proxy server according to your needs.
If you can see the Kubernetes dashboard via the proxy in front of it, it just means that you did not configure any security on that proxy.
For example, to learn how to configure Nginx Ingress to protect a service in your cluster with basic authentication, consider reading this article.
For a more complex security setup, read the article about securing Kubernetes services with Ingress, TLS and LetsEncrypt.
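
As a rough illustration of the difference, here is a hedged nginx sketch (hostnames, file paths and the token are placeholders) of a proxy that actually does what the quoted documentation describes: it authenticates the user itself and then injects a bearer token before forwarding to the dashboard. Without the auth_basic lines, the proxy adds no security at all, and everyone who reaches it sees the dashboard.

    server {
        listen 443 ssl;
        server_name dashboard.example.com;

        location / {
            # the proxy authenticates the user...
            auth_basic "Kubernetes Dashboard";
            auth_basic_user_file /etc/nginx/.htpasswd;

            # ...and attaches a service-account token so the dashboard /
            # API server accepts the request
            proxy_set_header Authorization "Bearer PLACEHOLDER_TOKEN";
            proxy_pass https://kubernetes-dashboard.kube-system.svc.cluster.local:443;
        }
    }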

Is it possible to have client certificates with HTTP (not HTTPS)?

I have an application set up like this:
There is a server with a reverse proxy/load balancer that acts as the HTTPS termination point (this is the one that has a server certificate), and several applications behind it (*).
However, some applications require authentication of the client with a certificate. Authentication cannot happen in the reverse proxy. Will the application be able to see the user certificate, or will it be jettisoned by the HTTPS->HTTP transfer?
(*) OK, so this is a Kubernetes ingress, and containers/pods.
It will be lost. I think you need to extract it in the reverse proxy (i.e. Nginx) and pass it in as an HTTP header if you really must. See for example https://serverfault.com/questions/788895/nginx-reverse-proxy-pass-through-client-certificate. Not very secure, as the cert is passed in the clear!
I don't know if we have that level of control over the ingress; personally, I'm using a normal Nginx server for incoming traffic instead.
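
For reference, the approach from the linked serverfault answer looks roughly like the sketch below on a plain Nginx server (header name, paths and upstream are placeholders): the proxy requests the client certificate during the TLS handshake and forwards it to the application as a header.

    server {
        listen 443 ssl;
        ssl_certificate         /etc/nginx/tls/server.crt;
        ssl_certificate_key     /etc/nginx/tls/server.key;

        # ask clients for a certificate issued by this CA
        ssl_client_certificate  /etc/nginx/tls/client-ca.crt;
        ssl_verify_client       optional;

        location / {
            # forward the (URL-encoded PEM) client certificate to the app,
            # which then does its own authentication based on the header
            proxy_set_header X-SSL-Client-Cert $ssl_client_escaped_cert;
            proxy_pass http://127.0.0.1:8080;
        }
    }

As the answer warns, the certificate then travels as an ordinary header over the internal HTTP hop, so the application must trust the proxy and that hop should stay inside a protected network.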

Sending http request behind nginx

I am not sure how to formulate my question but here we go:
I have 2 servers, one is the nginx reverse proxy and one is the app server.
In my app server, I am developing a simple HTTP client using the Jersey client that will send a request to another server. I can do this now, but the traffic goes from the app server directly to the destination. Is it possible to send it from the app server so that it passes through the reverse proxy server and then goes to the destination?
And, is this design ok or is it an abomination?
An nginx reverse proxy only handles requests coming from the outside into your servers; it does not act as a forward proxy for outgoing traffic.
To make your system work the way you described, you would have to configure firewall NAT or a caching/forward HTTP proxy such as Squid.
If you have no particular reason why your servers should appear as a single computer, your configuration is OK.
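
If the destination is known in advance, one possible workaround (a sketch only; the port, path and destination are made up) is to give nginx a dedicated location that relays calls on behalf of the app server, since nginx is a reverse proxy rather than a general-purpose forward proxy:

    server {
        listen 8081;

        location /external/ {
            # the app server's Jersey client calls
            # http://nginx-host:8081/external/... instead of the destination,
            # and nginx forwards the request to the fixed outside service
            proxy_pass https://api.example.com/;
            proxy_set_header Host api.example.com;
        }
    }

For arbitrary destinations, a real forward proxy such as Squid (as suggested above) is the more natural fit, with the Jersey client configured to use it as its HTTP proxy.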
