I have a server behind nginx, and I have a frontend distributed on AWS CloudFront using AWS Amplify. I'd like requests not coming from my client to be denied at the reverse proxy level. If others think I should do this at the app level instead, please lmk.
What I've tried so far is to allow all the IPs of AWS's CloudFront system (https://ip-ranges.amazonaws.com/ip-ranges.json) and deny everything else. However, my requests from the correct client still get blocked.
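For reference, the allow/deny setup I tried looks roughly like this (a sketch - the include file is generated from the CLOUDFRONT entries in ip-ranges.json, only a couple of example ranges are shown, and the upstream address is a placeholder):

    # /etc/nginx/conf.d/cloudfront-allow.conf
    # Generated from ip-ranges.json, keeping only entries whose "service"
    # is "CLOUDFRONT". Two example ranges shown; the real list is much longer.
    allow 13.32.0.0/15;
    allow 13.224.0.0/14;
    # ...

    # In the server block that proxies to the app:
    location / {
        include /etc/nginx/conf.d/cloudfront-allow.conf;
        deny all;                           # anything not in the list is refused
        proxy_pass http://127.0.0.1:3000;   # placeholder upstream
    }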
My other alternative is to do a DNS lookup for the domain on every request and check the request's IP against that - but I'd rather not do a DNS lookup every time.
I can also include some kind of token with every request, but come on - there's gotta be some easier way to get this done.
Any ideas?
I have an nginx server acting both as an HTTP server (frontend) on / and as a reverse proxy on /api, where it forwards the requests to a Node server (backend) on the same machine (localhost).
I am using authentication only on the frontend. The backend accepts all incoming requests and does not authenticate them again.
I would like to allow all requests coming from the frontend to the backend. I would like to block all requests coming to the backend from other sources (via the reverse proxy of course).
For example, the frontend will try to log the user in with the user's third-party auth id via https://domain/api/login/id. The reverse proxy will forward it to the backend. This is fine.
I can also access the backend directly from any other machine via https://domain/api/login/id.
My question is the following. I would like to block all requests not originating from the frontend. I have tried to figure out how to get this information, but it seems that the X-Real-IP header always refers to the browser originating the request (whether it came through the frontend or was sent directly). I am wondering if there is any header I can set in the reverse proxy that will tell the backend this is an allowed call, or whether I can use nginx's own allow/deny rules. Right now I am not able to distinguish between the two types of requests.
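For illustration, the header idea I have in mind would look something like this in the nginx config (a rough sketch - the header name and secret value are made up, and the backend would have to check for them):

    location /api/ {
        # Only the reverse proxy adds this header, so the backend can treat
        # requests carrying it as having come through nginx.
        proxy_set_header X-Internal-Secret "made-up-shared-secret";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:3000;   # node backend on localhost
    }

Though I realize nginx would add this header to every /api request, including ones sent directly to https://domain/api/..., so by itself it may not separate the two cases - which is exactly what I'm stuck on.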
Thank you very much!
I am setting up a load balancer in Kubernetes which will allow access only to authorized IPs. I am considering Apigee as an abstraction layer to manage all the authentication, rate limiting, and other filters before the client request reaches the load balancer or the service endpoint.
I understand that using the 'Access Control' policy in Apigee I can restrict access to the Apigee endpoint to only authorized IPs. So I want to allow ONLY traffic to the Kubernetes service (or load balancer) that goes through the Apigee endpoint. In short, adding the Apigee endpoint's IP to the authorized networks on the load balancer is the solution I am considering at this point.
I went through a few articles and questions and I am still not sure whether or not the IP address of the Apigee endpoint (from which the requests are being sent to the Kubernetes Load Balancer) is static, and how to find it out.
I tried sending curl -v and got the public IP of the endpoint, which can also be retrieved from https://ipinfo.info/html/ip_checker.php.
To summarize, here are my questions:
1. Is the IP address from which Apigee sends requests to an endpoint fixed, or does it change? If it changes, how often?
2. Is there any fixed IP range per proxy in Apigee?
As I see it, this is a simple closed question, and the answer is: yes, the source IP of Apigee can change.
The frequency of change is supposed to be really low, but in rare cases the IP does change.
Using two-way TLS can be a better solution to the problem you've described than IP whitelisting.
More about how to configure two-way TLS between Apigee Edge and the backend server can be found here.
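For illustration, the backend side of such a setup could look roughly like this if the backend sits behind nginx (a sketch - the hostname, file paths, and upstream are placeholders, and Apigee's target endpoint would be configured with a matching client certificate):

    server {
        listen 443 ssl;
        server_name api.example.com;                       # placeholder name

        ssl_certificate        /etc/nginx/tls/server.crt;  # placeholder paths
        ssl_certificate_key    /etc/nginx/tls/server.key;

        # Require a client certificate signed by the CA that issued the
        # certificate configured on the Apigee target endpoint.
        ssl_client_certificate /etc/nginx/tls/apigee-client-ca.pem;
        ssl_verify_client      on;

        location / {
            proxy_pass http://127.0.0.1:8080;              # placeholder upstream
        }
    }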
Posting the same question on the Apigee Community helped me reach a conclusion: the IPs assigned to the Apigee proxy can change. This happens rarely, and only when something goes wrong with one of the associated hardware machines in the cloud datacenter - less than once per year, and it is never a planned change.
Hence, whitelisting IPs in your backend's firewall to allow requests only from the Apigee Edge proxy is not the best solution. Two-way TLS with client authentication enabled is the best approach to secure the backend service.
Here is the link to the question on community.apigee.com
I have an application set up like this:
There is a server with a reverse proxy/load balancer that acts as the HTTPS termination point (this is the one that has a server certificate), and several applications behind it (*).
However, some applications require authentication of the client with a certificate. Authentication cannot happen in the reverse proxy. Will the application be able to see the user certificate, or will it be jettisoned by the HTTPS->HTTP transfer?
(*) OK, so this is a Kubernetes ingress, and containers/pods.
It will be lost. I think you need to extract it in the reverse proxy (i.e. nginx) and pass it in as an HTTP header if you really must. See for example https://serverfault.com/questions/788895/nginx-reverse-proxy-pass-through-client-certificate. Not very secure, as the cert is passed in the clear!
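For example, the nginx side could look roughly like this (a sketch - paths and header names are placeholders, and $ssl_client_escaped_cert needs nginx 1.13.5 or newer):

    server {
        listen 443 ssl;
        ssl_certificate        /etc/nginx/tls/server.crt;    # placeholder paths
        ssl_certificate_key    /etc/nginx/tls/server.key;

        # Ask the client for a certificate but don't reject requests without one;
        # the application behind the proxy decides what to do with the result.
        ssl_client_certificate /etc/nginx/tls/client-ca.pem;
        ssl_verify_client      optional;

        location / {
            # Forward the verification result and the URL-encoded certificate.
            proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
            proxy_set_header X-SSL-Client-Cert   $ssl_client_escaped_cert;
            proxy_pass http://127.0.0.1:8080;                # placeholder upstream
        }
    }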
I don't know if we have that level of control over the ingress; personally, I'm using a normal nginx server for incoming traffic instead.
I'm trying to use nginx behind a Compute Engine HTTP load balancer. I would like to allow health check requests to come through a port unauthenticated, and to require basic auth for all other requests.
The health check requests come from the IP block 130.211.0.0/22. If I see requests coming from this IP block with no X-Forwarded-For header, then it is a health check from the load balancer.
I'm confused about how to set this up with nginx.
Have you tried using Nginx header modules? Googling around I found these:
HttpHeadersMoreModule
Headers
There's also a similar question here.
Alternative: in the past I worked with a piece of software (RT) that had anticipated this possibility itself, providing a subdirectory for unauthenticated access (/noauth/). Maybe your software offers the same, and you could configure the GCE health check to point to something like /noauth/mycheck.html.
Please remember that headers can be easily forged, so an attacker who knows about this hole could access your server without auth.
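If the software itself doesn't offer such a path, nginx can implement the same idea directly. Here is a rough sketch (the /healthz path, upstream, and htpasswd file are placeholders, and the GCE health check would need to be pointed at that path):

    server {
        listen 80;

        # Everything requires basic auth by default.
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;   # placeholder path

        location / {
            proxy_pass http://127.0.0.1:8080;        # placeholder upstream
        }

        # The path configured as the health check request path in GCE.
        location = /healthz {
            auth_basic off;                          # no auth for health checks
            allow 130.211.0.0/22;                    # Google health checkers
            deny  all;
            proxy_pass http://127.0.0.1:8080;
        }
    }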
I'm trying to understand the best way to handle SOA on Heroku. I've got it into my head that making requests to custom domains will somehow be slower - or would all requests go "out" via the internet anyway?
On previous projects that were SOA in nature we had dedicated hosting, so we could make requests like http://blogs/ (obviously on the internal network). I'm wondering if Heroku treats *.herokuapp.com requests as "internal"... Or is it clever enough to know that myapp.com is actually myapp.herokuapp.com and route locally? Or am I missing the point completely, and in fact all requests are "external"?
What you are asking about comes down to general knowledge of how internet requests work.
Whenever your application makes a request to, let's say, example.com, the domain name is first translated into an IP address using DNS servers.
So this is how it works: whether you request myapp.com or myapp.herokuapp.com, you always request information from a specific IP address, and the domain name you requested is passed along as part of the request headers (the Host header).
The server that receives the request will look that domain name up in its internal records and handle the request accordingly.
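As an aside, that matching on the domain name is the same mechanism name-based virtual hosting uses; in nginx terms (just to illustrate the general idea - this is not what Heroku runs internally) it would look like:

    # The same IP address serves both names; the server picks this block by
    # matching the Host header sent with the request.
    server {
        listen 80;
        server_name myapp.com myapp.herokuapp.com;
        # ... same handling regardless of which name was requested ...
    }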
So the conclusion is that it does not matter whether you use myapp.com or myapp.herokuapp.com; the speed of the request will be the same.
PS: Since Heroku load-balances your requests across the different instances running myapp.com, the speed here will depend on several factors: how quickly your application responds, how many instances you have running and the average load per instance, and how loaded the load balancer is at the moment. But it certainly will not depend on which domain name you use.