I'm trying to use nginx behind a Compute Engine HTTP load balancer. I would like to allow health check requests through without authentication, and to require basic auth for all other requests.
The health check requests come from the IP block 130.211.0.0/22. If I see requests from this block with no X-Forwarded-For header, they are health checks from the load balancer.
I'm confused on how to set this up with nginx.
Have you tried using Nginx header modules? Googling around I found these:
HttpHeadersMoreModule
Headers
There's also a similar question here.
Alternative. In the past I worked with a piece of software (RT) that had anticipated this need in the software itself, providing a subdirectory for unauthenticated access (/noauth/). Maybe your software offers the same, and you could point the GCE health check at something like /noauth/mycheck.html.
Please remember that headers can be easily forged, so an attacker who knows about this loophole could access your server without auth.
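That said, stock nginx can express the requirement directly by combining IP-based access control with basic auth via the satisfy directive, so no header inspection is needed. A minimal sketch (the htpasswd path is an assumption):

    location / {
        satisfy any;                       # pass if EITHER check below succeeds
        allow 130.211.0.0/22;              # GCE health-check range from the question
        deny  all;
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;   # assumed credentials file
    }

With this, health checks from 130.211.0.0/22 pass the allow rule and skip auth, while everyone else must present basic-auth credentials.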
I have a server behind nginx, and I have a frontend distributed on AWS CloudFront using AWS Amplify. I'd like requests that don't come from my client to be denied at the reverse proxy level. If others think I should do this at the app level instead, please let me know.
What I've tried so far is to allow all the IPs of AWS's CloudFront system (https://ip-ranges.amazonaws.com/ip-ranges.json) and deny everything else. However, my requests from the correct client get blocked.
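Roughly, the attempt looks like this (ranges abridged; the real list is much longer and changes over time):

    location / {
        allow 13.32.0.0/15;                  # sample CloudFront ranges from ip-ranges.json
        allow 13.35.0.0/16;
        deny  all;
        proxy_pass http://127.0.0.1:3000;    # assumed app port
    }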
My other alternative is to resolve the client domain's IP on every request and check against that, but I'd rather not do a DNS lookup every time.
I can also include some kind of token with every request, but come on - there's gotta be some easier way to get this done.
Any ideas?
My father's job requires him to use a VPN to access anything work related (email, websites, etc.) outside his office, and the VPN blocks Facebook. Currently, whenever he wants to use Facebook, he has to log off the VPN first. He asked me if I could set up something to get around that, so I am attempting to set up NGINX on Debian 9 to act as a reverse proxy; however, I have very little experience with NGINX. I have found that if I include proxy_set_header Host $host; then I can get to Facebook but see
"Sorry, something went wrong.
We're working on getting this fixed as soon as we can."
But if I don't include it the VPN still blocks facebook.
Any advice?
You're not going to be able to reverse proxy Facebook, for a few reasons:
facebook.com isn't going to load from an alternative hostname, such as kyles-facebook-proxy-clone.com (a sketch follows this list). The browser sends a request header, Host. Facebook's servers won't serve a hostname that they aren't expecting.
Undoubtedly there's some client-side JavaScript that will be hardcoded to other hostnames you're not proxying (for API access, CDNs for images/video, etc.) that will break, unless you rewrote the page in your code as well (which isn't reasonably possible due to obfuscation).
You can't serve traffic for facebook.com without having a properly signed certificate for HTTPS. HTTPS is required for facebook.com due to HSTS.
Even if you managed to get a certificate, it isn't going to work due to key pinning.
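To make the first point concrete, here is a minimal sketch of such a proxy (hostname assumed), which fails for exactly the Host-header reason:

    server {
        listen 80;
        server_name kyles-facebook-proxy-clone.com;   # assumed hostname

        location / {
            proxy_pass https://www.facebook.com;
            # Forwards the browser's Host header, i.e. the proxy's own
            # hostname, which Facebook's servers refuse to serve for:
            proxy_set_header Host $host;
        }
    }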
What can you do?
Use a proper proxy server.
Use Tor.
Ask for Facebook to be let through on the VPN.
If a client requests a domain that does not have a valid CA-signed certificate, and the server does not intend to support HTTPS for this domain but does support HTTP, what is the best way to handle this in the web server? Note that the server does handle SSL (HTTPS) requests for other domains, so it is listening on 443.
An example where this would apply is dynamically created sub-domains, which make it extremely difficult to register CA-signed certificates for each one.
I've seen people try to respond with HTTP error codes but these seem moot as the client (browser) will first verify the certificate and will present the hard warning to the user before processing any HTTP. Therefore the client will only see the error code if they "proceed" past the cert warning.
Is there a canonical way of handling this scenario?
There is no canonical way to handle this scenario. Clients don't automatically downgrade to HTTP if HTTPS is broken, and it would be a very bad idea to change clients in this regard: all an attacker would need to do to attack HTTPS would be to interfere with the HTTPS traffic and make the client downgrade to unprotected HTTP.
Thus, you need to make sure either that clients never attempt to access URLs which do not work properly (i.e. don't publish such URLs), or that you have a working certificate for these subdomains, i.e. adapt the process for creating subdomains so that each one gets not only an IP address but also a valid certificate (maybe use wildcard certificates).
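With nginx, for instance, a single server block and a wildcard certificate can cover every dynamically created subdomain; a minimal sketch (domain and file paths are assumptions):

    server {
        listen 443 ssl;
        server_name *.example.com;                            # assumed wildcard domain
        ssl_certificate     /etc/ssl/star.example.com.pem;    # assumed cert/key paths
        ssl_certificate_key /etc/ssl/star.example.com.key;

        location / {
            root /var/www/$host;    # assumed per-subdomain document root
        }
    }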
Considering these websites don't have to work over SSL, the web server should close SSL connections for them in a proper way.
There is no canonical way to do this, but RFC 5246 implicitly suggests interrupting the handshake on the server side using the user_canceled + close_notify alerts. How to achieve this is another question; it would be the configuration of the default SSL virtual host.
user_canceled
This handshake is being canceled for some reason unrelated to a
protocol failure. If the user cancels an operation after the
handshake is complete, just closing the connection by sending a
close_notify is more appropriate. This alert should be followed
by a close_notify. This message is generally a warning.
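nginx, for example, can approximate this with a catch-all TLS server that refuses the handshake outright; a minimal sketch (requires nginx 1.19.4+, and it answers with an unrecognized_name alert rather than user_canceled, but it cleanly rejects the connection all the same):

    server {
        listen 443 ssl default_server;
        # Abort the TLS handshake for any name not matched by another
        # server block; no certificate needs to be configured here.
        ssl_reject_handshake on;
    }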
If you are dealing with subdomains, you can probably use a wildcard certificate covering all of them.
Adding the CA certificate to your clients' trust stores will remove the warning (this is common practice in companies).
When hosting with Apache, for example, you can use VirtualDocumentRoot to add domains without editing your configuration. Have a look at the solution provided here: Virtual Hosting in SSL with VirtualDocumentRoot
I need to use HTTP health checks on an Elastic Beanstalk application with the PROXY protocol turned on. That is currently not possible, and the health check fails with an error: *58 broken header while reading PROXY protocol
I figured I have two options
Perform the health check on another port, and set up nginx to listen for HTTP requests on that port and proxy them to my app.
If it is possible to catch the broken-header errors, or to detect regular HTTP requests in the proxy_protocol server block, then redirect those requests to a port that listens for plain HTTP.
I would prefer the latter (#2), if possible. So is there any way to do this?
Ideally, I would prefer not to have to do any of this. A feature request to fix this has been submitted to AWS, but it has no ETA.
The proxy protocol specification says:
The receiver MUST be configured to only receive the protocol described in this
specification and MUST not try to guess whether the protocol header is present
or not. This means that the protocol explicitly prevents port sharing between
public and private access. Otherwise it would open a major security breach by
allowing untrusted parties to spoof their connection addresses.
I think this means that option 2 is a sufficiently bad idea that it's not even supported by conforming implementations of the proxy protocol.
Option 1, on the other hand, seems pretty reasonable. You can set up a security group so that only legitimate health checks can come in on the port without proxy protocol enabled.
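A minimal sketch of that nginx setup (the health-check port and app address are assumptions; the security group should only admit health checks on 8080):

    server {
        listen 80 proxy_protocol;              # real traffic, PROXY header required
        location / {
            proxy_pass http://127.0.0.1:5000;  # assumed application address
        }
    }

    server {
        listen 8080;                           # health checks, plain HTTP
        location / {
            proxy_pass http://127.0.0.1:5000;  # same app, no PROXY header expected
        }
    }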
Another couple of options spring to mind too:
Simply point your health checks at the thing that's adding the header (i.e. the ELB?), rather than directly at your nginx instance. I'm not sure if this is possible with Elastic Beanstalk; it's not a service I use.
Use something else to add the PROXY protocol header before forwarding the health-check traffic on to your nginx, which would avoid having to duplicate your nginx config. For instance, an HAProxy running on the same machine as your nginx could do this. Again, use security groups to ensure that only legitimate traffic gets through.
I have a web server cluster behind a proxy/load balancer. That proxy contains my SSL certs and hands the web servers the decrypted traffic, and along the way adds an X-Forwarded-For header to the HTTP request the web application receives. This application has seen millions of IP addresses over the past decade, but something weird happened today.
For the first time, I saw an X-Forwarded-For that contained a second address reach the application [addresses altered]:
x-forwarded-for: 62.211.19.218, 177.168.159.85
This indicates that the traffic came through a proxy, and I understand this is normal for x-f-f. I would have thought this was impossible (or at least unlikely) with https as the protocol.
Can someone explain how this is legit?
As per RFC 7239 (which standardizes the semantics of this de facto header as Forwarded), the header takes the form
X-Forwarded-For: client, proxy1, proxy2, ...
where client is the IP of the original client, and each proxy appends the IP it received the request from to the end of the list. So if the example above passed through one more hop, proxy3, before reaching you, your web server would see proxy3's IP as the connecting address, while proxy2 (the last list entry, appended by proxy3) is the IP that connected to proxy3.
As anyone can put anything in this header, you should accept it only from known sources such as your own reverse proxy or a whitelist of known legitimate proxies. For example, Apache has mod_rpaf, which transparently changes the client IP address to the one provided in this header, but only if the request is received from the IP of a known proxy server.
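nginx can do the same with its stock realip module; a minimal sketch (the trusted proxy address is an assumption):

    # in the http or server context
    set_real_ip_from  10.0.0.5;          # assumed address of your own reverse proxy
    real_ip_header    X-Forwarded-For;
    real_ip_recursive on;                # skip trusted hops to reach the client IP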
On corporate networks you can easily do transparent proxying of HTTPS traffic without normal users noticing. Just create your own certificate authority, use for example Windows Group Policy to install and trust this CA on all corporate workstations, then redirect all HTTPS connections to your proxy, which generates certificates for all visited domains on the fly. This actually happens; you can even buy enterprise hardware proxies that use this method.
So to summarize the reasons why you could see multiple IPs in the X-Forwarded-For header:
Transparent HTTPS proxy as mentioned above
The header was added by the requestor itself (browser, wget, script) for whatever reason, for example to hide its own IP
A CDN like Cloudflare could add this header if used
Multiple reverse proxies defined either intentionally or by mistake
Conclusion: You should only trust this header if it originates from your own proxy (in case of multiple IPs, trust only the last one).
Maybe it's using the PROXY protocol for HTTPS. Granted, you may not be using HAProxy, but this seems to be a decent description:
http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt
I'm not sure about the SSL cert, but there's no guarantee someone isn't doing something pathological (maybe unintentionally), like running all their HTTPS traffic through a proxy and then accepting all the invalid certificates. But I suspect the PROXY protocol might make this work; it does expose the HTTP headers to the proxy in some sense.