Nginx: catch "broken header" when listening with proxy_protocol

I need to use HTTP health checks on an Elastic Beanstalk application with the proxy protocol turned on. That is currently not possible, and the health check fails with an error:
    *58 broken header while reading PROXY protocol
I figure I have two options:
1. Perform the health check on another port, and set up nginx to listen for HTTP requests on that port and proxy them to my app.
2. If it is possible, catch the broken header errors or detect regular HTTP requests in the proxy_protocol server block, then redirect those requests to a port that listens for HTTP.
I would prefer the latter (#2), if possible. So is there any way to do this?
Ideally, I would prefer not to have to do any of this. A feature request to fix this has been submitted to AWS, but it has no ETA.

The proxy protocol specification says:
The receiver MUST be configured to only receive the protocol described in this
specification and MUST not try to guess whether the protocol header is present
or not. This means that the protocol explicitly prevents port sharing between
public and private access. Otherwise it would open a major security breach by
allowing untrusted parties to spoof their connection addresses.
I think this means that option 2 is a sufficiently bad idea that it's not even supported by conforming implementations of the proxy protocol.
Option 1, on the other hand, seems pretty reasonable. You can set up a security group so that only legitimate health checks can come in on the port without proxy protocol enabled.
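As a minimal sketch of that layout (the ports, the app's upstream address, and the header names are assumptions, not from the question):

    # Main listener: traffic from the load balancer, PROXY protocol enabled
    server {
        listen 80 proxy_protocol;
        location / {
            proxy_pass http://127.0.0.1:5000;  # your app (made-up address)
            proxy_set_header X-Real-IP $proxy_protocol_addr;
        }
    }

    # Separate plain-HTTP listener reserved for health checks; restrict it
    # with a security group so only the health checker can reach it
    server {
        listen 8080;
        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }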
Another couple of options spring to mind too:
Simply point your health checks at the thing that's adding the header (i.e. the ELB?), rather than directly at your Nginx instance. I'm not sure whether this is possible with Elastic Beanstalk; it's not a service I use.
Use something else to add the proxy protocol header before forwarding the health-check traffic on to your Nginx, which would avoid having to duplicate your Nginx config. For instance, an HAProxy running on the same machine as your Nginx could do this, as sketched below. Again, use security groups to ensure that only legitimate traffic gets through.
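A sketch of that last idea, assuming a made-up health-check port 8080 and nginx's proxy_protocol listener on port 80:

    # HAProxy on the same machine as nginx: accept plain-HTTP health checks
    # and forward them to nginx, prepending the PROXY protocol header
    listen health_check
        bind :8080
        mode tcp
        server local_nginx 127.0.0.1:80 send-proxy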

Related

Does internal communication between private servers use DNS and HTTPS?

I would like to know what the internal communication links between private internal servers and a reverse proxy look like.
When I make a request from my client (browser) to, say, https://facebook.com, I hit Facebook's reverse proxy. I have two questions. First, when that reverse proxy gets a request and needs to forward it to the server that should handle it, does the server it forwards the request to have a domain name (e.g. user.facebook.com or useroffacebook.com) or just an IP address (e.g. 34.23.66.25; DO NOT GO TO THAT ADDRESS, I JUST MADE IT UP!)? Also, does that connection use HTTP or HTTPS?
As Kshitij Joshi already mentioned, it could be both.
A more detailed perspective for implementation:
The reverse proxy should use IP addresses for routing, so it keeps working even if DNS fails or is unavailable to the proxy for some reason.
Internal traffic should also be encrypted (HTTPS). Using plain text, even on internal networks, must be considered dangerous and is not recommended.
To my mind, you can replace the 'should' with a 'must' in both points.
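To make both points concrete, a minimal nginx sketch (the upstream IP, certificate paths, and internal CA file are all made-up assumptions):

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/tls/example.com.crt;
        ssl_certificate_key /etc/nginx/tls/example.com.key;

        location / {
            # Route to the backend by IP, so a DNS outage can't break it
            proxy_pass https://10.0.2.15:8443;
            # Re-encrypt internal traffic and verify the backend's cert
            # against an internal CA rather than trusting it blindly
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/nginx/tls/internal-ca.crt;
        }
    }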

Canonical handling of HTTPS request when SSL not supported

If a client is requesting a domain that does not have a valid CA-signed certificate, and the server does not intend to support HTTPS for this domain but does support HTTP, what is the best way to handle this in the web server? Note that the server does handle SSL (HTTPS) requests for other domains, so it is listening on 443.
An example where this would apply is multiple sub-domains that are created dynamically, making it extremely difficult to register CA-signed certificates for them.
I've seen people try to respond with HTTP error codes, but these seem moot, as the client (browser) will first verify the certificate and present the hard warning to the user before processing any HTTP. The client will therefore only see the error code if they "proceed" past the cert warning.
Is there a canonical way of handling this scenario?
There is no canonical way for this scenario. Clients don't automatically downgrade to HTTP if HTTPS is broken, and it would be a very bad idea to change clients in this regard: all an attacker would need to do to attack HTTPS would be to interfere with the HTTPS traffic and make the client downgrade to unprotected HTTP traffic.
Thus, you need to make sure that the client either does not attempt to access URLs which do not work properly (i.e. don't publish such URLs), or that you have a working certificate for these subdomains, i.e. adapt the process for creating subdomains so that they not only get an IP address but also a valid certificate (maybe use wildcard certificates).
Considering these websites don't have to work with SSL, the web server should close SSL connections for them in a proper way.
There is no canonical way to do this, but RFC 5246 implicitly suggests interrupting the handshake on the server side using the user_canceled + close_notify alerts. How to achieve this is another question; it will be a configuration of the default SSL virtual host (see the sketch after the RFC excerpt below).
user_canceled
This handshake is being canceled for some reason unrelated to a
protocol failure. If the user cancels an operation after the
handshake is complete, just closing the connection by sending a
close_notify is more appropriate. This alert should be followed
by a close_notify. This message is generally a warning.
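In nginx, for instance, the closest practical configuration I know of is a catch-all default SSL virtual host that refuses the handshake. A sketch, assuming nginx 1.19.4 or newer (note that it sends an unrecognized_name alert rather than user_canceled):

    # Default server for any SNI name we have no certificate for
    server {
        listen 443 ssl default_server;
        server_name _;
        # Abort the TLS handshake instead of presenting a mismatched
        # certificate (older nginx versions instead serve a placeholder
        # certificate and "return 444;")
        ssl_reject_handshake on;
    }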
If you are dealing with subdomains, you probably can use a wildcard certificate for all of your subdomains.
Adding the CA certificate to your client will remove the warning (that's what companies do, no worry).
When hosting with Apache, for example, you can use VirtualDocumentRoot to add domains without editing your configuration. Have a look at the solution provided here: Virtual Hosting in SSL with VirtualDocumentRoot

Do HTTPS connections require HTTPS proxies or can I use HTTP proxies?

The question is about HTTP vs HTTPS.
If I want to anonymously load a website that forces HTTPS, like Google.com, do I need HTTPS proxies, or can I get away with HTTP proxies?
If your proxy is a SOCKS proxy, it will not care what kind of socket is connecting through it. It has its own handshake, and it does not care about what happens after that handshake. Whether an SSL handshake (HTTPS) is started after the SOCKS handshake is not the SOCKS proxy's problem; it will just pass through.
Some HTTP proxies, on the other hand, expect HTTP headers to guide them; such an HTTP proxy will not allow HTTPS, since it needs to read the headers.
On the third hand (ekhm... well, foot?), an HTTP proxy that supports HTTP CONNECT can also set up the transfer of arbitrary data. Such a proxy can therefore set up any type of socket, over which an SSL handshake can take place, and which can then be used for HTTPS transfer.
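To illustrate, Python's standard library can drive such a CONNECT tunnel; a minimal sketch (the proxy address and port are made up):

    import http.client

    # Connect to the HTTP proxy, then ask it to open a raw tunnel to the
    # target with "CONNECT www.google.com:443"; the TLS handshake happens
    # end-to-end inside that tunnel, so the proxy never sees plaintext.
    conn = http.client.HTTPSConnection("proxy.example.net", 3128)
    conn.set_tunnel("www.google.com", 443)
    conn.request("GET", "/")
    response = conn.getresponse()
    print(response.status, response.reason)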
An HTTP proxy server that supports the CONNECT verb can carry HTTPS connections; you don't need a special HTTPS proxy server or any other setup.
The CONNECT verb creates a binary socket tunnel to any given IP:port address, so any HTTP client (all browsers) can open a secure tunnel and communicate securely through the proxy server. No one can control or see anything going through the tunnel unless they mount a man-in-the-middle attack by sending you self-signed certificates.
Many corporate firewalls these days do exactly this kind of man-in-the-middle interception, with self-signed certificates deployed across the work network, so you will probably have to dig deeper to find out whether the connection is really secure. It may not be that anonymous.
If you're trying to access a service anonymously, you won't get that by running your own proxy. It's not clear from the original question what is meant by "proxy", e.g. a local service or a remote service. You won't get anonymity by surfing through a proxy that's on your own network, unless it's something like a Tor proxy which relays out through the Tor network.
As for whether proxies can support HTTPS or not, that's been covered here; it would be unusual to find a proxy that doesn't support CONNECT. However, if it's a remote anonymizing service you're using, I doubt they would do MitM, since you'd need to install their signing cert into your trusted root store, so they couldn't do it surreptitiously.

Should x-forwarded-for contain a proxy in https traffic?

I have a web server cluster behind a proxy/load balancer. That proxy contains my SSL certs and hands the web servers the decrypted traffic, and along the way adds an "x-forwarded-for" header into the HTTP header the web application receives. This application has seen millions of IP addresses over the past decade, but something weird happened today.
For the first time, I saw an x-forwarded-for header that contained a second address reach the application [addresses altered]:
x-forwarded-for: 62.211.19.218, 177.168.159.85
This indicates that the traffic came through a proxy, and I understand this is normal for x-f-f. I would have thought this was impossible (or at least unlikely) with https as the protocol.
Can someone explain how this is legit?
This de facto HTTP header (RFC 7239 defines its standardized successor, the Forwarded header) takes the form
X-Forwarded-For: client, proxy1, proxy2, ...
where client is the IP of the original client, and each proxy appends the IP it received the request from to the end of the list. In the example above, your web server would see the connection coming from proxy3 (the proxy that appended proxy2's address and which itself never appears in the list), and proxy2 is the IP which connected to proxy3.
Since anyone can put anything in this header, you should accept it only from known sources, like your own reverse proxy or a whitelist of known legit proxies. For example, Apache has mod_rpaf, which transparently changes the client IP address to the one provided in this header, but only if the request is received from the IP of a known proxy server.
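The equivalent idea in nginx is the realip module; a sketch, assuming your own proxy sits in a made-up 10.0.0.0/24 range:

    # Only rewrite the client address when the request arrives from our
    # own load balancer; the header is ignored from everywhere else
    set_real_ip_from 10.0.0.0/24;
    real_ip_header X-Forwarded-For;
    # Walk the list from the right, skipping trusted proxies, and take
    # the first untrusted address as the real client
    real_ip_recursive on;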
On corporate networks you can easily do transparent proxying of HTTPS traffic without normal users noticing. Just create your own certification authority, use for example Windows Group Policy to install and trust this CA on all corporate workstations, then redirect all HTTPS connections to your proxy, which generates a certificate for every visited domain on the fly. This really happens; you can even buy enterprise hardware proxies that use this method.
So to summarize the reasons why you could see multiple IPs in the X-Forwarded-For header:
Transparent HTTPS proxy as mentioned above
The header was added by the requestor itself (browser, wget, script) for whatever reason, for example to hide its own IP
A CDN like Cloudflare could add that header, if one is in use
Multiple reverse proxies defined either intentionally or by mistake
Conclusion: You should only trust this header if it originates from your own proxy (in case of multiple IPs, trust only the last one).
MAYBE it's using the PROXY protocol for HTTPS. Granted, you may not be using HAProxy, but this seems to be a decent description:
http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt
I'm not sure about the SSL cert, but there's no guarantee someone isn't doing something pathological (maybe unintentionally), like running all their HTTPS traffic through a proxy and then accepting all the invalid certificates. But I suspect the proxy protocol might make this work; it does expose the original connection addresses to the proxy's backend in some sense.
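For reference, a version 1 PROXY protocol header from that document is a single text line prepended to the TCP stream before any TLS or HTTP bytes, along the lines of (addresses and ports made up from the reserved documentation ranges):

    PROXY TCP4 203.0.113.7 192.0.2.10 51234 443\r\n

It carries the source and destination address and port, which is how a backend behind such a proxy can learn the real client IP of an HTTPS connection without the proxy ever reading HTTP headers.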

intercepting http proxy - disadvantages compared to a normal proxy

I would like to know how "realistic" is to consider implementing an intercepting proxy(with cache support) for the purpose of web filtering. I would like to support also IPv6, authentication of clients and caching.
Reading to the list of disadvantages from squid wiki http://wiki.squid-cache.org/SquidFaq/InterceptionProxy that implements an intercepting proxy, it mentions some things to consider as disadvantages when using it(that I want to clarify):
Requires IPv4 with NAT - proxy intercepting does not support IPv6, why ?
it causes path-MTU (PMTUD) to possibly fail - why ?
Proxy authentication does not work - client thinks it's talking directly to the originating server, in there a way to do authentication in this case ?
Interception Caching only supports the HTTP protocol, not gopher, SSL, or FTP. You cannot setup a redirection-rule to the proxy server for other protocols other than HTTP since it will not know how to deal with it - This seems quite plausible as the way redirecting of traffic to proxy is done in this case is by a firewall changing the destination address of a packet from the originating server to the proxy's own address(Destination NAT). How would in this case, if i want to intercept other protocols besides http know where the connection was intended to go so I can relay it to that destination ?
Traffic may be intercepted in many ways. It does not necessarily need NAT (which is not supported in IPv6). A transparent interception, for example, will not use NAT at all (transparent in the sense that the proxy does not generate requests with its own address but with the client's address, spoofing the IP address).
PMTUD is used to detect the largest MTU available on the path between the client and server, and vice versa; it is useful for avoiding fragmentation of IP packets along that path. When you put a proxy in the middle, even if the MTU is detected, it is not necessarily the same from the client to the proxy as from the proxy to the server. But this is not always relevant; it depends on what traffic is being served and how the proxy behaves.
If the proxy is authenticating on the client's behalf, it needs to be aware of the authentication method, and it will probably need some cookies that exist on the client. Think of it this way: if a proxy can authenticate access to a restricted resource on your behalf, it means anyone can do it on your behalf, and the purpose of good authentication is to protect you from exactly such possibilities.
I guess this was a very old post from the Squid guys, but the technology exists to redirect anything you want to a specific server. One simple way is to place your server as the default gateway for the network; then all packets pass through it, and you can redirect the packets you like to your application (or another server). You are not limited to HTTP, BUT you are limited by the way the application protocol works (one concrete way to recover the original destination is sketched below).
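As one concrete, Linux-specific way (my own addition, not from the answer above) to solve the "where was this connection originally going?" problem from the question: when traffic is diverted with an iptables REDIRECT rule, the proxy can recover the intended destination via the SO_ORIGINAL_DST socket option. A minimal Python sketch:

    import socket
    import struct

    SO_ORIGINAL_DST = 80  # constant from <linux/netfilter_ipv4.h>

    def original_destination(conn: socket.socket) -> tuple[str, int]:
        """Recover the address a REDIRECTed client actually dialled."""
        # getsockopt returns a sockaddr_in: 2-byte family, 2-byte port in
        # network byte order, then the 4-byte IPv4 address
        raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
        port = struct.unpack("!H", raw[2:4])[0]
        addr = socket.inet_ntoa(raw[4:8])
        return addr, port

With that address in hand, the interceptor can open its own connection to the real destination and relay bytes, for any TCP protocol, not just HTTP.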
