How does HTTP injection bypass a firewall?

For a while now there have been many apps on almost all platforms (for example, this one on Android) that can be used to "inject" a certain URL or text (usually of an allowed address) into a blocked outgoing connection and magically bypass a firewall.
I have intercepted one of these connections with Fiddler but can't seem to see the difference between an injected and a non-injected connection, especially over HTTPS. How do they fool ISPs? What does it take to forge connections like those?
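In essence, these tools rewrite or prepend HTTP headers so that the firewall's inspection sees a whitelisted hostname while the connection actually goes somewhere else. A minimal sketch of the kind of payload such injectors typically emit (all hostnames are hypothetical, and the proxy address is an assumption):

```python
import socket

# A CONNECT to the blocked destination, dressed up with headers naming an
# allowed (e.g. zero-rated) host. A filter that only inspects the Host /
# X-Online-Host lines may wave it through.
injected = (
    b"CONNECT blocked.example.com:443 HTTP/1.1\r\n"
    b"Host: allowed.example.com\r\n"            # decoy: the whitelisted host
    b"X-Online-Host: allowed.example.com\r\n"   # header some injector apps add
    b"\r\n"
)

s = socket.create_connection(("proxy.example.net", 8080))  # assumed ISP/HTTP proxy
s.sendall(injected)
print(s.recv(4096).decode(errors="replace"))  # "HTTP/1.1 200 ..." if the tunnel opened
```

Note this only fools filters that trust the plaintext headers; it does not change what travels inside the tunnel, which is why the injected and non-injected HTTPS streams look identical in Fiddler.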

Related

Check if unknown / remote server supports HTTPS

Is there a way to check whether a remote server supports HTTPS?
Currently I'm requesting HTTPS; if that doesn't work I retry with HTTP, and display an error if that still does not work.
Is there a feature embedded in HTTP which indicates whether HTTPS is supported?
By this I don't mean redirects etc., because those must be implemented on the server and aren't always.
Silently falling back to HTTP sounds dangerous. An attacker (e.g. a man-in-the-middle) might be able to force you onto the insecure channel by blocking your HTTPS requests. Thus, I would not recommend this approach in general.
In general, you should let your users decide which protocol to use. If they specify https, you should not silently downgrade but throw an error. If they specify http however, it might be possible to also try https first and silently fall back to http if that fails (since they requested http in the first place).
As a general answer to your question: you can only try HTTPS to check whether the server supports HTTPS. There is an HTTP extension called HTTP Strict Transport Security (HSTS) which allows servers to indicate that all requests to them should only ever be performed over secure channels. If you receive such a header in a response to an HTTPS request, you can force HTTPS for that host in the future. Note though that you have to ignore such headers when they are received over insecure HTTP.
In general, you can't trust any information you received over plaintext HTTP to give you any indication about security options (such as support for TLS) of the server since this information could be arbitrarily spoofed by man-in-the-middle attackers. In fact, preventing such undetectable changes is one of the main reasons to use TLS / HTTPS in the first place.
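A minimal sketch of that probing logic, assuming the Python `requests` library (`pin_https` is a hypothetical helper for persisting the "HTTPS only" decision):

```python
import requests

def supports_https(host: str, timeout: float = 5.0) -> bool:
    """Probe https://host/ and report whether TLS works."""
    try:
        resp = requests.get(f"https://{host}/", timeout=timeout)
    except requests.exceptions.SSLError:
        return False   # TLS handshake failed
    except requests.exceptions.ConnectionError:
        return False   # port 443 closed or unreachable
    # Only trust HSTS when it arrived over a successful HTTPS response;
    # the same header over plain HTTP must be ignored.
    if "Strict-Transport-Security" in resp.headers:
        pin_https(host)  # hypothetical: remember to force HTTPS for this host
    return True
```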

When implementing a web proxy, how should the server report lower-level protocol errors?

I'm implementing an HTTP proxy. Sometimes when a browser makes a request via my proxy, I get an error such as ECONNRESET, "address not found", and the like. These indicate errors below the HTTP level. I'm not talking about bugs in my program, but about how other servers behave when I send them an HTTP request.
Some servers might simply not exist, others close the socket, and still others don't answer at all.
What is the best way to report these errors to the caller? Is there a standard method such that, if I use it, browsers will convert my HTTP message into an appropriate error message? (i.e. they get a reply from the proxy that tells them ECONNRESET, and they act as though they had received the ECONNRESET themselves).
If not, how should it be handled?
Motivations
I really want my proxy to be totally transparent, so that the browser or other client works exactly as if it weren't connected to it. I therefore want to replicate the organic behavior of errors such as ECONNRESET, instead of sending an HTTP message with an error code, which would be totally different behavior.
I kind of thought that was the intention when writing an HTTP proxy.
There are several things to keep in mind.
Firstly, if the client is configured to use the proxy (which is actually what I'd recommend), then fundamentally it will behave differently than if it were connecting directly out over the Internet. This is mostly invisible to the user, but affects things like:
FTP URLs
some caching differences
authentication to the proxy if required
reporting of connection errors etc <= your question.
In the case of reporting errors, a browser will show a connectivity error if it can't connect to the proxy or open a tunnel via the proxy; for upstream errors, however, the proxy will be providing a page (depending on the error; e.g. if a response has already been sent, the proxy can't do much but close the connection). This page won't look anything like your browser's error page would.
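In other words, when nothing has been sent to the client yet, the conventional approach is to translate the socket-level failure into a 502 or 504 from the proxy. A minimal sketch of that mapping, assuming a plain-sockets proxy:

```python
import socket

def open_upstream(host: str, port: int):
    """Return (socket, None) on success, or (None, (status, reason)) on failure."""
    try:
        return socket.create_connection((host, port), timeout=10), None
    except socket.gaierror:
        return None, (502, "Bad Gateway: upstream host could not be resolved")
    except TimeoutError:
        return None, (504, "Gateway Timeout: upstream did not respond in time")
    except ConnectionRefusedError:
        return None, (502, "Bad Gateway: upstream refused the connection")
    except OSError as e:
        # ECONNRESET and friends end up here
        return None, (502, f"Bad Gateway: {e.strerror or e}")
```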
If the browser is NOT configured to use a proxy, then you would need to divert or intercept the connection to the proxy. This can cause problems if you decide you want to authenticate your users against the proxy (to identify them / implement user-specific rules etc).
Secondly HTTPS can be a real pain in the neck. This problem is growing as more and more sites move to HTTPS only. There are several issues:
browsers configured to use a proxy will, for HTTPS URLs, first open a tunnel via the proxy using the CONNECT method. If your proxy wants to prevent this, then any information it provides in the block response is ignored by the browser, and instead you get the generic browser connectivity error page (see the sketch after this list).
if you want to provide any of the other benefits one normally wants from a proxy (e.g. caching, scanning, etc.) you need to implement a MitM (man-in-the-middle) and spoof server SSL certificates, etc. In fact you need to do this even if you just want to send back a block page to deny things.
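As a rough illustration of the CONNECT behaviour from the first bullet, here is a minimal sketch (plain sockets and threading; hostnames and the blocking policy are assumptions) of a proxy either opening the tunnel or refusing it:

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket):
    """Relay bytes one way until EOF, then close the other side."""
    try:
        while data := src.recv(65536):
            dst.sendall(data)
    finally:
        dst.close()

def handle_connect(client: socket.socket, host: str, port: int, allowed: bool):
    if not allowed:
        # Browsers ignore any body here and show their own generic error page.
        client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n")
        client.close()
        return
    upstream = socket.create_connection((host, port))
    client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    # From here on the proxy relays opaque (usually TLS) bytes both ways.
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```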
There is a way for a browser to act a bit more as if it were directly connected while still using a proxy, and that's SOCKS. SOCKS has a way to return an error code if there's an upstream connection error. It's not the actual socket error code, however.
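For reference, this is roughly what that SOCKS5 error reply looks like on the wire (per RFC 1928); note how coarse the REP codes are compared to real errno values:

```python
import struct

# RFC 1928 reply codes a SOCKS5 proxy can return for a failed upstream connect.
SOCKS5_REPLIES = {
    0x00: "succeeded",
    0x01: "general SOCKS server failure",
    0x03: "network unreachable",
    0x04: "host unreachable",
    0x05: "connection refused",
    0x06: "TTL expired",
}

def socks5_error_reply(rep: int) -> bytes:
    """Build the 10-byte reply: VER, REP, RSV, ATYP=IPv4, BND.ADDR, BND.PORT."""
    return struct.pack("!BBBB4sH", 0x05, rep, 0x00, 0x01, b"\x00\x00\x00\x00", 0)
```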
These are all reasons why we wrote the WinGate Internet Client, an LSP-based client for our WinGate product. Client applications then see the actual upstream error codes, etc.
It's not a favoured approach nowadays though, as it requires installation of software on the client computer.
I wouldn't provide them too much info. Log what you need internally in case you have to troubleshoot, and return a 400, 403 or 418. Why? Perhaps they're just hacking.

Nginx: catch "broken header" errors when listening with proxy_protocol

I need to use HTTP health checks on an Elastic Beanstalk application with the proxy protocol turned on. That is currently not possible, and the health check fails with an error: *58 broken header while reading PROXY protocol
I figured I have two options:
Perform the health check on another port, and set up nginx to listen for HTTP requests on that port and proxy to my app.
If possible, catch the broken header errors or detect regular HTTP requests in the proxy_protocol server block, and redirect those requests to a port that listens for plain HTTP.
I would prefer the latter (#2), if possible. So is there any way to do this?
Ideally, I would prefer not to have to do any of this. A feature request to fix this has been submitted to AWS, but it has no ETA.
The proxy protocol specification says:
The receiver MUST be configured to only receive the protocol described in this
specification and MUST not try to guess whether the protocol header is present
or not. This means that the protocol explicitly prevents port sharing between
public and private access. Otherwise it would open a major security breach by
allowing untrusted parties to spoof their connection addresses.
I think this means that option 2 is a sufficiently bad idea that it's not even supported by conforming implementations of the proxy protocol.
Option 1, on the other hand, seems pretty reasonable. You can set up a security group so that only legitimate health checks can come in on the port without proxy protocol enabled.
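A minimal sketch of option 1 in nginx terms (ports, addresses, and the upstream app are all assumptions): one server block keeps proxy_protocol for LB traffic, while a second, firewalled port serves plain-HTTP health checks.

```nginx
server {
    listen 80 proxy_protocol;            # real traffic, arrives via the LB
    set_real_ip_from 10.0.0.0/8;         # trust the LB's address range
    real_ip_header proxy_protocol;
    location / { proxy_pass http://127.0.0.1:5000; }
}

server {
    listen 8080;                         # health checks only: plain HTTP,
                                         # restricted by a security group
    location / { proxy_pass http://127.0.0.1:5000; }
}
```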
Another couple of options spring to mind too:
Simply point your health checks at the thing that's adding the header (i.e. the ELB?), rather than directly at your Nginx instance. I'm not sure whether this is possible with Elastic Beanstalk; it's not a service I use.
Use something else to add the proxy protocol header before forwarding the health-check traffic on to your Nginx, which would avoid having to duplicate your Nginx config. For instance, an HAProxy running on the same machine as your Nginx could do this. Again, use security groups to ensure that only legitimate traffic gets through.

Load balancer for websockets

I know how load balancers work for HTTP requests: a client opens a connection with the LB, the LB forwards the request to the backend servers, the LB gets a response, sends the response to the client over the same connection, and closes the connection. I want to know the internal details of a load balancer for WebSockets: how the connections are maintained and how the response is sent to the client. I have read many questions on Stack Overflow, but none of them gave a clear picture of the internal implementation of an LB.
The LB just routes the connection to a server behind it,
so as long as you keep the connection open you will stay connected to the same server and will not communicate with the LB again.
Depending on the client, on reconnection you could be routed to another server.
I'm not sure how it works when some libraries fall back to JSON-P, though.
Implementations of load balancers vary greatly. There are load balancers that support WebSockets, like F5's BIG-IP (https://support.f5.com/kb/en-us/solutions/public/14000/700/sol14754.html), and LBs that I don't think support WebSockets, like AWS ELB (there is a thread where somebody says they could make it work with ELB, but I suppose they added some other component behind the ELB: How do you get Amazon's ELB with HTTPS/SSL to work with Web Sockets?).
Load balancers not only act as terminators of HTTP connections; they can also terminate HTTPS, SSL, and TCP connections. They can implement stickiness based on different parameters, like cookies, origin IP, etc. (like F5). ELBs use only cookies, which can be application-generated or LB-generated (both only with HTTP or HTTPS). Stickiness can also be kept for a certain defined time, sometimes configurable.
Now, in order to forward data corresponding to WebSockets, they need to terminate and forward connections at the level of SSL or TCP (not HTTP or HTTPS), unless they understand the WebSocket protocol (I don't know whether any do). Additionally, they need to keep stickiness to the server with which the connection was opened. This is not possible with ELB, but it is with more complex LBs like BIG-IP.
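To make the "terminate and forward at the TCP level, with stickiness" idea concrete, here is a minimal sketch (backend addresses are assumptions) of an L4 balancer that pins each client IP to one backend and then just relays bytes, WebSocket frames included:

```python
import hashlib
import socket
import threading

BACKENDS = [("10.0.0.1", 8080), ("10.0.0.2", 8080)]  # assumed backend addresses

def pick_backend(client_ip: str):
    # Hash-based stickiness: the same client IP always maps to the same backend.
    digest = hashlib.sha256(client_ip.encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

def pump(src: socket.socket, dst: socket.socket):
    try:
        while data := src.recv(65536):
            dst.sendall(data)
    finally:
        dst.close()

def serve(listen_port: int = 8000):
    lb = socket.create_server(("", listen_port))
    while True:
        client, (ip, _) = lb.accept()
        upstream = socket.create_connection(pick_backend(ip))
        # The HTTP Upgrade handshake and every subsequent WebSocket frame are
        # relayed verbatim; the session stays on this one TCP connection.
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```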

Intercepting HTTP proxy - disadvantages compared to a normal proxy

I would like to know how realistic it is to consider implementing an intercepting proxy (with cache support) for the purpose of web filtering. I would also like to support IPv6, authentication of clients, and caching.
Reading the list of disadvantages on the Squid wiki (http://wiki.squid-cache.org/SquidFaq/InterceptionProxy), which implements an intercepting proxy, it mentions some things to consider as disadvantages (that I want to clarify):
Requires IPv4 with NAT - interception proxying does not support IPv6; why?
It causes path-MTU discovery (PMTUD) to possibly fail - why?
Proxy authentication does not work - the client thinks it's talking directly to the originating server; is there a way to do authentication in this case?
Interception caching only supports the HTTP protocol, not Gopher, SSL, or FTP. You cannot set up a redirection rule to the proxy server for protocols other than HTTP, since it will not know how to deal with them - This seems quite plausible, as traffic is redirected to the proxy in this case by a firewall changing the destination address of a packet from the originating server to the proxy's own address (destination NAT). In this case, if I wanted to intercept protocols besides HTTP, how would I know where the connection was intended to go so that I could relay it to that destination?
Traffic may be intercepted in many ways. It does not necessarily need to use NAT (which is not supported in IPv6). A transparent interception will surely not use NAT, for example (transparent in the sense that the proxy will not generate requests with its own address but with the client's address, spoofing the IP address).
PMTUD is used to detect the largest MTU size available on the path between the client and the server (and vice versa); it is useful for avoiding fragmentation of IP packets along that path. When you put a proxy in the middle, even if the MTU is detected, it is not necessarily the same as the MTU from the client to the proxy or from the proxy to the server. But this is not always relevant; it depends on what traffic is being served and how the proxy is behaving.
If the proxy is authenticating on the client's behalf, it needs to be aware of the authentication method, and it will probably need some cookies that exist on the client. Think of it this way: if a proxy can authenticate an access to a restricted resource on your behalf, it means anyone can do it on your behalf, and the purpose of good authentication is to protect you from such possibilities.
I guess this was a very old post from the Squid guys, but the technology exists to redirect anything you want to a specific server. One simple way to do it is by placing your server as the default gateway for the network; then all packets pass through it, and you can redirect the packets you like to your application (or another server). And you are not limited to HTTP, but you are limited to the way the application protocol works.
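On the "where was the connection originally going" question: with Linux NAT redirection (e.g. an iptables REDIRECT rule), the proxy can recover the pre-rewrite destination with the SO_ORIGINAL_DST socket option, which is what makes relaying arbitrary TCP protocols possible. A minimal sketch:

```python
import socket
import struct

SO_ORIGINAL_DST = 80  # from <linux/netfilter_ipv4.h>

def original_destination(conn: socket.socket):
    """Return the (ip, port) the client dialled before DNAT rewrote it."""
    raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    port, packed_ip = struct.unpack_from("!2xH4s", raw)
    return socket.inet_ntoa(packed_ip), port
```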
