I want a cloud machine to send a message to a machine behind a corporate NAT / Firewall.
My idea is to install a client on the corporate machine that sends a long-lived HTTP request to the cloud machine; when the cloud machine has a message, it returns it as the response.
I thought I had invented the wheel until I read about "HTTP tunneling" (is this what I am doing?).
I also read that some firewalls block non-HTML traffic even if it runs over HTTP.
So what are my chances of making this work?
I have also read that Skype uses a more sophisticated mechanism.
Is it because my idea does not work or because their idea is faster?
I can compromise on speed for now. Which approach works and is easy to implement?
I know you'd like to do it with TCP/HTTP, but the way I'd do it is to use UDP NAT 'hole punching', thus establishing a UDP channel, and then use UDP packets sent over that channel as the signaling mechanism...
These may (or may not) be useful or relevant:
http://en.wikipedia.org/wiki/STUN
http://en.wikipedia.org/wiki/Hole_punching
http://en.wikipedia.org/wiki/UDP_hole_punching
http://en.wikipedia.org/wiki/TCP_hole_punching
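To make the hole-punching idea concrete, here is a minimal Python sketch, assuming both peers have already learned each other's public IP:port from some rendezvous server (the address and ports below are placeholders, not anything from the links above):

# Minimal UDP hole-punching sketch: each peer runs this with the other peer's
# public endpoint. The outgoing datagrams create the NAT mappings that let the
# peer's datagrams in (works for most cone NATs, not for symmetric NATs).
import socket
import threading
import time

LOCAL_PORT = 40000                    # port we also used when talking to the rendezvous server
PEER_ADDR = ("203.0.113.7", 41234)    # peer's public IP:port as reported by that server (placeholder)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))

def receive():
    while True:
        data, addr = sock.recvfrom(2048)
        print("got", data, "from", addr)

threading.Thread(target=receive, daemon=True).start()

# Keep sending until the peer's packets start arriving; after that the channel
# can carry whatever signaling messages you like.
for _ in range(10):
    sock.sendto(b"punch", PEER_ADDR)
    time.sleep(1)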
Also -- if you really have to use HTTP, you could simply issue a new HTTP request every X seconds... HTTP polling, if you will...
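The hold-the-request-open variant you describe is usually called long polling; here is a minimal client-side sketch in Python, where the endpoint URL is hypothetical and the server is assumed to hold each request open until it has a message (or answer empty after a timeout):

# Long-polling client: one request at a time, each held open by the server.
import time
import urllib.request
import urllib.error

POLL_URL = "https://cloud.example.com/poll?client=corp-1"   # placeholder endpoint

while True:
    try:
        # The server is assumed to block for up to ~55 s, then reply with a
        # message body or an empty response.
        with urllib.request.urlopen(POLL_URL, timeout=60) as resp:
            body = resp.read()
            if body:
                print("message from cloud:", body.decode())
    except (urllib.error.URLError, TimeoutError):
        time.sleep(5)   # brief back-off on network errors, then reconnect

The corporate-side client only ever makes outbound HTTP requests, which is exactly why this pattern tends to pass through NAT and firewalls.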
If they block non-HTML traffic on port 80, you could try port 443. Unless there is an SSL "man in the middle" proxy (unlikely), you'd be OK.
IIRC Skype uses port hopping, so basically you'll need an algorithm that finds an unfiltered port you can connect to (by brute force, with intelligent guesses).
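A rough sketch of that port-finding idea, with a placeholder host and a hand-picked list of candidate ports:

# Try likely-open ports until a TCP connect succeeds.
import socket

HOST = "cloud.example.com"                        # placeholder
CANDIDATE_PORTS = [443, 80, 8080, 8443, 53, 22]   # intelligent guesses first

def first_reachable_port(host, ports, timeout=3):
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return port
        except OSError:
            continue
    return None

print("reachable port:", first_reachable_port(HOST, CANDIDATE_PORTS))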
We have received a DDoS attack in which all requests follow the same pattern:
Protocol: HTTP
Method: GET
Random source IP address
Requesting the home page /
Our server was returning a 301 to all requests; performance suffered and the server went down.
We have blocked all HTTP requests and stopped the attack. We would like to know why the attack hit our servers over HTTP and not HTTPS from different sources, and whether the source IP can only be spoofed using HTTP requests.
What's the best way to prevent this kind of attack?
Right now our server is serving HTTPS only, without issues. It is running on Azure Web Apps.
We have blocked all HTTP requests and stopped the attack.
Please note that when people type your URL into the browser manually, the first hit is usually over HTTP. If you turn off HTTP, people will not be able to access the site by simply typing in your domain name.
we would like to know why the attack hit our servers over HTTP and not HTTPS from different sources
This is for the attacker to decide. Most probably it was only coincidence that the attack went over HTTP only.
we would like to know whether the source IP can only be spoofed using HTTP requests
No. For an HTTP request to be performed you need to complete a TCP handshake first. This means that you cannot fake the source IP address easily, as you need to actively participate in the communication and the routers must see you as a valid participant. You can spoof an IP address within the same local network, but only for single packets, which would not allow you to complete a TCP handshake correctly.
What's the best way to prevent this kind of attack?
The industry is still struggling with DDoS and there is no 100% solution. An attack of sufficient scale can take down large parts of the internet, as has already happened in the past. There are some things you can do:
Rate limiting - put some brakes on incoming traffic so it does not kill your infrastructure completely. You will lose some valid traffic, but you will stay up and running (a minimal sketch follows this list).
Filtering - painful when dealing with DDoS attacks. Analyse which IP addresses are attacking you constantly and filter them on your firewall (imagine the fun when you are being attacked by 100k IoT devices). A WAF (Web Application Firewall) may let you filter not only on IP addresses but on other request parameters too.
Scaling up - more infrastructure can do more.
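To illustrate the rate-limiting point above, here is a toy per-IP fixed-window limiter in Python; in practice you would do this at the load balancer, CDN, or WAF rather than in application code:

# Toy fixed-window rate limiter: at most MAX_REQUESTS_PER_WINDOW requests per IP per window.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

_counters = defaultdict(lambda: [0.0, 0])   # ip -> [window_start, request_count]

def allow_request(ip: str) -> bool:
    now = time.time()
    window_start, count = _counters[ip]
    if now - window_start >= WINDOW_SECONDS:
        _counters[ip] = [now, 1]             # new window
        return True
    if count < MAX_REQUESTS_PER_WINDOW:
        _counters[ip][1] += 1
        return True
    return False                             # over the limit: drop or answer 429

# Example: the 101st request from the same IP inside one window is rejected.
for _ in range(101):
    allowed = allow_request("198.51.100.9")
print(allowed)   # False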
In most cases all you need to do is survive till the attack is over.
I want to open an arbitrary TCP socket on a server, but it's behind a proxy and I can only use a port that is intended for HTTP hosting. Simply put, what is the most transparent way to wrap such a socket in an HTTP connection? Preferably I would call a *nix program through a shell script on the server that would take care of translating the requests.
I apologize if this was answered before, but I am struggling to find and understand anything.
I ended up going with Chisel, which provides a single binary for both servers and clients, with features like authentication and reverse port forwarding. For instance:
chisel server
will run an HTTP server on $PORT (or 8080 by default), and
chisel client server.com 4567:123
connects to server.com and maps remote port 123 to local port 4567.
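Once the tunnel is up, the remote service looks like a plain local TCP service; for example (ports taken from the commands above, and assuming the service on remote port 123 speaks a simple line-based protocol):

# Talk to the tunnelled service through the local end of the chisel tunnel.
import socket

with socket.create_connection(("127.0.0.1", 4567), timeout=10) as s:
    s.sendall(b"hello through the tunnel\n")
    print(s.recv(1024))   # whatever the remote service answers, if anything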
Other solutions are still welcome, particularly if they are more transparent, use frequently preinstalled tools like netcat, or also support other protocols like UDP.
I am having an issue with Server-Sent Events.
My endpoint is not reachable on my mobile 3G network.
One observation I have is that an HTTPS endpoint like the one below is reachable on my mobile network:
https://s-dal5-nss-32.firebaseio.com/s1.json?ns=iot-switch&sse=true
But the same endpoint, when proxy-passed through nginx and accessed over HTTP (without SSL), is not reachable on my mobile network:
http://aws.arpit.me/live/s1.json?ns=iot-switch&sse=true
It is reachable on my home/office broadband network though; the issue only appears on my mobile 3G network.
Any ideas what might be going on?
I read that mobile networks use broken transparent proxies that might be causing this. But this is over HTTP.
Any help would be appreciated.
I suspect the mobile network is forcing use of an HTTP proxy that tries to buffer files before forwarding them to the browser. Buffering will make SSE messages wait in the buffer.
With SSE there are a few tricks to work around such proxies:
Close the connection on the server after sending a message. Proxies will observe the end of the "file" and forward all messages they've buffered.
This is equivalent to long polling, so it's not optimal. To avoid reducing performance for all clients, you could do it only when you detect it's necessary, e.g. always send a welcome message when a client connects; the client should expect that message, and if it doesn't arrive soon enough, report the problem to the server via an AJAX request.
Send between 4 and 16 KB of data as SSE comments before or after a message. Some proxies have limited-size buffers, and this will overflow the buffer, forcing the buffered messages out (see the sketch after this list).
Use HTTPS. This bypasses all 3rd party proxies. It's the best solution if you can use HTTPS.
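To illustrate the padding trick (the second item above), here is a minimal SSE endpoint sketch using only the Python standard library; it sends a 16 KB SSE comment after each event so that buffering intermediaries are forced to flush (port and payload are arbitrary):

# Minimal SSE endpoint that pads each event with a large comment block.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

PADDING = ":" + " " * 16384 + "\n\n"   # SSE lines starting with ':' are comments and ignored by clients

class SSEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        for i in range(10):
            self.wfile.write(f"data: tick {i}\n\n".encode())
            self.wfile.write(PADDING.encode())   # overflow any proxy buffer
            self.wfile.flush()
            time.sleep(1)

HTTPServer(("0.0.0.0", 8000), SSEHandler).serve_forever()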
I would like to know how "realistic" it is to consider implementing an intercepting proxy (with cache support) for the purpose of web filtering. I would also like to support IPv6, client authentication, and caching.
Reading the list of disadvantages on the Squid wiki (http://wiki.squid-cache.org/SquidFaq/InterceptionProxy), which describes an intercepting proxy, it mentions some things to consider as disadvantages (that I want to clarify):
Requires IPv4 with NAT - why does interception not support IPv6?
It causes path-MTU discovery (PMTUD) to possibly fail - why?
Proxy authentication does not work - the client thinks it's talking directly to the origin server; is there a way to do authentication in this case?
Interception caching only supports the HTTP protocol, not Gopher, SSL, or FTP; you cannot set up a redirection rule to the proxy server for protocols other than HTTP since it will not know how to deal with them - this seems quite plausible, as the way traffic is redirected to the proxy in this case is by a firewall changing the destination address of a packet from the origin server to the proxy's own address (Destination NAT). In this case, if I wanted to intercept protocols other than HTTP, how would I know where the connection was originally intended to go so I can relay it to that destination?
Traffic may be intercepted in many ways. It does not necessarily need to use NAT (which is not supported in IPv6). A transparent interception will surely not use NAT, for example (transparent in the sense that the proxy will not generate requests with its own address but with the client's address, spoofing the IP address).
PMTUD is used to detect the largest MTU available on the path between the client and server (and vice versa); it is useful for avoiding fragmentation of IP packets along that path. When you use a proxy in the middle, even if the end-to-end MTU is detected, it is not necessarily the same as the MTU from the client to the proxy or from the proxy to the server. But this is not always relevant; it depends on what traffic is being served and how the proxy behaves.
If the proxy is authenticating on the client's behalf, it needs to be aware of the authentication method, and it will probably need some cookies that exist on the client. Think of it this way: if a proxy can authenticate access to a restricted resource on your behalf, it means anyone can do it on your behalf, and the purpose of good authentication is to protect you from exactly that.
I guess that was a very old post from the Squid guys, but the technology exists to redirect anything you want to a specific server. One simple way to do it is to make your server the default gateway for the network; then all packets pass through it and you can redirect the ones you like to your application (or another server). You are not limited to HTTP, but you are limited by the way the application protocol works.
I'm creating a client-server application which communicates via a custom socket protocol. I'd like the client to be usable from within networks that have a restrictive firewall (corporate, school, etc.). Usually this is done by connecting via HTTP, since that's always available.
If I want to do that, do I really have to use HTTP or is it enough to use my custom protocol via server port 80?
The firewall may do more than just restrict ports, and you might also have proxies along the way, which will only deal in HTTP.
Still, using a well-known port for something other than its normal use is far better than the many schemes which do inherently non-HTTP stuff over HTTP and essentially implement RFC 3093 (when people implement April Fools' RFCs it normally shows a combination of humour and technical acumen; RFC 3093 is the exception).
To get around the proxy issue, you could use 443 rather than 80, as HTTPS traffic can't be proxied in quite the same way. Indeed, you often don't even need to use SSL, as the proxy will just assume it can't see inside the traffic.
None of this needs to be baked into your application though. What your application needs to do is make its port configurable (something any server application should do anyway). The default should be something away from the well-known ports, but the sysadmin will be able to use 80 or 443 or whatever if they need to.
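As a small illustration of the 'make the port configurable' advice, here is a sketch of a TCP server whose port comes from an environment variable (the variable name, and the echo handler standing in for the custom protocol, are just examples):

# Custom-protocol server with a configurable listening port.
import os
import socketserver

PORT = int(os.environ.get("MYAPP_PORT", "9123"))   # sysadmin can set this to 80 or 443 if needed

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:        # the custom protocol would be parsed here
            self.wfile.write(line)

with socketserver.ThreadingTCPServer(("0.0.0.0", PORT), EchoHandler) as srv:
    srv.serve_forever()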
If it is a custom socket protocol then it is not HTTP.
But you can still use TCP on port 80 to escape the firewall; you would then have to handle the proxy situation as well. Proxies are HTTP-aware, so a custom TCP protocol might not work, and they would probably not forward your requests.
I do not know your reasons for wanting to do this (whether it is legal or not), but there is software used to bypass filtering in countries such as Iran. One such program (Haystack) uses sophisticated encryption to masquerade the request as an innocent-looking packet.
You would be better off investigating tunneling with SSH. It is designed for precisely this. An HTTP proxy isn't likely to work for a number of reasons including those given in the other answers.