Sorry for the banal question, but I would like to know: if I launch many, many HTTP requests at a site, will the site go down, or will I just be banned after some request limit (according to the server's settings)?
As is almost always the case, the answer is: it depends :)
Now, seriously, it depends on a few things, but if you're just hitting "a" server from a single client computer you most likely will not be able to bring it down, since servers usually have more bandwidth and processing power available to them than most client machines. Also, when you make requests to a website you might actually be hitting a load-balanced pool of servers, in which case a single client definitely cannot outperform the site. If that's the case, your IP will probably get blacklisted, or maybe just ignored.
The other possibility is to make a lot of requests from a number of different clients at the same time, using different IPs. That is what is usually called a coordinated or distributed denial-of-service attack, and in that case you might be able to take the web server(s) down for a while.
This is just a simple answer but I hope you get the point.
We published our game on a Russian server, and 1% of players couldn't connect to the server on a 46xx port over raw TCP, even though they could load its HTML page (over HTTP). Most of these people live in Germany, Israel....
Why is that? What policy decisions lie behind it? We discovered that for these users such ports (which are unassigned by IANA) are closed. Does that mean these people cannot run Steam (and therefore play any of the games you can buy through it), play WoW, or play many other modern games that use TCP over 4xxx ports?
Thank you.
ISPs have been known to filter certain ports for various reasons. Users should complain loudly to them (or switch providers) in order to send a signal that this is not to be tolerated. You can encourage them to do so, but of course that doesn't solve your problem (or really answer your question).
Common reasons are:
- trying to block bittorrent traffic
- limit bandwidth usage (largely related to previous reason)
- security (mistaken)
- control (companies often don't want employees goofing off)
The easiest thing for you to do is to run your game over port 443 (perhaps as an alternate). That's the HTTPS port, so it will generally not be blocked. Moreover, because HTTPS is encrypted, there's no way for a middlebox to inspect the stream and tell whether it's web traffic or something else, so you can run any data stream (encrypted or not) that you wish over it.
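As a rough illustration (not your actual game protocol), the same server can simply accept connections on both its usual port and on 443 as an alternate; the port numbers and the echo handler below are placeholders, and binding to 443 usually requires root and will conflict with any web server already on that port:

```
# Hedged sketch: accept the same (placeholder) protocol on the usual game
# port and on 443 as a fallback for clients behind port-filtering ISPs.
import asyncio

async def handle(reader, writer):
    data = await reader.read(1024)   # placeholder: read one message
    writer.write(data)               # placeholder: echo it back
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    game = await asyncio.start_server(handle, "0.0.0.0", 4661)  # illustrative 46xx game port
    alt = await asyncio.start_server(handle, "0.0.0.0", 443)    # HTTPS port as an alternate
    async with game, alt:
        await asyncio.gather(game.serve_forever(), alt.serve_forever())

asyncio.run(main())
```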
That's precisely correct. In fact, most public web sites block all ports by default except the ones they expect to carry the traffic they actually want.
This is the reason many applications try to encapsulate their traffic over port 80, which can't be blocked as long as someone wants HTTP traffic to get through.
They simply don't want any application they haven't approved running through their network. If you run a sensitive, publicly reachable server, you surely don't want anyone using your machine for apps you don't allow. A common concern is applications that eat up bandwidth, such as BitTorrent, eDonkey, and Gnutella, as well as streaming, VoIP, and other bandwidth-hungry apps.
I am building a service which requires me to dynamically launch and shut down servers at many locations around the world (for example, using AWS). When a user visits my domain, they need to be assigned to the nearby server with the lowest latency.
By assignment, I mean that when the client makes an AJAX call to example.com/getData, for example, it should go directly to the particular server it has been assigned to. Different servers will be doing different computations, so some kind of general load balancing is not sufficient.
What general mechanisms/technologies would allow me to 1) assess the latency between a particular client and any server under my control, and 2) assign a particular client to a particular server? I cannot just use the IP addresses, for example, since JavaScript has domain-name-based restrictions (the same-origin policy).
Thanks
Note: I do not have enough reputation to link all the technologies in this response, so sometimes you will see the links copied as plain text.
1) Assigning users to the local server with the lowest latency is not always possible.
Sometimes the geographically closest server to a user is unexpectedly the one with the highest latency.
Finding the lowest latency between your (running) servers and the users is not an easy task.
There may be many hops (routers) between the client and the server, and any of them can, at any time, have problems: route updates, packet congestion, and so on.
The quickest way to assess latency is a ping, but firewalls may block it.
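If ICMP is filtered, a rough fallback is to time a TCP handshake instead; a minimal sketch (the hostnames are placeholders, and a single sample is a very noisy estimate):

```
# Hedged sketch: estimate latency by timing a TCP connect to each candidate
# server and picking the fastest. Hostnames and port are placeholders.
import socket
import time

def connect_latency(host, port=443, timeout=2.0):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable counts as worst possible

candidates = ["us.example.com", "eu.example.com", "ap.example.com"]
best = min(candidates, key=connect_latency)
print("lowest-latency candidate:", best)
```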
In practice, though, the best way to achieve this is to use anycast.
All the major CDN providers implement this method. Some use TCP anycast, which many consider not recommended, while others use UDP anycast; it is an open debate.
Anyway, in order to implement anycast you need to be able to peer with ISP routers, and normally this is not possible. Additionally, there are good peers and bad peers.
Finally, all of this requires deep knowledge of routing protocols and the TCP/IP stack.
A quick and dirty solution could be to use BIND with the GEO-IP patch.
That way you can define specific DNS query responses per country.
What I mean is that, for instance, if you have one server in the UK and one in the US, you can configure BIND so that users coming from Europe are directed to the UK server and users coming from the US are directed to the US server.
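As a rough illustration only (the exact syntax depends on your BIND build: recent versions have native geoip ACLs, while the older GEO-IP patch uses per-country ACL names), the idea is to serve a different zone file per region using views, something like:

```
// Hedged sketch of per-region answers with BIND views; adapt to your BIND
// version and GeoIP database. Zone file names are placeholders.
view "north-america" {
    match-clients { geoip country US; geoip country CA; };
    zone "example.com" {
        type master;
        file "db.example.com.us";   // answers with the US server's address
    };
};

view "default" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "db.example.com.uk";   // everyone else gets the UK server's address
    };
};
```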
2) To assign a particular client to a particular server you can use the technique I described in point 1, or you can use a proxy and sticky sessions.
HAProxy is a good product for achieving this (the URL: xy.1wt.eu ).
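For the sticky-session route, a rough sketch of a cookie-based HAProxy backend (server names and addresses are placeholders):

```
# Hedged sketch: HAProxy inserts a cookie naming the chosen server so the
# same client keeps hitting the same backend.
backend app_servers
    balance roundrobin
    cookie SRVID insert indirect nocache
    server app1 10.0.0.1:8080 check cookie app1
    server app2 10.0.0.2:8080 check cookie app2
```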
3) If you use the approach from point 1, you will not have problems with cross-domain AJAX calls. In fact, it is completely transparent to the client. For instance, for the same domain example.com, a user coming from the US will resolve it to 1.1.1.1, whereas a user coming from Germany will resolve example.com to 2.2.2.2 (the IP addresses are fake and used just as an example).
On a side note, one solution for cross-domain AJAX calls is JSONP, though it has some drawbacks, such as the lack of support for POST.
If I were you, I would go with BIND and GEO-IP, because it solves all three problems at once (apart from the latency issue, because it is not always true that the geographically closest server is the one with the lowest latency).
I have a web app and I would like to prevent DoS attacks by blocking an IP address if it makes many requests in a short period of time.
For example, if the same IP address makes 100 requests in a second, I can assume that it's some kind of attack and I would like to block that IP.
However, making this check in the application layer seems too expensive. What is the optimal way to do it?
Should I make this kind of check at my:
- firewall
- router
- Apache config
- someplace else entirely ...
If you want to block IP addresses when they make a certain number of requests, this is best done at the network layer. That suggests doing it either in your host machine's network stack or on a router (which operates at the network layer).
Some things you might want to consider though are:
- Do you really want to block access to the entire host based on an IP address, or do you want to block access to a specific application running on a specific port?
- Because of NAT, a single IP address may be making requests on behalf of many real hosts.
With any security application you need to have many layers of defence, so it would be a good idea to invest in a good firewall as well.
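To make the network-layer option concrete, here is a rough sketch of how this is often done on a Linux host with iptables; the port and threshold are illustrative, not recommendations:

```
# Hedged sketch: drop new connections from any single source IP that opens
# more than 100 per second to port 80 (numbers are purely illustrative).
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
    -m hashlimit --hashlimit-name webflood --hashlimit-mode srcip \
    --hashlimit-above 100/second -j DROP
```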
Some apps for building APIs in Django implement methods for limiting the number of requests per second.
For example, django-piston uses throttling to do that.
django-piston throttling
That's an easy way to solve the problem.
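The underlying idea is simple enough to sketch outside any framework; the names and limits below are made up for illustration and are not django-piston's actual API:

```
# Hedged sketch of per-IP throttling: keep a sliding window of request
# timestamps per client and reject anything over the limit.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0            # illustrative window
MAX_REQUESTS = 100              # illustrative limit
_history = defaultdict(deque)   # ip -> timestamps of recent requests

def allow_request(ip):
    now = time.monotonic()
    window = _history[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()         # forget requests that fell out of the window
    if len(window) >= MAX_REQUESTS:
        return False             # over the limit: reply with 429 or just drop
    window.append(now)
    return True
```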
I'm new to web security.
Why would I want to use HTTP and then switch to HTTPS for some connections?
Why not stick with HTTPS all the way?
There are interesting configuration improvements that can make SSL/TLS less expensive, as described in this document (apparently based on work by a team at Google: Adam Langley, Nagendra Modadugu and Wan-Teh Chang): http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html
If there's one point that we want to communicate to the world, it's that SSL/TLS is not computationally expensive any more. Ten years ago it might have been true, but it's just not the case any more. You too can afford to enable HTTPS for your users.

In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.

If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more.
A false sense of security arises when you use HTTPS only for login pages: you leave the door open to session hijacking (admittedly, it's still better than sending the username/password in the clear). This was recently made easier to do (or at least more popular) by Firesheep, for example, although the problem itself has been around for much longer.
Another thing that can slow down HTTPS is the fact that some browsers might not cache the content they retrieve over HTTPS, so they would have to download it again (e.g. background images for the sites you visit frequently).
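If that is a concern, explicit caching headers usually convince browsers to cache static assets fetched over HTTPS as well. A minimal sketch, using Flask purely as an illustrative framework (the route and cache lifetime are assumptions to tune for your site):

```
# Hedged sketch: mark static assets as cacheable even when served over HTTPS.
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route("/assets/<path:filename>")
def static_asset(filename):
    response = send_from_directory("static", filename)
    response.headers["Cache-Control"] = "public, max-age=86400"  # cache for one day
    return response

if __name__ == "__main__":
    app.run()
```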
That being said, if you don't need transport security (preventing attackers from seeing or altering the data that's exchanged, in either direction), plain HTTP is fine.
If you're not transmitting data that needs to be secure, the overhead of HTTPS isn't necessary.
Check this SO thread for a very detailed discussion of the differences.
HTTP vs HTTPS performance
Mostly performance reasons. SSL requires extra (server) CPU time.
Edit: However, this overhead is becoming less of a problem these days; some big sites have already switched to HTTPS by default (e.g. Gmail; see Bruno's answer).
And one no less important thing: the firewall. Don't forget that HTTPS usually runs on port 443.
In some organizations that port is not opened in the firewall or transparent proxies.
HTTPS can be very slow, and unnecessary for things like images.
Even on big-time sites such as Google, I sometimes make a request and the browser just sits there. The hourglass will turn indefinitely until I click again, after which I get a response instantly. So, the response or request is simply getting lost on the internet.
As a developer of ASP.NET web applications, is there any way for me to mitigate this problem, so that users of the sites I develop do not experience this issue? If there is, it seems like Google would do it. Still, I'm hopeful there is a solution.
Edit: I can verify, for our web applications, that every request actually reaching the server is served in a few seconds even in the absolute worst case (e.g. a complex report). I have an email notification sent out if a server ever takes more than 4 seconds to process a request, or if it fails to process a request, and have not received that email in 30 days.
It's possible that a request from the client took a particular network path that happened not to work at that particular moment. Such failures are unavoidable; they're simply a consequence of the internet being built on unreliable components, over which TCP can only provide certain limited guarantees.
As someone else said: make sure that when a request hits your server, you're ready to reply. Everything else is out of your hands.
They get lost because the internet is a big place and sometimes packets get dropped or servers get overloaded. To give your users the best experience make sure you have plenty of hardware, robust software, and a very good network connection.
You cannot control the pipe from the client all the way to your server. There could be network connectivity issues anywhere along that pipeline, including between your PC and your ISP's router, which is a likely place to look first.
The bottom line is that if you are having issues bringing Google.com up in your browser, then you are guaranteed to have the same issue with your own web application at least as often.
That's not to say an ASP application cannot generate the same sort of downtime experience completely on its own... "Test often" and "code defensively" are the key phrases to keep in mind.
Let's not forget browser bugs. They aren't nearly perfect applications themselves...
This problem/situation isn't only ASP related; it touches on the whole concept of keeping your apps up, informally called the "five nines" or "99.999% availability".
The Wikipedia article is here.
If you look up the five nines, you'll find tons of useful information, which you can apply as needed to your apps.