Is there a way in IIS7 to limit the number of HTTP connections allowed from a single source? We occasionally get mild denial-of-service attacks that we could prevent with some limit on the number of connections allowed from any single IP. I understand this could impact some legitimate requests, but we'd set the threshold fairly high.
Ah! I think I found it. This is what I'm looking for: http://blogs.iis.net/ruslany/archive/2009/02/16/dynamic-ip-restrictions-for-iis-7-0-beta.aspx
Thanks all.
Mark
Here is the latest home of the extension: http://www.iis.net/downloads/microsoft/dynamic-ip-restrictions
Under the properties for the website in IIS, click Performance and set the number of connections you want to limit it to.
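If you'd rather script that than click through the UI, the same site-wide cap can be set with the Microsoft.Web.Administration API. A minimal sketch, assuming you have a reference to Microsoft.Web.Administration and admin rights on the box; the site name and the limit of 500 are placeholders, and note this caps total connections to the site, not connections per IP (per-IP limiting is what the Dynamic IP Restrictions extension above is for):

```csharp
using Microsoft.Web.Administration;

class SetConnectionLimit
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // "Default Web Site" and 500 are placeholders; pick your own site and threshold.
            var site = serverManager.Sites["Default Web Site"];
            site.Limits.MaxConnections = 500;
            serverManager.CommitChanges();
        }
    }
}
```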
I know that the maximum number of concurrent requests per domain varies depending on the browser, and that it is good practice to use CDNs to increase parallelism. But what is the reason for this limit? I can't find an answer anywhere.
Who would suffer, and in what way, if the limit were, say, 50 concurrent requests per domain?
Each connection takes resources on the server (and on other network infrastructure). The limit put in place by browsers is designed to avoid hammering the remote server too hard. Handling 1,000 concurrent users with a limit of 4 connections per browser is much easier than with a limit of 50, which could mean up to 50,000 open connections.
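The same kind of per-host cap exists outside browsers. As a small illustration, assuming a classic .NET Framework client (where the historical default is only 2 connections per host), the equivalent knob looks like this; the value 4 is just chosen to mirror a typical browser limit:

```csharp
using System.Net;

class ConnectionLimitDemo
{
    static void Main()
    {
        // .NET Framework clients also cap concurrent connections per host.
        // Raising the cap trades politeness to the server for more client-side
        // parallelism, the same trade-off browsers make with per-domain limits.
        ServicePointManager.DefaultConnectionLimit = 4;
    }
}
```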
I'm looking for a mechanism to limit the number of concurrent connections to a service exposed using ASP.NET WebAPI.
Why? Because this service performs operations that are expensive in terms of hardware resources, and I would like to prevent degradation under stress.
More info:
I don't know how many requests will be issued per period of time.
This service runs in its own IIS application pool and limiting the maximum connections on the parent site in IIS is not an option.
I found this suite, but the supported algorithms do not include the one that I'm interested in.
I'm looking for something out of the box (something as straightforward as an IIS config setting) but I could not find exactly what I need.
Any clues?
Thanks!
Scaling your service would probably be a better idea than limiting the number of requests. You could send the heavy processing to some background jobs and keep your API servicing requests.
But assuming the above cannot be done, you will need to use one of the throttling packages available, or write your own if none meets your requirements.
I suggest starting with the ThrottlingHandler from WebApiContrib
You might be able to meet your needs by properly implementing the GetUserIdentifier method.
If not, you will need to implement your own MessageHandler, and the handler mentioned above would be a good starting point.
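If you do end up rolling your own, a DelegatingHandler that caps in-flight requests (rather than requests per time window) is fairly small. A rough sketch, assuming classic ASP.NET Web API; the class name, the 429 response, and the limit of 20 passed at registration are all placeholders:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical handler: caps the number of in-flight requests rather than requests per time window.
public class MaxConcurrencyHandler : DelegatingHandler
{
    private readonly SemaphoreSlim _slots;

    public MaxConcurrencyHandler(int maxConcurrentRequests)
    {
        _slots = new SemaphoreSlim(maxConcurrentRequests, maxConcurrentRequests);
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Reject immediately when every slot is taken instead of queuing the request.
        if (!_slots.Wait(0))
        {
            return new HttpResponseMessage((HttpStatusCode)429)
            {
                Content = new StringContent("Too many concurrent requests.")
            };
        }

        try
        {
            return await base.SendAsync(request, cancellationToken);
        }
        finally
        {
            _slots.Release();
        }
    }
}

// Registration, e.g. in WebApiConfig.Register:
// config.MessageHandlers.Add(new MaxConcurrencyHandler(20));
```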
The scenario is a network where multimedia file transfers are very common.
I have some web applications on that network, and I want to create a rule (perhaps on the MikroTik router) to prevent the web applications from slowing down while a file transfer is in progress.
Is it possible to avoid this, and how?
Maybe by creating a rule that limits UDP bandwidth?
Your description of the problem is quite general, but maybe this will point you toward a solution.
If you want to slow down some connections you should use queues.
When you are using queues you can try configuring burst, a feature that lets you throttle long-running connections while still allowing short spikes through. Also useful in more advanced queue configurations are mark-packet and mark-connection.
Sometimes it is better to use something like rate limiting in the web server instead, but it all depends on the situation.
Sorry for the banal question, but I would like to know: if I launch many, many HTTP requests at a site, will the site go down, or will I just be banned after some request limit (according to the server's settings)?
As is almost always the case, the answer is: it depends :)
Now, seriously, it depends on a few things, but if you're just hitting a single server from a single client computer, you most likely won't be able to make it go down, since servers usually have more bandwidth, networking capacity, and processing power available than most client machines. Also, when you make requests to a website you might actually be hitting a load-balanced farm of servers, in which case a single client definitely cannot outperform the site. If so, your IP will probably get blacklisted, or maybe just ignored.
The other possibility is to make a lot of requests with a number of different clients at the same time - using different IPs. This is what is usually called a coordinated or distributed denial of service attack. In that case you might be able to make the web server(s) go down for a while.
This is just a simple answer but I hope you get the point.
While tracing the active connections on my DB I found that sometimes the connection count exceeds 100. Is that normal?
After a few minutes it drops back to 20 or 25 active connections.
More details about my problem:
Traffic on the site is around 200 visitors per day.
Why am I asking? Because the default MaxPool (Max Pool Size) in the ASP.NET connection string is 100.
Also i am using Connection in the website IIS
That really depends on your site and your traffic. I've seen a site peak at over 350 active connections to SQL during its busiest time. That was for roughly 7,000 concurrent web users, on two web servers, plus various backend processes.
Edit
Some additional information that we need to give you a better answer:
How many web processes hit your SQL server? For example, are you using web gardens? Do you have multiple servers, and if so, how many? This is important because you can then calculate how many connections you could have by figuring out how many worker threads each process is configured with. Assume the worst case: every thread is running, and each one adds a connection to the pool.
Are you using connection pooling? If so, you're going to see connections stick around after the user's request ends. It's enabled by default.
How many concurrent users do you have?
But I think you're going after this wrong: your issue is having no free connections available in your pool. The first thing I'd look for is leaked connections (connections being held open longer than they should be). For example, passing a data reader up to the web page could be a sign of this.
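The usual fix is to keep the connection's lifetime inside the data-access method and return plain objects instead of an open reader. A minimal sketch (the table, column, and repository names are made up):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public static class StateRepository
{
    // Returns a materialized list, so the connection and reader are closed
    // before anything reaches the page and the connection goes back to the pool.
    public static List<string> GetStateNames(string connectionString)
    {
        var states = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Name FROM States", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    states.Add(reader.GetString(0));
                }
            }
        } // the connection is released back to the pool here, even if an exception is thrown
        return states;
    }
}
```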
The next thing is to evaluate the default settings. Maybe you should run a web garden, which should give you more connections, or increase the number of connections available.
The last thing I would do is try to optimize queries, as in your last question. Let's say you cut those queries in half: all you've done is buy yourself more time until more users come onto the system and you're right back here, only this time you might not be able to optimize that query yet again.
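If, after ruling out leaks, you really do need a bigger pool, the ceiling is set in the connection string itself. A sketch with made-up server and database names (Max Pool Size defaults to 100, which is why the count tops out around that number, so only raise it deliberately):

```csharp
using System.Data.SqlClient;

class PoolSizeExample
{
    static void Main()
    {
        // Made-up names; Pooling, Min Pool Size, and Max Pool Size are the relevant keywords.
        var connectionString =
            "Data Source=DBSERVER;Initial Catalog=MyAppDb;Integrated Security=True;" +
            "Pooling=true;Min Pool Size=0;Max Pool Size=200;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();  // connections opened with the same string share one pool
        }
    }
}
```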
You're leaving out some details, which makes it difficult to answer correctly, but...
It depends, really. If you're not using connection pooling then each time a page is hit that requires access to the database a new connection is going to be opened. So sure, it could be perfectly normal.
I would also look into caching. Cache pages, cache query results, etc. You might be surprised how many times you go back to the database to get a list of US States...
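For example, here is a small, hypothetical helper that keeps the US states list in memory so repeated page hits don't touch the database; the cache key, the 24-hour expiry, and the loader delegate are all assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class StateListCache
{
    // Returns the cached list if present, otherwise loads it once and caches it.
    public static IList<string> GetStates(Func<IList<string>> loadFromDatabase)
    {
        var cache = MemoryCache.Default;
        var states = cache.Get("us-states") as IList<string>;
        if (states == null)
        {
            states = loadFromDatabase();  // hits the database only on a cache miss
            cache.Set("us-states", states, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.AddHours(24)
            });
        }
        return states;
    }
}

// Usage from a page or repository:
// var states = StateListCache.GetStates(() => StateRepository.GetStateNames(connectionString));
```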