I'm making a WinRT application and I found some strange behavior: I can't open more than a few parallel HTTP requests to my server. The limit is around 4-6 requests (I don't know the exact number).
New requests get stuck somewhere inside the client app.
I have independent instances of HttpClient and they seem to share this limit, so it's not per-client, it's per-app.
I'm aware of the HTTP connection limit in browsers; does WinRT have the same behavior? How can it be tuned?
This looks to be different in Windows 8.1, where you can set the maximum number of connections via HttpBaseProtocolFilter.MaxConnectionsPerServer.
Note that this requires using the new HttpClient in Windows.Web.Http.
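A minimal sketch of what that looks like, assuming a Windows 8.1 project (the limit of 20 and the URL are just examples):

    using System;
    using System.Threading.Tasks;
    using Windows.Web.Http;
    using Windows.Web.Http.Filters;

    public static class TunedHttp
    {
        // Raise the per-server connection cap before creating the client.
        public static async Task<string> GetWithMoreConnectionsAsync()
        {
            var filter = new HttpBaseProtocolFilter();
            filter.MaxConnectionsPerServer = 20;

            var client = new HttpClient(filter);
            var response = await client.GetAsync(new Uri("http://example.com/api/items"));
            return await response.Content.ReadAsStringAsync();
        }
    }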
It seems that the limit is per-domain, so I set up subdomains under my server's domain and call the server in round-robin fashion.
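Roughly like this (the subdomain names are hypothetical; each distinct host name gets its own per-domain connection pool):

    using System;
    using System.Threading;

    public static class RoundRobinEndpoints
    {
        private static readonly string[] Hosts =
        {
            "http://api1.example.com",
            "http://api2.example.com",
            "http://api3.example.com",
        };

        private static int _counter = -1;

        public static Uri Next(string path)
        {
            // the unsigned cast keeps the index non-negative even after the counter wraps
            var index = (uint)Interlocked.Increment(ref _counter) % (uint)Hosts.Length;
            return new Uri(Hosts[index] + path);
        }
    }

All the subdomains point at the same server, so requests spread across several per-domain limits instead of queuing behind one.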
I have an ASP.NET Web API application running behind a load balancer. Some clients keep a busy HTTP connection alive for too long, creating unnecessary affinity and causing high load on some server instances. To fix that, I want to gracefully close a connection that is making too many requests in a short period of time (thus forcing the client to reconnect and pick a different server instance), while at the same time keeping low-traffic connections alive indefinitely. Hence I cannot use a static configuration.
Is there some API I can call to flag a request as "answer this, then close the connection"? Or can I simply add the Connection: close HTTP header and have ASP.NET see it and close the connection for me?
It looks like a good solution for your situation would be the built-in IIS feature called Dynamic IP Restrictions: "To provide this protection, the module temporarily blocks IP addresses of HTTP clients that make an unusually high number of concurrent requests or that make a large number of requests over small period of time."
It is supported by Azure Web Apps:
https://azure.microsoft.com/en-us/blog/confirming-dynamic-ip-address-restrictions-in-windows-azure-web-sites/
If this answer is helpful, please mark it as helpful or mark it as the answer. Thanks!
I am not 100% sure this would work in your situation, but in the past I have had to block people coming from specific geographic IP ranges and people coming from common proxies. I created an authorization attribute class following this guide:
http://www.asp.net/web-api/overview/security/authentication-filters
It would dump the person out based on their IP address by returning HttpStatusCode.BadRequest. On every request you would have to check a list of bad IPs in the database and go from there. Maybe you can handle the rest client-side, because those clients are going to get a ton of errors.
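A minimal sketch of that approach, assuming IIS/ASP.NET hosting and a hypothetical BlockedIpStore lookup backed by your database:

    using System.Net;
    using System.Net.Http;
    using System.Web;
    using System.Web.Http.Controllers;
    using System.Web.Http.Filters;

    // Rejects requests whose client IP is on a block list.
    public class BlockIpAttribute : AuthorizationFilterAttribute
    {
        public override void OnAuthorization(HttpActionContext actionContext)
        {
            // UserHostAddress assumes the API is hosted in IIS/ASP.NET.
            var ip = HttpContext.Current.Request.UserHostAddress;

            if (BlockedIpStore.IsBlocked(ip))   // hypothetical database-backed lookup
            {
                actionContext.Response = new HttpResponseMessage(HttpStatusCode.BadRequest);
            }
        }
    }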
Write an action filter that returns a 302 Found response for the 'blocked' IP address. I would hope the client closes the current connection and tries again at the new location (which could just be the same URL as the original request).
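A sketch of that idea, with the "too many requests" check left as a hypothetical ConnectionTracker helper (whether the host actually closes the socket when asked via Connection: close depends on the server):

    using System.Net;
    using System.Net.Http;
    using System.Web.Http.Controllers;
    using System.Web.Http.Filters;

    // Answers overly chatty clients with a 302 back to the same URL and asks for the
    // connection to be closed, so their next request gets re-balanced.
    public class RebalanceAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(HttpActionContext actionContext)
        {
            var request = actionContext.Request;

            if (ConnectionTracker.IsTooChatty(request))   // hypothetical rate check
            {
                var response = new HttpResponseMessage(HttpStatusCode.Found);
                response.Headers.Location = request.RequestUri;   // "new" location = original URL
                response.Headers.ConnectionClose = true;
                actionContext.Response = response;
            }
        }
    }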
If I make an HTTP request from Internet Explorer on a Windows Phone and it takes around a minute or more to respond, the request fails. I wrote a simple Express app that just sleeps for 80 seconds and then responds with a 200, and I can't load it from any Windows Phone device. It loads just fine from IE9 on the desktop, though.
Does anyone know of any official documentation that would explain this? Are there any workarounds for dealing with very slow APIs on a Windows Phone?
There is indeed official documentation explaining this:
By default, Internet Explorer has a KeepAliveTimeout value of one minute and an additional limiting factor (ServerInfoTimeout) of two minutes. Either setting can cause Internet Explorer to reset the socket.
If either the client browser (Internet Explorer) or the Web server has a lower KeepAlive value, it is the limiting factor. For example, if the client has a two-minute timeout, and the Web server has a one-minute timeout, the maximum timeout is one minute. Either the client or the server can be the limiting factor.
To work around the request timing out when dealing with slow APIs, you need the server to return something periodically, to let the browser know the server is still alive and that a response really is coming. How to do that is a whole different question and really depends on the case (or category of cases).
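The question's demo server is Express, but as a rough illustration of "return something periodically", here is a sketch using ASP.NET Web API's PushStreamContent (the controller, timings, and payload are made up):

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class SlowThingController : ApiController
    {
        // Streams a little padding every 15 seconds while the slow work runs,
        // so the connection never looks idle for a whole minute.
        [HttpGet]
        public HttpResponseMessage Get()
        {
            Func<Stream, HttpContent, TransportContext, Task> writeBody = async (stream, content, context) =>
            {
                using (var writer = new StreamWriter(stream, Encoding.UTF8))
                {
                    for (var i = 0; i < 5; i++)        // stands in for ~80s of slow work
                    {
                        await Task.Delay(TimeSpan.FromSeconds(15));
                        await writer.WriteAsync(" ");  // keep-alive padding
                        await writer.FlushAsync();
                    }
                    await writer.WriteAsync("done");   // the real payload
                }                                      // disposing the writer ends the response
            };

            var response = Request.CreateResponse();
            response.Content = new PushStreamContent(writeBody, "text/plain");
            return response;
        }
    }

Note that any buffering proxy between the phone and the server can still swallow the padding, so this only helps if the padding actually reaches the client.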
Some related resources I recommend you go over:
HTTP persistent connection
Push technology
The Streaming APIs
We are using Application Request Routing (ARR) with 5 servers running IIS 7.5 and have just recently implemented a messaging system using SignalR in our application.
The SignalR connections are working as we expect (with the only drawback being that a message sent from one server doesn't get activated on the other 4).
The problem(?) we are having is that the response time of some IIS requests shown in the load balancer (ARR) sometimes comes up as 2-3 minutes, which I assume is because of connections using something like long polling.
Our ARR is set to load balance using lowest response time, but it seems like this metric will be completely skewed by these SignalR connections. Is there any way to exclude these connections from the ARR response-time calculation? Or are we stuck having to move the SignalR traffic to a separate server to avoid this sort of thing (which admittedly would solve other problems as well)?
I think the article below will help you. Try setting the "response buffer threshold" to 0 in ARR:
http://matthewmanela.com/blog/using-signalr-in-an-arr-cluster/
I was wondering how I can find the connection limit for a web server.
In most cases I've encountered, it is limited to 6 connections (meaning I can have 6 connections to the web server working at the same time).
Is there any request I can send over HTTP to discover it?
Could you be more precise? What kind of server? Any? For which OS?
If it's an Apache HTTP server, you should have a look at the configuration file (usually /etc/httpd/conf/httpd.conf under Linux) and search for the MaxClients option.
For example, I use a small Apache server at home which can process 300 simultaneous requests (connections).
EDIT:
I don't think you will be able to get the server's limits from the client side. You should try to overload it in order to estimate them.
There's nothing like this in the HTTP standard; it aims to isolate HTTP requests from each other as much as possible. There might be a server-specific way to query this.
Depending on the architecture of your server, there could be far more TCP connections accepted than worker threads generating the HTTP responses, so you need to ask yourself what exactly you are interested in, and then just measure it with JMeter.
I'm writing an ASP.NET web application which will run on Windows Server 2008 (IIS7).
Each page's codebehind will need to make at least one synchronous web service call to an external server using HttpWebRequest and GET.
My question - is there any limit to the number of outbound HttpWebRequest calls I can make? (assume that the server I'm calling has no limit)
Is there any means to pool these connections to make the app scale better? Would a web garden configuration help?
By default, the client is limited to two connections to an HTTP/1.1 server and four connections to an HTTP/1.0 server, so your ASP.NET app will have serious throughput problems if, for example, it tries to issue more than two outstanding requests to the same HTTP/1.1 server.
You will need to increase the connection limit, either per server or globally.
For example, globally:
ServicePointManager.DefaultConnectionLimit = 10; // allow 10 outstanding connections
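Or per server, using the ServicePoint for the one external endpoint the app actually calls (the URL here is just a placeholder):

    using System;
    using System.Net;

    var servicePoint = ServicePointManager.FindServicePoint(new Uri("http://api.example.com/"));
    servicePoint.ConnectionLimit = 10;   // only affects connections to this host

There is also a connectionManagement section under system.net in web.config that does the same thing declaratively.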
Hope this helps.
I think your question should be geared toward network configurations.
I'd say you are asking for trouble if every page is dependent on a synchronous external call. What if you get N requests that all hang on the external web service(s)? You will have issues on your end and there will be nothing you can do about it.
Have you considered async calls with callbacks?
EDIT: Asynchronous Pages in ASP.NET 2.0
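For reference, a rough sketch of the pattern that article describes (shown here with the Task-based PageAsyncTask overload from .NET 4.5; the URL and the ResultLabel control are made up):

    using System;
    using System.Net.Http;
    using System.Web.UI;

    // Requires Async="true" in the page's @ Page directive.
    public partial class ProductPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            RegisterAsyncTask(new PageAsyncTask(async () =>
            {
                using (var client = new HttpClient())
                {
                    // The request thread goes back to the pool while this await is pending,
                    // so slow external calls don't pin down ASP.NET worker threads.
                    ResultLabel.Text = await client.GetStringAsync("http://api.example.com/data");
                }
            }));
        }
    }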
The following link points to a really great article on optimizing ASP.NET:
http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx
Hope it helps ;)