Slow HTTP OPTIONS in ASP.NET

Does anyone have any information on how an HTTP OPTIONS request is handled in ASP.NET?
We use OPTIONS requests to health-check our system. The check is normally fast (around 1 ms), but sometimes the response is too slow (2 s), which trips our health-check monitoring. Any idea where to look for the cause (IIS logs, ASP.NET events, etc.)?
Thanks

Depends on which version of ASP.NET, but I would assume it is handled much like any other HTTP request. I think you may be hitting the slow first request that follows an IIS application pool start or recycle.
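If you want to narrow down where the time goes, one option is to time every request inside the ASP.NET pipeline and log anything slow, then compare against the time-taken field in the IIS logs. A minimal sketch of such a module, assuming the IIS integrated pipeline (the module name and the 500 ms threshold are just illustrative):

    using System;
    using System.Diagnostics;
    using System.Web;

    // Hypothetical diagnostic module: times each request (including OPTIONS)
    // and writes a trace entry when it exceeds a threshold.
    public class SlowRequestLoggingModule : IHttpModule
    {
        private const string StopwatchKey = "SlowRequestLoggingModule.Stopwatch";

        public void Init(HttpApplication app)
        {
            app.BeginRequest += (s, e) =>
            {
                app.Context.Items[StopwatchKey] = Stopwatch.StartNew();
            };

            app.EndRequest += (s, e) =>
            {
                var sw = app.Context.Items[StopwatchKey] as Stopwatch;
                if (sw == null) return;
                sw.Stop();

                if (sw.ElapsedMilliseconds > 500) // threshold chosen arbitrarily
                {
                    Trace.WriteLine(string.Format("Slow {0} {1}: {2} ms",
                        app.Context.Request.HttpMethod,
                        app.Context.Request.RawUrl,
                        sw.ElapsedMilliseconds));
                }
            };
        }

        public void Dispose() { }
    }

Register the module under system.webServer/modules. If the 2 s responses only show up after quiet periods, that usually points at an application pool recycle or idle timeout rather than anything specific to the OPTIONS verb.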

Related

How to limit maximum concurrent connections / requests / sessions in ASP.NET WebAPI

I'm looking for a mechanism to limit the number of concurrent connections to a service exposed using ASP.NET WebAPI.
Why? Because this service performs operations that are expensive in terms of hardware resources, and I would like to prevent degradation under stress.
More info:
I don't know how many requests will be issued per period of time.
This service runs in its own IIS application pool and limiting the maximum connections on the parent site in IIS is not an option.
I found this suite, but the supported algorithms do not include the one that I'm interested in.
I'm looking for something out of the box (something as straightforward as an IIS config setting) but I could not find exactly what I need.
Any clues?
Thanks!
Scaling your service would probably be a better idea than limiting the number of requests. You could send the heavy processing to some background jobs and keep your API servicing requests.
But assuming the above cannot be done, you will need to use one of the available throttling packages, or write your own if none meets your requirements.
I suggest starting with the ThrottlingHandler from WebApiContrib.
You might be able to meet your needs by properly implementing the GetUserIdentifier method.
If not, you will need to implement your own MessageHandler and the handler mentioned would be a good starting point.
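If you do end up writing your own, a message handler that caps in-flight requests only takes a few lines. Here is a rough sketch (this is not the WebApiContrib handler; the limit of 10 and the plain 503 response are placeholder choices):

    using System.Net;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    // Illustrative handler: allows at most N requests to execute concurrently
    // and rejects the rest with 503 instead of queueing them.
    public class ConcurrencyLimitHandler : DelegatingHandler
    {
        private readonly SemaphoreSlim _semaphore;

        public ConcurrencyLimitHandler(int maxConcurrentRequests)
        {
            _semaphore = new SemaphoreSlim(maxConcurrentRequests, maxConcurrentRequests);
        }

        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            if (!_semaphore.Wait(0)) // don't block; fail fast when saturated
            {
                return new HttpResponseMessage(HttpStatusCode.ServiceUnavailable)
                {
                    ReasonPhrase = "Too many concurrent requests"
                };
            }

            try
            {
                return await base.SendAsync(request, cancellationToken);
            }
            finally
            {
                _semaphore.Release();
            }
        }
    }

    // Registration, e.g. in WebApiConfig.Register:
    // config.MessageHandlers.Add(new ConcurrencyLimitHandler(10));

Whether to reject with 503 or queue the excess requests is a design choice; rejecting keeps the expensive work bounded, which sounds like what you're after.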

How are threads tied to requests through Http.sys, IIS and ASP.NET

I'm currently reading a lot about node.js. There is a frequent comparison between servers using a traditional thread per request model (Apache), and servers that use an event loop (Nginx, node, Tornado).
I would like to learn in detail about how a request is processed in ASP.NET - from the point it is received in http.sys all the way up to it being processed in ASP.NET itself. I've found the MSDN documentation on http.sys and IIS a little lacking, but perhaps my google-fu is weak today. So far, the best resource I have found is a post on Thomas Marquardt's Blog.
Could anyone shed more light on the topic, or point me to any other resources?
(For the purposes of this question I'm only interested in IIS7 with a typical integrated pipeline)
From my research so far, it's my understanding that when a request comes in, it gets put into a kernel-mode request queue. According to this, that avoids many of the problems with context switching when there are massive numbers of requests (or processes or threads...), providing benefits similar to evented I/O.
Quoted from the article:
"Each request queue corresponds to one
application pool. An application pool
corresponds to one request queue
within HTTP.sys and one or more worker
processes."
So according to that, every request queue may have more than one worker process (Google cache; more on worker processes).
From my understanding:
IIS opens (creates) a request queue (see the HTTP.sys API below).
A "web site" configured in IIS corresponds to one worker process.
A web site / worker process shares the thread pool.
A thread is handed a request from the request queue.
Here is a lot of great information about IIS7's architecture
Here is some more information about http.sys.
HTTP Server I/O Completion Stuff
Typical Server Tasks
Open questions I still have:
How the heck does IIS change the Server header if it uses HTTP.sys? (See this question.)
Note: I am not sure if/how a kernel-mode request queue corresponds to an I/O completion port; I would assume each request would have its own, but I don't know, so I truly hope someone will answer this more thoroughly. I just stumbled on this question, and it seems that HTTP.sys does in fact use I/O completion ports, which should provide nearly all of the same benefits that evented I/O servers (node.js, nginx, lighttpd, C10K, etc.) have.
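If you want to see the managed side of this for yourself, one simple experiment is to log which thread serves each request. A small sketch for Global.asax, assuming the integrated pipeline (the trace output is only for observation):

    using System;
    using System.Diagnostics;
    using System.Threading;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            int workerThreads, completionPortThreads;
            ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);

            // Requests are dispatched onto CLR thread-pool threads; there is no
            // dedicated OS thread per connection as in a classic thread-per-request server.
            Trace.WriteLine(string.Format(
                "Request {0} on managed thread {1} (pool thread: {2}); available worker/IOCP threads: {3}/{4}",
                Request.RawUrl,
                Thread.CurrentThread.ManagedThreadId,
                Thread.CurrentThread.IsThreadPoolThread,
                workerThreads,
                completionPortThreads));
        }
    }

Under load you should see a small set of pool threads being reused across many requests, which is consistent with the HTTP.sys queue plus I/O completion port picture described above.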

Frequent socket/connection timeouts in SOAP/HTTP

I am working on a web application project that has .NET-based Web 2.0 features in the GUI (meaning lots of AJAX calls) and Axis1-based web services at the business layer to serve data.
I see a potential performance issue with the web service protocol, SOAP over HTTP: since there will be a lot of AJAX calls, i.e. HTTP requests to the web server, we may see frequent socket/connection timeout issues in production. Does anyone have prior experience with this kind of issue? Any idea how to rectify it?
I googled and found that persistent HTTP connections would improve things, but I would like to know your views.
Here are my environment details:
Front end: .NET
Back end:
Tomcat 6.0
Axis1
Oracle 10g
Windows XP
Yes, persistent HTTP connections help by avoiding the cost of creating new connections, and that is the first thing that comes to mind. Another step is to set socket timeout values on the client/server sockets and to increase the backlog value on the server socket(s) (I'm not sure how to do that in Axis).
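On the .NET client side, keep-alive, timeouts and the per-host connection limit are all controllable. A rough sketch of the relevant knobs (the URL and the numbers are placeholders, not recommendations):

    using System;
    using System.Net;

    class SoapClientSettingsExample
    {
        static void Main()
        {
            // Raise the default per-host limit (2) so many AJAX-triggered
            // backend calls don't queue behind each other.
            ServicePointManager.DefaultConnectionLimit = 20;

            var request = (HttpWebRequest)WebRequest.Create(
                "http://backend.example/axis/services/MyService"); // placeholder endpoint
            request.Method = "POST";
            request.ContentType = "text/xml; charset=utf-8";
            request.KeepAlive = true;          // reuse the TCP connection (persistent HTTP)
            request.Timeout = 15000;           // ms; fail fast instead of hanging on a dead socket
            request.ReadWriteTimeout = 15000;  // ms; timeout for reading/writing the body

            // ... write the SOAP envelope to request.GetRequestStream()
            //     and read the response as usual ...
        }
    }

On the Tomcat 6 side, the matching HTTP connector attributes are maxKeepAliveRequests, connectionTimeout and acceptCount (the listen backlog).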

HTTP requests / concurrency?

Say a website on my localhost takes about 3 seconds to serve each request. This is fine and expected (as it is doing some fancy networking behind the scenes).
However, if I open the same URL in several tabs (in Firefox) and then reload them all at the same time, the pages appear to load sequentially rather than all at the same time. What is this all about?
I have tried it on Windows Server 2008 IIS and Windows 7 IIS.
It really depends on the web browser you are using and how tab support in it has been programmed.
It is probably using a single thread to load each tab in turn, which would explain your observation.
Edit:
As others have mentioned, it is also a very real possibility that the web server running on your localhost is single-threaded.
If I remember correctly, the HTTP/1.1 spec recommends that clients open no more than 2 concurrent connections to the same host. This is one reason high-load websites use CDNs (content delivery networks).
network.http.max-connections 60
network.http.max-connections-per-server 30
The above two Firefox preferences determine how many connections Firefox makes in total and per server. Once the per-server limit is reached, further requests are queued until a connection frees up.
Each browser implements this in its own way, and requests are scheduled to maximise performance. Moreover, it also depends on the server (your localhost, which is the slow part here).
Your local web server configuration might allow only one worker thread, so each request waits for the previous one to finish.
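One way to separate the browser's connection behaviour from the server's is to hit the page from a small client that definitely issues the requests in parallel. A sketch, assuming .NET 4.5 for HttpClient (the URL is a placeholder):

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ParallelRequestTest
    {
        static void Main()
        {
            // Browsers cap connections per host; this client won't,
            // provided we raise the default limit of 2 connections per host.
            ServicePointManager.DefaultConnectionLimit = 10;

            using (var client = new HttpClient())
            {
                var sw = Stopwatch.StartNew();

                var tasks = Enumerable.Range(0, 5)
                    .Select(i => client.GetAsync("http://localhost/slowpage.aspx"))
                    .ToArray();
                Task.WaitAll(tasks);

                sw.Stop();
                // Roughly 3 s total means the server handled them concurrently;
                // roughly 15 s means something on the server side is serialising them.
                Console.WriteLine("5 requests took {0} ms", sw.ElapsedMilliseconds);
            }
        }
    }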

WCF ThreadPool and async

I've got an ASP.NET web page that makes 7 async requests to a WCF service on another server. Both boxes are clean, with nothing else installed.
I've also increased maxconnection in web.config to 20.
I run a single call through the system and the page returns in 800 ms. The long and short of it is that I think the thread pool is being overwhelmed: once placed under load, I cannot get more than 8 requests per second, even though both quad-core boxes are running at 20% CPU and the SQL Server they talk to is returning the queries in under 10 ms per call.
I've changed the service behaviour to ConcurrencyMode.Multiple, but that doesn't seem to help.
Any ideas, anyone?
There are many different factors that could be in play here. Taking at face value the remark that changing the instancing model on the service had zero effect (big IF here), it's possible the bottleneck is upstream of the service, either at the web server or at the client load generator.
You've got several areas to review for tuning: the client, the web server and the WCF service server, assuming there are no network devices in the middle. Pick one end and work towards the other. Since I'm already assuming it's not the service, I'd start at the client and work my way towards the WCF service.
Client
What machine is driving the load against the web server? A laptop? A desktop? A dedicated test agent, or a shared one? The client acting as the load generator for this test is also subject to the maxconnection limit, as that is a client-side setting.
What is the CPU utilization of the client generating the load? Could it be that the test driver is simply unable to generate enough load to push these boxes? Can you add additional test clients to your test?
Web Server
What does the system.web/processModel element look like in machine.config on the ASP.NET web server? Try setting autoConfig="true". This allows the thread-pool configuration to size itself automatically based on the machine it is running on.
WCF Service
Review WCF service for any throttling defaults that might be in play and tweak appropriately. See ServiceThrottlingBehavior on MSDN.
Let us know any changes in behavior you might observe (if any) if you make any changes!
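For reference, the WCF throttle can be adjusted in code as well as in config. A sketch for a self-hosted service (the contract, address and numbers are illustrative only):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    // Minimal placeholder contract/service so the sketch is self-contained;
    // ConcurrencyMode.Multiple is what the question refers to.
    [ServiceContract]
    public interface IWorkService
    {
        [OperationContract]
        string DoWork(string input);
    }

    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class WorkService : IWorkService
    {
        public string DoWork(string input) { return "done: " + input; }
    }

    class ThrottlingExample
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(WorkService),
                new Uri("http://localhost:8080/work"));
            host.AddServiceEndpoint(typeof(IWorkService), new BasicHttpBinding(), "");

            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }

            // The WCF 3.x defaults are low (16 calls, 10 sessions, 26 instances);
            // raise them if the throttle itself is the bottleneck.
            throttle.MaxConcurrentCalls = 64;
            throttle.MaxConcurrentSessions = 64;
            throttle.MaxConcurrentInstances = 128;

            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }

When the service is hosted in IIS, the same values go in the serviceThrottling element of the service's behavior configuration.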
The real answer here, which everyone missed, is that you're using an ASP.NET web page, which means your client is some form of web browser. Web browsers typically limit themselves to 2 concurrent requests per host at any time. That means 5 of your requests were queued up waiting for the first two to finish; once those first two completed, it served the next two, then the next two, then the last one.
All of these round trips and handshakes simply take time. I'm guessing that your round-trip time is around 200 ms; unfortunately you have to pay it 4 times.
I also really dislike the "max 2" browser limitation on making webservice calls.
Is this service hosted in IIS, WAS or a Windows Service?
You could try setting Windows to run services at a higher priority. Your WCF service is probably creating the threads it needs, but they may be running at a low priority.
Hope that helps.
