What do ASP.NET performance counters mean?

I'm trying to get a better handle on how threads work in ASP.NET, so I have a test site with a few pages, and I have a test WinForms client that creates 40 roughly concurrent requests to the test site. The requests take about 5-10 seconds to complete--they call a web service on another server. When I run the test client, I can use Fiddler to see that the requests are being made concurrently. However, when I look at Performance Monitor on the web server, with counters "ASP.NET Apps v2.0.xxx/Requests Executing", "ASP.NET/Requests Current", "ASP.NET Requests Queued", these counters never display more than 2. This is the case regardless of whether the test page I'm requesting is set up with Async=True and using the Begin/End pattern of calling the web service, or if it's set up to make the call synchronously. Judging by what I see in Fiddler, I would think I should be seeing a total of 40 requests in one of those states, but I don't. Why is that? Do these counters not mean what I think they mean?

Related

IIS/ASP.NET intentionally responds to simultaneous requests more slowly than to single ones?

IIS (or maybe ASP.NET) takes longer to respond to requests when they are sent simultaneously with other requests. For example, if a web page sends request A along with 20 other requests, it takes 500 ms, but when the same request is sent alone, it takes 400 ms.
Is there a name for this behavior? Does it live in IIS or in ASP.NET? Can I disable or change it? Is there any benefit to it?
Notes:
I am seeing this issue in an ASP.NET Web API application.
I have checked the IIS settings (IIS 8.5 on Windows Server 2012 R2) and found nothing that limits throughput. All constraints such as bandwidth and CPU throttling are set to high values, and the server has good hardware.
Update 1:
All requests read something from the database. I have checked them in the Chrome developer console. I also created a simple C# application that makes multiple parallel requests to the server. When the requests are truly parallel they take a long time, but when I wait between calls the response time decreases dramatically.
Update 2:
I have a simple method in my application that just sends an Ok:
[AllowAnonymous]
public IHttpActionResult CheckOnline()
{
    return Ok();
}
The same behavior exists here. In my custom C# tester, if I call this route multiple times simultaneously it takes more than 1000 ms to complete, but when I wait 5 seconds between calls the response time drops below 20 ms.
This method is neither IO- nor CPU-bound. It seems as though IIS detects that these requests come from a single specific user/client and gives them less attention.
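For what it's worth, a minimal sketch of such a tester (the URL is a placeholder). Note that a .NET client defaults to two concurrent connections per host, so without raising ServicePointManager.DefaultConnectionLimit the tester itself can serialize the calls and inflate the timings:

using System;
using System.Diagnostics;
using System.Linq;
using System.Net;
using System.Threading.Tasks;

class ParallelTester
{
    static void Main()
    {
        const string url = "http://yourserver/api/checkonline"; // placeholder route
        ServicePointManager.DefaultConnectionLimit = 100;       // default is only 2 per host

        var sw = Stopwatch.StartNew();
        var tasks = Enumerable.Range(0, 20)
            .Select(_ => Task.Run(() =>
            {
                using (var client = new WebClient())
                {
                    client.DownloadString(url);
                }
            }))
            .ToArray();
        Task.WaitAll(tasks);
        Console.WriteLine("20 parallel calls: {0} ms", sw.ElapsedMilliseconds);
    }
}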
If you use ASP.NET Session in your application, requests from the same session are queued and processed one by one, so the last request can be held in the queue while the previous requests are being processed.
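If session locking is indeed the cause, one way to relax it is per controller; a sketch for MVC (StatusController is a hypothetical name; Web Forms pages can use EnableSessionState="ReadOnly" in the @ Page directive instead, and Web API does not load session at all unless you wire it up explicitly):

using System.Web.Mvc;
using System.Web.SessionState;

// read-only session access does not take the exclusive session lock,
// so requests from the same session can run concurrently
[SessionState(SessionStateBehavior.ReadOnly)]
public class StatusController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}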
Another possible reason is that all the threads in the ASP.NET thread pool are busy. In that case a new thread has to be created to process a new request, which takes additional time.
This is just a theory (or rather my thoughts); any other cause is possible.

What is the maximum number of threads allowed to run on a web page

I have developed a page using ExtJS and ASP.NET. The page has multiple widgets, and each widget sends multiple AJAX requests at load time. I assume each AJAX request runs on a new thread. I just want to understand:
How many such threads are allowed to run from a single window? How many requests can a browser send to the server without queuing them at the browser itself? I know requests can be queued on the server side depending on various parameters.
Does this behavior change from browser to browser?
Is there any way to trace the number of active requests?
I am not talking about the number of threads that can run on the server side, or about the number of requests a server can process.
How many such threads are allowed to run from a single window?
It depends on your server's hardware capabilities and configuration.
Does this behavior change from browser to browser?
No.
Is there any way to trace the number of active requests?
Microsoft: http://msdn.microsoft.com/en-us/library/bb386420(v=vs.100).aspx
Examples & Tips: http://www.dotnetperls.com/trace
A server typically can handle several simultaneous requests, even with a small number of threads.
On IIS you configure this using:
<system.web>
    <applicationPool maxConcurrentRequestsPerCPU="50" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000"/>
</system.web>
Refer: Microsoft best practices
On the browser side, when using JavaScript, you should know that JavaScript itself has no multi-threading capabilities; concurrent AJAX requests come from the browser's connection pool, not from script threads.
The number of simultaneous AJAX (or other) requests to a server is browser-dependent.
Usually 2, 6, or 8 connections are allowed per domain name. This should not bother you too much, because the rest will simply be queued.
Refer: BrowserScope for the latest network data

Why could a Web Service call be slower than a web POST request?

I have a .NET web service hosted in IIS. The web service has been used by clients over the past few years, and there have been occasional timeout events when the client is on a slow connection (e.g. GPRS). On the other hand, the clients sometimes have to POST data to another web page (part of an ASP.NET web app), and usually the size of the data in those POST requests is bigger than the actual payloads of the web service calls. Yet the POST requests are far quicker than the web service calls.
To establish this further, I created a test web service with one method, and a single web page with exactly the same operation, i.e. receive 100K and send back 100K (random bytes). I used a test client to call the web service method, and also did a POST to the web page and got a response back, using the same client. The difference between receiving a reply from the web service and a response from the POST request is huge, i.e. about 1200 ms. Why is that the case? Is there some configuration on the web service that would make such a big difference? Is it the SOAP call stack? Serialization/deserialization?
A number of factors could be contributing to this.
The first thing that leaps to mind for me is that SOAP could be considered a verbose protocol. That is, there's a LOT of data in the XML payload going both ways. XML is verbose in and of itself, and it's not exactly the fastest thing in the universe to process. Sure, you can use an optimized library to process its data, but it'll be parsed out into object trees, and then you have to walk the nodes to drill down to the data you want. Unless you're using XPath, which will just do the same darned thing.
This is all presuming that you're actually using SOAP. And that your WebService is correctly configured. And that no packet loss is occurring while connecting to the Web Service. And that your firewall isn't creating issues. And that there's no encryption/decryption overhead.
In my own experience, one thing that frequently causes significant slowdowns server-side is one or more thrown exceptions. Try a Fiddler trace.

asp.net infinite loop - can this be done?

This question is about limits imposed on me by ASP.NET (like script timeout, etc.).
I have a service running under ASP.NET and I want to create a counterpart service for monitoring.
The main service's data is located at a database.
I was thinking of having the monitor service query the database at 1-second intervals, inside a loop driven by an HTTP request from the remote client.
The actual serving of this monitoring will be done by a client HTTP request, which makes the script loop (written in C#); when new data is detected, the loop aggregates that data into that one request's output buffer, sends it, and exits, thus finishing the request.
The client will then have to issue a new request in order to keep getting updates.
This is essentially long polling; conceptually it's like TCP (or Windows IOCP): you request data from the service and wait for it, and when it arrives you fire another request.
My actual question is: Have you done it before? How did it go? Am I limited by some (configurable) limits imposed by the IIS/ASP.NET framework? What are my limits in such situation, or, what are better options without complicating things too much?
Note that I do not expect many such monitoring requests at a time, maybe a few dozens.
This means, however, that 10 such concurrent monitoring requests will keep 10 threads busy, and the question is: can this hurt IIS/performance? How will IIS handle 10 busy threads? Will it spawn more? What are the limits? This is just one example of a limit I can think of.
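For concreteness, a sketch of the loop being described; TryGetNewData is a hypothetical helper that polls the database:

using System;
using System.Web.UI;

public partial class MonitorPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // stay well below ASP.NET's executionTimeout (110 s by default)
        DateTime deadline = DateTime.UtcNow.AddSeconds(30);
        while (DateTime.UtcNow < deadline)
        {
            string update = TryGetNewData(); // hypothetical database check
            if (update != null)
            {
                Response.Write(update); // send the aggregated data and finish
                return;
            }
            System.Threading.Thread.Sleep(1000); // the 1-second interval from above
        }
        // timed out with nothing new; the client simply issues the next request
    }

    private string TryGetNewData()
    {
        return null; // placeholder for the actual database query
    }
}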
I think your main concern in this situation would be timeouts, which are pretty much configurable. But I think it is the wrong solution: you'd be better off with a background service, running constantly or periodically, that writes the monitoring data to some data store; your monitoring page would then just return that data upon request.
If you want your page to display something only when monitoring data is available, implement it with AJAX: on page load, query the monitoring service; if monitoring events are available, render them; if not, sleep and query again.
IMO this would be a much better solution than a really long-running request.
I don't think it is a very good idea to monitor a service using ASP.NET, for the following reasons...
What happens when your application pool crashes?
What if you decide to do IISReset? Which application will come up first... the main app, or the monitoring app?
What if the monitoring application hangs due to load?
What if the load is already high on the main service? Wouldn't monitoring it every second increase the load on the primary service, as well as on IIS?
You get the idea...

Using a remote, external web service instead of a database

I am building an ASP.NET web application that will be deployed to a 4-node web farm.
My web application's farm is located in California.
Instead of a database for back-end data, I plan to use a set of web services served from a data center in New York.
I have a page /show-web-service-result.aspx that works like this:
1) User requests page /show-web-service-result.aspx?s=foo
2) Page's codebehind queries a web service that is hosted by the third party in New York.
3) When web service returns, the returned data is formatted and displayed to user in page response.
Does this architecture have potential scalability problems? Suppose I am getting hundreds of unique hits per second, e.g.
/show-web-service-result.aspx?s=foo1
/show-web-service-result.aspx?s=foo2
/show-web-service-result.aspx?s=foo3
etc...
Is it typical for web servers in a farm to be using web services for data instead of database? Any personal experience?
What change should I make to the architecture to improve scalability?
You most definitely have a scalability problem: the third-party web service. Unless you have a service-level agreement with that service (agreeing on the number of requests you can submit per second), chances are you will overload it with your anticipated load. That you have four nodes yourself doesn't help you then.
So you should a) come to an agreement with the third party, and b) test what load they can actually take.
In addition, you need to make sure that your framework can use parallel connections for accessing the remote service. Suppose you have a round-trip time of 20 ms from California to New York (which would be fairly good); then you cannot make more than 50 requests per second over a single TCP connection. Likewise, starting a new TCP connection for every request will also kill performance, so you want pooling on these parallel connections.
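On .NET specifically, the client-side connection limit has historically defaulted to two concurrent connections per host (ASP.NET's autoConfig raises it to 12 per CPU), so it is worth setting explicitly. A sketch in web.config; the value 48 is illustrative, not a recommendation:

<system.net>
    <connectionManagement>
        <add address="*" maxconnection="48" />
    </connectionManagement>
</system.net>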
I don't see a problem with this approach, we use it quite a bit where I work. However, here are some things to consider:
Is your page rendering going to be blocked while waiting for the web service to respond?
What if the response never comes, i.e. the service is down?
For the first problem I would look into using AJAX to update the page after you get a response back from the web service. You'll also want to consider how to handle the no response or timeout condition.
Finally, you should really think about how you could cache the web service data locally. For example if you are calling a stock quoting service then unless you have a real-time feed, there is no reason to call the web service with every request you get. Store the data locally for a period of time and return that until it becomes stale.
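To make that concrete, here is a minimal sketch of such a time-boxed local cache built on HttpRuntime.Cache; ServiceResultCache, the fetch delegate, and the 60-second lifetime are all illustrative, not a prescribed design:

using System;
using System.Web;
using System.Web.Caching;

public static class ServiceResultCache
{
    // fetch is a delegate wrapping the actual remote web service call
    public static string Get(string key, Func<string> fetch)
    {
        string cached = HttpRuntime.Cache[key] as string;
        if (cached != null)
        {
            return cached; // serve locally, skip the trip to New York
        }

        string fresh = fetch();
        HttpRuntime.Cache.Insert(
            key, fresh, null,
            DateTime.UtcNow.AddSeconds(60),  // data considered stale after 60 s
            Cache.NoSlidingExpiration);
        return fresh;
    }
}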
You may have scalability problems but most of these can be carefully engineered around.
I recommend you use ASP.NET's asynchronous tasks so that the web service is queued up, the thread is released while the request waits for the web service to respond, and then another thread picks up when the web service is done to finish off the request.
MSDN Magazine - Wicked Code - Asynchronous Pages in ASP.NET 2.0
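As a sketch of what that article's pattern looks like (assuming an ASMX-style proxy _proxy with a BeginGetData/EndGetData pair, and a page marked <%@ Page Async="true" %>; these names are placeholders):

protected void Page_Load(object sender, EventArgs e)
{
    RegisterAsyncTask(new PageAsyncTask(BeginCall, EndCall, null, null));
}

private IAsyncResult BeginCall(object sender, EventArgs e, AsyncCallback cb, object state)
{
    // the ASP.NET worker thread returns to the pool while the call is in flight
    return _proxy.BeginGetData("foo", cb, state);
}

private void EndCall(IAsyncResult ar)
{
    ResultLabel.Text = _proxy.EndGetData(ar); // ResultLabel: hypothetical control
}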
Local caching is an absolute must. The fewer times you have to go from California to New York, the better. You might want to look into Microsoft's Velocity (although that's still in CTP) or NCache, or another distributed cache, so that each of your 4 web servers don't all have to make and cache the same data from the web service - once one server gets it, it should be available to all.
Microsoft Project Code Named "Velocity"
NCache
Other things that can go wrong that you should engineer around:
The web service is down (obviously) and data falls out of cache, and you can't get it back. Try to make it so that the data is not actually dropped from cache until you're sure you have an update available. Then the only risk is if the service is down and your application pool is reset, so don't reset it as a first-line troubleshooting maneuver!
There are two different timeouts on web requests, a connect and an overall timeout. Make sure both are set extremely low and you handle both of them timing out. If the service's DNS goes down, this can look like quite a different failure.
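A sketch of setting both timeouts on an HttpWebRequest (the URL and the 3-second values are illustrative):

using System;
using System.IO;
using System.Net;

class TimeoutDemo
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://ws.example.com/data");
        request.Timeout = 3000;          // ms: connect plus waiting for the response to begin
        request.ReadWriteTimeout = 3000; // ms: reading the response body once it starts

        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
        catch (WebException ex)
        {
            // DNS failures, connect timeouts, and read timeouts all surface here;
            // ex.Status (NameResolutionFailure, Timeout, ...) tells them apart
            Console.WriteLine(ex.Status);
        }
    }
}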
Watch perfmon for ASP.NET Queued Requests. This number will rise rapidly if the service goes down and you're not covering it properly.
Research and adjust ASP.NET performance registry settings so you have a highly optimized ASP.NET thread pool. I don't remember the specifics, but I seem to remember that there's a limit on IO Completion Ports and something else of that nature that are absurdly low for the powerful hardware I'm assuming you have on hand.
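The settings being half-remembered here are most likely the thread-pool limits documented in Microsoft's KB 821268 (contention and deadlocks when making web service calls from ASP.NET). A sketch of the relevant machine.config elements, using the KB's single-CPU starting values; multi-CPU boxes scale them per CPU, and the maxconnection knob shown earlier belongs to the same tuning set:

<processModel maxWorkerThreads="100" maxIoThreads="100" />
<httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />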
The trendy answer is REST. Any GET request can be HTTP-response-cached (with lots of options for how that is configured), and it will be cached by the Internet itself (your ISP, essentially).
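For the page in this question, that could be as simple as an OutputCache directive on show-web-service-result.aspx; the 60-second duration is illustrative, and Location="Any" allows downstream proxies to cache as well:

<%@ OutputCache Duration="60" VaryByParam="s" Location="Any" %>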
Your project has an architecture that reflects the direction that Microsoft and many others in the SOA world want to take us. That said, many people try to avoid this type of real-time risk introduced by the web service.
Your system will have a huge dependency on the web service working in an efficient manner. If it doesn't work, or is slow, people will just see that your page isn't working properly.
At the very least, I would get a web stress tool and performance test your web service to at least the traffic levels you expect to get at peaks, and likely beyond this. When does it break (if ever?), when does it start to slow down? These are good metrics to know.
Other options to look at: perhaps you can get daily batches of data from the web service to a local database and hit the database for your web site. Then, if for some reason the web service is down or slow, you could use the most recently obtained data (if this is feasible for your data).
Overall, it should be doable, but you want to understand and measure the risks, and explore any potential options to minimize those risks.
It's fine. There are some scalability issues, primarily with the number of calls you are allowed to make to the external web service per second. Some web services (Yahoo Shopping, for example) limit how often you can call their service and will lock out your account if you call too often. If you have a large farm and lots of traffic, you might have to throttle your requests.
Also, it's typical in these situations to use an interstitial page that forks off a worker thread to make the web service call and redirects to the results page when the call returns. (Think of a travel site: when you do a search, you get an interstitial page while they call out to an external source for the flight data, and then you are redirected to a results page when the call completes.) This may be unnecessary if your web service call returns quickly.
I recommend you be certain to use WCF, and not the legacy ASMX web services technology as the client. Use "Add Service Reference" instead of "Add Web Reference".
One other issue you need to consider, depending on the type of application and/or data you're pulling down: security.
Specifically, I'm referring to authentication and authorization, both of your end users and of the web application itself. Where are these things handled? All in the web app? By the WS? Or maybe the front-end app authenticates the users and flows the user's identity to the back-end WS, allowing that to verify that the user is allowed? How do you verify this? Since many other responders here mention a local data cache on the front-end app (an EXCELLENT idea, BTW), this gets even MORE complicated: do you cache data that is allowed for userA but not for userB? If so, how do you verify that userB cannot access data from the cache? And if the authorization is checked by the WS, how do you cache the permissions then?
On the other hand, how are you verifying that only your web app is allowed to access the WS (and an attacker doesn't directly access your WS data over the Internet, for instance)? For that matter, how do you ensure that your web app contacts the CORRECT WS server, and not a bogus one? And of course I assume that all the connection to the WS is only over TLS/SSL... (but of course also programmatically verify the cert applies to the accessed server...)
In short, it's complicated, and there are many elements to consider here... but it is NOT insurmountable.
(as far as input validation goes, that's actually NOT an issue, since this should be done by BOTH the front end app AND the back end WS...)
Another aspect here, as mentioned by @Martin, is the need for an SLA with whatever provider/hosting service you have for the NY WS, covering not just performance but also availability: i.e., what happens if the server is inaccessible, how quickly they commit to getting it back up, what happens if it's down for extended periods of time, etc. That's the only way to legitimately transfer the risk of your availability being controlled by an externality.
