Web Api High Response Latency (Asp.net, Azure) - asp.net-core-webapi

After deploying my Web Api to Azure, I noticed that I have very high response latency. I stopwatched a method that awaits an HTTP request to a controller that just returns a "Hello" string. The times I measure are not consistent either, but most of the time I get something around 0.9 seconds. The problem is that my database queries take forever; even the least fancy ones take around two seconds (and when my UI updates multiple elements, it takes up to 4 seconds until the whole thing is loaded).
I have really no idea where to start diagnosing this issue, so any help (even the most basic) would be highly appreciated!

I found my problem: I was calling Database.EnsureCreated() on every service request, since it was called in the constructor of the DbContext I inject in Startup. Seems pretty inefficient. Whoopsie!
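In case it helps anyone else, here is a minimal sketch of the fix, assuming EF Core and an ASP.NET Core Startup class (AppDbContext and the connection string are placeholders, not from the original post): run Database.EnsureCreated() once at startup instead of in the DbContext constructor, which runs on every request.

using Microsoft.AspNetCore.Builder;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

// Placeholder context; in the original setup this is the injected DbContext
// whose constructor was calling Database.EnsureCreated() on every request.
public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDbContext<AppDbContext>(o => o.UseSqlServer("<connection string>"));
        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Ensure the database exists once, at startup, rather than in the
        // DbContext constructor, which runs on every request.
        using (var scope = app.ApplicationServices.CreateScope())
        {
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            db.Database.EnsureCreated();
        }

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}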

Related

ASP.NET: Multiple ajax requests in a short period of time are much slower

I know there is some number of requests each user of the application can execute at a time, and this number is big enough. Also, I thought ajax requests are completely async. There is also a number of requests a browser can handle at a time, but this number is 6 or more.
I realize this: if I call my ASP.NET web api services "gently", meaning I call one method, wait a little bit, then click again, they execute in about 200-300ms (other details are not important, so please don't ask me to send the code; the question is general, and I noticed this behavior in asp.net asmx services, WCF...). This is what is important: if I call these services "faster" (for example I call mywebsite.com/api/results?page=1, then almost immediately mywebsite.com/api/results?page=2, etc.), they start to execute much slower (2, 3, 4 seconds; that is, up to 20 times slower). My question is: what is "not async" in an "async" request? The more async responses I try to get, the slower they execute.

WebAPI Lifecycle/Request Queue

I have an AngularJS app that calls WebAPI. If I log the time I initiate a request (in my angular controller) and the time OnActionExecuting runs (in an action filter in my WebAPI controller), I notice at times a ~2 second gap. I'm assuming nothing else is running before this filter and this is due to requests being blocked/queued. The reason I assume this is that if I remove all my other data calls, I do not see this gap.
What is the number of parallel requests that WebAPI can handle at once? I tried looking at the ASP.NET performance monitors but couldn't find where I could see this data. Can someone shed some light on this?
There's no straight answer for this but the shortest one is ...
There is no limit to this in WebApi itself; the limits come from what your server can handle and how efficient the code you have it run is.
...
But since you asked, let's consider some basic things that we can assume about our server and our application ...
1. Concurrent connections
A typical server runs into well-known limits like the "c10k" problem ... https://en.wikipedia.org/wiki/C10k_problem ... so that puts a hard limit on the number of concurrent connections.
Assuming each WebApi call is made from, say, some AJAX call on a web page, that gives us a limit of around 10k connections before things get evil.
2. Dependency-related overheads
If we then consider the complexity of the code in question, you may have a bottleneck in things like SQL queries. I have often written WebApi controllers whose business logic runs 10+ db queries; the overhead here may be your problem.
3. Feed-in overhead
What about network bandwidth to the server?
Let's assume we are streaming 1MB of data for each call; it won't take long to choke a 1Gb/s ethernet line with messages that size (1Gb/s is roughly 125MB/s, so around 125 such responses per second would saturate the link).
4. Processing overhead
Assuming you wrote an Api that does complex calculations (e.g. mesh generation for complex 3D data), you could easily choke your CPU for some time on each request.
5. Timeouts
Assuming the server can accept your request and the request is made asynchronously, the biggest issue then is: how long are you prepared to wait for your response? The shorter that wait budget, the less work the server has time to do before each request needs a response (see the sketch at the end of this answer).
...
So as you can see, this is by no means an exhaustive list, but it outlines the complexity of the question you asked. That said, I would argue that WebApi (the framework) has no limits of its own; it's really the infrastructure around it whose limitations determine what is possible.
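To make the wait-budget point concrete, here is a minimal sketch (my own illustration, not part of the original answer) of a caller granting itself five seconds before abandoning a request, using HttpClient's built-in Timeout; the URL is the placeholder from the question above.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class TimeoutDemo
{
    static async Task Main()
    {
        // Any response that takes longer than 5 seconds is abandoned.
        using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };
        try
        {
            var body = await client.GetStringAsync("https://mywebsite.com/api/results?page=1");
            Console.WriteLine($"Got {body.Length} characters.");
        }
        catch (TaskCanceledException)
        {
            // HttpClient surfaces its timeout as a cancellation.
            Console.WriteLine("Gave up after 5 seconds.");
        }
    }
}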

How to increase the timeout on ASP.NET HTTP processes?

We have a web page that calls a stored procedure. The stored procedure takes ~ 5 minutes to run. When called from ASP.NET, it times out at ~ 2 minutes and 40 seconds with an HTTP execution timeout error.
I tried setting an HTTP timeout property in my web.config file as:
<httpRuntime executionTimeout="600" />
But it didn't help.
Any ideas appreciated. thanks
You should not create a web application with a page that could require such a long response time from the server. As a general rule, anything that you know will take longer than 10 seconds or so should be done as an asynchronous process. You've probably seen websites that display a "please wait" screen for long-running processes; most of the time these pages work by delegating the long-running job to a background process or message queue, then polling until the job either completes successfully or errors out.
I know this may seem like a tall order if you've not done it before, but it really is the professional way to handle the scenario you're faced with. In some cases, your clients may be working from networks with proxy servers set up to abort the HTTP request regardless of what you've set your timeouts to.
This is a dated link, and I believe the .NET framework has introduced other ways of doing this, but I actually still use the following approach today in certain scenarios.
http://www.devx.com/asp/Article/29617
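For a modern equivalent of that approach, here is a minimal hedged sketch in ASP.NET Core (the controller name, routes, and in-memory store are my own illustration; production code would use a hosted service or message queue rather than Task.Run):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/jobs")]
public class JobsController : ControllerBase
{
    // In-memory status store, for illustration only; real code would persist this.
    private static readonly ConcurrentDictionary<Guid, string> Jobs =
        new ConcurrentDictionary<Guid, string>();

    [HttpPost]
    public IActionResult Start()
    {
        var id = Guid.NewGuid();
        Jobs[id] = "Running";

        // Fire-and-forget stands in for a real background worker; the HTTP
        // request returns immediately instead of holding the connection open.
        _ = Task.Run(async () =>
        {
            await Task.Delay(TimeSpan.FromMinutes(5)); // stands in for the slow stored procedure
            Jobs[id] = "Done";
        });

        return Accepted(new { jobId = id });
    }

    // The "please wait" page polls this endpoint until the status flips to "Done".
    [HttpGet("{id}")]
    public IActionResult Status(Guid id)
    {
        if (Jobs.TryGetValue(id, out var status))
        {
            return Ok(new { status });
        }
        return NotFound();
    }
}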

Classic ASP 'Requests Executing' never greater than 1

We have a complex app that serves AJAX JSON streams (using ADO to grab the data) via brief ASP servlets. Any given session can fire up 10-20 of these requests simultaneously. We encountered a significant performance problem much earlier than we expected as load built (the server is a dual-Xeon, RAID 5, 4GB RAM, etc.). Sleuthing around in perfmon, we noticed that the 'Requests Executing' figure is perpetually stuck at 1. It never gets any higher. Research indicates that numbers of 20-50 are not uncommon. Requests Queued hovers around 10-20 and Wait Time climbs as well.
We have fiddled with ASPProcessorThreadMax, setting it to 40 from the default of 25, with no effect. The server seems to be able to work only a single request at a time, which, needless to say, won't work. I can't find anything that describes this particular problem. Any help is greatly appreciated.
The ASP Session object is constrained to a Single Threaded Apartment (STA). As a result, requests to ASP scripts for the same session can only be processed sequentially.
An additional reason why you might only ever see 1 executing ASP script, even across multiple sessions, is when debugging has been enabled for ASP. This causes the ASP processing to ignore ASPProcessorThreadMax and behave as if it were set to 1.
To eliminate the problem, ensure debugging is not enabled and turn off "Enable Session State". If you are using the Session object in your code, you will need to find an alternative, like DB-backed state.
However, how many active concurrent sessions are you expecting in live production? Perhaps the overall user experience will not truly be impacted by the serialisation of requests per session.
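As a concrete illustration of the "Enable Session State" suggestion (my addition, not part of the original answer): in Classic ASP, session state can also be disabled per page with a processing directive on the first line of the .asp file, which frees requests to that page from the per-session serialization.

<%@ Language=VBScript EnableSessionState=False %>
<%
  ' With session state disabled, requests to this page are no longer
  ' queued behind other requests from the same session.
  Response.Write "Hello"
%>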

How useful is Response.IsClientConnected?

I was wondering if anyone had experience they could share using the Response.IsClientConnected property as a performance optimization for asp.net websites.
The reason I ask is that I am a bit skeptical about how effective it would be in real-life scenarios. I understand the concept of checking the value before performing a large task, but I just can't see how useful this would be, as clients could disconnect at any point in time.
I think the main usage would be for optimizing the delivery of long-running processes. For example, if you had to generate a huge report or something, you might run the report in a separate thread and then periodically check to see if the user is still connected. If not, you could kill this long-running process so that it is not running needlessly, since the user is no longer expecting a response.
This helps to prevent users from starting long processes and then making more requests over and over because they think it is slow. If you were not doing this type of checking, you could tax your server with all of those requests even though all but one of them are pointless. This scenario could be handled by allowing each user to run only one long-running task, but the check also helps in a multi-user environment, by making sure you only spend time serving requests where the user is still connected and waiting for the response.
Note: I have never actually used this before, this is just based on my very basic understanding of what I have read.
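A minimal sketch of the pattern described above, assuming classic ASP.NET (System.Web) Web Forms; GenerateReportChunk is a hypothetical stand-in for one slice of the long-running work:

using System;
using System.Web.UI;

public partial class ReportPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        for (int chunk = 0; chunk < 100; chunk++)
        {
            // If the client disconnected (closed the tab, navigated away),
            // abandon the remaining work instead of burning server resources.
            if (!Response.IsClientConnected)
            {
                return;
            }

            GenerateReportChunk(chunk); // hypothetical expensive step
        }
    }

    private void GenerateReportChunk(int chunk)
    {
        // Placeholder for one unit of report generation.
    }
}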
I have used this extensively in my applications and it can give you a huge saving on resources.
Try this: create a page that needs some time to complete and refresh it many, many times before it completes. You will see that the requests get queued up for execution. Now imagine a user on a slow connection who refreshes his page again and again, thinking that will make it load; that is a very common way for a site to run out of resources when many users are connected and the site becomes slow for some reason.
Now change it so that at the start of each page load (or earlier, at page init) you check HttpContext.Current.Response.IsClientConnected, and if the client is no longer connected, abort the request (for example by throwing a ThreadAbortException, which is what Response.End does internally). You will see that your site responds much sooner.
Actually, I check whether the client is connected before any heavy action on the page, so as to prevent needless executions. In production environments, I have seen that this check helps a lot, especially in cases where the system becomes slow.
