I am facing a problem with my ASP.NET application, which is hosted on Azure App Service. When I call my ASP.NET controller from the frontend page with an AJAX call, the controller processes some external requests, DB calls, etc. There is a large amount of logic, and it takes up to 4-5 minutes or even more. So after 2 minutes or so, a 504 Gateway Timeout status code is returned. That is expected, because the operation on the controller exceeded the default ASP.NET maximum request time. From the user's perspective there is no problem with waiting 5 or even 10 minutes for the whole operation on the controller to complete and return status code 200. I want the user to wait until it is finished, but the 504 occurs and the whole operation fails. Is there any way to change that default behavior and force the ASP.NET controller not to return 504, but to wait a few minutes or more until the operation completes? I know I should perhaps use an Azure Functions queue endpoint or something similar, but that does not make sense in this case, because the call needs to be synchronous from the frontend so the result of the process can be presented to the user.
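For what it's worth, raising ASP.NET's own execution timeout is straightforward in code; here is a minimal sketch, assuming an MVC-style controller (the action and helper names are invented). Note, though, that the Azure App Service front end is generally reported to close requests after roughly 230 seconds regardless of application settings, so this alone may not be enough and a start-then-poll design may still be required:

using System.Web.Mvc;

public class ReportController : Controller
{
    public ActionResult RunLongOperation()
    {
        // Raise ASP.NET's execution timeout for this request only
        // (the default is 110 seconds when <compilation debug="false">).
        Server.ScriptTimeout = 900; // seconds; illustrative value

        var result = DoTheLongWork(); // stands in for the external calls and DB work
        return Json(result, JsonRequestBehavior.AllowGet);
    }

    private object DoTheLongWork() => new { status = "done" }; // placeholder
}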
Related
IIS (or maybe ASP.NET) takes longer to respond to requests when they are sent simultaneously with other requests. For example, if a web page sends request A along with 20 other requests at the same time, it takes 500 ms, but when the same request is sent alone, it takes 400 ms.
Is there a name for this behavior? Does it live in IIS or in ASP.NET? Can I disable or change it? Are there any benefits to using it?
Notes:
I am seeing this issue on an ASP.NET Web API application.
I have checked the IIS settings (IIS 8.5 on Windows Server 2012 R2) and found nothing that limits its throughput. All constraints such as bandwidth and CPU throttling are set to high values. The server also has good hardware.
Update 1:
All requests read something from the database. I have checked them in the Chrome developer console. I also created a simple C# application that makes multiple parallel requests to the server. When the requests are truly parallel, they take a long time, but when I add a wait between each call, the response time decreases dramatically.
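A rough console-app sketch of the kind of tester described (the URL is a placeholder):

using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class ParallelTester
{
    static async Task Main()
    {
        var client = new HttpClient();
        var url = "http://myserver/api/checkonline"; // placeholder

        // Fire 20 requests at the same time and time each one individually.
        var tasks = Enumerable.Range(0, 20).Select(async _ =>
        {
            var sw = Stopwatch.StartNew();
            await client.GetAsync(url);
            return sw.ElapsedMilliseconds;
        });

        var times = await Task.WhenAll(tasks);
        Console.WriteLine($"parallel: avg {times.Average():F0} ms, max {times.Max()} ms");

        // Repeating the loop with e.g. await Task.Delay(5000) between calls
        // shows the much lower per-request times described above.
    }
}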
Update 2:
I have a simple method in my application that just returns Ok:
[AllowAnonymous]
public IHttpActionResult CheckOnline()
{
    return Ok();
}
The same behavior exists here. In my custom C# tester, if I call this route multiple times simultaneously it takes more than 1000 ms to complete, but when I wait 5 seconds between each call, the response time drops below 20 ms.
This method is not I/O or CPU bound. It seems that IIS detects that these requests come from a single specific user/client and therefore does not pay much attention to them.
If you use ASP.NET Session in your application, requests are queued and processed one by one. So the last request can stay held in the queue while the previous requests are being processed.
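If classic Session really is the cause, marking the controller's session access as read-only (or disabling it) stops ASP.NET from serializing requests from the same client. A sketch using the MVC attribute (plain Web API controllers do not use session at all unless it has been enabled explicitly):

using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)]
public class ReportsController : Controller
{
    public ActionResult Index()
    {
        // Reads Session but never writes it, so concurrent requests from
        // the same client are no longer queued behind one another.
        return View();
    }
}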
Another possible reason is that all threads in the ASP.NET thread pool are busy. In this case a new thread has to be created to process the new request, which takes additional time.
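If thread-pool starvation is the cause, one common mitigation is raising the pool's minimum thread counts at startup, so bursts of requests do not wait on the pool's slow thread injection. A sketch for Application_Start in Global.asax.cs (the numbers are illustrative, not a recommendation):

using System.Threading;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Beyond the minimum, the CLR injects new pool threads only
        // gradually (roughly one per 500 ms), which shows up as latency
        // spikes under bursts of simultaneous requests.
        ThreadPool.SetMinThreads(workerThreads: 100, completionPortThreads: 100);

        // ... the rest of the usual startup (routes, filters, etc.)
    }
}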
Both of these are just theories (my own thoughts); any other cause is possible.
I have an application where requests to a controller take a while to process. The controller starts a thread per request and eventually writes some data back to a database. I need to limit how many requests can be processed at once. So, if our limit is 100 and the controller is already processing 100 requests, the 101st request should return a 503 status until at least one request has completed.
I could use an application-wide static counter to keep track of the current number of processes, but is there a better way to do this?
EDIT:
The reason the controller takes a while to respond is that it calls another API, which sits on a large database spanning several TB of geostationary data. Even if I could optimize this in theory, it's not something I have control over. To make matters worse, the third-party API simply times out if I have more than 10 concurrent requests. I am already dropping incoming requests onto a Service Bus queue. I just need a good way, on my API controller, to keep a global count of how many requests are in flight and return 503 whenever it exceeds a set number.
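A minimal sketch of that global-counter idea, using a static SemaphoreSlim so the count stays atomic; the controller name, the limit of 10, and the third-party call are placeholders:

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;

public class GeoDataController : ApiController
{
    // One gate shared by all requests in this app domain.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(10, 10);

    public async Task<HttpResponseMessage> Get(string id)
    {
        // Wait(0) returns immediately: either a slot was free or it was not.
        if (!Gate.Wait(0))
        {
            return Request.CreateResponse(HttpStatusCode.ServiceUnavailable,
                "Server busy, try again later.");
        }

        try
        {
            var result = await CallThirdPartyApiAsync(id); // placeholder
            return Request.CreateResponse(HttpStatusCode.OK, result);
        }
        finally
        {
            Gate.Release();
        }
    }

    private Task<string> CallThirdPartyApiAsync(string id)
        => Task.FromResult("..."); // stand-in for the real call
}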
The requests to the API controller should not be limited. An idea would be to accept the requests and store the list of work items that need completing (in a database, a queue, etc.).
Then create something outside the web request that processes this work; that is where you can control how many items are processed at once, using parallel processing / multi-threading etc. (via a Windows service / Worker Role / Hangfire etc.).
Once an item is processed, you can communicate back to the page via SignalR, which can then fetch the data to display, or show the current status.
The benefit of this is that you can always go back to the page, or refresh it, and get some kind of status without re-running the whole process.
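A rough sketch of that pattern, assuming Hangfire for the background work and classic SignalR 2 for the push; the hub, the job class and the client callback name are all invented for illustration:

using Hangfire;
using Microsoft.AspNet.SignalR;
using System.Web.Http;

public class ImportController : ApiController
{
    public IHttpActionResult Post(ImportRequest request)
    {
        // Returns immediately; the heavy work happens outside the web request.
        var jobId = BackgroundJob.Enqueue<ImportJob>(j => j.Run(request.Id, request.ConnectionId));
        return Ok(new { jobId });
    }
}

public class ImportJob
{
    public void Run(int id, string connectionId)
    {
        // ... the long-running processing goes here ...

        // Push a notification back to the original browser connection.
        var hub = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
        hub.Clients.Client(connectionId).importFinished(id); // dynamic client call
    }
}

public class ProgressHub : Hub { }

public class ImportRequest
{
    public int Id { get; set; }
    public string ConnectionId { get; set; }
}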
Recently we moved from a dedicated server running Windows Server 2003 to a more powerful server running Windows Server 2012. A problem appeared on the new server: sometimes requests run far too slowly.
I added logging at various stages of the requests and got confusing data. For example, a call to a web service method takes 7 seconds between PreRequestHandlerExecute and PostRequestHandlerExecute (time tracked in Global.asax), but at the same time there are log records written inside the called method showing that its execution time was less than a second (the log records at the start and end of the method have the same milliseconds). So the method itself executed quickly. The question is: what consumed the 7 seconds between PreRequestHandlerExecute and PostRequestHandlerExecute?
Note that the problem is not reproducible; I can never replicate it myself, I only see it in the log when it happens to other people (I set up an email notification that is sent to me whenever a request takes more than 3 seconds).
Sometimes the execution time on some pages reaches crazy values such as 2 minutes, and from the log records I have at different stages of the page execution I cannot see what consumes that time. The problem did not exist on the old 2003 server.
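For reference, the kind of Global.asax timing hooks described above might look roughly like this (a sketch; the handlers live in the HttpApplication class in Global.asax.cs, and LogSlowRequest stands in for whatever logging or email mechanism is used):

using System;
using System.Diagnostics;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_PreRequestHandlerExecute(object sender, EventArgs e)
    {
        Context.Items["handlerTimer"] = Stopwatch.StartNew();
    }

    protected void Application_PostRequestHandlerExecute(object sender, EventArgs e)
    {
        if (Context.Items["handlerTimer"] is Stopwatch sw &&
            sw.Elapsed > TimeSpan.FromSeconds(3))
        {
            // Placeholder for the real alerting (log entry, email, etc.)
            LogSlowRequest(Context.Request.RawUrl, sw.Elapsed);
        }
    }

    private static void LogSlowRequest(string url, TimeSpan elapsed) { /* ... */ }
}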
I have RESTful services running that return strange, intermittent, very generic 500 errors when called asynchronously from the web client. I am trying to figure out what may be causing them.
Note:
When the API gets a request message, it tries to validate a token through another service call to an endpoint at "/Security/AG/v1/token". This may take several seconds, as you can see from the filtered IIS logs included.
This token verification service talks to a NoSQL DB as well as a SQL Server DB to validate the token.
When it finishes, and the validation is successful, the parent service continues with its logic, formats a message response and sends it back to the web client.
Somehow it sends back some 500 errors that I can't seem to pinpoint. By default I have try/catch blocks that put appropriate error codes, along with stack traces, into the message response when there is a server error.
Sure would like to see if anyone has any ideas where I may look.
Could it be a request timeout? Unlikely, since the IIS default is to "keepalive" for about 2 minutes, I believe.
IIS log data (these are the 500 errors that have been logged):
uri | time | millisecs
/Security/AG/v1/token | 4:50:14 PM | 21593
/Mighty/xxxx/problems/528f6c42072ef708ecd43f59 | 4:50:14 PM | 21655
/Security/AG/v1/token | 4:57:07 PM | 19156
/Mighty/xxxx/problems/528f6c42072ef708ecd43f59 | 4:57:07 PM | 19218
/Security/AG/v1/token | 5:11:02 PM | 19171
/Mighty/xxxx/cohorts/ | 5:11:02 PM | 19218
PS - these calls eventually succeed, since the browser seems to retry the idempotent calls several times. I just want to know where the 500s might be coming from and why.
Technology stack: IIS 7, ASP.NET, ServiceStack v3, C#, MongoDB, SQL Server, Chrome.
We have a SOAP web service running on ASP.NET Web Services over IIS6 that exhibits a strange behavior when it processes a request that results in long server-time processing. Essentially if a request takes more than about 5 minutes to process on the server then IIS never sends the response back to the client, even though, from ASP.NET's perspective, the call completed. We know this because we write entries to an application log at the beginning and end of a web method call and those log entries do get written. However, no response is actually sent and the connection remains open, seemingly indefinitely. In one test, we saw the connection stay open for over 24 hours before we manually stopped the test client.
We have a test SOAP client that is able to detect the moment a response starts streaming down to it from the server and in the case where the server processing time takes too long, nothing is ever streamed down. Even with a large response payload, we should start seeing that response trickle down shortly after the web method's "end" application log entry is written, but we never see it.
The exact server processing time at which things behave in this manner is hard to determine. We have one long-running test call that results in about 2.5 minutes of server processing time, and that call results in a successful response to the client. We have another that takes about 8 minutes, and that one fails as described above. So the threshold must be somewhere in between.
I suggest that you call one web method to start the execution of the task, then use another method to poll the server for the task's completion. I had the same problem two years ago and solved it this way: the client queues a task on the server, then at specified intervals asks the server for the result of that task.
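A minimal sketch of that start/poll split, written as ASMX-style web methods to match the stack in the question; the static dictionary and Task.Run are just stand-ins (real code would persist the state somewhere durable, since in-process background work can be lost when the app pool recycles):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Services;

public class LongRunningService : WebService
{
    private static readonly ConcurrentDictionary<Guid, string> Results =
        new ConcurrentDictionary<Guid, string>();

    [WebMethod]
    public Guid StartWork(string input)
    {
        var id = Guid.NewGuid();
        Task.Run(() =>
        {
            var result = DoTheActualWork(input);   // the long-running job
            Results[id] = result;
        });
        return id;                                  // returns within seconds
    }

    [WebMethod]
    public string GetResult(Guid id)
    {
        // The client polls this every N seconds until it gets a non-null result.
        return Results.TryGetValue(id, out var result) ? result : null;
    }

    private static string DoTheActualWork(string input)
    {
        // placeholder for the real processing
        return "done: " + input;
    }
}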
I hope that helps.