I am facing a problem with my ASP.NET application, which is hosted on Azure App Service. When I call my ASP.NET controller from the frontend page with an AJAX call, the controller processes some external requests, DB calls, etc. There is a huge amount of logic and it takes 4-5 minutes or even more, so after about 2 minutes a 504 Gateway Timeout status code is returned. That's expected, because the operation on the controller exceeded the default maximum request time. From the user's perspective there is no problem with waiting 5 or even 10 minutes for the whole operation to complete and return status code 200. I want the user to wait until it's finished, but the 504 occurs and the whole operation crashes. Is there any way to change that default behavior and force the ASP.NET controller not to return 504 but to wait a few minutes or more until the operation is complete? I know, maybe I should use an Azure Functions queue endpoint or something like that, but it doesn't make sense in this case because the call needs to be made synchronously from the frontend, with the result of the process presented to the user.
Hi. We subscribed our API service to an AWS SNS topic to get guaranteed execution and a retry mechanism. Unfortunately, our API call takes more than 30 seconds to complete the task, and since SNS waits less than 30 seconds for a response, it treats the call as failed and retries the API even though my first call eventually succeeds after 30 seconds. Is there any way to increase the SNS response timeout (e.g. wait 2 or 3 minutes for a response), or to stop the SNS retries dynamically? Or please suggest some other mechanism to run these background jobs with a retry policy.
For this type of use-case, you might want to consider publishing to an SQS queue from your SNS topic and then having your application poll the queue for jobs to execute. Since SNS won't be calling your service directly, there is no timeout and you're free to take as much time as needed to complete the job.
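As a rough illustration only, here is a minimal sketch of what the polling side could look like with the AWS SDK for .NET (AWSSDK.SQS); the queue URL and RunJobAsync are placeholders for your own queue and work, not anything from your setup:

// A minimal sketch of polling an SQS queue for jobs with the AWS SDK for .NET.
// The queue URL and RunJobAsync are placeholders.
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public class JobPoller
{
    private static readonly IAmazonSQS Sqs = new AmazonSQSClient();
    private const string QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/job-queue"; // placeholder

    public static async Task PollAsync()
    {
        while (true)
        {
            var response = await Sqs.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = QueueUrl,
                MaxNumberOfMessages = 1,
                WaitTimeSeconds = 20 // long polling; no 30-second delivery deadline applies here
            });

            foreach (var message in response.Messages)
            {
                await RunJobAsync(message.Body);                               // can take minutes
                await Sqs.DeleteMessageAsync(QueueUrl, message.ReceiptHandle); // delete only on success
            }
        }
    }

    private static Task RunJobAsync(string body) => Task.CompletedTask; // your long-running work
}

If RunJobAsync throws and the message is not deleted, SQS makes it visible again after the visibility timeout, which gives you the retry behaviour you want; just make sure the queue's visibility timeout is longer than the job itself.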
We're going to create a new API in .NET Core 2.1. This Web API will have high traffic, around 10,000 transactions per minute or higher. I usually create my API actions like below:
[HttpGet("some")]
public IActionResult SomeTask(int id)
{
var result = _repository.GetData(id)
return Ok(result);
}
If we implement our Web API like below instead, what would be the benefit?
[HttpGet("some")]
public async Task<IActionResult> SomeTask(int id)
{
await result = _repository.GetData(id);
return Ok(result);
}
We're also going to use EF Core for this new API; should we use the EF async methods as well if we make the actions async Task?
What you're really asking is the difference between sync and async. In very basic terms, async allows the possibility of a thread switch, i.e. work begins on one thread, but finishes on another, whereas sync holds onto the same thread.
That in itself doesn't really mean much without the context of what's happening in a particular application. In the case of a web application, you have a thread pool. The thread pool generally has around 1000 threads, as that's a typical default across web servers. That number can be lower or higher; it's not really important to this discussion. It is important to note, though, that there is a very real physical limit to the maximum number of threads in a pool, since each one consumes some amount of system resources.
This thread pool, then, is often also referred to as the "max requests", since generally speaking one request = one thread. Therefore, if you have a thread pool of 1000, you can theoretically serve 1000 simultaneous requests. Anything over that gets queued and will be handled once one of the threads is made available. That is where async comes in.
Async work is pretty much I/O work: querying a database, reading/writing to a file system, making a request to another service such as an API, etc. With all of those, there's generally some period of idle time. For example, with a database query, you make the query, and then you wait. It takes some amount of time for the query to make it to the database server, for the database server to process it and generate the result set, and then for the database server to send the result back. Async allows the active thread to be returned to the pool during such periods, where it can then service other requests. As such, assuming you have an action like this that is making a database query, if it was sync and you received 1001 simultaneous requests to that action, the first 1000 would begin processing and the last one would be queued. That last request could not be handled until one of the other 1000 completely finished. Whereas with async, as soon as one of the thousand handed off the query to the database server, it could be returned to the thread pool to handle that waiting request.
This is all a little high level. In actuality, there's a lot that goes into this and it's really not so simple. Async doesn't guarantee that the thread will be released. Certain work, particularly CPU-bound work, can never be async, so even if you do it in an async method, it runs as if it were sync. However, generally speaking, async will handle more requests than sync in a scenario where you're thread-starved. It does come at a cost, though: the extra work of switching between threads adds some amount of overhead, even if it's minuscule, so async will almost invariably be slower than sync, even if only by nanoseconds. However, async is about scale, not performance, and the performance hit is generally an acceptable trade for the increased ability to scale.
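To the EF Core part of your question: yes, if the action is async the data access should be async as well, otherwise the thread is still blocked during the query. Below is a minimal sketch of carrying async all the way down to EF Core; AppDbContext, Item and GetDataAsync are illustrative names I'm assuming, not anything from your code:

// A minimal sketch of an async-all-the-way action backed by EF Core.
// "AppDbContext", "Item" and "GetDataAsync" are illustrative names.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Item> Items { get; set; }
}

public class ItemRepository
{
    private readonly AppDbContext _context;
    public ItemRepository(AppDbContext context) => _context = context;

    // EF Core's async API (FindAsync, ToListAsync, ...) is what actually frees
    // the request thread while the database round trip is in flight.
    public async Task<Item> GetDataAsync(int id) => await _context.Items.FindAsync(id);
}

public class SomeController : ControllerBase
{
    private readonly ItemRepository _repository;
    public SomeController(ItemRepository repository) => _repository = repository;

    [HttpGet("some")]
    public async Task<IActionResult> SomeTask(int id)
    {
        var result = await _repository.GetDataAsync(id);
        return Ok(result);
    }
}

The key point is that the await has to reach the actual I/O call; an async action that calls a synchronous repository method gains nothing.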
I'm wondering if it's okay to use setTimeout in Firebase Cloud Functions? I mean, it kind of works for me locally, but it has a very weird behavior: unpredictable execution of the timeout callbacks.
Example: I set a timeout with a duration of 5 minutes, so after 5 minutes my callback should execute. Most of the time it does that correctly, but sometimes the callback gets executed a lot later than 5 minutes.
But it's only doing so on my local computer. Does this behavior also happen when I deploy my functions to Firebase?
Cloud Functions have a maximum time they can run, which is documented in time limits. If your timeout fires its callback after that time limit has expired, the function will likely already have been terminated. The way expiration happens may differ between the local emulator and the hosted environment.
In general I'd recommend against any setTimeout of more than a few seconds. In Cloud Functions you're billed for as long as your function is active. If you have a setTimeout of a few minutes, you're billed for all that time, even when all your code is doing is waiting for a clock to expire. It's likely more cost-efficient to see if the service you're waiting for can call a webhook, or to use a cron job to check whether it has completed.
Recently we moved from a dedicated server running Windows Server 2003 to a more powerful server based on Windows Server 2012. A problem appeared on the new server: sometimes requests run too slowly.
I added additional logging at various stages of the requests and got confusing data. For example, a call to a web service method takes 7 seconds between PreRequestHandlerExecute and PostRequestHandlerExecute (time tracked in Global.asax), but at the same time the log records written inside the called method show that its execution time was less than a second (the log records at the start and end of the method have the same milliseconds). So the method itself executed fast. The question is: what consumed 7 seconds between PreRequestHandlerExecute and PostRequestHandlerExecute?
Note that the problem is not reproducible; I can never reproduce it myself, but only see it in the logs happening to other people (I set up an email notification that is sent to me whenever a request takes more than 3 seconds).
Sometimes the execution time on some pages reaches such crazy values as 2 minutes, and from the log records I have at different stages of the page execution I cannot see what consumes that time. The problem did not exist on the old 2003 server.
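For context, the timing described above can be captured roughly like this in Global.asax (the item key and the 3-second threshold are illustrative, not my exact code):

// Global.asax.cs — a minimal sketch of per-request timing between
// PreRequestHandlerExecute and PostRequestHandlerExecute.
using System;
using System.Diagnostics;
using System.Web;

public class Global : HttpApplication
{
    private const string StopwatchKey = "__handlerStopwatch"; // illustrative key

    protected void Application_PreRequestHandlerExecute(object sender, EventArgs e)
    {
        Context.Items[StopwatchKey] = Stopwatch.StartNew();
    }

    protected void Application_PostRequestHandlerExecute(object sender, EventArgs e)
    {
        var sw = Context.Items[StopwatchKey] as Stopwatch;
        if (sw == null) return;
        sw.Stop();

        if (sw.Elapsed > TimeSpan.FromSeconds(3))
        {
            // Hook for the "slow request" notification mentioned above.
            Trace.WriteLine(string.Format("Slow handler: {0} took {1}",
                Context.Request.RawUrl, sw.Elapsed));
        }
    }
}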