I have an application where requests to a controller take a while to process. The controller starts a thread per request and eventually writes some data to a database. I need to limit how many requests can be processed. So let's say our limit is 100: if the controller is already processing 100 requests, the 101st request will return a 503 status until at least one request completes.
I could use an application-wide static counter to keep count of current processes, but is there a better way to do this?
EDIT:
The reason the controller takes a while to respond is that it calls another API, which fronts a large database spanning several TB of geostationary data. Even if I could optimize this in theory, it's not something I have control over. To make matters worse, the third-party API simply times out if I have more than 10 concurrent requests. I am already dropping incoming requests onto a Service Bus queue. I just need a good way, in my API controller, to keep a global count of in-flight requests and return 503 whenever it exceeds a set number.
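For what it's worth, here is a minimal sketch of that global counter as a Web API message handler using a static SemaphoreSlim. The handler name and the limit of 10 are my assumptions for illustration, not from your code:

    using System.Net;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    // Hypothetical handler that caps the number of concurrent requests.
    public class ThrottlingHandler : DelegatingHandler
    {
        // One gate for the whole application; 10 matches the third-party API limit.
        private static readonly SemaphoreSlim Gate = new SemaphoreSlim(10);

        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            // Try to enter without blocking; refuse immediately when full.
            if (!Gate.Wait(0))
            {
                return new HttpResponseMessage(HttpStatusCode.ServiceUnavailable)
                {
                    Content = new StringContent("Too many requests in flight, try again later.")
                };
            }
            try
            {
                return await base.SendAsync(request, cancellationToken);
            }
            finally
            {
                Gate.Release();
            }
        }
    }

You would register it in WebApiConfig.Register with config.MessageHandlers.Add(new ThrottlingHandler()). Compared with a raw static counter, the semaphore handles the decrement for you even when a request throws.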
The requests to the API controller should not be limited. An idea would be to take the requests and store the list of work that needs completing (in a database, queue, etc.).
Then create something outside the web request that processes this work; this is where you can manage how many items are processed at once using parallel processing/multi-threading (a Windows service, Worker Role, Hangfire, etc.) - see the sketch below.
Once processed, you could notify the page via SignalR to fetch the data to display, or to show status.
The benefit of this is that you can always go back to the page or refresh and get some kind of status, without re-running the whole process.
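A rough sketch of the "process outside the web request" part, assuming an in-process queue for brevity (a Service Bus queue would replace BlockingCollection in a real deployment; WorkItem and the worker count are made up):

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Hypothetical work item; in practice this is the queued request payload.
    public class WorkItem
    {
        public string Payload { get; set; }
    }

    public class WorkProcessor
    {
        private readonly BlockingCollection<WorkItem> _queue = new BlockingCollection<WorkItem>();

        // Start a fixed number of consumers; this is where the degree of
        // parallelism is controlled (4 here is an arbitrary choice).
        public void Start(int workerCount = 4)
        {
            for (int i = 0; i < workerCount; i++)
            {
                Task.Run(() =>
                {
                    foreach (var item in _queue.GetConsumingEnumerable())
                    {
                        Process(item); // long-running work happens here, off the request thread
                    }
                });
            }
        }

        public void Enqueue(WorkItem item)
        {
            _queue.Add(item);
        }

        private void Process(WorkItem item)
        {
            // call the slow API, write results to the database, notify via SignalR
        }
    }

The controller then only calls Enqueue and returns immediately; the worker count caps how many slow calls run at once.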
Related
I have an application that retrieves data from a specific database and, through a SOA client, sends this data to an integration service. I have several threads instantiating this client and sending the data in parallel. However, submissions are limited to 1,000,000 per hour, so when I reach this limit I have to send the remaining records in the next window, and so on. What implementation/technology can I use to ensure that all records are submitted?
Sounds like any persistent queue would be helpful here. I'd make it so that all the requests behave the same: the server only replies with a place to get the data (or the client gives a callback to which the data should be sent), and all the server does on a request is queue it and return the next step. A separate process can then read from the queue and process the requests in whichever way makes sense.
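As an illustration of the consumer side, here is a sketch that drains a queue while respecting the hourly budget. The class name and the in-memory ConcurrentQueue are assumptions; a persistent queue (MSMQ, Service Bus, a database table) would back this in practice so nothing is lost on restart:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public class ThrottledSender
    {
        private const int HourlyLimit = 1000000;
        private readonly ConcurrentQueue<string> _records = new ConcurrentQueue<string>();

        public void Run()
        {
            int sentThisWindow = 0;
            DateTime windowStart = DateTime.UtcNow;
            string record;

            while (true)
            {
                // Reset the budget when the hour rolls over.
                if (DateTime.UtcNow - windowStart >= TimeSpan.FromHours(1))
                {
                    windowStart = DateTime.UtcNow;
                    sentThisWindow = 0;
                }

                if (sentThisWindow >= HourlyLimit)
                {
                    // Budget exhausted: wait for the next window instead of dropping records.
                    Thread.Sleep(TimeSpan.FromHours(1) - (DateTime.UtcNow - windowStart));
                    continue;
                }

                if (_records.TryDequeue(out record))
                {
                    Send(record); // the SOA client call would go here
                    sentThisWindow++;
                }
                else
                {
                    Thread.Sleep(100); // queue empty; poll again shortly
                }
            }
        }

        private void Send(string record)
        {
            // submit via the SOA client
        }
    }

Because the queue is persistent, records that can't be sent this hour simply stay queued until the next window, which is what guarantees that all of them are eventually submitted.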
IIS (or maybe ASP.NET) takes longer to respond to requests when they are sent simultaneously with other requests. For example, if a web page sends request A along with 20 other requests, it takes 500 ms, but when the same request is sent alone, it takes 400 ms.
Is there a name for this behavior? Is it in IIS or ASP.NET? Can I disable or change it? Are there any benefits to it?
Notes:
I am seeing this issue in an ASP.NET Web API application.
I have checked the IIS settings (IIS 8.5 on Windows Server 2012 R2) and found nothing that limits throughput. All constraints like bandwidth and CPU throttling are set to high values. The server also has good hardware.
Update 1:
All requests read something from the database. I have checked them in the Chrome developer console. I also created a simple C# application that makes multiple parallel requests to the server. When the requests are truly parallel they take much longer, but when I wait between calls, the response time decreases dramatically.
Update 2:
I have a simple method in my application that just sends an Ok:
[AllowAnonymous]
public IHttpActionResult CheckOnline()
{
    return Ok();
}
The same behavior exists here. In my custom C# tester, if I call this route multiple times simultaneously it takes more than 1000 ms to complete, but when I wait 5 seconds between calls, the response time drops below 20 ms.
This method is not IO- or CPU-bound. It seems as if IIS detects that these requests come from a single specific user/client and does not give them full attention.
If you use ASP.NET session state in your application, requests are queued and processed one by one. So the last request can be held in the queue while the previous requests are being processed.
Another possible reason is that all threads in the ASP.NET thread pool are busy. In that case, a new thread has to be created to process a new request, which takes additional time.
This is just a theory (my best guess); other causes are possible.
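If the thread-injection theory is right, one thing worth trying is raising the thread pool's minimum, so the pool is pre-warmed instead of ramping up lazily (the CLR adds threads above the minimum only gradually). The value 50 below is an arbitrary example, not a recommendation:

    using System;
    using System.Threading;

    // In Global.asax: pre-warm the thread pool so bursts of simultaneous
    // requests don't wait for new threads to be injected.
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            int minWorker, minIo;
            ThreadPool.GetMinThreads(out minWorker, out minIo);
            ThreadPool.SetMinThreads(Math.Max(minWorker, 50), Math.Max(minIo, 50));
        }
    }

If the delay disappears after this change, thread-pool growth was the bottleneck; if it doesn't, session locking is the more likely suspect.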
I have written a simple 'analytics' tracking tool for my site, which has boolean columns such as
Visited_Store
Visited_Homepage
Checkout_Started
MainVideo_Played
MainVideo_Completed
I am also using Google Analytics but wanted a secondary place to monitor activity.
I had been testing my application primarily in Chrome, which of course uses WebSockets by default. I switched to long polling because I wanted to be able to monitor the requests in Fiddler.
The way the hub works is pretty simple. The SignalR client sends events that set flags (columns) when a particular event has completed. So on invocation it does the following:
Find row for user - or create if non existent
Set flags
Save row
I had no concurrency issues until I switched to long polling - when I found instant deadlocks.
My client will often send multiple events simultaneously (a separate issue to fix, yes), and when using WebSockets they are nicely queued and executed one by one, so any deadlocks are extremely unlikely.
Long polling is a different story. I suddenly found that my hub method was being entered multiple times concurrently and trying to create multiple rows, with deadlocks and 'row modified' errors all over the place.
One simple solution is just to lock(lockObj) when making a request, but if I have many clients I'd rather not serialize everyone through a single lock. Another is to catch the deadlock and re-execute the request, which right now occurs on just about every page load.
Is there perhaps a way to configure SignalR long polling not to send requests all at once? Or some other way to execute requests in turn (like ASP.NET does when you use SessionState)?
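One pattern that avoids a single global lock is a per-user gate, so concurrent invocations for the same user are serialized while different users still run in parallel. A sketch, where the hub shape, TrackEvent, and SaveFlag are assumptions standing in for your find-or-create/set-flags/save logic:

    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class TrackingHub : Hub
    {
        // One gate per user instead of one global lock.
        private static readonly ConcurrentDictionary<string, SemaphoreSlim> Gates =
            new ConcurrentDictionary<string, SemaphoreSlim>();

        public async Task TrackEvent(string eventName)
        {
            string userId = Context.ConnectionId; // or your real user key
            var gate = Gates.GetOrAdd(userId, _ => new SemaphoreSlim(1, 1));

            await gate.WaitAsync();
            try
            {
                // Find or create the row, set the flag, save - now serialized per user.
                SaveFlag(userId, eventName);
            }
            finally
            {
                gate.Release();
            }
        }

        private void SaveFlag(string userId, string eventName)
        {
            // EF/SQL upsert here
        }
    }

Note the dictionary accumulates one semaphore per user; a real implementation would evict entries when connections end.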
Task:
I'm using static classes, so everyone shares data that has already been loaded. If someone makes a change, that request puts an item in a list with an incremental ID, and my idea is that every client keeps its version number on the client side and asks whether there are any changes.
My solution: I use a $.post with a timeout of 5000 ms, sending the client version. On the server side I have a 500-cycle for loop that checks whether there is anything newer and breaks out of the loop to return the changes; each cycle has a 10 ms Thread.Sleep so it doesn't hog the CPU. If the post times out or errors on the client, I call it again; if it succeeds, I process the returned data and then call the post again. This way I should always get changes almost instantly without an overwhelming number of requests, and if something fails I only need to wait 5 seconds for it to resume.
My problem is that while this loop runs, other requests aren't handled. With the ASP.NET development server that's expected, because it's single-threaded, but it's also the case with IIS 7.5 on Windows 7 Home Premium.
What I tried: setting the registry key (HKLM\SOFTWARE\Microsoft\ASP.NET\4.0.30319.0\MaxConcurrentRequestsPerCPU), increasing the worker threads for the application pool, and updating the aspnet.config file with maxConcurrentRequestsPerCPU="12" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000". I also read that my Windows 7 Home Premium should be able to serve 3 concurrent requests. I also wondered whether some optimization queues the other requests because I use the same variables within one request, so I commented those lines out and left only the for loop with the sleep, but the result was the same.
Don't use Thread.Sleep on threads handling requests in ASP.NET. This essentially consumes a thread and prevents more requests from being started. There is a restriction on the number of threads ASP.NET will create to handle requests, which you've tried to change, but a high number of threads will make the process less responsive and can easily cause an OutOfMemoryException for 32-bit processes, so it is not a good route.
There are several other threads discussing implementing long-poll requests with ASP.NET, like this one - Can ASP.NET MVC's AsyncController be used to service large number of concurrent hanging requests (long poll)? - and obviously Comet questions like this - Comet implementation for ASP.NET?
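To make the alternative concrete: the usual shape is to park the request on a TaskCompletionSource and complete it when a change arrives, so no thread is blocked while waiting. A sketch under assumed names (ChangeBroker is invented; the timeout mirrors your 5000 ms):

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Hypothetical broker: requests await a TaskCompletionSource instead of
    // burning a thread in a sleep loop; publishing a change releases all waiters.
    public static class ChangeBroker
    {
        private static readonly ConcurrentBag<TaskCompletionSource<int>> Waiters =
            new ConcurrentBag<TaskCompletionSource<int>>();

        public static Task<int> WaitForChangeAsync(int clientVersion, int timeoutMs)
        {
            var tcs = new TaskCompletionSource<int>();
            Waiters.Add(tcs);

            // Resolve with the unchanged version after the timeout if nothing happened.
            Task.Delay(timeoutMs).ContinueWith(_ => tcs.TrySetResult(clientVersion));
            return tcs.Task;
        }

        public static void PublishChange(int newVersion)
        {
            TaskCompletionSource<int> tcs;
            while (Waiters.TryTake(out tcs))
            {
                tcs.TrySetResult(newVersion); // wakes the parked requests immediately
            }
        }
    }

An async controller action would then just return await ChangeBroker.WaitForChangeAsync(clientVersion, 5000); the request thread goes back to the pool while the client waits, so other requests are handled normally.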
Let's imagine there are two pages on the web site: quick and slow. Requests to the slow page take 1 minute to execute, requests to the quick page 5 seconds.
Throughout my development career I thought that if the first request started is slow - it makes a (synchronous) call to the DB and waits for the answer - then a request to the quick page arriving during that time would be processed while the system waits for the DB response.
But today I've found:
http://msdn.microsoft.com/en-us/library/system.web.httpapplication.aspx
One instance of the HttpApplication class is used to process many requests in its lifetime. However, it can process only one request at a time. Thus, member variables can be used to store per-request data.
Does it mean that my original thoughts are wrong?
Could you please clarify what they mean? I am pretty sure that things are as I expect...
Requests have to be processed sequentially on the server side if both requests use the same session state with read/write access, because of ASP.NET session locking.
You can find more information here:
http://msdn.microsoft.com/en-us/library/ie/ms178581.aspx
Concurrent Requests and Session State
Access to ASP.NET session state is exclusive per session, which means that if two different users make concurrent requests, access to each separate session is granted concurrently. However, if two concurrent requests are made for the same session (by using the same SessionID value), the first request gets exclusive access to the session information. The second request executes only after the first request is finished. (The second session can also get access if the exclusive lock on the information is freed because the first request exceeds the lock time-out.) If the EnableSessionState value in the @ Page directive is set to ReadOnly, a request for the read-only session information does not result in an exclusive lock on the session data. However, read-only requests for session data might still have to wait for a lock set by a read-write request for session data to clear.
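For completeness, in MVC the equivalent of that ReadOnly page directive is an attribute on the controller. A sketch, with QuickController as a made-up name for your quick page (whether it helps depends on whether the slow and quick pages really share session writes):

    using System.Web.Mvc;
    using System.Web.SessionState;

    // Marking the controller ReadOnly (or Disabled) avoids taking the
    // exclusive session lock, so requests from the same session can run
    // in parallel with the slow page.
    [SessionState(SessionStateBehavior.ReadOnly)]
    public class QuickController : Controller
    {
        public ActionResult Index()
        {
            return Content("ok"); // no longer queued behind the slow request
        }
    }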
Your original thoughts are right, and so is the documentation. The IIS worker process can spawn many threads, each with their own instance of the HttpApplication class.
ASP.NET will host multiple AppDomains for your web applications under a single worker process (w3wp.exe). Different web applications may even share the same worker process (if they are assigned to the same app pool), each in its own AppDomain.
Each AppDomain that ASP.NET creates can host multiple HttpApplication instances, which serve requests and walk through the ASP.NET lifecycle. Each HttpApplication can (as you've said) respond to only one request at a time.