ASP.NET (MVC) OutputCache and concurrent requests

Let's say that, theoretically, I have a page / controller action on my website that does some very heavy stuff. It takes about 10 seconds to complete its operation.
Now, I use .NET's output cache mechanism to cache it for 15 minutes (for example, with [OutputCache(Duration = 900)]). What happens if, after 15 minutes, the cache has expired and 100 users request the page again within the 10 seconds that the heavy processing takes?
1. The heavy stuff is done only the first time, and there is some locking mechanism so that the other 99 users get the cached result.
2. The heavy stuff is done 100 times (and the server is crippled, as it can take up to 100 * 10 seconds).
Easy question maybe, but I'm not 100% sure. I hope it is number one, though :-)
Thanks!

Well, it depends on how you have IIS configured. If you have fewer than 100 worker threads (say, 50), then the "heavy stuff" is done 50 times, crippling your server, and then the remaining 50 requests will be served from cache.
But no, there is no "locking mechanism" on a cached action result; that would be counterproductive, for the most part.
Edit: I believe this to be true, but Nick's tests say otherwise, and I don't have time to test now. Try it yourself! The rest of the answer is not dependent on the above, though, and I think it's more important.
Generally speaking, however, no web request, cached or otherwise, should take 10 seconds to return. If I were in your shoes, I would look at somehow pre-computing the hard part of the request. You can still cache the action result if you want to cache the HTML, but it sounds like your problem is somewhat bigger than that.
You might also want to consider asynchronous controllers. Finally, note that although IIS and ASP.NET MVC will not lock on this heavy computation, you could. If you use asynchronous controllers combined with a lock on the computation, then you would get effectively the behavior you're asking for. I can't really say if that's the best solution without knowing more about what you're doing.
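For illustration, here is a minimal sketch of that locking idea (the names and the DoHeavyStuff method are hypothetical, not the asker's actual code): the first request inside the lock does the heavy work, and every request that arrives while it is running waits and then reuses the fresh result.

private static readonly object _heavyLock = new object();
private static string _heavyResult;
private static DateTime _computedAt;

[OutputCache(Duration = 900)]
public ActionResult HeavyAction()
{
    lock (_heavyLock)
    {
        // Recompute only when the locally held result is stale; requests
        // arriving during the computation block here, then reuse the result.
        if (_heavyResult == null || DateTime.UtcNow - _computedAt > TimeSpan.FromMinutes(15))
        {
            _heavyResult = DoHeavyStuff(); // hypothetical 10-second computation
            _computedAt = DateTime.UtcNow;
        }
        return Content(_heavyResult);
    }
}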

It seems to lock here, doing a simple test:
<%@ OutputCache Duration="10" VaryByParam="*" %>

protected void Page_Load(object sender, EventArgs e)
{
    System.Threading.Thread.Sleep(new Random().Next(1000, 30000));
}
The first request hits a breakpoint there, even though it's left sleeping; no other request hits a breakpoint in the Page_Load method. They wait for the first one to complete, and that result is returned to everyone who requested the page.
Note: this was simpler to test in a webforms scenario, but given this is a shared aspect of the frameworks, you can do the same test in MVC with the same result.
Here's an alternative way to test:
<asp:Literal ID="litCount" runat="server" />

public static int Count = 0;

protected void Page_Load(object sender, EventArgs e)
{
    litCount.Text = Count++.ToString();
    System.Threading.Thread.Sleep(10000);
}
All pages queued up while the first request goes to sleep will have the same count output.

Old question, but I ran into this problem, and did some investigation.
Example code:
public static int Count;

[OutputCache(Duration = 20, VaryByParam = "*")]
public ActionResult Test()
{
    System.Threading.Thread.Sleep(4000);
    return Content((Count++).ToString());
}
Run it in one browser, and it seems to lock and wait.
Run it in different browsers (I tested in IE and Firefox) and the requests are not put on hold.
So the "correct" behaviour has more to do with which browser you are using than with how IIS functions.
Edit: To clarify - there is no lock. The server gets hit by all the requests that manage to get in before the first result is cached, possibly resulting in a hard hit on the server for heavy requests. (Or, if you call an external system, that system could be brought down if your server serves many requests...)

I made a small test that might help. I believe what I've discovered is that the uncached requests do not block, and each request that comes in while the cache is expired and before the task has completed also triggers that task.
For example, the code below takes about 6-9 seconds on my system using Cassini. If you send two requests approximately 2 seconds apart (i.e. from two browser tabs), both will receive unique results. The last request to finish is also the response that gets cached for subsequent requests.
// CachedController.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
namespace HttpCacheTest.Controllers
{
    public class CachedController : Controller
    {
        //
        // GET: /Cached/
        [OutputCache(Duration = 20, VaryByParam = "*")]
        public ActionResult Index()
        {
            var start = DateTime.Now;
            var i = Int32.MaxValue;
            while (i > 0)
            {
                i--;
            }
            var end = DateTime.Now;
            return Content(end.Subtract(start).ToString());
        }
    }
}

You should check this information here:
"You have a single client making multiple concurrent requests to the server. The default behavior is that these requests will be serialized;"
So, if the concurrent requests from a single client are serialized, the subsequent requests will use the cache. That explains some of the behavior seen in the answers above (#mats-nilsson and #nick-craver).
The context you showed us is multiple users, who will hit your server at the same time; the server will stay busy until it has completed at least one request and created the output cache, which it then uses for the next requests. So if you want to serialize multiple users requesting the same resource, we need to understand how serialized requests work for a single user. Is that what you want?
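If you want to observe that serialization yourself, here is a minimal sketch (a hypothetical controller, not from the thread): request the action from two tabs of the same browser (same session cookie) and then from two different browsers, and compare the reported start times.

public class SerializationTestController : Controller
{
    public ActionResult Slow()
    {
        // With default session state, two requests from the same session
        // run one after the other; from different sessions they run in
        // parallel.
        var started = DateTime.Now;
        System.Threading.Thread.Sleep(5000);
        return Content(String.Format("Started at {0:HH:mm:ss.fff}", started));
    }
}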

Related

Web API: Chrome Network tab reports huge request processing time, but the request finishes in milliseconds

I'm working on some performance issues on an ASP.NET / Angular website, and I noticed something quite interesting in Google Chrome that I can't understand.
The .NET Web API finishes the request in a few milliseconds, but Google Chrome reports it as taking a few seconds.
Here are some screenshots to illustrate what I mean:
As you can see, the request took 2.59 seconds to complete.
If I take out this request and do it solo (right-clicking and selecting 'Open in new tab'), I get a completely different result, as seen here:
This time, the request took 81 ms, which is a huge difference from the 2.59 seconds reported in the first screenshot.
Debugging the app has shown me that there's nothing special going on in this request. It simply returns a list. The query that does this is FAST. I can also run this request 100x and it remains fast, yet in the app it seems to be slow.
I'm at a loss at what could be going on.
I have investigated the following:
I thought my Web API was handling the requests one by one, first finishing the first request and then moving on to the second one. I tested this by putting breakpoints on both Web API methods; they are hit at the same time and complete intermittently.
I thought it might be due to the first request in Entity Framework being slow and subsequent ones being fast. It's not: if I load the page a first time and then a second time, I see the exact same results.
The request is done with Angular, like so:
return $http.get(service.url + '/organisationsorts').success(function(data) {
    // data is processed here
});
Does anyone have any suggestions on how to further look at this? What things I can try?
The technologies used are:
.NET Web Api
Entity Framework
Angular JS to do the request
An example of the Web Api Request:
[Route("organisationsorts")]
[HttpGet]
public IList<SortOrganizationListDto> GetAllOrganisationSorts()
{
return _service.GetAllOrganisationSorts();
}
And the corresponding implementation:
public IList<SortOrganizationListDto> GetAllOrganisationSorts()
{
    try
    {
        return Uow.GetStandardRepo<SortOrganisation>().GetAll()
            .OrderBy(x => x.Name)
            .Project().To<SortOrganizationListDto>()
            .ToList();
    }
    catch (Exception ex)
    {
        LogError("GetAllOrganisationSorts has failed", ex);
        return null;
    }
}
This selects 5 rows from the table using Entity Framework and projecting it via AutoMapper to a ListDto.

Faking MVC Server.Transfer: Response.End() does not end my thread

I have two issues here, the second one is irrelevant if the first one got answered, but still technically interesting in my opinion... I will try to be as clear as possible:
1st question: my goal is to fake a Server.Transfer in MVC. Is there any decent way to do that? I found quite a few articles about it, but most were about redirecting / rerouting, which is not possible in my case (not that I can think of, at least).
Here is the context, we have two versions of our website, a "desktop" one and a mobile one. Our marketing guy wants both versions of the home page to be served on the same url (because the SEO expert said so).
This sounds trivial and simple, and it kind of is in most cases, except... Our desktop site is a .NET 4.0 ASPX site, and our mobile site is MVC, both run in the same site (same project, same apppool, same app).
Because the desktop version represents about 95% of our traffic, it should be the default, and we want to "transfer" (hence same url) from the ASPX code-behind to the MVC view only if the user is on a mobile device or really wants to see the mobile version. As far as I have seen so far, there is no easy way to do that (Server.Transfer only executes a new handler - hence page - if there is a physical file for it). Hence the question: has anyone done this in a proper way so far?
And which brings me to:
2nd question: I did build my own transfer-to-MVC mechanism, but then figured out that a Response.End() does not actually end the running thread anymore. Does anyone have a clue why?
Obviously, I don't expect any answer out of the blue, so here is what I am doing:
in the page(s) which needs transfering to mobile, I do something like:
protected override void OnPreInit(EventArgs e)
{
    base.OnPreInit(e);
    MobileUri = "/auto/intro/index"; // the MVC url to transfer to
    // Identifies correct flow based on certain conditions: 1-Desktop 2-Mobile
    BrowserCheck.RedirectToMobileIfRequired(MobileUri);
}
and my actual TransferToMobile method called by RedirectToMobileIfRequired (I skipped the detection part as it is quite irrelevant) looks like:
/// <summary>
/// Does a transfer to the mobile (MVC) action. While keeping the same url.
/// </summary>
private static void TransferToMobile(string uri)
{
    var cUrl = HttpContext.Current.Request.Url;
    // build an absolute url from the relative uri passed as parameter
    string url = String.Format("{0}://{1}/{2}", cUrl.Scheme, cUrl.Authority, uri.TrimStart('/'));
    // fake a context for the mvc redirect (in order to read the routeData).
    var fakeContext = new HttpContextWrapper(new HttpContext(new HttpRequest("", url, ""), HttpContext.Current.Response));
    var routeData = RouteTable.Routes.GetRouteData(fakeContext);
    // get the proper controller
    IController ctrl = ControllerBuilder.Current.GetControllerFactory().CreateController(fakeContext.Request.RequestContext, (string)routeData.Values["controller"]);
    // We still need to set routeData in the request context, as Execute does not seem to use the passed route data.
    HttpContext.Current.Request.RequestContext.RouteData.DataTokens["Area"] = routeData.DataTokens["Area"];
    HttpContext.Current.Request.RequestContext.RouteData.Values["controller"] = routeData.Values["controller"];
    HttpContext.Current.Request.RequestContext.RouteData.Values["action"] = routeData.Values["action"];
    // Execute the MVC controller action
    ctrl.Execute(new RequestContext(new HttpContextWrapper(HttpContext.Current), routeData));
    if (ctrl is IDisposable)
    {
        ((IDisposable)ctrl).Dispose(); // does not help
    }
    // end the request.
    HttpContext.Current.Response.End();
    // fakeContext.Response.End();           // does not add anything
    // HttpContext.Current.Response.Close(); // does not help
    // fakeContext.Response.Close();         // does not help
    // Thread.CurrentThread.Abort();         // causes infinite loading in FF
}
At this point, I would expect the Response.End() call to end the thread as well (and it does if I skip the whole faking the controller execution bit) but it doesn't.
I therefore suspect that either my faked context (the only way I found to pass my current context with a new url) or the controller prevents the thread from being killed.
fakeContext.Response is the same as CurrentContext.Response, and my few attempts at ending the fake context's response or killing the thread didn't really help.
Whatever code is running after the Response.End() will NOT actually be rendered to the client (which is a small victory), as the Response stream (and the connection, no "infinite loading" in the client) is being closed. But code is still running and that is no good (also obviously generates loads of errors when trying to write the ASPX page, write headers, etc.).
So any new lead would be more than welcome!
To sum it up:
- does anyone have a less hacky way to share an ASPX page and an MVC view on the same url?
- if not, does anyone have a clue how I can ensure that my Response is really being ended?
Many thanks in advance!
Well,
for whoever is interested, I at least have an answer to question 1 :).
When I first worked on that feature, I looked at the following (and very close) question:
How to simulate Server.Transfer in ASP.NET MVC?
And tried both the Transfer method created by Stan (using httpHandler.ProcessRequest) and the Server.TransferRequest method. Both had disadvantages for me:
the first one does not work in IIS (because I need to call it in a page, and that seems too late already);
the second one makes it terribly annoying for developers, who all need to run their site in IIS (no biggy, but still...).
Seeing that my solution obviously wasn't optimal, I had to come back to the IIS solution, which seems to be the neatest for production environment.
This solution worked for a page and triggered an infinite loop on another one...
That's when I got pointed to what I had lazily discarded as not being the cause: our url rewrite module. It uses Request.RawUrl to match a rule, and, oh surprise, Server.TransferRequest keeps the original Request.RawUrl, while app.Request.Url.AbsolutePath contains the transferred-to url. So basically our url rewrite module was always redirecting to the originally requested url, which was trying to transfer to the new one, and so on.
Changed that in the url rewriting module, and will hope that everything still works like a charm (obviously a lot of testing will follow such a change)...
In order to fix the developers issue, I chose to combine both solutions, which might make it a bit more of a risk for different behaviors between development and production, but that's what we have test servers for...
So here is what my transfer method looks like in the end:
Once again, this is meant to transfer from an ASPX page to an MVC action; from MVC to MVC you probably don't need anything this complex, as you can use a TransferResult, or just return a different view, call another action, etc.
private static void Transfer(string url)
{
    if (HttpRuntime.UsingIntegratedPipeline)
    {
        // IIS 7 integrated pipeline; does not work in the VS dev server.
        HttpContext.Current.Server.TransferRequest(url, true);
        return; // don't fall through to the dev-server path
    }
    // for the VS dev server; does not work in IIS
    var cUrl = HttpContext.Current.Request.Url;
    // Create URI builder
    var uriBuilder = new UriBuilder(cUrl.Scheme, cUrl.Host, cUrl.Port, HttpContext.Current.Request.ApplicationPath);
    // Add destination URI
    uriBuilder.Path += url;
    // Because UriBuilder escapes the URI, decode before passing it on
    string path = HttpContext.Current.Server.UrlDecode(uriBuilder.Uri.PathAndQuery);
    // Rewrite path
    HttpContext.Current.RewritePath(path, true);
    IHttpHandler httpHandler = new MvcHttpHandler();
    // Process request
    httpHandler.ProcessRequest(HttpContext.Current);
}
I haven't done much research, but here's what seems to be happening upon Response.End():
public void End()
{
    if (this._context.IsInCancellablePeriod)
    {
        InternalSecurityPermissions.ControlThread.Assert();
        Thread.CurrentThread.Abort(new HttpApplication.CancelModuleException(false));
    }
    else if (!this._flushing)
    {
        this.Flush();
        this._ended = true;
        if (this._context.ApplicationInstance != null)
        {
            this._context.ApplicationInstance.CompleteRequest();
        }
    }
}
That could at least provide the "Why" (_context.IsInCancellablePeriod). You could try to trace that using your favourite CLR decompiler.

Asp.Net Asynchronous Http Handler for Image Resizing

I am using the C# method Image.GetThumbnailImage() to generate a thumbnail of an image. I have to generate these thumbnails dynamically, 100 of them for a single galleryId, so I added an HttpHandler to generate them on the fly. The problem is that when I click a gallery id, 100 requests go to my HttpHandler, so the thumbnails load very slowly. I have some questions:
Can I get a performance improvement by implementing an asynchronous HTTP handler? I am not familiar with asynchronous programming in C#. How can I generate the thumbnails using an asynchronous HTTP handler?
Is there any alternative way to get better performance than the asynchronous programming model? I mean, something like adding multiple handlers to serve the requests?
Can anyone please help me?
Another way to solve this problem is to avoid it in the first place.
Generate the thumbnail when the image is uploaded and then just serve the ready thumbnail with content expiry set appropriately.
You will save quite a lot of processing and, what is more important, shift it in time, so when users are viewing the gallery you can serve the thumbnail as quickly as possible.
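As a rough sketch of that approach (the method name, paths and sizes here are made up, not from the question), the upload code could create and save the thumbnail right away:

using System;
using System.Drawing;

public static void SaveThumbnail(string imagePath, string thumbPath)
{
    using (var image = Image.FromFile(imagePath))
    using (var thumb = image.GetThumbnailImage(120, 120, () => false, IntPtr.Zero))
    {
        // The thumbnail is written once, at upload time, instead of being
        // regenerated on every gallery view.
        thumb.Save(thumbPath);
    }
}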
First you need to identify the real cause of the delay: is it because you call the handler 100 times at the same moment, or because your handler is blocked by the session lock?
So the first thing is to remove session from your handler, if you use it.
Second, if the problem is that you call it many times at once, you can limit concurrency with a mutex and a simple trick: lock the handler so that only a handful of thumbnails (four, in the code below) are created simultaneously.
Here is simple code that uses a mutex to let at most n threads run at the same time:
static Random random = new Random();

public void ProcessRequest_NoCatch(HttpContext context)
{
    // here we make names like ThubNum_0, ThubNum_1 ... ThubNum_3;
    // with 4 names, at most 4 thumbnails are generated simultaneously
    string sMyMutexName = string.Format("ThubNum_{0}", random.Next(0, 4));
    var mut = new Mutex(false, sMyMutexName);
    try
    {
        // Wait until it is safe to enter.
        mut.WaitOne();
        // here you create your thumbnails
    }
    finally
    {
        // Release the Mutex.
        mut.ReleaseMutex();
    }
}
See how the session blocks other pages:
Web app blocked while processing another web app on sharing same session
Replacing ASP.Net's session entirely
Cache
Of course you need to cache your thumbnails to disk, and also set browser cache headers for the images. There is no reason to create them again and again.
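A hedged sketch of that serving path (the folder layout and query parameter are hypothetical): read the previously generated file from disk and let the browser cache it.

public void ProcessRequest(HttpContext context)
{
    // In real code, validate the id before using it in a file path.
    string thumbPath = context.Server.MapPath("~/thumbs/" + context.Request["id"] + ".jpg");
    context.Response.ContentType = "image/jpeg";
    // Tell the browser it may cache the image for a week.
    context.Response.Cache.SetCacheability(HttpCacheability.Public);
    context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(7));
    context.Response.TransmitFile(thumbPath);
}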

Async web calls bottlenecking and running sequentially

I have a web site which makes frequent requests to an external web service, and I'd like these calls to be async and parallel to avoid blocking and to speed up the site a bit. Basically, I have 8 widgets, each of which has to make its own web call(s).
For some reason, only the first 3 or so of them truly load async, and then the threads don't free up in time, and the rest of the widgets load sequentially. If I could get 3 of them to load in parallel, then 3 more in parallel, then 2 more in parallel, I'd be happy. So the issue is really that the threads aren't freeing up in time.
I'm guessing the answer has to do with some IIS configuration. I'm testing on a non-server OS, so maybe that's part of it.
Edit for #jon skeet:
I'm using reflection to invoke the web calls like this:
output = methodInfo.Invoke(webservice, parameters);
The widget actions (which eventually call the web service) are called via a jquery $.each() loop and the .load function (maybe this causes a bottleneck?). The widget actions are set up as async methods in an async controller.
Here is the code for one of the async methods (they are all set up like this):
public void MarketTradeWidgetAsync()
{
    AsyncManager.OutstandingOperations.Increment();
    // a bunch of market trade logic
    // this eventually calls the web service
    PlanUISetting uiSettingMarketQuotesConfig = WebSettingsProviderManager.Provider.GetMarketQuotes(
        System.Configuration.ConfigurationManager.AppSettings["Theme"],
        SessionValues<String>.GlobalPlanID,
        SessionValues<String>.ParticipantID,
        "MARKETQUOTES");
    AsyncManager.OutstandingOperations.Decrement();
}

public ActionResult MarketTradeWidgetCompleted(MarketTradeTool markettradetool)
{
    if (Session.IsNewSession)
        return PartialView("../Error/AjaxSessionExpired");
    else
    {
        ViewData["MarketData"] = markettradetool;
        return PartialView(markettradetool);
    }
}
And, like I said, these methods are called via jquery. My thinking is that since the action methods are async, they should give control back to the jquery after they get called, right?
SessionState = "readonly" for the page at hand fixed this issue. Evidently session locking was the issue.
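For reference, in MVC 3 and later the same fix can be expressed with the SessionState attribute on the controller (a sketch, assuming the widget actions live in one controller):

using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)]
public class WidgetController : AsyncController
{
    // With a read-only session, the session lock no longer serializes
    // concurrent requests from the same user, so the widget actions can
    // run in parallel.
}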

FLEX Cairngorm commands... odd behaviour

While trying to solve my problems in serializing the execution of Cairngorm commands, I tried to bypass the event dispatching completely and simply instantiated the command I wanted to execute, then called its execute method. In this method there's a call to a delegate that calls ServiceUtils, which performs the HTTPService.send thing...
Now, those commands should run in the exact order I call them.
And, since there is only one server (Rails), all requests should return in the same order.
This isn't so: the order varies across executions. Why?!
Just because you send requests in a certain order doesn't mean the responses will return in that order. HTTPService calls are asynchronous. For example, assume the following three requests are sent at the same time:
Request 1 (takes 4 seconds on the server to process)
Request 2 (takes 0.5 seconds to process)
Request 3 (takes 2 seconds to process)
Assuming network speed is constant (and a lot of other environment issues being constant), you will get the response for Request 2 back first, then Request 3, then Request 1.
If you need to call them in serial, you should do something like this:
protected function doWork():void {
    request1.send();
}

protected function onRequest1Complete(e:ResultEvent):void {
    request2.send();
}

protected function onRequest2Complete(e:ResultEvent):void {
    request3.send();
}

protected function onRequest3Complete(e:ResultEvent):void {
    // you are done at this point
}

...

<mx:HTTPService id="request1" url="http://example.com/service1" result="onRequest1Complete(event)" />
<mx:HTTPService id="request2" url="http://example.com/service2" result="onRequest2Complete(event)" />
<mx:HTTPService id="request3" url="http://example.com/service3" result="onRequest3Complete(event)" />
Hope that helps.
RJ's answer covers it very well. Just to add to it:
Your commands will create asynchronous requests via the services you use. If you want to "simulate" synchronous execution of commands, each subsequent command will have to be executed in the result handler of the previous command's request.
Although this may not always be the cleanest way of doing things, it may be suitable for your scenario. I'd need more information about the nature of the service calls and the app in general to judge whether this is the best method for you or not.
HTH,
Sri
