ASP.NET Asynchronous HTTP Handler for Image Resizing

I am using the C# method Image.GetThumbnailImage() to generate thumbnails of images. I have to generate these thumbnails dynamically, and a single galleryId needs 100 of them, so I added an HttpHandler to generate each thumbnail on the fly. The problem is that when I click a gallery id, 100 requests go to my HTTP handler, so the thumbnails load very slowly. I have some questions:
Can I get a performance improvement by implementing an asynchronous HTTP handler? I am not familiar with asynchronous programming in C#. How can I generate thumbnails using an asynchronous HTTP handler?
Is there any alternative way to get better performance than the asynchronous programming model? I mean something like adding multiple handlers to serve the requests.
Can anyone please help me?

Another way to solve this problem is to avoid it in the first place.
Generate the thumbnail when the image is uploaded and then just serve the ready thumbnail with content expiry set appropriately.
You will save quite a lot of processing and, more importantly, shift it in time, so that when users are viewing the gallery you can serve the thumbnails as quickly as possible.
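For illustration, a minimal sketch of upload-time generation using System.Drawing (the paths, sizes, and helper name are mine, not from the answer):

using System;
using System.Drawing;
using System.IO;

public static class ThumbnailWriter
{
    // Hypothetical helper: call once, right after an image is uploaded.
    public static void SaveThumbnail(string imagePath, string thumbDir, int width, int height)
    {
        using (var image = Image.FromFile(imagePath))
        using (var thumb = image.GetThumbnailImage(width, height, () => false, IntPtr.Zero))
        {
            string thumbPath = Path.Combine(thumbDir, Path.GetFileName(imagePath));
            thumb.Save(thumbPath); // later gallery requests serve this file statically
        }
    }
}

The gallery page then points its img tags at the saved files, so no per-view processing happens at all.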

Here you need to identify the real cause of the delay. Is it because you call the handler 100 times at the same moment, or is it because your handler is blocked by the session lock?
So the first thing is to remove session state from your handler, if you use it.
Second, if your problem is that you call it many times at once, you can limit the concurrency with a mutex and a simple trick: you can lock the handler so that only a handful of thumbnails (four in the code below) are created simultaneously.
Here is a simple sketch that uses named mutexes to let at most n requests run at the same time:
// Shared across requests; Random's seed constructor takes an int.
static Random random = new Random((int)DateTime.Now.Ticks);

public void ProcessRequest_NoCatch(HttpContext context)
{
    // Here we make names ThumbNum_0, ThumbNum_1, ThumbNum_2, ThumbNum_3;
    // with 4 names, on average 4 thumbnails are created simultaneously.
    string sMyMutexName = string.Format("ThumbNum_{0}", random.Next(0, 4));
    using (var mut = new Mutex(false, sMyMutexName))
    {
        try
        {
            // Wait until it is safe to enter.
            mut.WaitOne();
            // here you create your thumbnails
        }
        finally
        {
            // Release the mutex.
            mut.ReleaseMutex();
        }
    }
}
See how the session lock makes pages block one another:
Web app blocked while processing another web app on sharing same session
Replacing ASP.Net's session entirely
Cache
Of course you need to cache your thumbnails to disk, and also set browser caching for the images. There is no reason to create them again and again.
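A minimal sketch of that disk-plus-browser caching inside the handler (the file layout, query parameter, and one-day cache window are assumptions; it uses System.IO and System.Web):

public void ProcessRequest(HttpContext context)
{
    // Hypothetical layout: thumbnails live in a "thumbs" folder keyed by image id.
    string thumbPath = context.Server.MapPath(
        "~/thumbs/" + context.Request.QueryString["id"] + ".jpg");

    if (!File.Exists(thumbPath))
    {
        // ... create the thumbnail once and save it to thumbPath ...
    }

    // Let the browser keep the image for a day instead of re-requesting it.
    context.Response.Cache.SetCacheability(HttpCacheability.Public);
    context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(1));
    context.Response.ContentType = "image/jpeg";
    context.Response.WriteFile(thumbPath);
}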

Related

Faking MVC Server.Transfer: Response.End() does not end my thread

I have two issues here; the second one is irrelevant if the first one gets answered, but it is still technically interesting in my opinion... I will try to be as clear as possible:
1st question: my goal is to fake a Server.Transfer in MVC. Is there any decent way to do that? I found quite a few articles about it, but most were about redirecting / rerouting, which is not possible in my case (not that I can think of, at least).
Here is the context: we have two versions of our website, a "desktop" one and a mobile one. Our marketing guy wants both versions of the home page to be served on the same URL (because the SEO expert said so).
This sounds trivial and simple, and it kind of is in most cases, except... our desktop site is a .NET 4.0 ASPX site and our mobile site is MVC; both run in the same site (same project, same app pool, same app).
Because the desktop version represents about 95% of our traffic, it should be the default, and we want to "transfer" (hence the same URL) from the ASPX code-behind to the MVC view only if the user is on a mobile device or really wants to see the mobile version. As far as I have seen so far, there is no easy way to do that (Server.Transfer only executes a new handler - hence page - if there is a physical file for it). Hence the question: has anyone done this in a proper way so far?
Which brings me to:
2nd question: I did build my own transfer-to-MVC mechanism, but then figured out that a Response.End() does not actually end the running thread anymore. Does anyone have a clue why?
Obviously, I don't expect any answer out of the blue, so here is what I am doing:
In the page(s) which need transferring to mobile, I do something like:
protected override void OnPreInit(EventArgs e) {
    base.OnPreInit(e);
    MobileUri = "/auto/intro/index"; // the MVC url to transfer to
    // Identifies correct flow based on certain conditions: 1-Desktop, 2-Mobile
    BrowserCheck.RedirectToMobileIfRequired(MobileUri);
}
and my actual TransferToMobile method, called by RedirectToMobileIfRequired, looks like this (I skipped the detection part as it is quite irrelevant):
/// <summary>
/// Does a transfer to the mobile (MVC) action while keeping the same url.
/// </summary>
private static void TransferToMobile(string uri) {
    var cUrl = HttpContext.Current.Request.Url;
    // build an absolute url from the relative uri passed as parameter
    string url = String.Format("{0}://{1}/{2}", cUrl.Scheme, cUrl.Authority, uri.TrimStart('/'));
    // fake a context for the mvc redirect (in order to read the routeData).
    var fakeContext = new HttpContextWrapper(new HttpContext(new HttpRequest("", url, ""), HttpContext.Current.Response));
    var routeData = RouteTable.Routes.GetRouteData(fakeContext);
    // get the proper controller
    IController ctrl = ControllerBuilder.Current.GetControllerFactory().CreateController(fakeContext.Request.RequestContext, (string)routeData.Values["controller"]);
    // We still need to set routeData in the request context, as Execute does not seem to use the passed route data.
    HttpContext.Current.Request.RequestContext.RouteData.DataTokens["Area"] = routeData.DataTokens["Area"];
    HttpContext.Current.Request.RequestContext.RouteData.Values["controller"] = routeData.Values["controller"];
    HttpContext.Current.Request.RequestContext.RouteData.Values["action"] = routeData.Values["action"];
    // Execute the MVC controller action
    ctrl.Execute(new RequestContext(new HttpContextWrapper(HttpContext.Current), routeData));
    if (ctrl is IDisposable) {
        ((IDisposable)ctrl).Dispose(); // does not help
    }
    // end the request.
    HttpContext.Current.Response.End();
    // fakeContext.Response.End();           // does not add anything
    // HttpContext.Current.Response.Close(); // does not help
    // fakeContext.Response.Close();         // does not help
    // Thread.CurrentThread.Abort();         // causes infinite loading in FF
}
At this point, I would expect the Response.End() call to end the thread as well (and it does if I skip the whole fake-controller-execution bit), but it doesn't.
I therefore suspect that either my faked context (the only way I found to pass my current context along with a new URL) or the controller prevents the thread from being killed.
fakeContext.Response is the same as CurrentContext.Response, and the few attempts at ending the fake context's response or killing the thread didn't really help me.
Whatever code runs after the Response.End() will NOT actually be rendered to the client (which is a small victory), as the response stream (and the connection; no "infinite loading" in the client) is being closed. But code is still running, and that is no good (it also obviously generates loads of errors when trying to write the ASPX page, write headers, etc.).
So any new lead would be more than welcome!
To sum it up:
- does anyone have a less hacky way to achieve sharing an ASPX page and an MVC view on the same URL?
- if not, does anyone have a clue how I can ensure that my Response is really being ended?
Many thanks in advance!
Well, for whoever is interested, I at least have an answer to question 1 :).
When I first worked on that feature, I looked at the following (and very close) question:
How to simulate Server.Transfer in ASP.NET MVC?
And tried both the Transfer method created by Stan (using httpHandler.ProcessRequest) and the Server.TransferRequest method. Both had disadvantages for me:
the first one does not work in IIS (because I need to call it in a page, and that seems too late already);
the second one makes it terribly annoying for developers, who all need to run the site in IIS (no biggy, but still...).
Seeing that my solution obviously wasn't optimal, I had to come back to the IIS solution, which seems to be the neatest for a production environment.
This solution worked for one page and triggered an infinite loop on another one...
That's when I got pointed to what I had lazily discarded as not being the cause: our URL rewrite module. It uses Request.RawUrl to match a rule, and, oh surprise, Server.TransferRequest keeps the original Request.RawUrl, while app.Request.Url.AbsolutePath contains the transferred-to URL. So basically our URL rewrite module was always redirecting to the originally requested URL, which was trying to transfer to the new one, and so on.
I changed that in the URL rewriting module, and hope that everything still works like a charm (obviously a lot of testing will follow such a change)...
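In other words, the fix looks something like this (a hypothetical sketch; the module's actual rule-matching code was not posted, and rule.Matches is an invented helper):

// Before: matched the URL the browser originally asked for,
// which Server.TransferRequest leaves untouched in Request.RawUrl.
// bool match = rule.Matches(app.Request.RawUrl);

// After: match against the path the request was transferred to.
bool match = rule.Matches(app.Request.Url.AbsolutePath);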
In order to fix the developer issue, I chose to combine both solutions, which might make it a bit more of a risk for different behaviors between development and production, but that's what we have test servers for...
So here is what my transfer method looks like in the end:
Once again, this is meant to transfer from an ASPX page to an MVC action. From MVC to MVC you probably don't need anything that complex, as you can use a TransferResult, or just return a different view, call another action, etc.
private static void Transfer(string url) {
    if (HttpRuntime.UsingIntegratedPipeline) {
        // IIS 7 integrated pipeline; does not work in the VS dev server.
        HttpContext.Current.Server.TransferRequest(url, true);
        return; // stop here; otherwise we would also run the dev-server path below
    }
    // For the VS dev server; does not work in IIS.
    var cUrl = HttpContext.Current.Request.Url;
    // Create URI builder
    var uriBuilder = new UriBuilder(cUrl.Scheme, cUrl.Host, cUrl.Port, HttpContext.Current.Request.ApplicationPath);
    // Add destination URI
    uriBuilder.Path += url;
    // Because UriBuilder escapes the URI, decode it before passing it along
    string path = HttpContext.Current.Server.UrlDecode(uriBuilder.Uri.PathAndQuery);
    // Rewrite path
    HttpContext.Current.RewritePath(path, true);
    IHttpHandler httpHandler = new MvcHttpHandler();
    // Process request
    httpHandler.ProcessRequest(HttpContext.Current);
}
I haven't done much research, but here's what seems to be happening upon Response.End():
public void End()
{
    if (this._context.IsInCancellablePeriod)
    {
        InternalSecurityPermissions.ControlThread.Assert();
        Thread.CurrentThread.Abort(new HttpApplication.CancelModuleException(false));
    }
    else if (!this._flushing)
    {
        this.Flush();
        this._ended = true;
        if (this._context.ApplicationInstance != null)
        {
            this._context.ApplicationInstance.CompleteRequest();
        }
    }
}
That could at least provide the "Why" (_context.IsInCancellablePeriod). You could try to trace that using your favourite CLR decompiler.

Async web calls bottlenecking and running sequentially

I have a web site which makes frequent requests to an external web service, and I'd like these calls to be async and parallel to avoid blocking and to speed up the site a bit. Basically, I have 8 widgets, each of which has to make its own web call(s).
For some reason, only the first 3 or so of them truly load asynchronously; then the threads don't free up in time, and the rest of the widgets load sequentially. If I could get 3 of them to load in parallel, then 3 more in parallel, then 2 more in parallel, I'd be happy. So the issue is really that the threads aren't freeing up in time.
I'm guessing the answer has to do with some IIS configuration. I'm testing on a non-server OS, so maybe that's part of it.
Edit for #jon skeet:
I'm using reflection to invoke the web calls like this:
output = methodInfo.Invoke(webservice, parameters);
The widget actions (which eventually call the web service) are called via a jQuery $.each() loop and the .load function (maybe this causes a bottleneck?). The widget actions are set up as async methods in an async controller.
Here is the code for one of the async methods (they are all set up like this):
public void MarketTradeWidgetAsync()
{
    AsyncManager.OutstandingOperations.Increment();
    // a bunch of market trade logic
    // this eventually calls the web service
    PlanUISetting uiSettingMarketQuotesConfig = WebSettingsProviderManager.Provider.GetMarketQuotes(System.Configuration.ConfigurationManager.AppSettings["Theme"], SessionValues<String>.GlobalPlanID, SessionValues<String>.ParticipantID, "MARKETQUOTES");
    AsyncManager.OutstandingOperations.Decrement();
}

public ActionResult MarketTradeWidgetCompleted(MarketTradeTool markettradetool)
{
    if (Session.IsNewSession)
        return PartialView("../Error/AjaxSessionExpired");
    else
    {
        ViewData["MarketData"] = markettradetool;
        return PartialView(markettradetool);
    }
}
And, like I said, these methods are called via jQuery. My thinking is that since the action methods are async, they should give control back to the jQuery code as soon as they are called, right?
Setting SessionState to "ReadOnly" for the page at hand fixed this issue. Evidently session locking was the problem.
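For reference, here is roughly what that looks like (the controller name is hypothetical; the MVC attribute requires MVC 3 or later, and WebForms pages use the equivalent page directive):

using System.Web.Mvc;
using System.Web.SessionState;

// MVC: relax the session lock for every action on this controller.
[SessionState(SessionStateBehavior.ReadOnly)]
public class WidgetController : AsyncController
{
    // the async action pairs go here
}

// WebForms equivalent, in the .aspx file:
// <%@ Page EnableSessionState="ReadOnly" ... %>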

ASP.NET Async Tasks - how to use WebClient.DownloadStringAsync with Page.RegisterAsyncTask

A common task I have to do for a site I work on is the following:
Download data from some third-party API
Process the data in some fashion
Display the results on the page
I was initially using WebClient.DownloadStringAsync and doing my processing on the result. However, I found that DownloadStringAsync was not respecting the AsyncTimeout parameter, which I sort of expected once I did a little reading about how this works.
I ended up adapting the code from the example on how to use PageAsyncTask to use DownloadString() there - please note, that is the synchronous version. This is probably okay, because the task itself is now asynchronous. The tasks now properly time out, and I can get the data by PreRender() time - and I can easily genericize this and put it on any page that needs this functionality.
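For illustration, a minimal sketch of that adaptation (the URL and names are mine, not from the original page; the page directive needs Async="true"):

private string _result; // scooped up later, in PreRender

protected void Page_Load(object sender, EventArgs e)
{
    Page.RegisterAsyncTask(new PageAsyncTask(BeginFetch, EndFetch, FetchTimeout, null));
}

private IAsyncResult BeginFetch(object sender, EventArgs e, AsyncCallback cb, object state)
{
    // Run the synchronous DownloadString on a worker thread, so the page's
    // AsyncTimeout is honored via the timeout handler below.
    Func<string> download = () => new System.Net.WebClient().DownloadString("http://example.com/api");
    return download.BeginInvoke(cb, download);
}

private void EndFetch(IAsyncResult ar)
{
    var download = (Func<string>)ar.AsyncState;
    _result = download.EndInvoke(ar);
}

private void FetchTimeout(IAsyncResult ar)
{
    _result = null; // the task exceeded the page's AsyncTimeout
}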
However, I'm just worried it's not 'clean'. The page isn't notified when the task is done like the DownloadStringAsync method would do - I just have to scoop up the results (stored in a field in the class) at the end, in my PreRender event.
Is there any way to get the WebClient's async methods to work with RegisterAsyncTask, or is a helper class the best I can do?
Notes: no MVC - this is vanilla ASP.NET 4.0.
If you want an event handler on your Page called when the async task completes, you need only hook one up. To expand on the MSDN "how to" article you linked:
Modify the "SlowTask" class to include an event, like - public event EventHandler Finished;
Call that EventHandler in the "OnEnd" method, like - if (Finished != null)
{
Finished(this, EventArgs.Empty);
}
Register an event handler in your page for SlowTask.Finished, like - mytask.Finished += new EventHandler(mytask_Finished);
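Putting those three steps together, a rough sketch (SlowTask here mirrors the MSDN sample's shape; everything else about the class is elided):

public class SlowTask
{
    // 1. The event the page can subscribe to.
    public event EventHandler Finished;

    // 2. Raised at the end of the task's OnEnd method.
    public void OnEnd(IAsyncResult ar)
    {
        // ... the sample's existing end-of-task logic ...
        if (Finished != null)
        {
            Finished(this, EventArgs.Empty);
        }
    }
}

// 3. In the page, before registering the task:
// mytask.Finished += new EventHandler(mytask_Finished);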
Regarding ExecuteRegisteredAsyncTasks() being a blocking call, that's based only on my experience. It's not documented explicitly as such in the MSDN - http://msdn.microsoft.com/en-us/library/system.web.ui.page.executeregisteredasynctasks.aspx
That said, it wouldn't be all that practical for it to be anything BUT a blocking call, given that it doesn't return a WaitHandle or similar. If it didn't block the pipeline, the Page would render and be returned to the client before the async task(s) completed, making it a little difficult to get the results of the task back to the client.

ASP.NET (MVC) Outputcache and concurrent requests

Let's say that, theoretically, I have a page / controller action in my website that does some very heavy stuff. It takes about 10 seconds to complete its operation.
Now, I use .NET's output cache mechanism to cache it for 15 minutes (for example, with [OutputCache(Duration = 900)]). What happens if, after 15 minutes, the cache has expired and 100 users request the page again within the 10 seconds that the heavy processing takes? Either:
The heavy stuff is done only the first time, and there is some locking mechanism so that the other 99 users get the cached result; or
The heavy stuff is done 100 times (and the server is crippled, as it can take up to 100 * 10 seconds).
Maybe an easy question, but I'm not 100% sure. I hope it is number one, though :-)
Thanks!
Well, it depends on how you have IIS configured. If you have fewer than 100 worker threads (let's say 50), then the "heavy stuff" is done 50 times, crippling your server, and the remaining 50 requests will then be served from cache.
But no, there is no "locking mechanism" on a cached action result; that would be counterproductive, for the most part.
Edit: I believe this to be true, but Nick's tests say otherwise, and I don't have time to test now. Try it yourself! The rest of the answer is not dependent on the above, though, and I think it's more important.
Generally speaking, however, no web request, cached or otherwise, should take 10 seconds to return. If I were in your shoes, I would look at somehow pre-computing the hard part of the request. You can still cache the action result if you want to cache the HTML, but it sounds like your problem is somewhat bigger than that.
You might also want to consider asynchronous controllers. Finally, note that although IIS and ASP.NET MVC will not lock on this heavy computation, you could. If you use asynchronous controllers combined with a lock on the computation, then you would effectively get the behavior you're asking for. I can't really say if that's the best solution without knowing more about what you're doing.
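As a sketch of that last idea (the cache key, duration, and helper method are mine; this is one way to serialize the computation, not a prescribed pattern):

private static readonly object ComputeLock = new object();

public ActionResult Heavy()
{
    var result = (string)HttpContext.Cache["heavy-result"];
    if (result == null)
    {
        lock (ComputeLock) // only one request computes; the rest wait, then reuse it
        {
            result = (string)HttpContext.Cache["heavy-result"]; // re-check after the wait
            if (result == null)
            {
                result = DoHeavyComputation(); // hypothetical 10-second operation
                HttpContext.Cache.Insert("heavy-result", result, null,
                    DateTime.UtcNow.AddMinutes(15),
                    System.Web.Caching.Cache.NoSlidingExpiration);
            }
        }
    }
    return Content(result);
}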
It seems to lock here; a simple test:

<%@ OutputCache Duration="10" VaryByParam="*" %>

protected void Page_Load(object sender, EventArgs e)
{
    System.Threading.Thread.Sleep(new Random().Next(1000, 30000));
}
The first request hits a breakpoint there, even though it's left sleeping... no other request hits the breakpoint in Page_Load... they wait for the first one to complete, and that result is returned to everyone who requested the page.
Note: this was simpler to test in a WebForms scenario, but given that output caching is a shared aspect of the frameworks, you can do the same test in MVC with the same result.
Here's an alternative way to test:
<asp:Literal ID="litCount" runat="server" />

public static int Count = 0;

protected void Page_Load(object sender, EventArgs e)
{
    litCount.Text = Count++.ToString();
    System.Threading.Thread.Sleep(10000);
}
All pages queued up while the first request goes to sleep will have the same count output.
Old question, but I ran into this problem and did some investigation.
Example code:
public static int Count;

[OutputCache(Duration = 20, VaryByParam = "*")]
public ActionResult Test()
{
    System.Threading.Thread.Sleep(4000);
    return Content((Count++).ToString()); // Content() takes a string
}
Run it in one browser, and it seems to lock and wait.
Run it in different browsers (I tested in IE and Firefox) and the requests are not put on hold.
So the "correct" behaviour has more to do with which browser you are using than with any feature of IIS.
Edit: to clarify - there is no lock. The server gets hit by all the requests that manage to come in before the first result is cached, possibly resulting in a hard hit on the server for heavy requests. (Or, if you call an external system, that system could be brought down if your server passes many requests through...)
I made a small test that might help. I believe what I've discovered is that the uncached requests do not block, and each request that comes in while the cache is expired and before the first task has completed ALSO triggers that task.
For example, the code below takes about 6-9 seconds on my system using Cassini. If you send two requests approximately 2 seconds apart (i.e. from two browser tabs), both will receive unique results. The last request to finish is also the response that gets cached for subsequent requests.
// CachedController.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace HttpCacheTest.Controllers
{
    public class CachedController : Controller
    {
        //
        // GET: /Cached/
        [OutputCache(Duration = 20, VaryByParam = "*")]
        public ActionResult Index()
        {
            var start = DateTime.Now;
            var i = Int32.MaxValue;
            while (i > 0)
            {
                i--;
            }
            var end = DateTime.Now;
            return Content(end.Subtract(start).ToString());
        }
    }
}
You should check this information here:
"You have a single client making multiple concurrent requests to the server. The default behavior is that these requests will be serialized;"
So, if concurrent requests from a single client are serialized, the subsequent requests will use the cache. That explains some of the behavior seen in the answers above (#mats-nilsson and #nick-craver).
The context that you showed us is multiple users, who will hit your server at the same time; your server will be busy until it has completed at least one request and created the output cache, which is then used for the next requests. So if you want to serialize multiple users requesting the same resource, you need to understand how serialized requests work for a single user. Is that what you want?

Page View Counter like on Stack Overflow

What is the best way to implement a page view counter like the one they have here on the site, where each question has a "Views" counter?
Factoring in performance and scalability issues.
I've made two observations about the Stack Overflow views counter:
There's a link element in the header that handles triggering the count update. For this question, the markup looks like this:
<link href="/questions/246919/increment-view-count" type="text/css" rel="stylesheet" />
I imagine you could hit that url to update the viewcount without ever actually viewing the page, but I haven't tried it.
I had a UserVoice ticket where the response from Jeff indicated that views are not incremented from the same IP twice in a row.
The counter I optimized works like this:

UPDATE page_views SET counter = counter + 1 WHERE page_id = x
if (affected_rows == 0) {
    INSERT INTO page_views (page_id, counter) VALUES (x, 1)
}

This way you run 2 queries for the first view; every later view requires only 1 query.
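A sketch of that pattern in C# with ADO.NET (the connection string and helper are assumptions; under heavy concurrency two first views can race into the INSERT, which a unique key on page_id would catch):

using System.Data.SqlClient;

public static void IncrementPageView(string connectionString, int pageId)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var update = new SqlCommand(
            "UPDATE page_views SET counter = counter + 1 WHERE page_id = @id", conn);
        update.Parameters.AddWithValue("@id", pageId);

        // ExecuteNonQuery returns the number of affected rows.
        if (update.ExecuteNonQuery() == 0)
        {
            var insert = new SqlCommand(
                "INSERT INTO page_views (page_id, counter) VALUES (@id, 1)", conn);
            insert.Parameters.AddWithValue("@id", pageId);
            insert.ExecuteNonQuery();
        }
    }
}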
An efficient way may be:
Store your counters in the Application object; you can persist them to a file or the DB periodically and on application close.
Instead of making a database call every time a page is hit, I would increment a counter in a cache object and, depending on how much traffic your site gets each day, send the page hits to the database on every 100th hit. This is way faster than updating the database on every single hit.
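A minimal sketch of that batching idea (the class name, threshold, and persistence call are mine; the concurrent dictionary keeps the counting thread-safe):

using System.Collections.Concurrent;

public static class HitCounter
{
    private static readonly ConcurrentDictionary<int, int> Pending =
        new ConcurrentDictionary<int, int>();

    public static void RecordHit(int pageId)
    {
        int hits = Pending.AddOrUpdate(pageId, 1, (id, n) => n + 1);
        // Flush this page's hits to the database on every 100th hit.
        if (hits % 100 == 0)
        {
            int flushed;
            if (Pending.TryRemove(pageId, out flushed))
                SaveHits(pageId, flushed); // hypothetical persistence call
        }
    }

    private static void SaveHits(int pageId, int count)
    {
        // e.g. UPDATE page_views SET counter = counter + @count WHERE page_id = @id
    }
}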
Another solution is analyzing the IIS log file and updating the hits every 30 minutes through a Windows service. This is what I have implemented, and it works wonders.
You can implement an IHttpHandler to do that.
I'm a fan of #Guillaume's style of implementation. I use a transparent GIF handler and in-memory queues to batch up sets of changes that are then periodically flushed using a separate thread created in global.asax.
The handler implements IHttpHandler, processes the request parameters (e.g. page id, language, etc.), updates the queue, then Response.Writes the transparent GIF.
By moving persistent changes to a separate thread from the user request, you also deal much better with potential serialization issues from running multiple servers, etc.
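A rough sketch of such a handler (the queue, query parameter, and the background flushing thread are assumptions; the byte array is the standard 1x1 transparent GIF):

using System;
using System.Collections.Concurrent;
using System.Web;

public class ViewBeaconHandler : IHttpHandler
{
    // Drained periodically by the background thread started in global.asax (not shown).
    public static readonly ConcurrentQueue<int> PendingViews = new ConcurrentQueue<int>();

    // A 1x1 transparent GIF, decoded once.
    private static readonly byte[] Gif = Convert.FromBase64String(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        int pageId;
        if (int.TryParse(context.Request.QueryString["page"], out pageId))
        {
            PendingViews.Enqueue(pageId); // record the view; persistence happens later
        }
        context.Response.ContentType = "image/gif";
        context.Response.OutputStream.Write(Gif, 0, Gif.Length);
    }
}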
Of course you could just pay someone else to do the work too e.g. with transparent gifs.
For me the best way is to have a field in the question table and update it when the question is accessed:

UPDATE Questions SET views = views + 1 WHERE QuestionID = x

Application object: IMO not scalable, because it may end up consuming lots of memory as more questions are accessed.
page_views table: no need for it; you would have to do a costly join afterwards.
