Using Application_BeginRequest for url rewriting - asp.net

Up till now we've been rewriting URLs using a custom 404 page: the URL would not map to any file in the site, and we configured IIS to send 404 errors to an .aspx page, which redirected those URLs to the correct URL.
Now we want to stop using redirects, so after reading Scott Guthrie's article on URL rewriting, I want to use Application_BeginRequest in Global.asax. The thing is that a lot of our URLs are not rewrites and can get to the right place without any intervention. I'm worried that every single request is now going to have to go through the Application_BeginRequest method (even the un-rewritten URLs), and I'm afraid it will slow down their loading time.
What do you think? Is loading time an issue when using Application_BeginRequest?

Every request goes through Application_BeginRequest anyway.
You'll need to add some logic so only those pages that need to be rewritten are changed.
That small bit of logic won't be very expensive.
I've used it, and didn't notice performance suffering at all.
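For illustration, a minimal sketch of that kind of early-exit logic in Global.asax might look like this (the "/old-products/" prefix and Products.aspx target are made-up examples, not the poster's actual mapping):

protected void Application_BeginRequest(object sender, EventArgs e)
{
    string path = Request.Url.AbsolutePath;

    // Cheap check first: most requests fall through untouched.
    if (!path.StartsWith("/old-products/", StringComparison.OrdinalIgnoreCase))
        return;

    // Only the matching URLs get rewritten.
    string id = path.Substring("/old-products/".Length);
    Context.RewritePath("~/Products.aspx?id=" + HttpUtility.UrlEncode(id));
}

The string comparison is the only cost the non-rewritten requests pay, which is negligible next to everything else the pipeline does.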

There are very robust solutions out there that imitate the Apache mod_rewrite module if you are going to use rewriting more extensively. I like this one because I have used it and it gave me no problems:
http://www.codeplex.com/IIRF
or:
http://urlrewriter.net/
http://www.managedfusion.com/products/url-rewriter/
You can read more options in this post:
ASP.NET URL Rewriting
As Josh says the essential article is:
http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx

Just a note to others who may be having trouble. Make sure to have
<modules runAllManagedModulesForAllRequests="true">
in your web.config
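For anyone unsure where that goes, the attribute sits on the <modules> element under <system.webServer> (IIS7+ integrated pipeline); a minimal web.config sketch:

<configuration>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true">
      <!-- managed modules registered here now run for every request, including static files -->
    </modules>
  </system.webServer>
</configuration>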

That article is a bit old... and there are better approaches in the .NET framework now. What's funny is I used to do exactly what you are doing (hijacking the Error handler).
http://www.singingeels.com/Blogs/Nullable/2007/09/14/URL_ReWriting_The_Right_Way_Its_Easy.aspx
That's what I think you want to be doing now. Oh, and about performance... that adds about 0.00001 seconds to your page time.

Scott Guthrie's article is a good one, but I am curious as to why you are choosing to do this via Global.asax instead of using an HttpModule as he suggests. Also, the ASP.NET page lifecycle runs through each and every one of those events in Global.asax anyway.
HttpModule events run on every single request, and as long as you are not doing anything crazy in your logic, you should be good to go. Even database lookups in the Application_BeginRequest method can be mitigated by proper caching.
And when in doubt, write some information out to the Trace in order to find out exactly how long your routine takes. I think you will find that compared to your most expensive operations (database lookups), the time will be negligible.
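To make that concrete, here's a rough sketch of a rewriting HttpModule wired to BeginRequest (the "/old-products/" rule and Products.aspx page are hypothetical; any real lookup result should be cached as suggested above):

using System;
using System.Web;

public class RewriteModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // BeginRequest fires for every request, so bail out early for non-matching URLs.
        app.BeginRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            string path = ctx.Request.Url.AbsolutePath;

            if (!path.StartsWith("/old-products/", StringComparison.OrdinalIgnoreCase))
                return;

            string id = path.Substring("/old-products/".Length);
            ctx.RewritePath("~/Products.aspx?id=" + HttpUtility.UrlEncode(id));
        };
    }

    public void Dispose() { }
}

Register it under <system.web>/<httpModules> (classic pipeline) or <system.webServer>/<modules> (integrated pipeline) and it behaves just like the Global.asax version, but is easier to reuse across sites.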

Related

How can I detect if an ISAPI rewrite has occurred

I've inherited an old system after starting a new job; none of the previous developers work here any more, and none of them documented all that much. Fun times.
The system uses an old, defunct CMS, and I've just finished a large ordeal whereby I could not for the life of me figure out how routing worked (the client wanted a URL changed). It later turned out that the previous developers had been using a completely separate program called "Helicon ISAPI rewrite" and had been doing all of the site's URL management from there.
My question is: How could I have figured this out more quickly (e.g. are there external tools I could have used or logs I don't know about that would have revealed how this routing was working)?
I spent a whole afternoon picking through 10 years' worth of code when the answer wasn't even in there! Right now I'm feeling that I had no chance of figuring that out quickly, but I'm wondering if I'm missing something.
I think I understand what you're asking: how to discover the rewriting in the first place. Tony's answer is right if you knew about ISAPI_Rewrite up front, but hindsight is 20/20. I'm a big fan of ISAPI_Rewrite and Helicon Ape, so I might have suspected it. However, if the rewriting was being done by IIS7's .NET web.config, I wouldn't have looked there (although I guess web.config should be a place to start for anything IIS-screwy). With a legacy CMS or something like WordPress, I wouldn't know where to start, so I would probably start with the code like you did.
I suppose the real starting point is the top of IIS, before the request even gets to the web code.
Looking around in IIS7, I see Handler Mappings, with a whole bunch of stuff in there intercepting various requests. These could all "do things" to the request before it hits the website. E.g., I see Microsoft's ExtensionlessUrlHandler... which gave us trouble as a breaking change when upgrading to .NET 4.0. We had to dig around for this, wondering what was putting eurl.axd into our URLs.
IIS6 has an ISAPI Filters tab on the website properties. Mine has ISAPI_Rewrite and ASP.NET_2.0.... in it. There's also an HTTP Headers tab with MIME types, which can be a culprit for diverting requests.
Knowing this now, perhaps a list of all rewriting software would be handy. Just search the system for any of them installed - that might be the fastest way to get the first clue.
And actually, if you spent an afternoon in 10 years of code, that's not too bad! So you may not have had a chance of figuring this out quickly - any legacy system is going to have buried secrets.
If it's ISAPI_Rewrite v3, you can enable logging in the httpd.conf in the ISAPI_Rewrite installation folder by adding the following lines:
RewriteLogLevel 9
LogLevel debug
Then, after you make some test requests, the rewrite.log and error.log files will appear in the ISAPI_Rewrite installation folder. error.log shows general issues, while rewrite.log shows how and whether the rules are applied and what the resulting URL is.

Priming the asp.net output cache

Is there a way to programmatically prime the asp.net output cache? I've investigated the caching API and can't seem to find an obvious way to do this. Has anyone tried something like this? If so, what method did you use?
I gave some thought to this last year and ended up concluding that it was not that important in my case, but if it's important for your website, all you have to do is simply call the web pages from somewhere like the Application_Start event (after all your startup code has run). But you shouldn't stop there!
The cache will eventually expire, and to avoid that you should set up some way to cache the pages again before any client requests them.
Make the output cache dependent on some other object in the cache and set an expiration callback.
Thus, when that cache object expires, so do your pages, and you should then make HTTP requests to the pages you want to re-cache, and so on.
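As a rough sketch of that idea (the page class, the "nav-data" key, and the 20-minute window are hypothetical examples), a page can insert a plain cache entry with a removal callback and tie its own output cache to that entry:

using System;
using System.Web.Caching;

public partial class SomePage : System.Web.UI.Page   // hypothetical page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Cache["nav-data"] == null)
        {
            // Insert a plain cache entry with a removal callback.
            Cache.Insert("nav-data", DateTime.UtcNow, null,
                DateTime.UtcNow.AddMinutes(20),
                System.Web.Caching.Cache.NoSlidingExpiration,
                CacheItemPriority.Normal,
                (key, value, reason) =>
                {
                    // The dependency is gone, so this page's cached output is gone too;
                    // kick off requests here to re-prime the pages you care about.
                });
        }

        // Makes this page's output cache expire whenever "nav-data" is removed.
        Response.AddCacheItemDependency("nav-data");
    }
}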
I'm answering this question, but the amount of effort involved and the question marks I still have in my mind lead me to advise against going through with this...
UPDATE
The only kind of dependency you can set in the OutputCache directive is a SQL dependency. Use it if you want, but if you need your output cache to depend on some other business object, this might get very difficult. You could set up a database object, make the dependency point at it, and expire it yourself using some kind of timer.
Man, the longer I write, the more solutions and difficulties I find! I can't write a book for something that is not worth your precious time. Believe me, the usefulness of this will be nearly zero.
Priming the cache is, as others have suggested, as easy as requesting the pages you want cached. Of course, if you do this programmatically it will only request the HTML and not all the linked resources (CSS, JavaScript, images...), which is a good thing, since it avoids wasted bandwidth.
For many websites, the cached items that carry the biggest performance penalties are common to many or all pages. For example, a navigation system on a large CMS or storefront may query the database and do a bunch of rendering work which can then be cached for all pages. Also, a big part of the initial load in ASP.NET happens when the website is first accessed and loaded into memory. Both of these issues can be addressed by calling even a single page on your site, but there is nothing stopping you from making a list of URLs and calling each one periodically.
If your cache policy is set for a 20-minute timeout, maybe request each page once every 17-18 minutes.
Here are some resources with source code to help you get started:
Good Simple Primer on requesting web URL in C#
Website Monitoring Windows Service
Asynchronous Website Monitor
As I mentioned before, you can easily extend these to "foreach" over an array or list of URLs to be requested.
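As an illustration of that "foreach over a list of URLs" idea, a minimal priming loop might look like this (the URL list and the 17-minute interval are example values only):

using System;
using System.Collections.Generic;
using System.Net;
using System.Threading;

class CachePrimer
{
    static void Main()
    {
        // Example values only; substitute your own page list and interval.
        var urls = new List<string>
        {
            "http://example.com/",
            "http://example.com/products.aspx"
        };

        while (true)
        {
            foreach (var url in urls)
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        // Requesting the page is enough to populate the output cache;
                        // the returned HTML is simply discarded.
                        client.DownloadString(url);
                    }
                }
                catch (WebException)
                {
                    // Log and move on; one failing page shouldn't stop the loop.
                }
            }

            // Re-prime a little before a 20-minute output cache would expire.
            Thread.Sleep(TimeSpan.FromMinutes(17));
        }
    }
}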

Application_AuthenticateRequest hit for all requests, including images and js files

In my MVC3 application, I'm using Application_AuthenticateRequest to create my custom user context and create the session. However, I notice that this is getting fired for every file per page request, including images, js, css, etc.
Is this the right method for what I'm trying to do, or should I be doing this somewhere else (i.e. an action filter)? Or, if this is the right place, do I just need to add some checks and/or configuration to ensure this method (or my block of code) is only executed for page requests and not for requests for static files?
I searched for a while trying to find the answer, and found one specific to IIS7, but this is happening for me on my ASP.NET dev server (debugging) on WinXP. Other than that, I couldn't find much, which leads me to think I may be way off on something here, possibly overlooking something simple.
Thanks in advance.
Jerad,
You are correct that you would be better off creating an action filter to handle your user context. You can decorate those controllers where the user context is required.
This is a better solution than using code to investigate the request, just so you can ignore particular requests.
counsellorben
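For illustration, here is a rough sketch of the action-filter approach counsellorben describes (UserContextAttribute and the CustomUserContext helper are hypothetical names, not an existing API):

using System.Web;
using System.Web.Mvc;

// Runs only for controller actions, never for images, js, or css.
public class UserContextAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Build the custom user context once per action and stash it for the request.
        filterContext.HttpContext.Items["UserContext"] =
            CustomUserContext.Load(filterContext.HttpContext);

        base.OnActionExecuting(filterContext);
    }
}

// Hypothetical stand-in for whatever the poster's user context actually is.
public static class CustomUserContext
{
    public static object Load(HttpContextBase ctx)
    {
        return new { UserName = ctx.User != null ? ctx.User.Identity.Name : string.Empty };
    }
}

// Usage: decorate only the controllers (or actions) that need it, e.g.
// [UserContext]
// public class AccountController : Controller { ... }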

Does IIS throw away the URL fragment on custom error pages?

I'm using the old 404-rewrite method on a certain site that is tied to IIS6 *.
So if I enter
http://example.com/non-existent/path
it calls my error page like so
http://example.com/catch.aspx?404;http://example.com/non-existent/path
Great.
Except if I call the page with a fragment, like
http://example.com/non-existent/path#with-fragment
I get the same result as above. I can't find the fragment anywhere:
Request.Url
Request.Url.OriginalString
Request.UrlReferrer
Request.RawUrl
headers, server variables, etc
This has come up because I want to resolve paths created by AJAX to their server-side versions.
Is there any way for me to retrieve the original path from my handler?
Thanks.
(*) Please don't suggest I change platform. Obviously I would if I could.
No, there isn't.
The portion of the URL after the # is never passed to the server, per the HTTP spec. It has nothing to do with the platform.
To work with the info after the # in JavaScript, you should look at JavaScript history plugins/functionality. jQuery has a history plugin, and ASP.NET AJAX and MVC AJAX (partial views et al.) have that too. Mind you, it's not a very easy thing to implement; you have to get into an undo/redo mindset.
It probably won't work if you are trying to handle 404s on the server - the server doesn't know that there was something after the #. I'm not sure what you want to do, though: 404 handling, or "resolving paths created by AJAX"? What exactly is the goal?

asp.net security issue, customErrors

We were all recently alerted by ScottGu to this security vulnerability: http://weblogs.asp.net/scottgu/archive/2010/09/18/important-asp-net-security-vulnerability.aspx
Since I've been redirecting errors via Global.asax in the Application_Error event, I'm wondering whether that suffices as a fix for this issue, or do I still need to place a setting in the web.config?
The problem is that (according to MS) you always need to respond in the same way, no matter what the specific error is.
You'd need to redirect the user to the same page for both 404 and 500 errors. That's why the easiest way is to use the web.config setting.
They say that this would be temporary and you could revert it once they release a patch for this.
This is Scott's answer to a similar question:
I would recommend temporarily updating the module to always redirect to the search page. One of the ways this attack works is that it looks for differentiation between 404s and 500 errors. Always returning the same HTTP code and sending them to the same place is one way to help block it.
Note that when the patch comes out to fix this, you won't need to do this (and can revert back to the old behavior). But for right now I'd recommend not differentiating between 404s and 500s to clients.
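For reference, the commonly cited form of that web.config workaround looked roughly like this (the ~/ErrorPage.aspx name is just an example; redirectMode="ResponseRewrite" requires .NET 3.5 SP1 or later, and older versions used a plain defaultRedirect without it):

<configuration>
  <system.web>
    <customErrors mode="On" redirectMode="ResponseRewrite" defaultRedirect="~/ErrorPage.aspx" />
  </system.web>
</configuration>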
