We have enabled wildcard mapping for an existing classic ASP site so that requests are handled through aspnet_isapi.dll. Ever since then, the performance of the site has dropped? Do the .asp files get compiled before they are served by IIS? Any help is greatly appreciated.
Jyothish George
Ever since then, the performance of the site has dropped?
Häh? You did that on your server. You should KNOW whether the performance dropped or not. After all, you are the guy with access to your own server's log files and performance counters.
Do the .asp files get compiled before they are served by IIS?
Since ASP is not a language to start with, it is not compiled. The language in question (most likely VBScript) is handled by the Windows Active Scripting host... and for the major languages (you can use several), the code is interpreted, not compiled. It could be compiled in principle, but I don't know of a single Active Scripting language that is.
What server version are you using?
Enabling wildcard mapping will PUSH all files, even .gif, .css, .jpg and .js files, through aspnet_isapi.dll. This is probably where your degradation in performance is coming from.
You may want to consider moving all static files into a subdirectory that does not have wildcard mapping enabled.
I was the program manager for some parts of ASP on IIS4 and IIS5, so I know a couple of things about this stuff!!
Yes, you can have a perf impact because you're telling IIS to route more file requests through an ISAPI. When IIS gets a request it has two options. The first is to bypass all intermediate code, access the file directly from disk and serve it to the user; this is very, very fast and allows for caching. The second is to pass the request to some code (an ISAPI) for processing and then potentially serve the result; this is a much slower code path. Adding wildcard mapping means more requests will go through the ISAPI path, hence a potential perf drop.
The best action is to disable wildcard mapping and measure perf, then turn it on and re-measure. If there is a perf drop, consider limiting which files are mapped. For example, mapping foo*.bar is probably not going to lead to much perf degradation, but mapping everything with a broad wildcard most certainly will.
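If you do end up keeping the wildcard map, one mitigation is to short-circuit obvious static requests as early as possible in the managed pipeline. Here is a rough sketch (the module name and MIME list are purely illustrative, and this does not bring back IIS's kernel-mode caching; it only skips the rest of the managed pipeline for those files):

using System;
using System.Collections.Generic;
using System.IO;
using System.Web;

public class StaticShortCircuitModule : IHttpModule
{
    // Illustrative mapping only - extend it to cover your own static content types.
    private static readonly Dictionary<string, string> MimeTypes = CreateMimeTypes();

    private static Dictionary<string, string> CreateMimeTypes()
    {
        Dictionary<string, string> map = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        map[".gif"] = "image/gif";
        map[".jpg"] = "image/jpeg";
        map[".css"] = "text/css";
        map[".js"] = "application/x-javascript";
        return map;
    }

    public void Init(HttpApplication app)
    {
        app.BeginRequest += new EventHandler(OnBeginRequest);
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        string extension = Path.GetExtension(app.Request.Path);
        string mimeType;
        if (MimeTypes.TryGetValue(extension, out mimeType))
        {
            // Send the file straight back and skip the remaining pipeline stages
            // (session, authentication modules, page handler, etc.).
            app.Response.ContentType = mimeType;
            app.Response.TransmitFile(app.Request.PhysicalPath);
            app.CompleteRequest();
        }
    }

    public void Dispose() { }
}

You would still need to register the module under <httpModules> in web.config, and measure before and after as described above.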
Hope that helps!
I googled forever, and I couldn't find an answer to this; the answer is either obvious (and I need more training) or it's buried deep in documentation (or not documented). Somebody must know this.
I've been arguing with somebody who insisted on caching some static files on an ASP.NET site, where I thought it wasn't necessary for the simple reason that all the other files that produce dynamic HTML are not cached (by default; let's ignore output caching for now, and also the particular caching mechanism that person had in mind [in-memory or out on the network]). In other words, why cache some XML file (regardless of how frequently it's accessed) when all aspx files are read from disk on every request that maps to them? If I'm right, caching such static files would gain very little (fewer disk-read operations) but would spend more memory (if cached in memory) or cause more network operations (if cached on an external machine). Does somebody know what in fact happens when an aspx file is [normally] requested? Thank you.
If I'm not mistaken, ASPX files are compiled at run time, on first access. After the page is compiled into a Page-derived class, requests to the same resource (ASPX page) are serviced using that already-compiled type held in memory; a new instance is created per request, but the .aspx file is not re-read or re-parsed. So in essence, they are cached with respect to disk access.
Obviously the dynamic content is generated for every request, unless otherwise cached using output caching mechanisms.
Regarding memory consumption vs. disk access time, I have to say that from a performance standpoint it makes sense to store objects in memory rather than reading them from disk every time if they are used often. Disk access is several orders of magnitude slower than access in RAM. However, an inappropriate caching strategy could push frequently used objects out of memory to make room for seldom-used ones, which would hurt performance for obvious reasons. That being said, caching is really important for a high-performance website or web application.
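If you do decide to cache such a file, ASP.NET's built-in cache can do it with a file dependency, so you get memory-speed reads without serving stale content. A minimal sketch (the class name and path handling are made up for illustration):

using System.Web;
using System.Web.Caching;
using System.Xml;

public static class XmlFileCache
{
    // Returns the parsed document from memory; it is reloaded from disk only
    // if the file changes (the CacheDependency evicts the entry automatically).
    public static XmlDocument Get(string virtualPath)
    {
        XmlDocument doc = HttpRuntime.Cache[virtualPath] as XmlDocument;
        if (doc == null)
        {
            string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
            doc = new XmlDocument();
            doc.Load(physicalPath);
            HttpRuntime.Cache.Insert(virtualPath, doc, new CacheDependency(physicalPath));
        }
        return doc;
    }
}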
As an update, consider this:
Typical DRAM access times are between 50 and 200 nanoseconds
Average disk access times are in the range of 10 to 20 milliseconds
That means that without caching, a hit against the disk will be on the order of 100,000 times slower than accessing RAM. Of course, the operating system, the hard drive and possibly other components in between may do some caching of their own, so the slow-down may only occur on the first hit if you only have a couple of such files you're reading from.
Finally, the only way to be certain is to do some benchmarking. Stress-test both implementations and choose the version that works best in your case!
IIS does a large amount of caching, so directly, no. But IIS checks for ANY changes in the web directory and reloads changed files as they change. Sometimes IIS gets borked and you have to restart it to detect changes, but usually it works pretty well.
P.S. The caching mechanisms may flush data frequently based on server usage, but the caching works for all files in the web directory. Any detected change to source code causes IIS to flush the web application and re-compile/re-load as well.
I believe that the answer to your question depends on both the version of IIS you're using, and configuration settings.
But I believe that it's possible to configure some combinations of IIS/.NET to avoid checking the files: there's an option to pre-compile sites, so no source code actually needs to be deployed to the web server.
I'm looking for feedback as to where I should store application configuration data and default text values that will have the best performance overall. For example, I've been sticking stuff like default URL values and the default text for error messages or application instructions inside the web.config, but now I'm wondering if that will scale, and if it's even the right thing to do in the first place.
As mentioned before, this really shouldn't matter: the settings, whether they live in the web.config or in a database, should be read once and then cached.
I can almost guarantee that there will be other parts of your code much slower than this.
As a side note (not performance related): if you need to worry about site uptime, you can edit configuration stored in a database on the fly, whereas changing the web.config will cause an app domain restart and a subsequent loss of in-process sessions.
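To make the read-once-and-cache point concrete, here is a minimal sketch of the usual pattern with appSettings (the setting keys are invented). ConfigurationManager already caches the parsed web.config after the first read, so the static fields mostly just save repeated lookups:

using System.Configuration;   // requires a reference to System.Configuration.dll

public static class SiteSettings
{
    // Read once when the type is first used, then held in memory for the
    // lifetime of the app domain. The key names below are hypothetical.
    public static readonly string DefaultUrl =
        ConfigurationManager.AppSettings["DefaultUrl"];

    public static readonly string DefaultErrorMessage =
        ConfigurationManager.AppSettings["DefaultErrorMessage"];
}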
If it's a single server setup (as opposed to a Web farm) store it in the Web.Config file. There is a new release of the Web Farm framework and you could check details for that type of scenario here:
http://weblogs.asp.net/scottgu/archive/2010/09/08/introducing-the-microsoft-web-farm-framework.aspx
If the database is on the same machine performance difference may be neglect-able. If you need to cross a wire to get to some database it's most likely slower then a directly accessible and not too large web.config.
I really would prefer keeping things simple here: just use the web.config. The configuration system already caches it in memory after the first read. If you feel it's too slow, measure it and only then go for a more complicated solution.
Having said that, you could simply read in all configuration at application start-up and hold it in memory. This way any performance difference is reduced to just the application's start-up time. You also get the added benefit of being able to validate the configuration files at start-up.
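As a sketch of that start-up validation idea (the key name and error handling are invented for illustration), in the Global.asax code-behind:

using System;
using System.Configuration;

public class Global : System.Web.HttpApplication
{
    public static Uri DefaultUrl;   // populated once at start-up

    protected void Application_Start(object sender, EventArgs e)
    {
        string raw = ConfigurationManager.AppSettings["DefaultUrl"];
        if (string.IsNullOrEmpty(raw))
        {
            // Fail fast with a clear message instead of failing on first use.
            throw new ConfigurationErrorsException("appSettings key 'DefaultUrl' is missing.");
        }
        DefaultUrl = new Uri(raw);   // also validates the format at start-up
    }
}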
Not sure about your defaults, but ask yourself whether they are environment-specific and whether they are really something you want to be able to change without recompiling. If they are configuration, what is the use case? Is an operations person or a developer going to set these defaults? Are you planning to document them? Are you going to make your application robust against whatever gets configured?
Are these defaults going to be used across multiple environments/installations (for localization, for instance)? Then perhaps it's better to put them in a separate file that you can redeploy on its own when needed.
How often are the settings going to change?
If they never change, you can probably put them in web.config, and configure IIS so that it doesn't monitor the file for changes. That will impose a small startup penalty, but after that there should be no performance penalty.
But honestly, there are probably dozens of other places to improve before you start worrying about this - remember, "Premature Optimization is the root of all evil" :)
I realise that this is going to be a fairly niche requirement and will almost certainly raise a few "WTF's" but here goes...
Within an ASP.NET Webforms application I need to serve static content from a local client machine in order to reduce up-front bandwidth requirements as much as possible (Security policy has disabled all Browser caching). The idea is to serve CSS, images and JavaScript files from a location on the local file system referenced by filesystem links from within the Web application (Yes, I know, WTF's galore but that's how it is). The application itself will effectively be an Intranet app that's hosted externally from a client but restricted by IP range along with standard username/password security. So it's pretty much a hybrid Internet/Intranet application but we can easily roll out packages of files to client machines. I am not suggesting that we expect nor require public clients to download packages of files. We have control to an extent over the client machines in terms of the local filesystem and so on but we cannot change the caching policy.
We're using UpdatePanel controls to perform partial page updates, which obviously means that we need the Microsoft AJAX JavaScript files. Presently these are served by the standard resource handler within IIS/ASP.NET. Ideally I would like to be able to take these JS files and reference them statically from a client machine, and no longer serve them via an AXD.
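To give an idea of the kind of thing I'm picturing (purely hypothetical; I don't know whether a local filesystem path is even legal here, since ScriptReference.Path is normally a web-relative URL), something along these lines in the page code-behind:

using System;
using System.Web.UI;

public partial class SomePage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        ScriptManager sm = ScriptManager.GetCurrent(this);
        if (sm != null)
        {
            // Intent: point the framework script at a local file instead of
            // letting it come down via ScriptResource.axd. Path is hypothetical.
            ScriptReference ajax = new ScriptReference();
            ajax.Name = "MicrosoftAjax.js";
            ajax.Assembly = "System.Web.Extensions";
            ajax.Path = "file:///C:/LocalAssets/Scripts/MicrosoftAjax.js";
            sm.Scripts.Add(ajax);
        }
    }
}

Is something like this even close, or is there a better-supported way?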
My questions are: Is this possible? If it is possible, how do we go about doing so?
To try to pre-empt some of the WTFs: the requirement stems from attempting to meet a need with as little time and effort as possible whilst a more suitable solution is developed. I'm aware that we can lighten the load, we can switch to jQuery AJAX updates, we can rewrite the front-end in MVC, etc., but my question is related to what we can quickly deploy with our existing application architecture.
Many thanks in advance :)
Lorna,
maybe your security team is going crazy. What is the difference between serving dynamic HTML generated by the server and dynamic JS generated by the server?
It does not make any sense. You should try talking them out of it.
What is the average size of your pages and view state data? You might need to store view state in SQL Server rather than sending it to the client browser every time.
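One server-side option along those lines (sketch only; the base class name is made up) is to keep view state out of the page entirely by overriding the page state persister, and then point session state at SQL Server with <sessionState mode="SQLServer" ...>:

using System.Web.UI;

// Pages that inherit from this keep their view state in session state on the
// server instead of serializing it into the page sent to the browser.
public class ServerSideViewStatePage : Page
{
    protected override PageStatePersister PageStatePersister
    {
        get { return new SessionPageStatePersister(this); }
    }
}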
Following an earlier post about trying to improve my site's performance, I have been looking at HTTP compression. I have read about setting it up in IIS, but it seems to be a global thing for all IIS application pools, and I may not be allowed to do that because there is another site running on the same server. I then saw some code to put in global.asax to achieve the same thing on a per-website basis.
See here: http://www.stardeveloper.com/articles/display.html?article=2007110401&page=1
Is this as good as the setup in IIS? How dramatic is the effect? Any known issues?
If you move forward with this, I'd suggest implementing an HttpModule rather than using global.asax. The HttpModule approach allows you to disable compression with a config change rather than a rebuild, and lets you work on your compression assembly separately from your web app.
Rich Crane has a pretty nice 2.0 module here: http://www.codeplex.com/httpcompression/ if you want to get up and running fast.
The blowery project Steven Rogers mentioned is an HttpModule as well.
Otherwise, writing your own is pretty straightforward. An HttpModule gives you the same events as global.asax: BeginRequest, EndRequest, and finer-grained events like PostReleaseRequestState and PreSendRequestHeaders, which you may need in order to iron out all the wrinkles.
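A rough sketch of such a module (gzip only, no deflate fallback, and no exclusions for content that is already compressed; treat it as a starting point, not a finished implementation):

using System;
using System.IO.Compression;
using System.Web;

public class CompressionModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += new EventHandler(OnBeginRequest);
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        string acceptEncoding = app.Request.Headers["Accept-Encoding"];

        if (!string.IsNullOrEmpty(acceptEncoding) && acceptEncoding.Contains("gzip"))
        {
            // Wrap the output stream so everything written to the response is compressed.
            app.Response.Filter = new GZipStream(app.Response.Filter, CompressionMode.Compress);
            app.Response.AppendHeader("Content-Encoding", "gzip");

            // Make sure output caching keeps separate compressed/uncompressed copies.
            app.Response.Cache.VaryByHeaders["Accept-Encoding"] = true;
        }
    }

    public void Dispose() { }
}

Register it under <httpModules> in web.config so it can be switched off with a config change, as suggested above.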
As far as IIS compression versus an HttpModule goes, IIS is definitely easier since you don't have to fuss with yet another assembly. I've used both methods with business apps and both perform about equally under load testing. If IIS compression is available to you, I'd say use it.
Between 60 and 80% compression for HTML, JS, CSS, and XML files is common with gzip. Keep in mind a lot of your payload may be images and multimedia objects which are much harder to compress.
http://blowery.org/httpcompress/
We've used this compression utility at my job for a while. Pretty good.
I think the Global.asax option is a good one if you are in a shared hosting environment, for example, where you don't have access to the IIS configuration.
IIS 6 provides basic compression support, but if you're already on IIS 7 you get much better HTTP compression support: you can define which files get compressed based on their MIME type in your configuration files...
It achieves essentially the same thing as IIS compression - both end up sending the response with gzip compression. I've recently implemented this method, and it consistently reduces response size by 60% with no performance impact worth worrying about.
There are a few possible issues. Firstly, you need to be careful about output caching. You need to use a custom VaryBy to make sure that different versions are cached for requests with different Accept-Encoding headers. Otherwise, if the compressed version is cached then all users will receive it, whether or not their browser can accept it.
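One way to handle that custom VaryBy (a sketch; the custom string name is arbitrary) is a GetVaryByCustomString override in the Global.asax application class, with pages that use output caching declaring VaryByCustom="AcceptEncoding" in their OutputCache directive:

using System.Web;

public class Global : HttpApplication
{
    // Keys cached output on whether the client can accept gzip, so compressed
    // and uncompressed responses are cached as separate entries.
    public override string GetVaryByCustomString(HttpContext context, string custom)
    {
        if (custom == "AcceptEncoding")
        {
            string acceptEncoding = context.Request.Headers["Accept-Encoding"];
            bool acceptsGzip = acceptEncoding != null && acceptEncoding.Contains("gzip");
            return acceptsGzip ? "enc=gzip" : "enc=none";
        }
        return base.GetVaryByCustomString(context, custom);
    }
}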
Secondly, GZipStream will sometimes truncate the last few characters from the response if you use Response.End or Response.Flush, because the stream isn't closed until too late. I'm not aware of any nice solution to this.
Finally, this will only compress your HTML. Any CSS or Javascript files will be served normally. You'd need to, for example, serve those files via a custom IHttpHandler to compress them.
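A bare-bones sketch of such a handler (the class name is made up, and it skips caching headers and 304 handling, which you would want in practice); you would map *.css and *.js to it in <httpHandlers>, and on IIS 6 also in the script maps so ASP.NET sees those requests at all:

using System.IO;
using System.IO.Compression;
using System.Web;

public class CompressedStaticHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string path = context.Request.PhysicalPath;
        byte[] content = File.ReadAllBytes(path);

        context.Response.ContentType =
            path.EndsWith(".css", System.StringComparison.OrdinalIgnoreCase)
                ? "text/css"
                : "application/x-javascript";

        string acceptEncoding = context.Request.Headers["Accept-Encoding"];
        if (acceptEncoding != null && acceptEncoding.Contains("gzip"))
        {
            using (MemoryStream buffer = new MemoryStream())
            {
                using (GZipStream gzip = new GZipStream(buffer, CompressionMode.Compress))
                {
                    gzip.Write(content, 0, content.Length);
                }                                   // dispose to flush the gzip trailer
                content = buffer.ToArray();
            }
            context.Response.AppendHeader("Content-Encoding", "gzip");
        }

        context.Response.BinaryWrite(content);
    }
}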
There are issues with JavaScript and VBScript. The JavaScript problem has been confirmed in a comment by xxldaniel on a codinghorror article, and I had issues with VBScript (for M$ Office automation) using a JSON-like "Scripting.Dictionary" with a "Microsoft.XMLHTTP" request.
You may try the mod_gzip module. It uses a managed ZLib version and allows a highly adjustable configuration. The syntax is compatible with the same-named Apache module and even extends it. So, for example, you could set a different compression level for different MIME types, and so on.
I am interested in using UrlRewriter.NET and noticed on the configuration page for IIS 6.0 on Win2k3 that they say to map all requests through the ASP.NET ISAPI.
That's fine, but I am wondering if anyone has good or bad things to say about this performance-wise. Is my web server going to be brought to its knees by doing this, or will it be more of a small step up in server load?
My server currently has room to breathe now, so some performance hit is expected and acceptable.
Wildcard mapping does have a huge impact on performance, mostly because it uses the ASP.NET thread pool not just for page request processing, but for all the content. Assume a typical page with at least 10 additional resources such as images, CSS and JavaScript: you're then blocking other potential requests by serving that static content from the same pool. More info on ASP.NET threading under IIS 6 can be found here.
One way to get around it (this is how I did it) is to disable wildcard mapping on the folders that hold the static content; after that you'll receive only valid app requests, as you would in an ordinary setup.
The way to unmap the static directories is to temporarily turn each one into an application in IIS, remove the wildcard mapping for it, and then remove the application again. You can find more details on Steve Sanderson's blog.
According to this: http://mvolo.com/blogs/serverside/archive/2006/11/10/Stopping-hot_2D00_linking-with-IIS-and-ASP.NET.aspx ... we are talking about an impact of around 30% on the resources used for serving images.
Update 1: It will depend on the mix of dynamic vs. static content you have. If you have enough spare capacity, you can just turn it on and closely follow the performance impact. If performance starts to degrade too much, you can simply turn it off again. After that you can proceed with confidence with the extensionless-URL changes.
Maybe take a look at IIS 6.0 wildcard mapping benchmarks?
It would appear to show what I've experienced in the wild over many years: the overhead of going through the ASP.NET ISAPI DLL is negligible. If you have enough traffic for it to be an issue, there will be a hundred other things causing a larger bottleneck before the ASP.NET DLL is the problem.