ASP.NET AJAX Load Balancing Issues

This would be a question for anyone who has code in the App_Code folder and uses a hardware load balancer. It's true the hardware load balancer could be set to sticky sessions to solve the issue, but in a perfect world, I would like the feature turned off.
When a file is in the App_Code folder and the site is not pre-compiled, IIS will generate random file names for these files.
server1 "/ajax/SomeControl, App_Code.tjazq3hb.ashx"
server2 "/ajax/SomeControl, App_Code.wzp3akyu.ashx"
So when a user posts the page and gets transferred to the other server, nothing works.
Does anyone have a solution for this? I could change to a pre-compiled web site, but we would lose the ability for our QA department to just promote the changed files.

Do you have the <machineKey> element on both servers set to the same value?
You can override the machine.config setting in web.config. This needs to match on both servers, otherwise you can get strange situations like this.
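For reference, a minimal sketch of what a matching override might look like in each server's web.config, inside <system.web> (the key values below are placeholders, not real keys; generate your own and use the same values on every server):

<machineKey
    validationKey="SAME-LONG-HEX-VALIDATION-KEY-ON-EVERY-SERVER"
    decryptionKey="SAME-HEX-DECRYPTION-KEY-ON-EVERY-SERVER"
    validation="SHA1"
    decryption="AES" />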

Does your load balancer support sticky sessions? With this on, the balancer will route the same IP to the same server over and over within a certain time window. This way, all requests (AJAX or otherwise) from one client would always hit the same server in the cluster/farm.

Ok, first things first... the machineKey thing is true. That should absolutely be set to the same value on all of the load-balanced machines. I don't remember everything it affects, but do it anyway.
Second, go ahead and precompile the site. You can actually still push out new versions; whenever there is a .cs file for a page, that page gets recompiled. What gets tricky is the App_Code files, which get compiled into a single DLL. However, if a change is made in there, you can upload the new DLL and again everything should be fine.
To make things even easier, enable the "Use fixed naming and single page assemblies" option. This will ensure things have the same name on each compilation, so you just test and then replace the changed .dll files.
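For reference, that option corresponds to the -fixednames switch of the command-line precompiler, so a build script could do the precompilation as well (the paths below are placeholders):

aspnet_compiler -v /MySite -p C:\Source\MySite -fixednames C:\Build\MySite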
All of that said, you shouldn't be having an issue as is. The request goes to IIS, which just serves up the page and compiles as needed. If the code-behind is different on each machine it really shouldn't matter; the code is the same, and each machine will reference its own copy. The actual request/postback doesn't know or care about any of that. Everything I said above should help simplify things, but it should be working anyway... so it's probably a machineKey issue.

You could move whatever is in your app_code to an external class library if your QA dept can promote that entire library. I think you are stuck with sticky sessions if you can't find a convenient or tolerable way to switch to a pre-compiled site.

If it's a hardware load balancer, you shouldn't have an issue, because all that is known there is the request URL; the server compiles the requested page and serves it.
The only issue I can think of that you might have is with session and view state.

It's true the hardware load balancer could be set to sticky sessions to solve the issue, but in a perfect world, I would like the feature turned off.

It appears that the <machineKey> is only for ViewState encryption. It doesn't affect the file names for auto-compiled assemblies.

I think the ASP.NET model has quite a bit of dependency on encryption and machine-specific storage, so I am not sure it works to avoid sticky IPs for sessions.
I don't know about ASP.NET AJAX (I use MonoRail's NJS approach instead), but session state could be an issue for you.
You have to make sure session state is serializable, and don't use in-process (InProc) session state. You probably need to run the ASP.NET Session State Server so that the whole frontend farm uses the same session storage. In that case the session has to be perfectly serializable (that's why keeping objects out of session is preferred and you should always store IDs instead, and I bet MS stuck to this limitation when they did the AJAX library development).
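If you go that route, a minimal sketch of the web.config change on each front-end server might look like this (the state server host name is a placeholder):

<sessionState
    mode="StateServer"
    stateConnectionString="tcpip=stateserver01:42424"
    cookieless="false"
    timeout="20" />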

Related

What exact settings are required in IIS for ASP.NET pages to process faster?

I have a simple blank page without any source code. The page is also taking a very long time to load, and I am not able to understand the reason behind this.
The domain is getting a high number of requests.
What exact settings need to be made in IIS 7.0 so that it will be faster?
Please help.
ASP.NET pages always have an initial delay when the first request is made after a file has been created/edited/uploaded, because the server needs to recompile them; however, it shouldn't be more than 2-3 seconds in practice, and it does not affect subsequent page loads.
The only thing I can think of is an overloaded server. Assuming you're on a shared hosting package, I recommend you find another ISP. If not, then I'm afraid there's a lot more to it than just a "make pages load faster" switch hidden away.

How do I stop IIS from executing cached ASP.NET classes that have been updated or deleted?

After updating some .ashx files and related DLLs, my server is still executing the old versions. It continues to execute them even if I remove all of the files from the server.
What can I do to fix this?
EDIT
After 10-15 minutes the server reloads the application. That is a livable situation for a production server, but for development it makes it very hard to get things done.
I have not been able to fix the server, but I came up with a hackish workaround: I created a page that calls HttpRuntime.UnloadAppDomain(). I just need to load that page when I push files to my development server.
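A minimal sketch of that kind of page as a generic handler (the file name is made up, and the lack of any access check is only for illustration; protect it before deploying anywhere real):

<%@ WebHandler Language="C#" Class="UnloadHandler" %>

using System.Web;

public class UnloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Unload the current app domain; the next request forces ASP.NET to reload the application.
        HttpRuntime.UnloadAppDomain();
        context.Response.ContentType = "text/plain";
        context.Response.Write("App domain unloaded");
    }

    public bool IsReusable { get { return false; } }
}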
Try removing the contents of the Temporary ASP.NET Files folder.
Even in shared hosting you should be able to restart the app pools of your website. Can you try this?
If you cannot do this, try this other trick: change the URL by adding an unused parameter, e.g. ?foo=1 (I normally use this trick for other reasons, but it might also work in this context).
Try changing the Web.config file (any change, doesn't matter what). This will force the ASP.NET app to reload.
Changing the DLLs in bin should have done this too, but it's worth also trying to change the Web.config file.

Clear ASP.NET OutputCache across web applications

Is it possible to clear the output cache of one asp.net web application from inside another asp.net web application?
Reason being... We have several web applications structured like...
http://www.website.com/intranet/cms/
http://www.website.com/area1/
http://www.website.com/area2/
Pages in /area1/ and /area2/ are cached and are managed through /intranet/cms/. When a page is edited using /intranet/cms/ I want to clear it out of the cache in the appropriate /area#/ application.
I already tried using VaryByCustom that looks up a GUID stored in HttpContext.Cache, but that seems to be cached per web application, so that doesn't work.
Really, if there were any way of passing data between web applications on a single server, that would solve my problem, since I could use that + VaryByCustom.
Thanks!
-Mike Thomas
The way I've done this in the past is to have a "hidden" page (in each of the /areaX sites) that does the flushing, reloading, etc. The page validates a shared secret query parameter before doing anything (to avoid DoS attacks). If the secret is valid, the page outputs an "OK" message once the operation is complete; if it is invalid, it generates a 404 error.
If you want the flush to be on a per-item or per-group basis then add a second parameter that identifies that item/group.
This method is also server technology independent, and can be triggered by other management tools if required.
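A minimal sketch of such a hidden endpoint as a generic handler, assuming ASP.NET output caching (the handler name, parameter names, and the hard-coded secret are all placeholders):

<%@ WebHandler Language="C#" Class="FlushCacheHandler" %>

using System.Web;

public class FlushCacheHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Any request without the shared secret gets a 404, as described above.
        if (context.Request.QueryString["secret"] != "replace-with-shared-secret")
        {
            context.Response.StatusCode = 404;
            return;
        }

        // Optional second parameter identifying the item to evict from this app's output cache.
        string path = context.Request.QueryString["path"];
        if (!string.IsNullOrEmpty(path))
        {
            HttpResponse.RemoveOutputCacheItem(path);
        }

        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable { get { return false; } }
}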
One way I know of doing this is by using a shared resource as a dependency, usually a file. When the file is changed, the cache is cleared. I think you can use HttpResponse.AddFileDependency for this.
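A minimal sketch of that idea, assuming the cached pages in each /area# app add a dependency on one common file that the CMS touches (the file path is an assumption):

// In a page that uses OutputCache, before the response is cached:
// the cached copy is invalidated whenever this shared file changes.
Response.AddFileDependency(@"C:\Shared\cache-reset.txt");

The CMS then only has to rewrite (or touch) that file after an edit.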
However, in these cases it's usually better to use an out-of-process cache such as memcached. I haven't tested it myself, but this link deals on using memcached with OutputCache.

Should I embed CSS/JavaScript files in a web application?

I've recently started embedding JavaScript and CSS files into our common library DLLs to make deployment and versioning a lot simpler. I was just wondering if there is any reason one might want to do the same thing with a web application, or if it's always best to just leave them as regular files in the web application, and only use embedded resources for shared components?
Would there be any advantage to embedding them?
I had to make this same decision once. The reason I chose to embed my JavaScript/CSS resources into my DLL was to prevent tampering of these files (by curious end users who've purchased my web application) once the application's deployed.
I was doubting and questioning the validity of Easement's comment about how browsers download JavaScript files. I'm pretty sure that the embedded JavaScript/CSS files are recreated temporarily by ASP.NET before the page is sent to the browser, in order for the browser to be able to download and use them. I'm curious about this and I'm going to run my own tests. I'll let you know how it goes....
-Frinny
Of course, anyone who knew what they were doing could use Reflector on the assembly and extract the JS or CSS. But that would be a heck of a lot more work than just using something like Firebug to get at this information. A regular end user is unlikely to have the desire to go to all of this trouble just to mess with the resources. Anyone who's interested in this type of thing is likely to be a malicious user, not the end user. You've probably got a lot of other security problems if a user is able to run a tool like Reflector against your DLL, because by that point your server has already been compromised. Security was not the factor in my decision for embedding the resources.
The point was to keep users from doing something silly with these resources, like delete them thinking they aren't needed or otherwise tamper with them.
It's also a lot easier to package the application for deployment purposes because there are fewer files involved.
It's true that the DLL (class library) used by the pages is bigger, but this does not make the pages any bigger. ASP.NET generates the content that needs to be sent down to the client (the browser). There is no more content being sent to the client than what is needed for the page to work. I do not see how the class library helping to serve these pages will have any effect on the size of data being sent between the client and server.
However, Rjlopes has a point, it might be true that the browser is not able to cache embedded JavaScript/CSS resources. I'll have to check it out but I suspect that Rjlopes is correct: the JavaScript/CSS files will have to be downloaded each time a full-page postback is made to the server. If this proves to be true, this performance hit should be a factor in your decision.
I still haven't been able to test the performance differences between using embedded resources, .resx files, and single files because I've been busy with my own endeavors. Hopefully I'll get to it later today, because I am very curious about this and the browser-caching point Rjlopes has raised.
Reason for embedding: Browsers don't download JavaScript files in parallel. You have a locking condition until the file is downloaded.
Reason against embedding: You may not need all of the JavaScript code. So you could be increasing the bandwidth/processing unnecessarily.
Regarding the browser cache, as far as I've noticed, the response from WebResource.axd says "304 Not Modified", so I guess they've been taken from the cache.
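For anyone comparing, a minimal sketch of how an embedded script is usually exposed through WebResource.axd (the namespace, resource name, and control type below are made up):

// In the class library's AssemblyInfo.cs:
[assembly: System.Web.UI.WebResource("MyCompany.Web.Scripts.common.js", "text/javascript")]

// In a page or control, emit a <script> tag that points at WebResource.axd:
string url = Page.ClientScript.GetWebResourceUrl(typeof(MyControl), "MyCompany.Web.Scripts.common.js");
Page.ClientScript.RegisterClientScriptInclude("common-js", url);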
I had to make this same decision once. The reason I chose to embed my JavaScript/CSS resources into my DLL was to prevent tampering of these files (by curious end users who've purchased my web application) once the application's deployed.
Reason against embedding: You may not need all of the JavaScript code. So you could be increasing the bandwidth/processing unnecessarily.
You know that if somebody wants to tamper with your JS or CSS, they just have to open the assembly with Reflector, go to the Resources, and edit what they want (it probably takes a lot more work if the assemblies are signed).
If you embed the JS and CSS in the page you make the page bigger (more KB to download on each request) and the browser can't cache the JS and CSS for subsequent requests. The good news is that you have fewer requests (at least 2 if you are like me and combine multiple JS and CSS files into one), plus JavaScript files have the problem of being downloaded serially.

Should you code ASP.NET sites so they can live in a VDIR?

Back in the days before Visual Studio Web Server we would host our local dev inside IIS. If you have a workstation version of IIS that means only 1 website. What if you are working on several websites. Easy: create them in VDIRs, e.g. http://localhost/ProjectA, http://localhost/ProjectB.
Living in a VDIR doesn't sound so hard. Make sure all your images/CSS/links are relative paths, and use the "~" a lot. That sounds like good practice. Hardcoding images etc. so they only work when the application is served from "/" sounds like bad practice.
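For example (the control ID and image path are just illustrations), the "~" form resolves correctly whether the app lives at the root or in a VDIR:

// Markup: <asp:Image ID="imgUp" runat="server" ImageUrl="~/images/up.jpg" />
// Code-behind equivalent when building a URL by hand:
string imageUrl = Page.ResolveUrl("~/images/up.jpg");
// Root site      -> "/images/up.jpg"
// /ProjectA vdir -> "/ProjectA/images/up.jpg"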
There are some nuances anywhere you have to build a link (mostly not common scenarios):
e.g. PROD: http://prodserver.com/images/up.jpg -> DEV: http://localhost/ProjectA/images/up.jpg
links / images in emails
flash/javascript/silverlight requests data from the server which contains links to images
The full link to a PayPal IPN handler (the page PayPal POSTs their response to)
So.. Are you doing this? Advantages / disadvantages? Any other gotchas I've missed?
I always avoid hardcoded paths, URLs, etc. unless there's a specific reason to do otherwise. Things inevitably change, and there's always the jump from your dev site to production.
The part that is usually the biggest nuisance would be in reusable client behaviors that need to reference other paths, and themselves can be reused in pages across the application's directory structure.
I like the idea of a handler that responds to "globalvars.ashx" (or something like that; there are many ways to handle this) which dynamically emits (and allows caching of) the global application properties.
Say the handler responsible for globalvars.ashx writes the result of something like this:
String.Format("var ApplicationProperties = {{ RootPath:'{0}' }};", Request.ApplicationPath);
Your JS behaviors could theoretically reference that property object at any point via ApplicationProperties.RootPath.
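A minimal sketch of what that handler could look like (the file name, caching policy, and property names are just illustrations):

<%@ WebHandler Language="C#" Class="GlobalVarsHandler" %>

using System;
using System.Web;

public class GlobalVarsHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/javascript";
        // Let browsers cache the emitted script for a while.
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddHours(1));
        context.Response.Write(String.Format(
            "var ApplicationProperties = {{ RootPath:'{0}' }};",
            context.Request.ApplicationPath));
    }

    public bool IsReusable { get { return true; } }
}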
In short, yes. The disadvantages of not doing so outweigh the benefits. I actually think your first two points can also be mostly mitigated by using app-relative paths ("~"), but nevertheless, some scenarios, such as "integration-level" ones (like PayPal), may indeed prove tricky.
But at the end of the day, if you need to host your app in a virtual directory you are virtually guaranteed problems if you haven't coded your app to be vdir-friendly from the get go. I know I have.
Some background/context: My current production environment at work is almost always a virtual directory anyway, so I do this by necessity. And I've never had a problem when an application was created as a root-level website. This certainly wouldn't be the case if it was the other way around.
