We inherited a website that is ~40GB in size (mostly from user-submitted content) that has a mixture of classic ASP, inline .Net, and compiled .Net associated with it. There are technically two sites associated with this, and to conserve disk space I intentionally set up IIS with two physical sites pointing to the same folder, but with independent application pools to separate the worker processes.
The problem we're running into is that occasionally, when you visit one of the sites, it seems to pick up Application variables from the other site somehow. From everything I've seen on here/Bing, the worker processes should be separate because of the independent application pools, but I'm wondering if classic ASP is throwing that off somehow. Given the mixture of everything imaginable (there are ~4,200 physical .asp/.aspx files, with the latter mostly starting with 03_ because that was the method chosen to start migrating to .Net before I got involved), is it better to have these as independent sites and folders? I can't seem to figure out why Application variables are getting flipped mid-stream, but it's causing numerous problems - specifically because connection strings are also Application variables and there are two databases behind this thing.
Any tips? Does classic ASP work differently from an Application variable/worker process perspective?
Side note - I know Application variables are a terrible choice in the .Net world, especially for connection strings. I'm in the process of trying to rectify this, but it's a massive undertaking where zero documentation or comments exist and these things are used everywhere.
Unless there is an intentional exchange between the Classic ASP site and the .NET application** (even when they run on the same site(!!!) with separate app pools, for obvious reasons), for example using an Application variable to store values, there is no way Classic ASP would pass a variable to a .NET application. Even Session variables are not shared between Classic ASP and .NET. It has to be done intentionally in code.
For example, if you use an iframe and pass a value in the URL query string from the .NET app to a Classic ASP page or back, or just call the .NET app from a Classic ASP page and pass values in the URL query string, or vice versa.
** Or by using cookies to share values between the applications, if you are running under the same domain.
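To make that concrete, the receiving side of such a hand-off would look something like this (a hypothetical sketch; the page and parameter names are invented, and the Classic ASP caller is shown only in the comment):

```csharp
// Handoff.aspx.cs - hypothetical .NET page receiving a value passed
// intentionally from a Classic ASP page, e.g. via:
//   Response.Redirect "Handoff.aspx?connName=" & Server.URLEncode(Application("connName"))
using System;
using System.Web;

public partial class Handoff : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Value passed in the URL query string by the Classic ASP side.
        string fromQueryString = Request.QueryString["connName"];

        // Or via a cookie, which both sides can read when they share a domain.
        HttpCookie cookie = Request.Cookies["connName"];
        string fromCookie = cookie != null ? cookie.Value : null;

        // Store it in this app's own Application state for later use.
        Application["ConnName"] = fromQueryString ?? fromCookie;

        // Without an explicit hand-off like this, the two runtimes'
        // Application and Session state remain completely separate.
    }
}
```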
I was tasked with introducing some new functionality into a legacy system which would be a great cost saving to the business.
An initial investigation revealed it was a Classic ASP project written with VB. Not being overly experienced with Classic ASP, I researched whether I could use ASP.Net for my additions and found they work together quite nicely. I introduced some simple new functionality, completed some initial tests and, confident it would work, set about the remainder of the project.
During development I found that I couldn't access data set in the Application_OnStart event of the global.asa file from my .ASPX pages, so as a workaround I created a Global.asax file which sets the same values in the Application_Start event. This has meant a little configuration duplication / loss of DRYness, since these files contain the database details rather than reading them in from a separate config file (these, in essence, ARE the config files), which was deemed an acceptable trade-off for the time saved separating them out.
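For reference, the duplication looks roughly like this (a minimal sketch; the connection details are placeholders):

```csharp
// Global.asax.cs - mirrors what global.asa sets in Application_OnStart, e.g.:
//   Sub Application_OnStart
//       Application("ConnectionString") = "Provider=SQLOLEDB;Data Source=..."
//   End Sub
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // The same values again, because .aspx pages cannot see the
        // Application values set by the Classic ASP global.asa.
        Application["ConnectionString"] =
            "Data Source=...;Initial Catalog=...;Integrated Security=True";
    }
}
```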
However, in moving from our DEV to our UAT environment I'm now finding that changes in the global.asa file on the UAT server are reflected immediately whilst changes in the Global.asax file require a recompile and redeploy of the .dll to take effect. Changes to .aspx pages are reflected immediately with no need to recompile.
Is this always the case, or have I inadvertently introduced this issue during the setup and development of the project? If so, can you explain how?
Classic ASP and ASP.net are different technologies, but you can use them in the same website (in the same way you can use Classic ASP and PHP in the same website if you really want to). One consequence of this - as you have discovered - is that each needs its own database connection. Another which frustrates many people is that session variables created in Classic pages are invisible to .net pages and vice versa.
As you have also discovered, Classic uses code which is executed at runtime while .net uses compiled code. The need to recompile the project following changes is part and parcel of ASP.net development.
I should also mention that Classic uses VBScript. This is a similar, but not identical, language to VB or VB.net.
Classic ASP is regarded as "legacy" - if you're unfamiliar with it then it makes sense to add any new functionality in a technology with which you are more familiar.
I'm trying to eliminate (or at least minimize) startup/warmup times for my .NET applications. I'm not really sure on how to do this even though it's a common concern.
There's a ton of questions about slow startup of .NET applications. These are easily explained by pool recycles, worker process startup, dynamic compilation of .aspx files, JIT etc. In addition, there are more things that may need to be initialized within the application such as EntityFramework and application caches.
I've found a lot of different solutions, such as:
ASP.NET Precompilation
IIS 8 Application Initialization (and for IIS 7.5)
Auto-Start ASP.NET Applications
However, I'm not entirely satisfied with any of the solutions above. Furthermore, I'm deploying my applications to Azure Websites (in most cases), so I have limited access to IIS.
I know that there are some custom "warmup scripts" that use various methods for sending requests to the application (e.g. wget/curl). My idea is to create a "Warmup.aspx" page in each of my ASP.NET applications. Then I have a warmup service that sends an HTTP GET to the Warmup.aspx of each site every ... 5 minutes. This service could be a WorkerRole in Azure or a Windows Service in an on-premise installation (a sketch of the service follows the list). Warmup.aspx will then do the following:
1. Send an HTTP GET to each .aspx file within the application (to dynamically compile the page). This could be avoided by precompiling the .aspx pages using aspnet_compiler.exe.
2. Send a query to the database to initialize EntityFramework.
3. Initialize application caches, etc.
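To make the service side concrete, it could be little more than a timed loop like this (a minimal sketch; the URLs are placeholders for my sites):

```csharp
// WarmupPinger.cs - sends an HTTP GET to each site's Warmup.aspx on a timer.
// Could be hosted in an Azure WorkerRole or an on-premise Windows Service.
using System;
using System.Net;
using System.Threading;

public static class WarmupPinger
{
    private static readonly string[] WarmupUrls =
    {
        "http://site-one.example.com/Warmup.aspx",
        "http://site-two.example.com/Warmup.aspx"
    };

    public static void Run()
    {
        while (true)
        {
            foreach (string url in WarmupUrls)
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        // Hitting the page forces compilation, EntityFramework
                        // initialization and cache population on the server.
                        client.DownloadString(url);
                    }
                }
                catch (WebException)
                {
                    // Ignore and retry on the next pass; the pinger must not die.
                }
            }

            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }
}
```

For step 1, the per-page compilation hit can also be removed up front, e.g. aspnet_compiler -v /MySite C:\PrecompiledSite (paths illustrative).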
So, my final question is whether there are better alternatives than my "Warmup.aspx" script? And is it a good approach or do you recommend some other method? I would really like some official method that would handle the above criteria.
Any and all suggestions are welcome, thanks!
Did you try the IIS Auto-Start feature described here?
https://www.simple-talk.com/blogs/2013/03/05/speeding-up-your-application-with-the-iis-auto-start-feature/
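If you go that route on .NET 4+, the part you write yourself is a preload class implementing IProcessHostPreloadClient, which IIS invokes before the first real request once auto-start is configured (a minimal sketch; what you warm up inside Preload is up to your app):

```csharp
// ApplicationPreload.cs - called by IIS before the first request when the
// application pool and site are configured for auto-start/preload.
using System.Web.Hosting;

public class ApplicationPreload : IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // Do the expensive one-time work here: touch the EF context,
        // prime application caches, etc.
    }
}
```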
You could have two instances of the site. When you need to deploy a new version, and therefore suffer a startup cycle, take one instance out of load balancer rotation, deploy to it and start it, put it back in, and do the same for instance 2. A rolling deployment.
Can I convert a non-MVC ASP.NET application to be Azure-compatible? Or if I want to create an Azure web application, should it be an MVC one?
The other answers answered your question about converting your app to MVC for deployment to Azure (you don't need to).
If you're creating a new web application and go with ASP.NET MVC (which I'd recommend), just remember that if you go with MVC3, you may have to make some of the MVC3 DLLs Copy Local for your deployment, as they won't be part of your web role instance. At least that's how I still understand it. Version 1.4 of the Azure SDK doesn't have an MVC3 Web Role template yet.
See this post on steps to get your MVC3 app Azure-ready.
Hope this helps.
You may take a look at the following blog post for migrating an existing ASP.NET application to Azure. It should not necessarily be an ASP.NET MVC application. Any ASP.NET application will work.
Azure has 2 roles:
1. a web role
2. a worker role
A web role is essentially an ASP.NET app, so there is no need to convert it into an MVC app; any ASP.NET application will do fine.
Yes, you can. But you need to be aware of certain limitations too, none of which were mentioned in the answers already given:
Your application should be stateless, unless you are running a single instance (for most apps 99.9% reliability is OK, but there are some where you want 99.95%, so you need at least two instances - plus it gives you the additional benefits of a load balancer, etc.). The reason for this is that if you have more than one instance, the load balancer may deliver each request to a different instance. You can use AppFabric Cache to solve this.
You don't have a file system - this is not entirely true, but in reality you should never rely on having local files. All your image uploads (e.g. user profile pictures) should go to blob storage and be linked from there (see the sketch after this list). How you do this is another matter, and one that can be approached differently depending on the architecture of your existing application. You can get away with files by using Azure Drive, but it's slow as hell.
No Event Log / RDP - this is also only partially true, but you should rely on other ways of getting diagnostics information from your role. While you can RDP to your role instance, there are better ways (e.g. Azure Diagnostics storage).
Database should be chosen carefully. Sure, you have SQL Azure available, but it's expensive (1 GB = 10 USD/month). If you can get away with something like Table Storage, you may save on some costs. Again, this depends a lot on the architecture.
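On the blob storage point above, the upload path looks roughly like this (a sketch using a newer Azure storage client library; the container name and connection string are placeholders):

```csharp
// Uploads a user image to blob storage instead of the local file system.
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ImageStore
{
    public static string Save(Stream image, string fileName)
    {
        CloudStorageAccount account =
            CloudStorageAccount.Parse("<your storage connection string>");
        CloudBlobClient client = account.CreateCloudBlobClient();

        CloudBlobContainer container = client.GetContainerReference("images");
        container.CreateIfNotExists();

        CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
        blob.UploadFromStream(image);

        // Link to this URI from your pages instead of to a local path.
        return blob.Uri.ToString();
    }
}
```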
As for the second part of your question: MVC as a pattern is nice. It saves you a lot of time, and it's much better adapted to the Web than WebForms ever will be. The event-based system was designed for desktop applications, and it was forced onto the web. However, going to Azure does not imply a requirement to go to MVC. What I suggest you do, however, is treat it as a nice jump-start opportunity to look into MVC and see how it could help you write your apps better & faster.
As with any other case involving architecture of apps, it depends. If you used common patterns (e.g. IOC, Repository), you will have a really easy time moving any app to Azure.
I have a need to run an application in classic mode for backwards compatibility with a specific application, and am trying to understand what kind of impact that will have on the performance of an MVC application that is running on the site.
If we put a few static file maps (for .js, .css, .png, etc) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching the integrated mode in terms of performance?
The thing I'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow the output cache to handle non-ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache.
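For context, by "output cache" I mean the standard per-action kind (a trivial sketch):

```csharp
// The output caching we rely on: the rendered result of an MVC action
// is cached, here for ten minutes regardless of parameters.
using System.Web.Mvc;

public class HomeController : Controller
{
    [OutputCache(Duration = 600, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View();
    }
}
```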
Empirically I've found that the two configurations operate on par when things go well, but if the page references resources that are not available, integrated mode tends to fail much more quickly than classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load.
Thanks for any feedback.
The thing is, you have told IIS to run that particular application in classic mode, so now you can sit back and see how it behaves - that's what I would do.
I don't think running only one application in classic mode should affect another application running in integrated mode.
Is it possible to have a central cache for an ASP.NET web application that is accessed using multiple domain names? The web application is using a single website and application pool, with multiple domains (host headers) pointing to it.
A bit of background - the application has a lot of data that doesn't change much, and to alleviate database load, I've been storing this in static variables. This has been working without any problems when there is only a single domain. However, with multiple domains, it seems that each domain name being used to access the website has its own copy of this data, so when it's invalidated in one site, the others still retain their own version causing it to never be updated.
I've tried changing this to use HttpRuntime.Cache instead of static variables, but this also exhibits the same problem where each domain being used to access the site seems to be storing its own version of the data.
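For reference, the pattern I'm using is roughly this (simplified; the names are illustrative):

```csharp
// Simplified version of the caching pattern described above.
using System.Web;

public static class ReferenceData
{
    private const string CacheKey = "ReferenceData.Lookups";

    public static object Get()
    {
        object data = HttpRuntime.Cache[CacheKey];
        if (data == null)
        {
            data = LoadFromDatabase();
            HttpRuntime.Cache.Insert(CacheKey, data);
        }
        return data;
    }

    // Called when the data changes - but it only clears the copy held by
    // whichever application instance handles the request.
    public static void Invalidate()
    {
        HttpRuntime.Cache.Remove(CacheKey);
    }

    private static object LoadFromDatabase()
    {
        return new object(); // placeholder for the real database query
    }
}
```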
Is there any way to cache data within an ASP.NET web application that can be shared (and invalidated) across all domains being used to access it?
Try memcached. You can use the BeIT .Net API.
Memcached runs as a stand-alone service and is meant to facilitate distributed caching. It runs under Windows and under Linux.
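If I recall the BeIT client correctly, usage is along these lines (a sketch; treat the exact API names as an assumption, and the server address and keys as placeholders):

```csharp
// Shared cache via memcached using the BeIT client (BeIT.MemCached).
// One memcached instance serves every site/domain, so an invalidation
// in one is seen by all.
using BeIT.MemCached;

public static class SharedCache
{
    static SharedCache()
    {
        MemcachedClient.Setup("shared", new[] { "127.0.0.1:11211" });
    }

    public static MemcachedClient Client
    {
        get { return MemcachedClient.GetInstance("shared"); }
    }
}

// Usage:
//   SharedCache.Client.Set("lookups", data);
//   object data = SharedCache.Client.Get("lookups");
//   SharedCache.Client.Delete("lookups");   // invalidates for all domains
```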
You could create a table in memory in your database and have all the applications pull from that. It should work essentially the same as your cache does now.
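A sketch of that idea (the table, column, and connection string names are invented for illustration):

```csharp
// Every site reads shared values from one database table instead of
// from per-application memory, so all domains see the same data.
using System.Data.SqlClient;

public static class DbCache
{
    public static string Get(string key, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Value FROM SharedCache WHERE CacheKey = @key", conn))
        {
            cmd.Parameters.AddWithValue("@key", key);
            conn.Open();
            return cmd.ExecuteScalar() as string;
        }
    }
}
```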