Double App Domains in ASP.NET 4 application

I've got an ASP.NET application running on IIS 7 with multiple application domains, and I can't fathom why there are multiple app domains in a single worker process. I've grepped my code base, and I'm not explicitly creating a second application domain. Is it possible that a recycled app domain has failed to time out and shut down?
These double domains persist for some time.
If a recycle occurs because of a web config or binary change, both app domains will go down, and two new ones will start up.
These servers are subject to several binary patches and IISResets per day - sometimes there are 2 domains, sometimes only 1.
Web gardening is disabled.
I discovered this because there is a timer in the application heartbeating to the database, and I noticed one day that the server had two heartbeats.
In WinDbg, !dumpdomain shows the following result (filtered to show only the names of the app domains):
Line 59: Name: None
Line 66: Name: None
Line 372: Name: DefaultDomain
Line 460: Name: /LM/W3SVC/1/ROOT/MyAppDomain-1-129882892717131250
Line 4437: Name: /LM/W3SVC/1/ROOT/MyAppDomain-4-129285605131450579

Even though you aren't creating an AppDomain, a library that you are using might be. What third-party components are you using? Do you have any Inversion of Control or Dynamic Proxy libraries that might be responsible? Here's an explanation of this happening with Castle.
Are you sure the application is only running in one place in IIS? It's possible to have multiple IIS sites/applications running off of the same files. This would be consistent with (1) getting your debug info from the db, rather than the app, and (2) the recycle due to editing web.config consistently resulting in duplicate domains. If one location is more commonly accessed than the other this could explain why there is sometimes only one AppDomain.
If you are leveraging ASP.NET's dynamic compilation and shadow copying feature, ASP.NET will at times have multiple AppDomains. Jim Schubert wrote an article called ASP.NET, AppDomains, and shadow-copying which explains this in more detail as well as makes several suggestions as to how to modify web.config to customize this. He also has a helpful answer over at Does my ASP.NET application stop executing if I overwrite the DLLs? Shadow copying can be disabled by setting <hostingEnvironment shadowCopyBinAssemblies="false" />.
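For reference, that setting lives under system.web in web.config; a minimal sketch:

<system.web>
  <!-- Run assemblies directly from bin instead of shadow copies. -->
  <hostingEnvironment shadowCopyBinAssemblies="false" />
</system.web>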
Update
I got sucked into Jim Schubert's blog and ended up reading this unrelated post on Allowing Only A Single Instance of a .NET application. If all else fails, you could use this approach to ensure only one instance of the application is running.

Have a look at your ApplicationHost.config.
Check maxProcesses; it should be 1.
It seems your IIS starts multiple worker processes.
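A sketch of the relevant section in ApplicationHost.config (the pool name here is an assumption):

<applicationPools>
  <add name="MyAppPool">
    <!-- maxProcesses > 1 means a web garden: several worker processes,
         each hosting its own AppDomain for the same application. -->
    <processModel maxProcesses="1" />
  </add>
</applicationPools>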

As suggested by the following answer https://stackoverflow.com/a/3318367/2001769
in another thread, it seems the ASP.NET runtime keeps a pool of HttpApplication instances (irrespective of maxProcesses / web gardening).
I don't know if it is possible, or even desirable, to control this pool. The best practice is to instantiate all application singletons in the Application_Start event, which is supposed to run only once per application, not once per pooled HttpApplication instance.
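To illustrate that practice, a minimal Global.asax sketch (the heartbeat timer ties back to the question; the details are assumptions): application-wide singletons go in Application_Start, not in Init or the constructor, which run once per pooled instance.

using System;
using System.Threading;

public class Global : System.Web.HttpApplication
{
    // Shared by every pooled HttpApplication instance in this AppDomain.
    public static Timer Heartbeat;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Runs once per AppDomain: the safe place for application singletons.
        Heartbeat = new Timer(_ => { /* write heartbeat to the database */ },
                              null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    public override void Init()
    {
        // Runs once per pooled HttpApplication instance --
        // do NOT create application-wide singletons here.
        base.Init();
    }
}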

If there are multiple IIS applications targeting the same path, an AppDomain will be created for each such IIS application when the app is invoked.
For example:
http://myServer/App1
http://myServer/App2
If both target the same path they still count as two different applications and two AppDomains will be created.

Related

ASP.NET Web API singleton doesn't work

I've used many ways to keep an object instance alive so its data can be shared between requests, but none of the methods, not even dependency injection, worked at all.
Finally I realized that my app was getting recycled on every request, and the reason was that I was writing log files to the bin folder. If you make any change in your bin directory, IIS will recycle your application.
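A sketch of the fix, assuming a simple file logger: write runtime files to App_Data, which ASP.NET excludes from the file-change monitoring that triggers recycles, instead of bin.

// Writing under bin trips the file-change watcher and recycles the app;
// App_Data is the conventional safe location for runtime-generated files.
var logPath = HttpContext.Current.Server.MapPath("~/App_Data/app.log");
System.IO.File.AppendAllText(logPath, DateTime.UtcNow + " request handled" + Environment.NewLine);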

Application Insights - Getting only client side data, no server data.

I have an ASP.Net MVC 4 application hosted on Windows Server 2008. I'm using Microsoft Application Insights, and it's working perfectly for client side metrics such as Client Processing Time, Custom Events, Users, Sessions, Page Views, etc. However, I cannot get any server-side metrics such as Processor Time or Available Memory. The areas are all covered by a banner that says something to the effect of "Learn how to collect server request data". When I click on the banner, it shows a blade with instructions, all of which I've already completed (the quick start).
In addition to installing the Application Insights SDK through VS 2013 (0.12.0-build17386), I've also installed and configured the Application Insights Status Monitor on the server. I've restarted IIS, and even restarted the server. Despite all this, I cannot get any server metrics. I've read the troubleshooting guide, and I've checked everything mentioned therein such as making sure the app pool identity is part of the "Performance Monitor Users" group.
I feel as though there is something I have to do to the ApplicationInsights.config file in order to either turn on and / or define the server metrics I want, but I simply cannot find any documentation on this.
Any help or suggestions would be greatly appreciated. Thanks!
No, you shouldn't need to do anything additional with ApplicationInsights.config. Performance counters are part of the default monitoring package, and almost all problems are caused by the user not being a member of the 'Performance Monitor Users' group, but that's not your case.
To be sure that config is correct you can check that the following module is defined in ApplicationInsights.config:
<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCollector.PerformanceCollectorModule, Microsoft.ApplicationInsights.Extensibility.PerfCollector"/>
Also, do you see any notifications in the Status Monitor, or any traces/exceptions in Diagnostic Search on the Application Insights resource overview blade?
Ok, we've got it. There was an ApplicationInsights.config in the root folder of the application, and that was the only one I've ever looked at. At Yulia Safarova's suggestion, I discovered another one inside the bin folder. This one did NOT have the module definition specified. (It was basically empty). I copied all the contents of the one from the root into the one in the bin folder, and all the data started to flow.
If you want server data like CPU, memory, and response rate to be displayed in Azure Application Insights, then along with adding the module above, also make sure that the web application's identity user is part of the Administrators group on the server, and that the flag below is turned on in web.config:
<add key="EnableAppInsightUsageCollection" value="true" />

Manually refresh output cache in IIS7

Problem description:
On our website we use the standard ASP.NET output cache with the duration set to 5 hours.
It works fine, but sometimes the publisher adds special content that needs to show up immediately on many different sub-pages (for example, a promoted article).
What I need is an easy-to-use admin page like this:
mydomain.com/admin/clear-all-website-output-cache.aspx.
I want to clear the SERVER SIDE CACHE.
Thanks for the help.
We use IIS 7 and ASP.NET 3.5.
See this ServerFault question: Will an IIS reset force cached items to be resent?
This says that you need to use IISRESET (or reset IIS any other way) to do it.
I assume recycling the application's app pool will have the same effect. It's good practice to have one application pool per application, so this should be less disruptive than resetting IIS when other critical applications are running.
If your app pool is shared with other applications, create a new one and switch the application to it in the application properties; that will likely have a similar effect.
BTW, I do not think stopping and starting the website (assuming the app has its own website) will have the same effect, as it will not stop the process instance that holds the cache, which is represented by the app pool. Not 100% sure though.
Use a cache dependency on some file; the cache will expire when the file changes.
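A minimal sketch of that approach (the trigger-file path is an assumption): every output-cached page declares a dependency on a shared file, and the admin page touches that file to flush them all at once.

// In each output-cached page (e.g. in Page_Load): tie the cached
// response to a shared trigger file.
Response.AddFileDependency(Server.MapPath("~/App_Data/cache-trigger.txt"));

// In /admin/clear-all-website-output-cache.aspx: touch the trigger file,
// which expires every dependent output cache entry on the server.
System.IO.File.SetLastWriteTimeUtc(
    Server.MapPath("~/App_Data/cache-trigger.txt"), DateTime.UtcNow);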

How do I force ASP.NET to invalidate (and thus reload) the current application instance?

Basically I want the effect that would occur if I were to edit the web.config file. The application basically completely unloads itself and starts again, thus re-firing Application_Start and also ditching any dynamically created Types created by the now-defunct AppDomain.
EDIT
I need to do this in my C# code inside my web application. I know it can be done; I did it ages ago but have since lost the code and forgotten how I did it.
For full trust you can use HttpRuntime.UnloadAppDomain(). If you aren't running in full trust you can modify the last write time on the web.config file. Rick Strahl has wrapped these two approaches up in a nice class.
You can "touch" the web.config file (i.e. rewrite it to disk unchanged), or any file in the bin directory to recycle the application. Of course this means the identity under which your application is running needs appropriate permissions.
Lately I seem to be answering my own questions a lot :P
Here we go:
HttpRuntime.UnloadAppDomain();
If all the options above fail, you can also create an endlessly recursive function as a last resort. The resulting StackOverflowException will force a reload of the application. (Don't do this while the Visual Studio debugger is attached.)
In IIS you can recycle the worker processes. You don't need to restart IIS.
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/24e3c22e-79a9-4f07-a407-dbd0e7f35432.mspx?mfr=true
If you have created a separate application pool for your application, you can recycle the application pool.
In general, it's always a good idea to have a separate app pool for each application.

How to deploy an ASP.NET Application with zero downtime

To deploy a new version of our website we do the following:
Zip up the new code, and upload it to the server.
On the live server, delete all the live code from the IIS website directory.
Extract the new code zipfile into the now empty IIS directory
This process is all scripted, and happens quite quickly, but there can still be a 10-20 second downtime when the old files are being deleted, and the new files being deployed.
Any suggestions on a 0 second downtime method?
You need 2 servers and a load balancer. Here it is in steps:
Direct all traffic to Server 2
Deploy on Server 1
Test Server 1
Direct all traffic to Server 1
Deploy on Server 2
Test Server 2
Direct traffic to both servers
Thing is, even in this case you will still have application restarts and loss of sessions if you are using "sticky sessions". If you have database sessions or a state server, then everything should be fine.
The Microsoft Web Deployment Tool supports this to some degree:
Enables Windows Transactional File System (TxF) support. When TxF support is enabled, file operations are atomic; that is, they either succeed or fail completely. This ensures data integrity and prevents data or files from existing in a "half-way" or corrupted state. In MS Deploy, TxF is disabled by default.
It seems the transaction is for the entire sync. Also, TxF is a feature of Windows Server 2008, so this transaction feature will not work with earlier versions.
I believe it's possible to modify your script for 0-downtime using folders as versions and the IIS metabase:
for an existing path/url:
path: \web\app\v2.0\
url: http://app
Copy new (or modified) website to server under
\web\app\v2.1\
Modify IIS metabase to change the website path
from \web\app\v2.0\
to \web\app\v2.1\
This method offers the following benefits:
In the event new version has a problem, you can easily rollback to v2.0
To deploy to multiple physical or virtual servers, you could use your script for file deployment. Once all servers have the new version, you can simultaneously change all servers' metabases using the Microsoft Web Deployment Tool.
You can achieve zero downtime deployment on a single server by utilizing Application Request Routing in IIS as a software load balancer between two local IIS sites on different ports. This is known as a blue-green deployment strategy, where only one of the two sites is available in the load balancer at any given time. Deploy to the site that is "down", warm it up, and bring it into the load balancer (usually by passing an Application Request Routing health check), then take the site that was originally up out of the "pool" (again by making its health check fail).
A full tutorial can be found here.
I went through this recently and the solution I came up with was to have two sites set up in IIS and to switch between them.
For my configuration, I had a web directory for each A and B site like this:
c:\Intranet\Live A\Interface
c:\Intranet\Live B\Interface
In IIS, I have two identical sites (same ports, authentication, etc.), each with its own application pool. One of the sites is running (A) and the other is stopped (B). The live one also has the live host header.
When it comes time to deploy to live, I simply publish to the STOPPED site's location. Because I can access the B site using its port, I can pre-warm the site so the first user doesn't cause an application start. Then, using a batch file, I copy the live host header to B, stop A, and start B.
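A sketch of what that batch file might do with appcmd (the site names and host header are assumptions):

rem Stop A, move the live host header binding from A to B, then start B.
%windir%\system32\inetsrv\appcmd stop site "Live A"
%windir%\system32\inetsrv\appcmd set site "Live A" /-bindings.[protocol='http',bindingInformation='*:80:www.example.com']
%windir%\system32\inetsrv\appcmd set site "Live B" /+bindings.[protocol='http',bindingInformation='*:80:www.example.com']
%windir%\system32\inetsrv\appcmd start site "Live B"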
OK so since everyone is downvoting the answer I wrote way back in 2008*...
I will tell you how we do it now in 2014. We no longer use Web Sites because we are using ASP.NET MVC now.
We certainly do not need a load balancer and two servers to do it, that's fine if you have 3 servers for every website you maintain but it's total overkill for most websites.
Also, we don't rely on the latest wizard from Microsoft - too slow, and too much hidden magic, and too prone to changing its name.
Here's how we do it:
We have a post build step that copies generated DLLs into a 'bin-pub' folder.
We use Beyond Compare (which is excellent**) to verify and sync changed files (over FTP because that is widely supported) up to the production server
We have a secure URL on the website containing a button which copies everything in 'bin-pub' to 'bin' (taking a backup first to enable quick rollback). At this point the app restarts itself. Then our ORM checks if there are any tables or columns that need to be added and creates them.
That is only milliseconds of downtime. The app restart can take a second or two, but during the restart requests are buffered, so there is effectively zero downtime.
The whole deployment process takes anywhere from 5 seconds to 30 minutes, depending how many files are changed and how many changes to review.
This way you do not have to copy an entire website to a different directory but just the bin folder. You also have complete control over the process and know exactly what is changing.
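A rough sketch of what the handler behind that button might do (the paths and backup naming are assumptions, not the author's actual code):

using System;
using System.IO;
using System.Web;

public static class Deployer
{
    // Hypothetical handler behind the secure deploy button.
    public static void PromoteBinPub()
    {
        var root = HttpRuntime.AppDomainAppPath;
        var bin = Path.Combine(root, "bin");
        var binPub = Path.Combine(root, "bin-pub");
        var backup = Path.Combine(root, "bin-backup-" + DateTime.UtcNow.ToString("yyyyMMddHHmmss"));

        // Back up the current assemblies first to enable quick rollback.
        Directory.CreateDirectory(backup);
        foreach (var dll in Directory.GetFiles(bin, "*.dll"))
            File.Copy(dll, Path.Combine(backup, Path.GetFileName(dll)));

        // Overwriting bin triggers the app restart described above.
        foreach (var dll in Directory.GetFiles(binPub, "*.dll"))
            File.Copy(dll, Path.Combine(bin, Path.GetFileName(dll)), true);
    }
}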
**We always do a quick eyeball of the changes we are deploying, as a last-minute double check, so we know what to test and we're ready if anything breaks. We use Beyond Compare because it lets you easily diff files over FTP. I would never do this without BC; you have no idea what you are overwriting.
*Scroll to the bottom to see it :( BTW I would no longer recommend Web Sites because they are slower to build and can crash badly with half-compiled temp files. We used them in the past because they allowed more agile file-by-file deployment. Very quick to fix a minor issue, and you can see exactly what you are deploying (if using Beyond Compare, of course; otherwise forget it).
Using Microsoft.Web.Administration's ServerManager class you can develop your own deployment agent.
The trick is to change the PhysicalPath of the VirtualDirectory, which results in an online atomic switch between old and new web apps.
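For illustration, a minimal sketch of that switch with Microsoft.Web.Administration (the site name and version path are assumptions):

using Microsoft.Web.Administration;

// Point the site's root application at the newly deployed version folder;
// IIS then spins up a new AppDomain against the new path.
using (var serverManager = new ServerManager())
{
    var app = serverManager.Sites["MySite"].Applications["/"];
    app.VirtualDirectories["/"].PhysicalPath = @"C:\web\app\v2.1";
    serverManager.CommitChanges();
}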
Be aware that this can result in old and new AppDomains executing in parallel!
The problem is how to synchronize changes to databases etc.
By polling for the existence of AppDomains with old or new PhysicalPaths it is possible to detect when the old AppDomain(s) have terminated, and if the new AppDomain(s) have started up.
To force an AppDomain to start you must make an HTTP request (IIS 7.5 supports an autostart feature).
Now you need a way to block requests for the new AppDomain.
I use a named mutex - which is created and owned by the deployment agent, waited on by the Application_Start of the new web app, and then released by the deployment agent once the database updates have been made.
(I use a marker file in the web app to enable the mutex wait behaviour)
Once the new web app is running I delete the marker file.
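A sketch of the handshake on the web app side (the mutex name and marker path are assumptions):

// In Global.asax: if the deployment marker file exists, block startup
// until the deployment agent releases the named mutex it owns.
protected void Application_Start(object sender, EventArgs e)
{
    var marker = Server.MapPath("~/App_Data/deploying.marker");
    if (System.IO.File.Exists(marker))
    {
        using (var gate = System.Threading.Mutex.OpenExisting(@"Global\MyAppDeployGate"))
        {
            gate.WaitOne();      // the agent holds this until DB updates finish
            gate.ReleaseMutex();
        }
    }
}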
The only zero downtime methods I can think of involve hosting on at least 2 servers.
I would refine George's answer a bit, as follows, for a single server:
Use a Web Deployment Project to pre-compile the site into a single DLL
Zip up the new site, and upload it to the server
Unzip it to a new folder located in a folder with the right permissions for the site, so the unzipped files inherit the permissions correctly (perhaps e:\web, with subfolders v20090901, v20090916, etc)
Use IIS Manager to change the site's home directory to the new folder
Keep the old folder around for a while, so you can fallback to it in the event of problems
Step 4 will cause the IIS worker process to recycle.
This is only zero downtime if you're not using InProc sessions; use SQL mode instead if you can (even better, avoid session state entirely).
Of course, it's a little more involved when there are multiple servers and/or database changes....
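For reference, switching to SQL-backed sessions is a web.config change along these lines (the connection string is a placeholder):

<system.web>
  <!-- Out-of-process sessions survive worker process recycles. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI"
                timeout="20" />
</system.web>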
To expand on sklivvz's answer, which relied on having some kind of load balancer (or just a standby copy on the same server)
Direct all traffic to Site/Server 2
Optionally wait a bit, to ensure that as few users as possible have pending workflows on the deployed version
Deploy to Site/Server 1 and warm it up as much as possible
Execute database migrations transactionally (strive to make this possible)
Immediately direct all traffic to Site/Server 1
Deploy to Site/Server 2
Direct traffic to both sites/servers
It is possible to introduce a bit of smoke testing, by creating a database snapshot/copy, but that's not always feasible.
If possible and needed, use "routing differences", such as different tenant URLs (customerX.myapp.net) or different users, to deploy to an unknowing group of guinea pigs first. If nothing fails, release to everyone.
Since database migrations are involved, rolling back to a previous version is often impossible.
There are ways to make applications play nicer in these scenarios, such as using event queues and playback mechanisms, but since we're talking about deploying changes to something that is in use, there's really no foolproof way.
This is how I do it:
Absolute minimum system requirements:
1 server with
1 load balancer/reverse proxy (e.g. nginx) running on port 80
2 ASP.NET-Core/mono reverse-proxy/fastcgi chroot-jails or docker-containers listening on 2 different TCP ports
(or even just two reverse-proxy applications on 2 different TCP ports without any sandbox)
Workflow:
start transaction myupdate
try
Web-Service: Tell all applications on all web-servers to go into primary read-only mode
Application switch to primary read-only mode, and responds
Web sockets begin notifying all clients
Wait for all applications to respond
wait (custom short interval)
Web-Service: Tell all applications on all web-servers to go into secondary read-only mode
Application switch to secondary read-only mode (data-entry fuse)
Updatedb - secondary read-only mode (switches database to read-only)
Web-Service: Create backup of database
Web-Service: Restore backup to new database
Web-Service: Update new database with new schema
Deploy new application to apt-repository
(for windows, you will have to write your own custom deployment web-service)
ssh into every machine in array_of_new_webapps
run apt-get update
then either
apt-get dist-upgrade
OR
apt-get install <packagename>
OR
apt-get install --only-upgrade <packagename>
depending on what you need
-- This deploys the new application to all new chroots (or servers/VMs)
Test: Test new application under test.domain.xxx
-- everything that fails should throw an exception here
commit myupdate;
Web-Service: Tell all applications to send web-socket request to reload the pages to all clients at time x (+/- random number)
#client: notify of reload and that this causes loss of unsaved data, with option to abort
# time x: Switch load balancer from array_of_old_webapps to array_of_new_webapps
Decommission/Recycle array_of_old_webapps, etc.
catch
rollback myupdate
switch to read-write mode
Web-Service: Tell all applications to send web-socket request to unblock read-only mode
end try
A workaround with no downtime that I use regularly:
Rename running .NET core application dll to filename.dll.backup
Upload the new .dll (web application is available and serving the requests while file is being uploaded)
Once upload is complete recycle the Application Pool. Either Requires RDP Access to server or function to recycle application pool in your hosting control panel.
IIS overlaps the app pool when recycling, so there usually isn't any downtime during a recycle: requests keep coming in without ever knowing the app pool has been recycled and are served seamlessly.
I am still searching for a better method than this..!! :)
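For context, overlapped recycling is controlled per pool in applicationHost.config (shown with its default value; the pool name is an assumption):

<applicationPools>
  <add name="MyAppPool">
    <!-- false (the default) keeps the old worker process serving requests
         until the new one is ready, making the recycle seamless. -->
    <recycling disallowOverlappingRotation="false" />
  </add>
</applicationPools>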
IIS/Windows
After trying every possible solution we use this very simple technique:
IIS application points to a folder /app that is a symlink (!) to /app_green
We deploy the app to /app_blue
We change the symlink to point to /app_blue (the app keeps working)
We recycle the application pool
Zero downtime, but the app does choke for 3-5 seconds (JIT compilation and other initialization tasks)
Someone called it a "poor man's blue-green deployment" without a load balancer.
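On Windows the symlink swap might look like this (the paths are assumptions; mklink needs an elevated prompt):

rem Recreate the directory symlink so /app points at the new version.
rmdir C:\web\app
mklink /D C:\web\app C:\web\app_blue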
Nginx/linux
On nginx/linux we use "proper" blue-green deployment:
nginx reverse proxy points to localhost:3000
we deploy to localhost:3001
warmup the localhost:3001
switch the reverse proxy
shut down localhost:3000
(or use docker)
Both windows and linux solutions can be easily automated with powershell/bash scripts and invoked via Github Actions or a similar CD/CI engine.
I would suggest keeping the old files there and simply overwriting them. That way the downtime is limited to single-file overwrite times, and there is only ever one file missing at a time.
Not sure this helps in a "web application" though (I think you are saying that's what you're using), which is why we always use "web sites". Also, with "web sites", deploying doesn't restart your site and drop all the user sessions.
