On a server running multiple ASP.NET sites, is it better to use one application pool per site, or for sites to share a single application pool? What are the advantages and disadvantages inherent to each setup? Or is there a hard and fast rule here?
This really depends on the requirements of the site, as well as your level of concern regarding risk.
When two applications run inside the same application pool they run at the same security level, so there is a security concern here for some: in theory, each could access the other's files. Also, if one site starts having issues and consuming excessive memory, it could cause recycles or freezes that impact both.
Although there is no "hard and fast" rule here, some of the things I consider, and that make the decision "automatic" for me, are the following.
Is the application mission critical? (If so, separate app pool)
Is this a third party application? (If so, and unsure of what all it does, separate app pool)
Will this application see major spikes in activity? (If so, it might be best to isolate)
There is a lot out there, but the keys are isolation and ability to troubleshoot single applications. Here is a Microsoft article that touches on it a bit as well.
One of the advantages of separate AppPools is that in the event that you need to recycle the AppPool you can do so for one site without affecting the performance (or caching) of the others.
One critical rule to remember:
DO NOT put .NET 1.1 and .NET 2.0 applications in the same application pool. A worker process can only load one version of the CLR, so mixing them will mess things up really fast.
It would depend a lot on whether certain sites need more reliability than others, what your expected load per site is, etc.
Sharing a pool will be more efficient in general, but a single misbehaving application can more rapidly cause problems for other sites. Additionally, you can recycle or update separate app pools independently, which may make maintenance scheduling easier.
There's no hard and fast rule. I tend to use one app pool per site (and IIS7 even defaults to creating one per site) because I like to play it safe: if one site/pool has a memory leak, I don't want it taking down the other sites. But I've also got some servers where 100 sites share a single pool without issue. So, as always, it depends.
I would strongly prefer a single application per application pool, unless you have performance concerns. An exception in a background thread (once someone starts doing async work) can bring down the whole app pool, and unless you have automatic recycling enabled this can cause a lot of trouble.
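To illustrate that failure mode, here is a minimal console sketch (not ASP.NET-specific): since .NET 2.0, an unhandled exception on any thread tears down the whole process, which in IIS means the worker process serving every site in the pool.

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Queue background work with no try/catch around it.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // This exception escapes the thread and terminates the
            // entire process, not just this work item.
            throw new InvalidOperationException("background failure");
        });

        Thread.Sleep(1000);
        Console.WriteLine("This line is typically never reached.");
    }
}
```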
I have to see if I can work around a known thread-safety issue in a third-party component. The plan was to let an ASP.NET app talk to the third-party component via a WCF service, based on the assumption that I would be able to assign the WCF service to its own application pool, restrict the pool to one worker process, and restrict the worker process to one thread. Requests to the service will have to wait their turn, but that's OK because we expect them to require very little time and to be rare as well.
Problem is, I can't find anything that suggests how to achieve the last part: "restrict the worker process to one thread".
I have found a few useful pages, but no solution:
On the IIS forums, the discussion seems to suggest that I can achieve this in IIS 7+, but it does not mention IIS 6.
On the MSDN blogs and a linked MSDN book chapter (can't put the link because I'm new here!), the discussion is mostly about what you can set via the machine.config file, which, if I'm understanding it correctly, applies to all app pools/worker processes. That is not what I'm trying to do: I would like to control what one single app can do, leaving the other applications untouched.
Questions:
Is it possible to achieve what I'm trying to do (assign the WCF service to its own application pool, restrict the pool to one worker process, and restrict the worker process to one thread) via IIS 6 configuration?
If not, can this be achieved programmatically somehow? For example, using locks or other threading-related tricks within my WCF service implementation.
It doesn't make much sense to me. A web server has to be multithreaded by definition, because it must handle different incoming messages at the same time; if there is only one thread and it is in use, any new request will fail.
What about wrapping the component in a class with a SynchronizationAttribute, so only one thread at a time can access the component? Even if this makes your solution less scalable, at least it may work.
http://msdn.microsoft.com/en-us/library/system.runtime.remoting.contexts.synchronizationattribute(v=vs.110).aspx
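Along the same lines, WCF itself can serialize calls for you without any IIS-level settings. A hedged sketch, where IWrapperService and LegacyComponent are hypothetical stand-ins for your actual service contract and the thread-unsafe third-party component:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IWrapperService   // hypothetical contract
{
    [OperationContract]
    string Invoke(string input);
}

// InstanceContextMode.Single: one service instance serves all callers.
// ConcurrencyMode.Single: WCF dispatches at most one call at a time,
// so the wrapped component is never entered concurrently.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class WrapperService : IWrapperService
{
    private readonly LegacyComponent component = new LegacyComponent();

    public string Invoke(string input)
    {
        // Serialized by WCF's dispatcher; callers queue up and wait.
        return component.DoWork(input);
    }
}

// Stand-in for the real third-party, thread-unsafe component.
public class LegacyComponent
{
    public string DoWork(string input) { return input; }
}
```

The trade-off is the same as with SynchronizationAttribute: throughput is capped at one request at a time, which matches the stated assumption that calls are short and rare.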
Basically I have a website that runs a background task to perform some "maintenance" duties while the site is not idle. When it is idle, these processes do not need to run.
Now I have a secondary website (a virtual directory under the main site) that needs to execute these tasks as well. However, they can't both run the tasks at the same time, or it will cause issues.
Now the more correct solution would probably be to merge the sites, break the tasks out into a separate application, or change the tasks so that they do not conflict with each other when both run at the same time. For one reason or another, these are not (currently) options.
So basically when the secondary website is active, what would be the best way to make sure the primary website is awake and running these tasks?
I'm thinking the easiest solution would be to include a reference to the main website from the secondary website, so that any page load on the secondary website would force the first website to serve a request. Something like a 1px image.
But would this be better solved through IIS? Should they share the same application pool? Both applications are relatively stable, so I'm not too worried about one website bringing down the other.
Your question is a little confused. At one point you say that the two sites can't run the task at the same time, but then you say that when the secondary is active the primary should also be active?
Assuming your goal is to have the task run in only one place at a time, you basically have a locking problem. Some solutions would be:
Maintain a lock (e.g. physical file, database entry, ...) and only run the task if the other site isn't holding the lock (see the sketch after this list).
Make the task callable in site A and then have site B call it rather than running the task itself. Site A can then track if it is already running the task.
Any of the solutions you listed yourself (especially separating the background tasks away from the websites altogether), all of which are better than the two above.
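For the first option, here is a minimal sketch of a cross-process lock built on an exclusive file handle; the path and names are placeholders, and it assumes both app pools can write to the same folder:

```csharp
using System;
using System.IO;

public static class MaintenanceLock
{
    // Any path both sites' app pool identities can write to.
    private const string LockPath = @"C:\inetpub\shared\maintenance.lock";

    public static bool TryRunExclusive(Action task)
    {
        try
        {
            // FileShare.None: a second opener gets an IOException,
            // so only one process can hold the lock at a time.
            using (new FileStream(LockPath, FileMode.OpenOrCreate,
                                  FileAccess.ReadWrite, FileShare.None))
            {
                task();
                return true;
            }
        }
        catch (IOException)
        {
            // The other site is already running the task; skip this round.
            return false;
        }
    }
}
```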
Hope that helps.
It's kind of hard to depend on a web application to run an automated task. If the application pool is recycled, that task is lost and the process could lose its integrity. There is no guarantee that the process will degrade nicely when the application pool is recycled.
Running a dedicated task is better handled by a system that is set up to be dedicated, namely the server hosting this web service. I think you would be better off running the task directly on the server, locally, instead of exposing it to the web as a web service.
However, if that is not an option, then you will probably benefit from merging the sites as you state. This way, as long as the application pool of the child service is alive, the parent service will be available. Similarly, you could have them share an application pool. The issue here, as stated above, is that the integrity of the dedicated process could become compromised.
ASP.NET is not really designed to run dedicated background tasks. That is better suited to a desktop application or local service that supports the web service.
Recently our customers started to complain about poor performance on one of our servers.
This server hosts multiple large CMS implementations and a lot of small websites using Sitefinity.
Our hosting team is now trying to find the bottlenecks in our environments, since there are some major issues with load times. I've been given the task of specifying one big list of things to look out for, divided into different parts (IIS, ASP.NET, web-specific).
I think it'd be good to find out how many instances of the Sitecore CMS we can run on one server according to the Sitecore documentation and the like. We want to be able to monitor and find out where our bottleneck is at this point. Some of our websites load terribly slowly, other websites load very fast. Most of our Sitecore implementations that run on this server have poor back-end performance, and have terrible load times after a compilation.
Our Sitecore solutions run on a Windows Server 2008 64-bit machine with Microsoft SQL Server 2008 for the databases.
I understand that it might be handy to give more detailed information about our setup, but I'm hoping we can get some useful basic information on how to monitor and find bottlenecks and the like.
What tools / hints / tips & tricks do you have?
Do NOT use too many different ASP.NET pools (what Plesk calls a "dedicated pool"). Place more sites in the same pool.
Add more memory, or stop unused programs/services on the server.
Check whether you have memory limits on the application pool that cause continuous auto-restarts (see the config sketch after these tips).
On the database, set the Recovery Model to Simple.
Shrink the database files, and reindex the database, from inside SQL Server.
After all that, defragment your disks.
Check the memory with Process Explorer.
To check what starts with your server, use Autoruns, but be careful not to stop any critical service, or the computer may never start again. Do not stop services from Autoruns; use the Services manager to change the startup type to Manual. Also, many SQL Server services do not need to run if you never use them.
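For the memory-limit tip above: on IIS 7+, those limits live in applicationHost.config (a sketch; the pool name and numbers are just examples, and the values are in kilobytes). A privateMemory limit far below the sites' actual working set will make the pool recycle constantly:

```xml
<!-- applicationHost.config, under <system.applicationHost>/<applicationPools> -->
<add name="SharedPool">
  <recycling>
    <!-- Values in KB; 0 disables the limit. A limit that is too low
         for the sites in the pool causes continuous recycles. -->
    <periodicRestart privateMemory="524288" memory="0" />
  </recycling>
</add>
```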
Some other tips
Move the temporary files (and maybe the ASP.NET build directory) to a different disk.
Delete all files from the temporary directory (cd %temp%).
Make sure the free physical memory is not zero, using Process Explorer. If it is near zero, then your server needs more memory, or you need to stop unused programs from running.
To place many sites under the same pool, you need to change the permissions of the sites under the new shared pool. It's not difficult; it just takes some time and organization to know which site runs under which pool. Now, let's say you have 10 sites: it's better to use 2 different pools and spread the sites across those pools based on the load of each site.
There is no immediate answer to Sitecore performance tuning, but here are some vital tips:
1) CACHING
Caching is everything. The default Sitecore cache parameters are rarely correct for any application. If you have lots of memory, you should increase the cache sizes:
http://learnsitecore.cmsuniverse.net/en/Developers/Articles/2009/07/CachingOverview.aspx
http://sitecorebasics.wordpress.com/2011/03/05/sitecore-caching/
http://blog.wojciech.org/?p=9
Unfortunately this is something the developer should be aware of when deploying an installation, not something the system admin should care about...
2) DATABASE
The database is the last bottleneck to check. I rarely touch the database. However, the DB performance can be increased with the proper settings:
Database properties that improve performance:
http://www.theclientview.net/?p=162
This article on index fragmentation is very helpful:
http://www.theclientview.net/?p=40
I can't speak for Sitefinity, but here are some tips for Sitecore.
Use Sitecore's caching whenever possible, especially on XSLTs (as they tend to be simpler than layouts & sublayouts, Sitecore caching doesn't break them the way it breaks ASP.NET postbacks). Of course this will only help if the renderings & sublayouts etc. are accessed a lot. Use /sitecore/admin/stats.aspx?site=website to check what isn't cached.
Use Sitecore's profiler: open up an item in the profiler and see which sublayouts etc. are taking time.
Only use XSLTs for the simplest content; if it gets any more complicated than that, I'd go for sublayouts (ASP.NET controls). This is a bit biased, as I'm not fond of XSLT, but experience indicates that .ascx files are faster.
Use IIS content expiration on the static files (probably all of /sitecore, plus any images, JavaScript & CSS files). This is for IIS 6: msdn link
Check database access times with Sitecore's DatabaseTest.aspx (the one for Sitecore 6 is a lot better than the simple one that works on Sitecore 5 & 6): Sitecore SDN link
And that's what I can think of off the top of my head.
Sitecore has a major flaw: it uses GUIDs for primary keys (amongst other poorly chosen data types). This fragments the tables from the first insert, and if you have a heavily utilised Sitecore database, the fragmentation can exceed 90% within an hour. This is not a well-designed database, and I recommend looking at other products until they fix this; it is causing us a major performance headache (time and money).
We are at a standstill: we cannot add any more RAM and cannot rebuild the indexes more often.
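If you want to verify how bad the fragmentation actually is, a query along these lines (SQL Server 2005+, run inside the Sitecore content database) will show it, worst indexes first:

```sql
-- Average fragmentation per index in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
```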
Also, set your IIS to recycle the app pool ONLY once a day, at a specific time. I usually set mine for 3am. This way the application never goes to sleep or recycles unexpectedly, which is the best way to reduce spin-up times.
Additionally, configure IIS to 'always running' instead of on-demand startup. This way, when the application restarts, it recompiles immediately and, again, is ready to roar (a config sketch follows).
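A sketch of what those two settings look like in applicationHost.config; the pool name is a placeholder, and startMode="AlwaysRunning" requires IIS 7.5 or later:

```xml
<add name="MyAppPool" startMode="AlwaysRunning">
  <!-- Never shut the worker process down when idle. -->
  <processModel idleTimeout="00:00:00" />
  <recycling>
    <!-- time="00:00:00" disables the default 29-hour rolling recycle;
         the schedule recycles once a day at 03:00 instead. -->
    <periodicRestart time="00:00:00">
      <schedule>
        <clear />
        <add value="03:00:00" />
      </schedule>
    </periodicRestart>
  </recycling>
</add>
```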
Sitefinity is really a fantastic piece of software (hopefully my tips above get the thumbs up, and not my endorsement of the product). haha
We have a couple of sites which have some memory problems and fairly often crash, taking a lot of other sites down with them. So I'm planning to at least put the troublesome sites into their own individual app pools. I just wanted to make sure that this is a safe move, or whether there is anything I should know about separating app pools.
There is a memory overhead associated with each app pool (a couple of hundred megabytes, if memory serves), so if the problem is a lack of overall memory, this might not help.
Having said that, we run all of our websites in their own app pools by default, but we tend to have a maximum of a handful on a single server, so I have no experience of large numbers of app pools at once.
We also find that being able to recycle a single app pool without affecting the other sites is a big advantage.
Finally, as well as isolation (as Guffa mentions), you can tune the settings of each app pool to restrict memory use etc., as well as identities and permissions. (We tend to run different websites under different accounts that only have permissions to their own databases, for example.)
There are a few questions about this on Server Fault too:
https://serverfault.com/questions/2106/why-add-additional-application-pools-in-iis
Advantages:
Separate credentials per app pool/web site
Better isolation, recycling etc.
Separate configuration (timeout, memory etc.)
Disadvantages:
Memory usage increases (not so much an issue with 64-bit these days)
Generally, there is no downside. If you have a massive number of sites on one web server, whether or not to have separate app pools is a minor issue.
The advantage is that you isolate each site in its own process.
The disadvantage is that you get more processes running on the server.
So, if you do this for just a few sites, there should be no problem at all.
Our company currently runs two Windows 2003 servers (a web server & a MSSQL 8 database server). We're planning to add another couple of servers for redundancy/availability purposes in a web farm setup. Our web sites are predominantly ASP.NET; we do have a few PHP sites, but these are mainly static with no DB.
Does anyone who has been through this process have any gotchas or other points I should be aware of? And would using Windows Server 2008 offer any additional advantages for this situation (so I can convince my boss to upgrade :) ?
Thanks.
If you have dynamic load balancing (i.e. my first request goes to server X, but my next request may go to server Y or Z), you will find that in-process sessions do not work. So you will either need sticky sessions (your load balancer will ALWAYS send me (= my session) to server X) or out-of-process sessions (e.g. stored in a SQL Server); a config sketch follows.
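A web.config sketch of the out-of-process option; the server name is a placeholder, and the session database has to be prepared first with the aspnet_regsql tool:

```xml
<system.web>
  <!-- Every server in the farm reads and writes session state in one
       SQL Server, so it no longer matters which node gets a request. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=dbserver;Integrated Security=True"
                timeout="20" />
</system.web>
```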
Like Michael says, you'll need to take care of your session state. Ideally, make it lean and store it out of process. You'll have a similar challenge with caching, depending on how you use it, and you might be interested in looking at a more robust caching technology if you only use ASP.NET caching.
Don't forget things like machine keys and validation in your web.config: the machineKey needs to be consistent across your servers (see the sketch below).
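A minimal sketch of that element; the key values are placeholders, generate them once and copy the identical element onto every server:

```xml
<system.web>
  <!-- Identical keys on every node, so ViewState and forms-auth tickets
       produced by one server validate on all the others. -->
  <machineKey validationKey="...same-generated-value-on-all-servers..."
              decryptionKey="...same-generated-value-on-all-servers..."
              validation="SHA1"
              decryption="AES" />
</system.web>
```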
Read up on IIS7 and you should be able to pick out several good examples to show off to your boss.
A web farm can give you opportunities and challenges with deployment that should not be overlooked.
Without specific experience of the setup above, but speaking to general moves of this kind: I would recommend phasing the approach. That is, move to Windows 2008 first, and then to the farm.
One additional thing to look at is your deployment plan. Deployment plans seem to be sadly overlooked and/or undervalued. Remember that you are deploying to multiple nodes and you want to take into account how you want to deploy and test in a logical fashion.
For example, assume you have four nodes in your farm. Do you pull two out of the cluster and update and test, then swapping out the other two to repeat? Determine if your current deployment process fits in with the answer you provide. Just because you have X times the amount of servers does not mean that you want or need to do X times the amount of work.
Just revisiting the caching part of the conversation for a moment. You should definitely take a look at a distributed caching solution. If you are pre-caching data and using callbacks with cache removals, you can really put a pounding on the database if you are not careful. Also, a lot of the distributed caching solutions offer some level of session state management, as well. I have been very much enjoying Microsoft's Velocity project, although it is just a second CTP release and not ready for production.
In addition to what others have said, you might want to consider looking into Richard Campbell's (of .NET Rocks!) product:
http://www.strangeloopnetworks.com/
We use the ASP.NET State Server for handling our sessions. This comes free with Windows Server 2003/2008.
We then have to make sure the machine keys are the same (a setting in your web.config files).
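For reference, the State Server variant of the sessionState element looks roughly like this; the host name is a placeholder, and the ASP.NET State Service has to be running on that machine (default port 42424):

```xml
<sessionState mode="StateServer"
              stateConnectionString="tcpip=stateserver:42424"
              timeout="20" />
```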
I then manually take each site offline (using app_offline.htm, the magic file). Alternatively, you can use IIS and just turn the site off and the offline site on.
That's about it. You could worry about distributed caching, but that's pretty hard-core stuff. You can get a lot of good mileage out of the default output caching in ASP.NET; I'd start there before you delve into the complexity (and cost, for some products) of distributed caching.
Oh, and we're using an F5 load balancer that does NOT do sticky sessions, so we need to maintain our sessions ourselves, which is why we're using the ASP.NET State Server.
One other gotcha, aside from the session issues described by the other posters, is apps that write to the local file system. Scaling out to a web farm will break apps that assume files are on the local machine; for example, uploaded files might or might not be available depending on which server is hit. Changing the paths to point to a shared drive should fix this (see the sketch below).
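A minimal sketch of the fix, assuming the shared folder is read from an appSetting; the setting name and the UNC path are placeholders:

```csharp
using System.Configuration;
using System.IO;

public static class UploadStore
{
    // Every server in the farm must point at the same share.
    private static readonly string Root =
        ConfigurationManager.AppSettings["UploadRoot"] ?? @"\\fileserver\webuploads";

    public static void Save(string fileName, byte[] contents)
    {
        File.WriteAllBytes(Path.Combine(Root, fileName), contents);
    }
}
```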