ASP.NET: increase MaxProcesses (web garden) using state server and caching

I have an ASP.NET website on IIS7 and I am planning to increase MaxProcesses to match the number of cores on the server (4 cores, 64-bit Windows Server 2008).
From what I have read, if I increase MaxProcesses to create a web garden I have to set up an out-of-process state server, so I am planning to use the ASPState service to share sessions between worker processes.
But there is something that is not clear to me: is caching also shared? Or do I have to set up a new custom provider for the cache?
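For reference, a minimal sketch of the web.config change described above, assuming the ASP.NET State Service (ASPState) runs on the same machine on its default port, 42424:

    <!-- web.config: move session state out of the worker processes -->
    <system.web>
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=127.0.0.1:42424"
                    timeout="20" />
    </system.web>

One thing to keep in mind: once session state leaves InProc, everything you put in Session must be serializable.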

In-process cache is never shared in a web garden.
But here's the REAL thing... I question the motivations behind what you're doing. If the object is to use your cores more efficiently, then you can just increase the number of request and/or worker threads you have running your ASP.NET application. Running multiple w3wp processes isn't necessarily the option you want. If you have some constrained resource, like an old in-process COM object that scales poorly with threads, then I can see how you might scale better with multiple processes. But unless you really know what you're doing and why, gently step back from that setting and leave it at 1. ;-)

Caching is not shared. The web garden creates multiple "w3wp" processes. Each process will have its own cache.

If you want to share the cache then use something like memcached Win32 (with the Enyim cache client) or use the newer Microsoft product, Velocity. This way, once you move beyond one server you will already be set up architecturally to handle it.


ASP.NET hosting with unlimited single-node scalability

Since this question is from a user's (developer's) perspective I figured it might fit better here than on Server Fault.
I'd like an ASP.NET hosting service that meets the following criteria:
The application seemingly runs on a single server (so there is no need to worry about e.g. session state or even static variables).
There is an option to scale storage, memory, DB size and CPU power up and down on demand, in an "unlimited" way.
I have researched this, but there doesn't seem to be such a platform, one that completely abstracts the underlying architecture away and thus has the ease of use of simple shared hosting but "unlimited" scalability.
"Single server" and "scalability" are mutually exclusive, I'm afraid. But a good load-balancer will apply affinity to requests so you don't need to needlessly double-cache data on multiple servers.
However, well-designed web applications are easy to port to a multiple-server scenario.
I think your best option is something like Windows Azure Websites (separate from Azure Web Workers), which run on a VM you don't have access to. The VM itself provides as much power as is necessary to run your website, so you don't need to worry about allocating extra CPU power or RAM.
Things like SQL Server are handled separately, but they are very cheap to run, and you can drag a slider to give yourself more storage space.
This can still be accomplished by using a cloud host like www.gearhost.com. Apps live in the cloud and by default get one worker node, so session stickiness is maintained. You can then scale that application to larger workers to accomplish what you need, all while maintaining HA and LB. You can go even further and add multiple web workers; each visitor is tied to a particular node to maintain session state even though you might have, say, 10 workers. It's an easy and cheap way to scale a site from 100 visitors to many millions in just a few clicks.

High performance ASP.NET setup

I would like to ask you what is the best setup for a following application:
ASP.NET 3.5 Web site - used as a presentation layer, a lot of AJAX and JS. Will not hit the server a lot.
ASP.NET WCF - service providing all data to the application. It's responsible for validation, data modeling/preparation and communication with the DB server.
Database - SQL Server 2005 Std, some logic is coded on the server side as stored procedures. Some of the logic can be a bit time consuming. In my opinion it's the most resource consuming part of the app.
The website can have up to 1000 users per minute. We can have up to 4 servers in the following configuration: two quad-core Intel Xeon CPUs (8 × 2.00+ GHz), 16 GB RAM, SSD or RAID drives.
What is the best way to place parts of the application on the physical servers? Will they handle this kind of load?
The least scalable part of any application is the database server: you can add more web and application servers, but you can't replicate the DB with the same ease, so in the long run you will benefit if the DB does not contain any logic, especially long-running logic. In a lot of applications the limiting factor is not CPU but memory. Think about user sessions: if you store 1 MB of data per user, your machines (4 × 16 GB = 64 GB) will be able to support roughly 64,000 simultaneous user sessions, which may or may not be sufficient. Both problems can be mitigated by application-level caching, but that causes its own set of problems because now you are faced with stale data. To scale session-based sites you will need a smart load-balancer solution that supports sticky sessions; for your loads you will most likely need a hardware load balancer.
In the application you describe, I suspect that thread management is going to be a big issue. Throwing hardware at the problem may not be the best approach.
In terms of partitioning, it depends on whether you can leverage things like caching and cache notifications. If every call to the app has to hit the DB and run a lengthy stored procedure, then you may want to have more DB machines and fewer front-end web servers.
This is a big subject. In an attempt to provide a reasonably comprehensive answer to exactly this kind of question, I ended up writing a book about it: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.

Web service application pool

I have two different web services (running on the local machine) pointing to one application pool. (1) Can I do that? (2) Is there any performance concern? I don't know much about how application pools work.
Another .NET application uses the two web services, but frequently one web service, which is called internally by an SSIS package within the .NET application, does not respond.
What might be the reason, and how do I make sure it responds all the time? Is there a better way to improve the performance?
If I am missing anything or you need further information, comments are welcome.
Yes, you can have multiple web applications using the same application pool.
Is it a performance concern? If there is really high traffic or faulty code, then perhaps.
Application pools allow pushing sites to different processes, reducing the risk of each affecting the other. If one app pool contains an application/web application that has a memory leak, the leak will only affect that particular process, at least directly. Each process can be recycled either by time or system parameters, which mitigates risks of having something in a bad state.
Performance? Another benefit of app pools is the ability to have multiple instances running simultaneously (a similar thing to putting each app in its own pool). The benefit of this is that more requests can be handled at a time. The downside is that you cannot use in-process session state, and your application state will be duplicated for each instance of the process. You would need to consider how much 'stuff' you keep in session and how your caching scheme would be affected, but it has the potential to give a web application more scalability.
You mention calling SSIS... I am assuming that is a long-running call, so you would probably want to push it to some sort of queue that can be processed outside of the web service request. MSMQ might work for you. If using a queue like that, you would initiate the running of the code and then have a way of checking on the status of the call to see if it is done.
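A rough sketch of that queue idea using System.Messaging/MSMQ; the queue path, message body and class name are made up for illustration, not taken from the question:

    using System.Messaging;

    public static class SsisRequestQueue
    {
        // Hypothetical local private queue; create it once, e.g. at deployment time.
        private const string QueuePath = @".\private$\ssis-requests";

        public static void Enqueue(string packageName)
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath);

            using (var queue = new MessageQueue(QueuePath))
            {
                // The web service call returns immediately; a separate worker or
                // Windows service reads the queue and runs the long SSIS package.
                queue.Send(packageName, "Run SSIS package");
            }
        }
    }

The web service would hand back some kind of ticket ID, and the caller would poll for completion as described above.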
I agree with Greg Ogle but one more point I think is worth mentioning. Splitting the applications into multiple app pools will also give you an added benefit when it comes to troubleshooting if there are any issues. If you have the various applications split out, you can tell specifically what app pool is related to what w3wp.exe process in the time of need. Like say when that w3wp.exe process is taking 98% of your cpu.

Anyone using Memcached with ASP.NET on a distributed farm?

We have 22 HTTP servers, each running its own individual ASP.NET cache. They read from a read-only DB that is only updated during off-peak hours.
We use a file dependency to invalidate the cache, prompting the servers to "new up" their caches... If this is accidentally done during peak hours, it risks bringing down our DB cluster due to the sudden deluge of open connections.
Has anyone used memcached with ASP.NET in this distributed form? It seems to me that it would offer the huge advantage of only having to build up one cache (and hit the DB 21 times less), while memcached would handle distributing it on each box.
If you have, do you place it on the same box as the HTTP boxes, or do you run a separate cache tier? How well does it scale, can we expect it to need powerful servers? Our working dataset is not huge (We fit it into 4 gigs of memory on each HTTP box just fine).
How do you handle invalidation?
Looking for experiences and war stories.
EDIT: Win2k3, IIS6, 64-bit servers...4 gigs per box (I believe, we may have upped it to 16 gigs when we changed to 64-bit servers).
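For readers unfamiliar with the file-dependency invalidation described in the question, it looks roughly like this; the cache key, trigger-file path and loader are placeholders, not the poster's actual code:

    using System.Web;
    using System.Web.Caching;

    public static class CatalogCache
    {
        public static object GetCatalog()
        {
            object catalog = HttpRuntime.Cache["catalog"];
            if (catalog == null)
            {
                catalog = LoadCatalogFromDatabase();   // stand-in for the expensive read-only DB query
                // Touching the trigger file evicts the entry on this server,
                // forcing a rebuild (and a DB hit) on the next request.
                HttpRuntime.Cache.Insert(
                    "catalog",
                    catalog,
                    new CacheDependency(@"C:\triggers\catalog.trigger"));
            }
            return catalog;
        }

        private static object LoadCatalogFromDatabase()
        {
            return new object();
        }
    }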
"memcached would handle distributing it on each box"
memcached does not distribute or replicate a cache to each box in a memcached farm. The memcached client basically hashes the key and chooses a cache server based on that hash. When one of the memcached servers fails, you will lose whatever cached items existed on that server; however, the client will recognize the failure and begin writing values to a different server. This being the case, your code needs to account for missing items in the cache and reset them if necessary.
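In code that usually means a read-through (cache-aside) lookup; a rough sketch with the Enyim client, where the key, expiry and DB loader are placeholders:

    using System;
    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    public static class ProductCache
    {
        // Reads the memcached server list from the enyim.com/memcached config section.
        private static readonly MemcachedClient Client = new MemcachedClient();

        public static string GetProductBlob(string productId)
        {
            string key = "product:" + productId;

            // Get returns null when the key is missing, was evicted, or lived
            // on a memcached node that has since failed.
            var cached = Client.Get<string>(key);
            if (cached != null)
                return cached;

            string fresh = LoadFromDatabase(productId);   // placeholder DB call
            Client.Store(StoreMode.Set, key, fresh, TimeSpan.FromMinutes(30));
            return fresh;
        }

        private static string LoadFromDatabase(string productId)
        {
            return "(serialized product " + productId + ")";
        }
    }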
This article discusses the memcached architecture in more detail: How memcached works.
Best practice (according to the memcached site) is to run memcached on the same box as your web server app, or else you're adding an extra network hop (which isn't all that bad, but it's not optimal). If you're running a 64-bit app server (which you probably should be if you're going to be running memcached), then you can load up each of the servers with plenty of memory and it will be available to memcached. There's not much in the way of CPU resources used by memcached, so if your current app server isn't very taxed, it will remain that way.
Haven't used them together, but I've used them both on separate projects.
Last I saw the documentation explicitly said that sharing with the web server was ok.
Memcached really only needs RAM, and if you take your ASP.NET cache out of the equation, how much RAM is your web server actually using? Probably not much. It won't compete much with your web server for CPU, and it doesn't need disk at all. You might consider segmenting off the memcached network traffic (if you don't already) from the incoming web requests.
It worked well and was fast; I didn't have any problems with it.
Oh, invalidation was explicit on the project I used it on. Not sure what other modes there are for that.
If you want to get replication across your memcached servers then it may be worth a look at repcached. It's a patch for memcached that handles the replication part.
Worth checking out Velocity, which is a distributed cache provided by Microsoft. I cannot give you a point-by-point comparison to memcached, but Velocity is integrated with ASP.NET and will continue to get more development and integration.

What is the best solution for storing ASP.NET session variables? StateServer or SQLServer?

StateServer or SQLServer?
What is the best solution for storing ASP.NET session variables?
What are the pros and cons of each?
Is one better than the other in any particular situation?
Here are some thoughts about the pros and cons; a sample web.config line for each mode follows the list.
I've also added Microsoft's Velocity distributed caching solution.
Pros for InProc
Fastest option available (it's all in memory/RAM).
Easy to set up (nothing new required in the .config file; I think this is the default behavior).
I believe most people use this.
Cons for InProc
If the web site (application pool) dies, then all session info is lost.
Doesn't work in a web farm scenario -> session information is kept per worker process only.
Cannot contain non-session information.
Pros for StateServer
In memory/RAM, so it's fast (but has some network latency; read below), so it might not be as fast as InProc.
Standard configuration for a web farm scenario: multiple IIS servers use one state server to hold the session info.
Cons for StateServer
Requires the ASP.NET State Service to be running.
StateServer requires some config tweaking to accept requests from remote IIS machines.
There's some tiny network latency if the IIS request needs to get/set the session info on another networked machine.
Cannot contain non-session information.
Pros for SQL Server (as a state server)
State is always retained, even after the IIS site restarts.
Cons for SQL Server (as a state server)
Slowest solution -> network latency AND hard-drive latency (as SQL Server stores the state on, and reads it back from, the hard disk).
Hardest to set up / configure.
Cannot contain non-session information.
Pros for Velocity (or other distributed caching systems)
Can handle more than just session information -> objects, application settings, cache, etc. (This is a very GOOD thing IMO!!)
Can be memory only or persist to a database.
If one 'node' fails, the system still works. (assuming there's 2+ caching nodes)
Cons for Velocity (or other distributed caching systems)
Generally costs $$$.
Hardest to set up (you have to install stuff, tweak configs, and add extra special code).
Has network latency (which is generally nothing), but could have hard-disk latency IF the service is persisting the data (e.g. to a SQL Server).
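For reference, switching between these modes is mostly a one-line web.config change (pick one; the server names and connection string are placeholders):

    <!-- InProc (default): fastest, lost on app-pool recycle, single process only -->
    <sessionState mode="InProc" timeout="20" />

    <!-- StateServer: needs the ASP.NET State Service running (default port 42424) -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=stateServerBox:42424"
                  timeout="20" />

    <!-- SQLServer: survives restarts; install the schema with aspnet_regsql -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=dbServer;Integrated Security=SSPI"
                  timeout="20" />

For the "remote IIS machine" tweak mentioned above, the State Service reads the AllowRemoteConnection value under HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters (set it to 1), if I remember the key correctly.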
I think the assumption would be that you are using a web farm of some sort.
One use of state service is in a Web Garden (multiple worker-processes on the same machine). In this case, you can use load-balancing to keep a user's connection going to a particular server, and have the n worker processes all sharing the same state service.
EDIT: In the web garden + state service or sql server scenario, you also have the benefit of being able to recycle the worker processes on that machine w/o the connected clients losing their session.
I'm not as familiar with using SQL Server as a session state store, but I would think you would gain robustness by using an SQL Server in a cluster. In this case, you could still have multiple worker processes and multiple servers, but you would not have to use a sticky session (server affinity).
And one more note: you can use the state service on a second machine and have every server in the farm hit that machine, but you would then have a single point of failure.
And finally, there are 3rd party (and some home-grown) distributed state-service-like applications. Some of these have performance benefits over the other options, plus the Session_End event will actually fire. (With both State Service and SQL Server session backing, Session_End in Global.asax will not fire; there may be a way of hooking into SQL Server.)
In an n-tier environment, with SQL Server hosting session state you'll create additional network traffic to your back-end, as well as losing some SQL Server resources that will need to now take care of that additional traffic (session-related requests). SQL Server state management is also slower than state server.
However, if your servers go down in some unforeseen incident, SQL Server will most likely maintain the session information, as opposed to a state server.
In my personal experience I had a few problems storing things in session variables. I kept losing the session, and I believe it was the antivirus: as it scanned every file on the server, IIS would recompile the site, killing the sessions. (I must say I had no power over that server; I was told to host the app there.)
So I decided to store the session in SQL Server, and everybody is happy now... it is incredibly fast.
Take a look at this article for a quick start up
Using a single machine to store state in a web garden means a single point of failure. We use SQL state, but it does add a bit of overhead.
InProc is very fast, but it has limitations: it works only on a single system (worker processes on the same machine), and when the system is rebooted the session information is lost.
StateServer stores the session information on another machine, so a web farm can use the session; for example, multiple worker processes can access the session information from one server. When that server is rebooted, however, the information is still lost.
SQLServer stores the session info in a table. By default it is stored in tempdb, which is recreated whenever the SQL service restarts, so by default the data is not persisted either. In that scenario we can store the state in our own DB using the provided script; that is called the custom option.
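The script referred to here is, as far as I know, the aspnet_regsql tool; a rough sketch of the three variants (server and database names are placeholders):

    rem Temporary (default): session data lives in tempdb and is lost when SQL Server restarts
    aspnet_regsql.exe -S dbServer -E -ssadd -sstype t

    rem Persisted: session data lives in the ASPState database and survives restarts
    aspnet_regsql.exe -S dbServer -E -ssadd -sstype p

    rem Custom: session data lives in a database you name
    aspnet_regsql.exe -S dbServer -E -ssadd -sstype c -d MySessionDb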
