Scope of HttpContext.Current.Cache - asp.net

If I have the following line in multiple websites that run on the same server will there be any problems with the app's writing to the same cache record?
HttpContext.Current.Cache["SearchResults"] = myDataTable;
I know the docs say there is one cache per app domain but I don't quite understand what that means.
https://msdn.microsoft.com/en-us/library/system.web.httpcontext.cache
There is one instance of the Cache class per application domain. As a
result, the Cache object that is returned by the Cache property is the
Cache object for all requests in the application domain.

Application domain is a mechanism (similar to a process in an operating system) used within the Common Language Infrastructure (CLI) to isolate executed software applications from one another so that they do not affect each other.
So when you write to the cache, the value is stored in memory for that app domain. If you have a web garden (which has multiple worker processes) or a web farm, the next request might be served by a different worker process or node, and that process or node has no access to the cached value, because it is an entirely different app domain.
So the cache is specific to a single app domain; to share cached data across app domains, you need to look at a centralised cache store.
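As a sketch of that idea (the `ICacheStore` interface and class names here are hypothetical, not part of ASP.NET): each site codes against a small cache interface rather than `HttpContext.Current.Cache` directly, so the backing store can later be swapped from the per-app-domain cache to a centralised one.

```csharp
// Hypothetical abstraction: each site talks to ICacheStore instead of
// HttpContext.Current.Cache directly, so the backing store can be swapped.
public interface ICacheStore
{
    object Get(string key);
    void Set(string key, object value, TimeSpan ttl);
}

// Default implementation: the per-app-domain ASP.NET cache. Two sites on
// the same server each get their OWN instance of this cache -- writes to
// "SearchResults" do not collide, but they are not shared either.
public class InProcCacheStore : ICacheStore
{
    public object Get(string key) => System.Web.HttpRuntime.Cache[key];

    public void Set(string key, object value, TimeSpan ttl)
    {
        System.Web.HttpRuntime.Cache.Insert(
            key, value, null,
            DateTime.UtcNow.Add(ttl),
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
}
// To share data across sites, implement ICacheStore against a centralised
// store (Redis, memcached, AppFabric, ...) instead.
```

This sketch assumes classic ASP.NET; `HttpRuntime.Cache` is the same object as `HttpContext.Current.Cache`, but is reachable outside a request.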

Related

High Availability WordPress setup

I'm going to run WordPress site in HA(High Availability) environment at AWS.
I already use HA MySQL - Amazon Avrora.
Right now I have a few question:
Should I prefer Session Replication or Sticky Sessions at my Load Balancer, or both?
Must user content be uploaded to a CDN rather than to a single WP node in the cluster?
How can AWS help with a WordPress HA setup? For example, should I use AWS Elastic Beanstalk for this purpose?
What else should I pay attention to in order to create HA for WordPress?
Your questions are perhaps a bit broad for StackOverflow, but I am in your situation so I can sympathize.
Sticky sessions are not the preferred option because the need to use them would suggest that your application is not stateless.
In other words, if you require sticky sessions, your application relies on server memory for session management, so once a session is initialized, that user must stay on THAT server for the entire duration of the session. This is OK, but less desirable (compared to requests that don't care at all which server instance they run on), because if your traffic slowed down and Elastic Beanstalk decided to kill off the instance you were on, then on the next request, when the load balancer routed you to another instance, your session would be RESET and your user would have to log in again.
On the other hand, if your app was written to be completely stateless (by storing the state in a db instance for example), then you would not care which server each request hit because state would not be stored on the server instance. This would allow Beanstalk to freely spin up and down instances without affecting your users in any way.
The benefit of sticky sessions is that, if your app is already written with a dependence on server memory, or MUST have it for some reason, they allow your app to run without code changes.
Yes, it seems to me like your user-content should not be uploaded to any single node (for mainly the same reasons I mentioned above). If your user-content is stored on the node and that node gets spun down due to low traffic, you will have lost that data.
This is where something like S3 comes in handy. Your application interacts directly with S3 as its storage solution and each instance saves content to your S3 bucket(s). Then, regardless of which node is running, it can just talk to the same S3 bucket and get the data it needs.
Aside from that, all I can recommend is that you experiment, look into load testing, and adjust as needed.

ASP.Net server side data caching on a web farm

Scenario:
Implement in-memory caching of master data in the WCF layer of an ASP.Net application for a web farm scenario
Data is cached on first access to the service layer's, say, GetCountryList() method, with cache expiry set to midnight. Let's say the cache key is “CountryList_Cache”
All subsequent requests are served from the cache
If the Country list is updated using the master screen, then an additional call is made to invalidate the “CountryList_Cache” and fresh data is loaded into it
The next call now receives the updated country list
The above step is easy in a single server scenario, as step 3 only requires a cache expiry call to one server. The complexity increases when we have 2 or 3 load balanced web servers, because in that case the cache is updated (via master screen) on only one of the servers but has to be invalidated on all 3 servers.
Our proposed solution:
We intend to have an external service/exe/web page which would be aware of all load-balanced servers (via a configuration file). In order to invalidate a specific cache, we would invoke this external component, which in turn would invalidate the respective cache key on all the web servers and then load the cache with the latest data.
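The external component described above can be sketched as follows (the `/cache/invalidate` endpoint and server URLs are assumptions for illustration; each web server would need to expose such an endpoint that removes the key from its own in-memory cache):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class CacheInvalidator
{
    // Sketch of the proposed external invalidator: loop over the configured
    // servers and tell each one to drop the key from its local cache.
    public static async Task InvalidateAsync(string cacheKey, string[] serverUrls)
    {
        using (var client = new HttpClient())
        {
            foreach (var server in serverUrls)
            {
                // Hypothetical endpoint; each server removes the key from its
                // own cache and reloads fresh data on the next request.
                var url = $"{server}/cache/invalidate?key={Uri.EscapeDataString(cacheKey)}";
                await client.PostAsync(url, content: null);
            }
        }
    }
}
// Usage (illustrative hosts):
// await CacheInvalidator.InvalidateAsync("CountryList_Cache",
//     new[] { "http://web1", "http://web2", "http://web3" });
```

Note the weakness this sketch makes visible: the invalidator must know every server, and a server added to the farm but missing from the configuration file silently keeps serving stale data.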
The problem:
Although the above approach would work for us, we do not think it is a clean approach for an enterprise class LOB application. Is there a better/ cleaner way of achieving the cache expiry across multiple servers?
Note:
We do not want to use distributed caching due to the obvious performance penalty, as compared to in-proc/ in-memory cache
Caching has been implemented using System.Runtime.Caching
We have worked with SQL dependency and used it in scenario of single web server
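The SQL-dependency approach from the note also generalises to a farm, because every server registers its own query notification and invalidates its own cache independently; no server needs to know about the others. A hedged sketch using System.Runtime.Caching's SqlChangeMonitor (connection string and table are illustrative; requires SQL Server Service Broker and a one-time `SqlDependency.Start(ConnStr)` at application startup):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Runtime.Caching;

public static class CountryCache
{
    // Illustrative connection string.
    const string ConnStr =
        "Data Source=dbserver;Initial Catalog=MasterData;Integrated Security=SSPI";

    public static void CacheCountryList()
    {
        using (var conn = new SqlConnection(ConnStr))
        // Query notifications require two-part table names and explicit columns.
        using (var cmd = new SqlCommand("SELECT CountryId, Name FROM dbo.Country", conn))
        {
            var dependency = new SqlDependency(cmd);
            conn.Open();

            var table = new DataTable();
            table.Load(cmd.ExecuteReader());

            var policy = new CacheItemPolicy
            {
                AbsoluteExpiration = DateTime.Today.AddDays(1) // expire at midnight
            };
            // When dbo.Country changes, SQL Server notifies THIS server and the
            // item drops out of its local cache -- no cross-server call needed.
            policy.ChangeMonitors.Add(new SqlChangeMonitor(dependency));

            MemoryCache.Default.Set("CountryList_Cache", table, policy);
        }
    }
}
```

Each of the 2-3 web servers runs this same code, so the master-screen update propagates via the database rather than via an external component.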
Compare your design to Windows Azure In-Role Cache and AppFabric Cache.
In those products, the cache is stored in one or more servers (cache cluster). In order to speed up requests, they created Local Cache.
When local cache is enabled, the cache client stores a reference to
the object locally. This local reference keeps the object active in
the memory of the client application. When the application requests
the object, the cache client checks whether the object resides in the
local cache. If so, the reference to the object is returned
immediately without contacting the server. If it does not exist, the
object is retrieved from the server. The cache client then
deserializes the object and stores the reference to this newly
retrieved object in the local cache. The client application uses this
same object.
The local cache can be invalidated by time-out and/or notification.
Notification-based Invalidation
When you use cache notifications, your application checks with the
cache cluster on a regular interval to see if any new notifications
are available. This interval, called the polling interval, is every
300 seconds by default. The polling interval is specified in units of
seconds in the application configuration settings. Note that even with
notification-based invalidation, timeouts still apply to items in the
local cache. This makes notification-based invalidation complementary
to timeout-based invalidation.
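The local-cache behaviour quoted above is switched on in the cache client configuration. A sketch of such a fragment (host name is illustrative; `sync` accepts `TimeoutBased` or `NotificationBased`):

```xml
<!-- Sketch of an AppFabric cache client configuration section -->
<dataCacheClient>
  <hosts>
    <host name="CacheServer1" cachePort="22233" />
  </hosts>
  <!-- Local cache: keeps deserialized objects in the client's memory -->
  <localCache isEnabled="true"
              sync="NotificationBased"
              objectCount="100000"
              ttlValue="300" />
  <!-- Polling interval for notifications, in seconds (default 300) -->
  <clientNotification pollInterval="300" />
</dataCacheClient>
```

With `sync="NotificationBased"`, the client still applies the `ttlValue` timeout, matching the "complementary" behaviour described in the quote.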

ASP.NET: Is Session Object my acceptable solution for static variable?

I've read several threads about this topic and need some clarification on a few sentences I read in a book:
If you store your Session state in-process, your application is not scalable. The reason for this is that the Session object is stored on one particular server. Therefore storing Session state in-process will not work with a web farm.
What does "scalable" in the first sentence mean?
Does the third sentence means if my app resides on a shared web host, I shouldn't use Session["myData"] to store my stuff? If so, what should I use?
Thanks.
1:
Scalability in this sense:
the ability of a system, network, or process, to handle growing amounts of work in a graceful manner or its ability to be enlarged to accommodate that growth.
2:
Use a session server or store sessions in SQL Server, which are described here.
ASP.NET can store all the combined Session information for an Application (the "Session State") in 3 possible places on the server-side (client cookies is also possible but that is a different story):
"InProc" (In Process) which means in memory on the IIS server attached to the asp.net worker process,
"StateServer" which is a separate process that can be accessed by multiple IIS servers but still stores the Session state in memory, and
"SQLServer" which stores the Session state in a SQL Server database.
1) The reason in-process session state is not scalable is that if your needs exceed the capacity of a single IIS server, multiple servers cannot share an in-process session state. If you have determined a shared hosting scenario will fulfill your needs, you don't need to worry about it.
2) When you store something in Session["Name"], ASP.net stores that data wherever the application is configured to store Session state. If you want to change where Session state is stored, all you need to do is configure your web.config file. If you are using a shared hosting environment, your IIS deployment is considered single server even though no doubt the actual servers are in a farm of some sort.
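The three storage modes map directly to the `sessionState` element in web.config; a sketch with illustrative connection strings (only one `sessionState` element may be active at a time, hence the commented-out alternatives):

```xml
<system.web>
  <!-- In-process (default): fastest, but tied to one worker process -->
  <sessionState mode="InProc" />

  <!-- State Server: out-of-process, shareable across a farm
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=stateserver:42424" />
  -->

  <!-- SQL Server: survives restarts, shareable across a farm
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI" />
  -->
</system.web>
```

Application code using `Session["myData"]` is unchanged across all three modes; only the configuration differs.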
See: MSDN Session-State Modes

Is the ASP.NET Cache independent for each host header set in IIS7

I have a site that dynamically loads website contents based on domain host name, served from IIS7. All of the domains share a cached collection of settings. The settings are being flushed from the cache on almost every page request, it seems. This is verified by logging the times at which the Cache value is null and reloaded from SQL. This codes works as expected on other servers and sites. Is it possible that ASP.NET Cache is being stored separately for each domain host name?
Having different host headers for your site will not affect the cache.
There are a few reasons why your Cache might be getting flushed. Off the top of my head I would say either your AppDomain is getting dumped, your web.config file is getting updated, or some piece of code is explicitly expiring/clearing out your cache.
The cache is per application, I would look at a few other items.
Is your application pool recycling (Timeout, memory limit, file changes, other)
Do you have Web Gardening Enabled, this would create different buckets for each garden thread
One other thing to check -- how much memory is available? The ASP.NET cache will start ejecting stuff left and right once it senses a memory crunch. Remember, RAM is expensive and valuable storage...
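One way to narrow down which of those causes is responsible is to ask the cache itself: pass a removal callback when inserting, and log the `CacheItemRemovedReason`. A sketch (the key name and logging target are illustrative):

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class SettingsCache
{
    public static void Store(object settings)
    {
        HttpRuntime.Cache.Insert(
            "SiteSettings", settings, null,
            Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,   // resist memory-pressure eviction
            OnRemoved);
    }

    static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // Removed = explicit Remove/overwrite, Expired = policy expiry,
        // Underused = memory pressure, DependencyChanged = a dependency fired.
        // Note: an app-domain recycle will NOT show up here -- the whole cache
        // simply disappears with the process, so check recycle events separately.
        System.Diagnostics.Trace.WriteLine(
            $"Cache item '{key}' removed: {reason}");
    }
}
```

If nothing is logged yet the item vanishes, that points at app-domain recycling (or web gardening) rather than cache eviction.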

How can I manage Cache stored on the web server when I'm using 3 web servers?

We are working on a web application that is distributed across 3 load-balanced web servers. A lot of the data we retrieve via NHibernate is stored in System.Web.Caching.Cache.
System.Web.Caching.Cache does a lot to increase the responsiveness of an application, but there are a few issues that we don't exactly know how to resolve, such as
When a user requests data on server1, that data is cached on server1; but for their next request, the load balancer might direct them to server2. The data they requested on server1 is not available there, and server2 will have to request it from the database again.
If the user does something on server1 to invalidate the cached data, the cache is flushed on server1. However the original cache is still available on server2 & server3, so when the user submits a subsequent request and they're directed to either of the other servers, they are going to be presented with invalid data.
We have applications that update data (such as performance data) on a regular basis. When the performance data is updated we want to flush this from the cache so when a user requests the data again, they're presented with the latest data. How can we get these applications to flush the cache on 3 web servers?
What are the best ways to resolve these issues?
Should we store the cache on a separate server, as we can do for HttpContext.Session with a SessionState server?
Is there a way for us to set a Cache Dependency on the Cache in the other 2 servers?
Is it possible for us to implement a Cache Dependency on the actual database tables or rows, so that when these change the cache is flushed? Or could we set up a database trigger to flush the cache somehow?
Yes, a multi-server environment exposes the weakness of the ASP.NET cache in that it is single-server only. You might want to look into using a distributed cache like AppFabric, which would give you a single logical cache that underlies all three web servers.
AppFabric can be used with NHibernate as a second-level cache - see http://sourceforge.net/projects/nhcontrib/files/NHibernate.Caches/ (although be aware that this question suggests the current implementation may not be up-to-date with the AppFabric v1 release).
You have a few options:
1) Use a distributed cache (such as distcache, velocity, or ncache)
2) Use a shared cache instance (something like memcached, for instance) that all of your web servers make use of.
NHibernate has second-level cache providers for all of these, which can be found here.
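Wiring one of those providers in happens through NHibernate's configuration; a sketch of the relevant properties (the provider class shown is the memcached one from the NHibernate.Caches project, used here purely as an example; swap in whichever provider matches your chosen store):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- Turn on the second-level (cross-session) cache -->
    <property name="cache.use_second_level_cache">true</property>
    <property name="cache.use_query_cache">true</property>
    <!-- Illustrative provider from NHibernate.Caches -->
    <property name="cache.provider_class">
      NHibernate.Caches.MemCache.MemCacheProvider, NHibernate.Caches.MemCache
    </property>
  </session-factory>
</hibernate-configuration>
```

Entities and queries must additionally be marked cacheable (in mappings or at query time); enabling the provider alone caches nothing.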
