AppFabric Local Cache Performance - asp.net

I'm currently testing out AppFabric Distributed Cache, it's been working great.
When performance testing the Local Cache feature however, I find there is no difference in performance.
For the purposes of the performance test I am storing large pages generated from OutputCache into AppFabric and am noticing the same performance with or without local cache on.
Does anyone else have any similar experience?
I'm using a timeout-based local cache, with a TTL of 300 seconds and an object count of 100,000.

If the distributed cache is on the local server, then there should be very little difference, since the main cost of accessing the distributed cache is the transport across the network.
It may still take a bit longer to access the distributed cache than the local cache on the same machine, since the local cache is in-process:
When local cache is enabled, the cache client stores a reference to the object locally. This keeps the object active in the memory of the client application.
However, local cache does add some synchronization overhead, so the actual difference will depend on your usage pattern.
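To make the comparison concrete, here is a minimal sketch (in C#, against the Microsoft.ApplicationServer.Caching client API) of enabling a timeout-based local cache with the settings described in the question; the cache host name is a placeholder:

using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

class LocalCacheDemo
{
    static void Main()
    {
        var config = new DataCacheFactoryConfiguration
        {
            // "cachehost01" is a placeholder; point this at your own cache host.
            Servers = new List<DataCacheServerEndpoint>
            {
                new DataCacheServerEndpoint("cachehost01", 22233)
            },
            // Timeout-based local cache: up to 100,000 objects, 300-second TTL.
            LocalCacheProperties = new DataCacheLocalCacheProperties(
                100000,
                TimeSpan.FromSeconds(300),
                DataCacheLocalCacheInvalidationPolicy.TimeoutBased)
        };

        using (var factory = new DataCacheFactory(config))
        {
            DataCache cache = factory.GetDefaultCache();
            cache.Put("page:/home", "<html>...</html>");

            // The first Get goes to the cluster; repeat Gets within the TTL
            // are served from the client's in-process local cache.
            var page = (string)cache.Get("page:/home");
            Console.WriteLine(page);
        }
    }
}

If the client and a cache host share the same box, the network hop that the local cache is meant to avoid is already small, which matches the observation above.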

I think this might depend on the type of data you are caching.
We use local cache a lot for web services that have many almost identical Get methods (returning small payloads). The local cache put significantly less load on the cache servers, and most transactions take 0 ms.

Related

Where does System.Runtime.Caching.ObjectCache cache data

Where does System.Runtime.Caching.ObjectCache cache/store data when it is MemoryCache.Default?
Does it save data in RAM or in the CPU's L1 cache?
How can I see this cached memory in Task Manager?
Yes, those are in-memory (in-process) caches and they store the data in the server's memory (RAM); whether it ends up in the CPU's L1/L2 caches, I have no idea. So if your worker process goes down or IIS recycles (in the context of ASP.NET), all your cached data is gone.
On the other hand, you can also choose a distributed cache mechanism such as Redis or Azure Mem Cache, which store data on a separate server instance rather than in your server process.
No, it has nothing to do with the processor caches L1, L2 or others. It is just a caching solution (as a concept) that is held in memory.
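As a small illustration of the point above, here is a minimal sketch using System.Runtime.Caching (the key and value are made up): the entry lives in the worker process's RAM and disappears on a recycle.

using System;
using System.Runtime.Caching;

class MemoryCacheDemo
{
    static void Main()
    {
        ObjectCache cache = MemoryCache.Default;

        // The entry lives in the worker process's managed heap (RAM), not in a
        // CPU cache; an IIS recycle or app restart discards it.
        var policy = new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
        };
        cache.Add("user:42", "some expensive-to-build value", policy);

        var value = cache["user:42"] as string;
        Console.WriteLine(value ?? "evicted, expired, or process was recycled");
    }
}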

AppFabric Cache - Is it reasonable to allocate two cache clusters?

My first impression of AppFabric Cache is that it's essentially a distributed hashtable in the same vein as memcached. The typical usage pattern of such a cache is that there is no guarantee that your data will be in the cache (old entries are evicted to make space for new ones), but with sufficient RAM they usually will be.
On the other hand MS provide a Web Session State Provider that stores session data in an AppFabric Cache. This appears to be a completely different usage pattern as we now require the cached items to never be evicted as a result of memory pressure. To achieve this MS provide a high-availability mode that keeps redundant copies of all data, furthermore eviction can be disabled, which in turn requires us to allocate sufficient RAM to ensure that the cache never reaches capacity.
It seems likely that an application would benefit from using both types/modes of cache, but as far as I can tell AppFabric RAM cannot be ringfenced within a cluster or host, hence the web session state may (and generally will) experience memory pressure in that case. The only solution I can see is to operate two AppFabric Cache clusters, one for each mode.
Is the above a good representation of the situation or am I missing some config setting that addresses this scenario?
Storing session state in AppFabric is not a good idea. I have faced many problems trying this (data lost due to memory pressure, multiple users hitting the cache to put data leading to data loss, etc.) and have now switched to InProc/SqlServer session state.

Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags....) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load.
I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night.
Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I could program this myself, but I would prefer a solution assembled by configuring open source technologies.
Thanks
I've used Squid before to reduce load on dynamically-created RSS feeds, and it worked quite well. It just takes some careful configuration and tuning to get it working the way you want.
Using a primed cache server is an excellent idea (I've done the same thing using wget and Squid). However, it is probably unnecessary in this scenario.
It sounds like your data is fairly static and the problem is server load, not network bandwidth. Generally, the problem exists in one of two areas:
Database query load on your DB server.
Business logic load on your web/application server.
Here is a JSP-specific overview of caching options.
I have seen huge performance increases by simply caching query results. Even adding a cache with a duration of 60 seconds can dramatically reduce load on a database server. JSP has several options for in-memory cache.
Another area available to you is output caching. This means that the content of a page is created once, but the output is used multiple times. This reduces the CPU load of a web server dramatically.
My experience is with ASP, but the exact same mechanisms are available on JSP pages. In my experience, with even a small amount of caching you can expect a 5-10x increase in max requests per sec.
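To illustrate the query-result caching idea from the .NET side (the JSP equivalents follow the same pattern), here is a rough sketch using System.Runtime.Caching; the key and LoadHeadlinesFromDb are hypothetical:

using System;
using System.Runtime.Caching;

static class QueryResultCache
{
    static readonly ObjectCache Cache = MemoryCache.Default;

    // Returns a cached result if one exists; otherwise runs the query and
    // caches it for 60 seconds, so each key hits the database at most
    // once per minute regardless of request volume.
    public static T GetOrLoad<T>(string key, Func<T> runQuery) where T : class
    {
        var cached = Cache[key] as T;
        if (cached != null)
            return cached;

        T result = runQuery();
        Cache.Set(key, result, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(60)
        });
        return result;
    }
}

// Usage, with a hypothetical data-access call:
// var headlines = QueryResultCache.GetOrLoad("headlines", () => LoadHeadlinesFromDb());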
I would use tiered caching here; deploy Squid as a reverse proxy server in front of your app server as you suggest, but then deploy a Squid at each client site that points to your origin cache.
If geographic latency isn't a big deal, then you can probably get away with just priming the origin cache like you were planning to do and then letting the remote caches prime themselves off that one based on client requests. In other words, just deploying caches out at the clients might be all you need to do beyond priming the origin cache.

Anyone using Memcached with ASP.NET on a distributed farm?

We have 22 HTTP servers each running their own individual ASP.NET Caches. They read from a read only DB that is only updated off peak hours.
We use a file dependency to invalidate the cache, prompting the servers to "new up" their caches... If this is accidentally done during peak hours, it risks bringing down our DB cluster due to the sudden deluge of open connections.
Has anyone used memcached with ASP.NET in this distributed form? It seems to me that it would offer the huge advantage of only having to build up one cache (and hit the DB 21 times less), while memcached would handle distributing it to each box.
If you have, do you place it on the same box as the HTTP boxes, or do you run a separate cache tier? How well does it scale, can we expect it to need powerful servers? Our working dataset is not huge (We fit it into 4 gigs of memory on each HTTP box just fine).
How do you handle invalidation?
Looking for experiences and war stories.
EDIT: Win2k3, IIS6, 64-bit servers...4 gigs per box (I believe, we may have upped it to 16 gigs when we changed to 64-bit servers).
"memcached would handle distributing it on each box"
memcached does not distribute or replicate a cache to each box in a memcached farm. The memcached client basically hashes the key and chooses a cache server based on that hash. When one of the memcached servers fails you will lose whatever cached items existed on that server; however, the client will recognize the failure and begin writing values to a different server. This being the case, your code needs to account for missing items in the cache and reset them if necessary.
This article discusses the memcached architecture in more detail: How memcached works.
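As a rough sketch of that get-or-reset pattern in C# (using the Enyim.Caching client as one example; the server list comes from configuration, and the key and loader names are made up):

using System;
using Enyim.Caching;
using Enyim.Caching.Memcached;

static class FarmCache
{
    // Reads the memcached server list from the application's configuration.
    static readonly MemcachedClient Client = new MemcachedClient();

    // The client hashes the key to pick a server; if that server is down or
    // the item was evicted, Get returns null and we rebuild from the database.
    public static T GetOrLoad<T>(string key, Func<T> loadFromDb) where T : class
    {
        var cached = Client.Get<T>(key);
        if (cached != null)
            return cached;

        T value = loadFromDb();
        Client.Store(StoreMode.Set, key, value, TimeSpan.FromHours(1));
        return value;
    }

    // Explicit invalidation, e.g. after the off-peak database refresh.
    public static void Invalidate(string key)
    {
        Client.Remove(key);
    }
}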
Best practice (according to the memcached site) is to run memcached on the same box as your web server app, or else you're making network calls for every lookup (which isn't all that bad, but it's not optimal). If you're running a 64-bit app server (which you probably should if you're going to be running memcached), then you can load up each of the servers with plenty of memory and it will be available to memcached. There's not much in the way of CPU resources used by memcached, so if your current app server isn't very taxed, it will remain that way.
Haven't used them together, but I've used them both on separate projects.
Last I saw the documentation explicitly said that sharing with the web server was ok.
Memcached really only needs RAM, and if you take your ASP.NET cache out of the equation, how much RAM is your web server actually using? Probably not much. It won't compete much with your web server for CPU and it doesn't need disk at all. You might consider segmenting off the memcached network traffic (if you don't already) from the incoming web requests.
It worked well and was fast; I didn't have any problems with it.
Oh, invalidation was explicit on the project I used it on. Not sure what other modes there are for that.
If you want to get replication across your memcached servers then it may be worth a look at repcached. It's a patch for memcached that handles the replication part.
Worth checking out Velocity, which is a distributed cache provided by Microsoft. I cannot give you a point-by-point comparison to memcached, but Velocity is integrated with ASP.NET and will continue to get more development and integration.

What are the strongest features of Memcached?

In particular, what strengths does it have over the caching features of ASP.NET?
memcached is a distributed cache -- the whole cache can be spread across multiple boxes. So, for example, you can use memcached to store session data in a cluster environment, making this data available to any box in the cluster.
memcached can be compared to Microsoft's Velocity (http://blogs.msdn.com/velocity/).
Another nice feature is that memcached runs as a stand alone service. If you take your application down, the cached data will remain in memory as long as the service runs.
We use memcached as a caching back-end in an ASP.NET web site. We have 12 memcached boxes.
UP for memcached:
Much more scalable, just add boxes with memory to spare
The cache nodes are very ignorant: this means that they have no knowledge of the other participating nodes. This makes the management and configuration of such a system extremely easy.
All of the web servers have the same values in cache (so you never see values hopping depending on which web server serves your request)
DOWN for memcached:
Compared to an in-memory cache, it is very slow, mostly because of serialization/deserialization and network latency.
The cache nodes are very ignorant: there is, for example, no way to iterate over all of the cached items.
Memcached is the simplest and fastest tool if you need distributed caching. If you can use an in-process, in-memory cache for your application, that will always be faster. We use a cache manager that will offload certain items to memcached and keep others in the local cache.
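A rough sketch of such a two-level cache manager (a local MemoryCache in front of memcached via the Enyim.Caching client; the TTL parameters and any keys passed in are illustrative):

using System;
using System.Runtime.Caching;
using Enyim.Caching;
using Enyim.Caching.Memcached;

// Checks the in-process cache first (fastest, no serialization), then
// memcached (shared across the farm), and only then runs the loader.
class TieredCache
{
    readonly ObjectCache _local = MemoryCache.Default;
    readonly MemcachedClient _remote = new MemcachedClient();

    public T Get<T>(string key, Func<T> load, TimeSpan localTtl, TimeSpan remoteTtl)
        where T : class
    {
        var hit = _local[key] as T;
        if (hit != null)
            return hit;

        hit = _remote.Get<T>(key);
        if (hit == null)
        {
            hit = load();
            _remote.Store(StoreMode.Set, key, hit, remoteTtl);
        }

        _local.Set(key, hit, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.Add(localTtl)
        });
        return hit;
    }
}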
