.NET Core provides in-memory implementations of both interfaces (MemoryCache and MemoryDistributedCache), but let's assume we have a working IDistributedCache implementation for our application.
When does it still make sense to use IMemoryCache? In what scenarios is it helpful or preferred over caching data in a distributed cache?
I was searching for the same thing and found the answer in a GitHub issue:
They have fundamentally different semantics. MemoryCache can store live objects, the distributed cache can't, objects have to be serialized. The distributed cache can be off box and calls to it may fail or take a long time so getting and setting should be async, the MemoryCache is always in memory and fast. The distributed cache can be disconnected from the store so the interface should account for that.
https://github.com/aspnet/Caching/issues/220#issuecomment-241229013
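As a rough illustration of those different semantics (the Product type and cache key below are made up for the example, and the Microsoft.Extensions.Caching.Memory and Microsoft.Extensions.Caching.Abstractions packages are assumed), compare how the two interfaces are typically used:

    using System;
    using System.Text.Json;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Caching.Distributed;
    using Microsoft.Extensions.Caching.Memory;

    record Product(int Id, string Name);

    class CacheExamples
    {
        // IMemoryCache: synchronous, stores the live object reference directly.
        public Product? FromMemory(IMemoryCache cache, Product product)
        {
            cache.Set("product:42", product, TimeSpan.FromMinutes(5));
            return cache.TryGetValue("product:42", out Product? hit) ? hit : null;
        }

        // IDistributedCache: async (the store may be off-box, slow, or down),
        // and values must be serialized to bytes or strings first.
        public async Task<Product?> FromDistributedAsync(IDistributedCache cache, Product product)
        {
            var options = new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            };
            await cache.SetStringAsync("product:42", JsonSerializer.Serialize(product), options);

            var json = await cache.GetStringAsync("product:42");
            return json is null ? null : JsonSerializer.Deserialize<Product>(json);
        }
    }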
By design, the IMemoryCache interface is used when you need a data caching mechanism within a single process on an app server.
In short: an in-process caching mechanism.
Meanwhile, the IDistributedCache interface is designed for a distributed caching mechanism, where cached data is shared across many app servers (a web farm).
In short: a web-farm data caching scenario.
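Here is a minimal sketch of that wiring, assuming ASP.NET Core's minimal hosting model (the Redis option named in the comment is just one possibility):

    var builder = WebApplication.CreateBuilder(args);

    // In-process cache: private to this one server process.
    builder.Services.AddMemoryCache();              // backs IMemoryCache

    // IDistributedCache: in-memory stand-in for local development...
    builder.Services.AddDistributedMemoryCache();
    // ...replaced on a web farm by a shared store, e.g.
    // builder.Services.AddStackExchangeRedisCache(...) from the
    // Microsoft.Extensions.Caching.StackExchangeRedis package.

    var app = builder.Build();
    app.Run();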
Hope this helps.
Can anyone give me some intuitive examples? I have seen a bunch of notes but still can't get the "point" of a "distributed hash table" and its advantage over a simple traditional hash table. Thanks!
There are a number of advantages you get over a traditional hashtable when using a distributed cache:
A distributed cache is out of process. Data remains cached even if the application restarts; a traditional hashtable is disposed of when the application restarts.
A distributed cache can be shared among multiple applications, so data cached by one application is available to all the others (see the sketch after this list); a traditional hashtable is local to its process.
A distributed cache provides scalability: adding more servers adds more memory (RAM) for the distributed hashtable, whereas a local hashtable can only use the local process's memory.
Distributed caching solutions provide extra features such as replication for fault tolerance, expiration, eviction, and dependencies, which help you make better use of caching than a plain hashtable does.
Several solutions, such as NCache, also provide SQL-like queries over the in-memory data in the distributed cache.
You can look into Iqbal Khan's MSDN article, Distributed Caching On The Path To Scalability, for a deeper understanding of the need for a distributed cache.
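As a hedged sketch of the sharing point (the Redis address, key, and value are assumptions, and the Microsoft.Extensions.Caching.StackExchangeRedis and Microsoft.Extensions.DependencyInjection packages are required), two separate processes can read and write the same cache entry:

    using Microsoft.Extensions.Caching.Distributed;
    using Microsoft.Extensions.DependencyInjection;

    // Both app instances point at the same Redis server, so the entry is
    // shared between them and survives either process restarting; a plain
    // in-process hashtable is private to each process and lost on exit.
    var services = new ServiceCollection();
    services.AddStackExchangeRedisCache(o => o.Configuration = "localhost:6379");
    var cache = services.BuildServiceProvider().GetRequiredService<IDistributedCache>();

    // Run in instance A:
    await cache.SetStringAsync("fx:USD-EUR", "0.92");

    // Run in instance B (a different process, possibly on another machine):
    var rate = await cache.GetStringAsync("fx:USD-EUR");  // "0.92"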
I use a cache in Spring MVC. But since the server resets twice a day, the cached data is destroyed. How can the cached data be stored (for example, in a folder) so that this does not happen?
I hope you don't want to persist the data to secondary storage, since that would involve disk I/O and would again reduce your application's performance.
All you need is to store the data in a distributed cache. A distributed cache has dedicated caching servers, so even if your application server resets or restarts, the data remains cached.
There are a number of distributed caching solutions that integrate with Spring MVC, Memcached being one of them. TayzGrid (an in-memory distributed data grid) also provides Spring MVC integration. You can easily configure it as the caching provider, and your application will start using the distributed cache without any code changes.
I need a suggestion for an out-of-process cache for an ASP.NET application.
HttpRuntime.Cache is an in-process cache and can't be shared by multiple w3wp.exe processes.
I am aware that there are some open source projects on this subject, like http://www.sharedcache.com/cms/
But the problem is:
1. Serialization is required to store and retrieve the cached data, which is slow for big object instances.
2. Some types from the ASP.NET framework are not allowed to be serialized, like the RouteCollection class.
Do you have any idea for a fast out-of-process cache solution that does not require serialization?
Serialization is inevitable when using an out-of-process cache, since objects have to be transmitted from one process to another (possibly on another machine), which cannot be done without serializing them.
However, to reduce the performance cost of serialization, NCache, a .NET-based distributed caching solution, provides a Compact Serialization feature. As its name suggests, compact serialization produces an optimized encoding of objects using fewer bytes than native .NET serialization, reducing the time needed to send data to, or receive it from, the out-of-process cache.
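As a generic illustration of the idea only (this is not NCache's actual Compact Serialization API; the Order type and its fields are invented for the example), a hand-rolled binary encoding of a small object comes out considerably smaller than its JSON form, and fewer bytes means less time on the wire to an out-of-process cache:

    using System;
    using System.IO;
    using System.Text.Json;

    var order = new Order(1001, 249.99m, "Contoso");
    var json = JsonSerializer.SerializeToUtf8Bytes(order);
    Console.WriteLine($"JSON: {json.Length} bytes, compact: {ToCompactBytes(order).Length} bytes");

    // Writes field values only: no property names, no structural characters.
    static byte[] ToCompactBytes(Order o)
    {
        using var ms = new MemoryStream();
        using var w = new BinaryWriter(ms);
        w.Write(o.Id);        // 4 bytes
        w.Write(o.Total);     // 16 bytes (decimal)
        w.Write(o.Customer);  // length-prefixed UTF-8 string
        w.Flush();
        return ms.ToArray();
    }

    record Order(int Id, decimal Total, string Customer);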
I store a large structure holding my application's reference data in a variable I access through HttpContext.Application. Every once in a while this data needs to change. When I update it in place, is there a danger that incoming requests will see the data in an inconsistent state? Is there a need (and a way) to lock some or all of this structure? Finally, are there other approaches to this problem other than querying the database every time you need this (mostly static) data?
There are also other solutions available; there are many caching providers you can use.
First of all, there's the HttpRuntime.Cache (which is the same as the HttpContext cache). There's also the System.Runtime.Caching.MemoryCache in .NET 4.
You can set expiry and other rules for the data in the cache.
http://wiki.asp.net/page.aspx/655/caching-in-aspnet/
http://msdn.microsoft.com/en-us/library/6hbbsfk6.aspx
http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.aspx
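For example, here is a hedged sketch of the cache-aside pattern with System.Runtime.Caching.MemoryCache and an absolute expiry (the key, the ReferenceData type, and LoadReferenceDataFromDb are placeholders for your own code; the System.Runtime.Caching assembly or NuGet package is required):

    using System;
    using System.Runtime.Caching;

    var cache = MemoryCache.Default;

    // Look in the cache first; on a miss, load from the database and
    // cache the result for 30 minutes.
    var data = cache.Get("reference-data") as ReferenceData;
    if (data == null)
    {
        data = LoadReferenceDataFromDb();
        cache.Set("reference-data", data,
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(30) });
    }

    static ReferenceData LoadReferenceDataFromDb() => new();  // stub for illustration

    class ReferenceData { }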
More advanced caching includes distributed caches.
Usually, they reside on another server but may also reside on a different process on the same server.
Such providers include AppFabric (from Microsoft), Memcached, and others that I can't recall right now.
appfabric: http://msdn.microsoft.com/en-us/magazine/ff714581.aspx
memcached: http://memcached.org/
You will not see the application variable in an inconsistent state.
The MSDN page for HttpApplicationState says (under the Thread Safety section):
This type is thread safe.
You may be looking for HttpContext.Items instead, to store data in the request scope rather than the application scope. Check out this article for a great overview of the different context scopes in ASP.NET.
Your solution to avoid querying the database for "mostly static data" is to leverage ASP.NET's caching.
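One common way to sidestep the update-in-place question entirely is the "build then swap" pattern, sketched below under assumed names (ReferenceData and LoadReferenceDataFromDb are placeholders; this runs inside a System.Web application): construct the replacement object completely off to the side, then swap it in with a single reference assignment, so each request sees either the old snapshot or the new one, never a half-updated structure.

    // Writer (runs occasionally, e.g. when the reference data changes):
    var fresh = LoadReferenceDataFromDb();                    // build the new snapshot
    HttpContext.Current.Application["ReferenceData"] = fresh; // single reference swap

    // Readers (every request) get a consistent snapshot:
    var data = (ReferenceData)HttpContext.Current.Application["ReferenceData"];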
I've got 2 mirrored servers behind a load balancer. I'd like to know the pros and cons of sticking with the app cache versus going with something like Memcached. I'm very interested in various solutions, and especially in the types of errors or limitations I could run into by not synchronizing the caches.
To start the discussion, I'd hazard that using ASP.NET cache would be faster and simpler.
You are best advised to abstract the caching into an interface, implement the interface in a number of ways, and test the different implementations; see the sketch below.
As in many cases, it is a matter of looking at the data and how much it is shared between different users.
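A minimal sketch of that abstraction (the interface shape and names are illustrative, not from any particular library):

    using System;
    using System.Runtime.Caching;

    public interface ICache
    {
        T? Get<T>(string key);
        void Set<T>(string key, T value, TimeSpan ttl);
    }

    // In-process implementation over System.Runtime.Caching.MemoryCache;
    // a Memcached-backed ICache (wrapping a client such as EnyimMemcached)
    // can then be swapped in and benchmarked behind the same interface.
    public sealed class InProcessCache : ICache
    {
        private readonly MemoryCache _cache = MemoryCache.Default;

        public T? Get<T>(string key)
            => _cache.Get(key) is T hit ? hit : default;

        public void Set<T>(string key, T value, TimeSpan ttl)
            => _cache.Set(key, value, DateTimeOffset.Now.Add(ttl));
    }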
The ASP.NET cache would not necessarily be faster or simpler. It depends on how much you are caching and whether the web servers have the resources to handle it. In most reasonably sized apps, the answer to that is often no.
The main downside to not synchronizing between cache servers would be that in a load balanced environment, subsequent requests for the same data might go to different servers. This would just mean that the database gets hit twice some of the time. A way to mitigate this is to implement sticky sessions, where a given user is always sent to the same server and the load balancer only makes a balancing decision at the start of a user session.