the documentation says the ListenerKeyCount is the number of key-based listeners currently registered with the StorageManager.
That said, we are seeing values for this that don't seem to add up:
values > 0 when there aren't any application created cache listeners
values significantly higher than the actual number of application created cache listeners
It seems like this metric includes all references to the cache that need to be synchronized across the cluster (L1 client caches, other L2 back caches, etc.) rather than explicit application listeners... any thoughts?
My guess is that you are using near caching, which often (behind the scenes) places listeners on the keys that it needs to listen to in order to maintain cache coherency. Does this match what you are doing?
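If it helps to see the distinction, here is a minimal Java sketch (the "orders" cache name and the keys are invented for illustration) contrasting an explicit application key listener, which ListenerKeyCount is documented to count, with the kind of key listener a near cache may register on your behalf:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MultiplexingMapListener;

public class KeyListenerSketch {
    public static void main(String[] args) {
        // Hypothetical cache name, used only for illustration.
        NamedCache orders = CacheFactory.getCache("orders");

        // An explicit, application-created key-based listener: this is the kind of
        // registration the documentation is describing for ListenerKeyCount.
        orders.addMapListener(new MultiplexingMapListener() {
            @Override
            protected void onMapEvent(MapEvent evt) {
                System.out.println("key 42 changed: " + evt);
            }
        }, 42, /* fLite */ false);

        // A near cache can do something similar behind the scenes: depending on its
        // invalidation strategy, a get() may register a key listener on the back cache
        // so the front map stays coherent, which inflates ListenerKeyCount without any
        // application-created listener.
        orders.get(7);
    }
}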
We are running a WSO2-AM 2.6 multi-tenant cluster that has two kinds of nodes:
Full profile nodes (publisher, store, KM, etc.)
Gateway worker nodes
Sharing information between the publisher and the gateways is done using EFS.
So far we have been working with Hazelcast enabled, but we would prefer to have it disabled, as it is giving us a lot of pain in production, and we understand that in WSO2 AM 2.x it is not mandatory to have it enabled.
We tested our system with the following setting:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="false">
Everything was running OK, except for one side effect that we noticed: it can take a long time (even 15 minutes) until the deactivation or re-activation of a tenant propagates to the worker nodes.
When creating a totally new organization with a newly created API, it is possible to run the API almost instantly on the worker. But if you disable the organization, the API will still run; it takes a long time until the worker reports that the tenant is no longer active.
The same goes for re-activating a tenant: it takes a lot of time until the worker stops complaining about the inactive organization and allows running the API.
Is there a configuration setting we need to change, or is this expected behavior? What is supposed to notify the workers about organization changes in the absence of Hazelcast?
There is a tenant cache [1] which contains tenant information. The default TTL of that cache (and of any cache) is 15 minutes. When you deactivate a tenant, this distributed cache is cleared using Hazelcast. That is why you observe the above when you disable Hazelcast clustering.
Typically, in a production environment, it's very unlikely that you need to activate and deactivate tenants very frequently, so I don't think a 15-minute delay is a concerning problem.
However, if it really is, you have to keep Hazelcast clustering enabled. When you say you faced a lot of pain due to Hazelcast, I believe that's because of the distributed nature of these caches. As a solution, you may enable local caches as opposed to distributed caches; Hazelcast clustering is then used only for the cache invalidation calls. That might work for you. (Disclaimer: I haven't tried this yet.)
For this, you need to set ForceLocalCache to true in carbon.xml:
<Cache>
<!-- Default cache timeout in minutes -->
<DefaultCacheTimeout>15</DefaultCacheTimeout>
<!-- Force all caches to act as local -->
<ForceLocalCache>true</ForceLocalCache>
</Cache>
[1] https://github.com/wso2/carbon-kernel/blob/4.4.x/core/org.wso2.carbon.user.core/src/main/java/org/wso2/carbon/user/core/tenant/JDBCTenantManager.java#L303
Honestly, I think you should spend more time exploring how to configure Hazelcast. Hazelcast is embedded in a lot of heavily used project stacks (JHipster, Atlassian, Apache Camel, SunGard, etc.). It's very solid for doing what you want here, but it's also highly configurable, so you probably want to set it up according to your needs. If you just disable it, you're removing all the clustered scalability that it brings. The configuration is just an XML file, and you can find all the documentation here:
https://docs.hazelcast.org/docs/3.11.2/manual/html-single/index.html#understanding-configuration
It’s easy to figure out and definitely worth your time.
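To make that a bit more concrete, here is a small sketch using the plain Hazelcast 3.x Java API (the instance name and member addresses are made up, and note that WSO2 itself wires Hazelcast through the clustering section of axis2.xml rather than code like this). It shows the kind of tuning that often removes production pain, such as replacing multicast discovery with an explicit TCP/IP member list:

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class HazelcastTcpIpExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.setInstanceName("demo-instance");   // hypothetical name

        // Prefer explicit TCP/IP member discovery over multicast; multicast is a
        // frequent source of trouble on networks that filter or drop it.
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig()
            .setEnabled(true)
            .addMember("10.0.0.11")                // hypothetical member addresses
            .addMember("10.0.0.12");

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        System.out.println("Cluster members: " + hz.getCluster().getMembers());
    }
}

The same settings can be expressed in hazelcast.xml; the documentation link above covers both forms.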
Hibernate already provides a first-level cache, so why do we have to use a second-level cache? Why can't we use only the first-level cache in Hibernate for caching?
See What are First and Second Level caching in Hibernate? for good descriptions of Hibernate caching.
Basically:
the first-level cache speeds up work within a single session/transaction
the second-level cache speeds up retrieval of objects used across many transactions.
These are two distinct use-cases with different requirements that need different kinds of logic, as the sketch below illustrates.
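A rough Java sketch of those two bullets (the Product entity and the SessionFactory wiring are assumed here, not part of the original question):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

@Entity
class Product {
    @Id
    Long id;
    String name;
}

public class CacheLevelsDemo {
    static void demo(SessionFactory sessionFactory) {
        Session first = sessionFactory.openSession();
        Product a = first.get(Product.class, 1L);   // hits the database
        Product b = first.get(Product.class, 1L);   // served from the first-level cache, no SQL
        System.out.println(a == b);                 // true - same persistent instance
        first.close();                              // the first-level cache dies with the Session

        Session second = sessionFactory.openSession();
        // Without a second-level cache this goes back to the database; with one
        // enabled, the cached entity data can be reused across sessions.
        Product c = second.get(Product.class, 1L);
        second.close();
    }
}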
The first-level cache works at the Session level. It means that a persistent object is tracked until the current Session is closed, and any changes made to this object before the Session is closed will be reflected in the database. It's enabled by default.
The second-level cache works at the SessionFactory level, so cached data for a persistent object remains available even after the Session that loaded it is closed. You have to enable it manually. There are a few providers which offer this functionality, some of them being Ehcache, SwarmCache, OSCache, etc.
The Hibernate second-level cache is an optional cache, and the first-level cache will always be consulted before any attempt is made to locate an object in the second-level cache.
It is mainly used when you have the requirement of caching an object across sessions.
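For completeness, here is roughly how an entity is opted into the second-level cache (Hibernate 5.x annotation style with an Ehcache region factory; the entity and the exact settings are illustrative, not from the question):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Opting a single entity into the second-level cache.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Product {
    @Id
    private Long id;
    private String name;
}

// And in hibernate.cfg.xml / hibernate.properties (Ehcache provider assumed):
//   hibernate.cache.use_second_level_cache = true
//   hibernate.cache.region.factory_class   = org.hibernate.cache.ehcache.EhCacheRegionFactory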
I've tried to read up on Caching in ASP.NET and still have a few questions.
When using a SqlCacheDependency, I know that you can specify which tables will be monitored, but if a change happens to any one of those tables, does it reset the entire cache? I understand that I don't want to cache tables that will have frequent changes, but we could end up with a good handful of cached tables, and even if each table only gets a few updates a day, that could turn into roughly 50 resets of the cache daily (over an 8-hour window).
I would be creating and maintaining this cache via a GAC DLL. A large number of different applications would be accessing that GAC-deployed DLL at any one time. Does each application maintain its own copy of the cache, or is it stored in one global location (or possibly per app pool)?
Is there a physical location on the server where I can see how much space the Cache is currently consuming? This would be extremely pertinent if each application maintains its own Cache as that could end up taking large amounts of disk space.
Is there some way to physically force the cache to rebuild itself? I could see my boss assuming that the cache was at fault for a particular issue, and I'd need to be able to rule that out at the most fundamental level. Not "changing a record and saying that SHOULD rebuild the cache" but rather "doing [Action X] and KNOWING that whatever was in the cache is now gone".
Thanks in advance for your answers and time.
SqlCacheDependency only monitors tables in the old-style SQL 2000 approach, which relies on triggers and polling. The SQL 2005+ method monitors changes at the row level, and uses Service Broker. At the level of the Cache object, changes will invalidate just the Cache entries associated with the given SqlCacheDependency (not the entire cache).
Each application has a separate copy of the Cache. If you have many apps sharing the same data, you might consider creating a separate "caching server," and have your apps get their data from there, using WCF -- basically add another tier to your app.
You can look at a couple of cache-related performance counters, but if your concern is disk space, then there's nothing to worry about, since the ASP.NET cache is stored entirely in RAM. In addition, if RAM gets too full, one feature of the cache is that it will let go of old/infrequently referenced objects to make room for new objects.
The easiest way to force the cache to be dropped is to simply recycle your application or AppPool (which happens once a day or so by default anyway). If you want something more targeted, you would need to write some code to forcibly remove certain items from the cache, either using Cache.Remove() or using linked dependencies.
Off the top of my head:
Only the cache entries that depend on that table will be invalidated, not the entire cache.
Each web application has its own cache.
The cache is stored in memory; see this question, How to determine total size of ASP.Net cache?, regarding cache size.
This may help: http://bit.ly/vsqNDl
What's the best way to cache web site user data in asp.net 4.0?
I have a table of user settings that tracks all kinds of user- or session-specific stuff like the state of UI elements (open/closed), preferences, whether some dialog has been dismissed, and so on. Since these don't change very often (for each user, anyway) but are looked up frequently, it seems sensible to cache them. What's the best way? These are the options I've identified...
Store them in HttpContext.Current.Session directly (e.g. Session["setting_name"])
Store them in HttpContext.Current.Cache
Use a global static dictionary, e.g. static ConcurrentDictionary<string,string> where the key is a unique userID + setting name value
Store a dictionary object for each session in Session or Cache
What's the most sensible way to do this? How does Session differ from Cache from a practical standpoint? Would it ever make sense to store a dictionary as a single session/cache object versus just adding lots of values directly? I would think lookups might be faster, but updates would be slower since I'd have to re-store the entire dictionary when it changed.
What problems or benefits might there be to using a global static cache? Seems like this would be the fastest, but I'd have to manage the size. I could just flush it periodically if it hits a certain size, or keep a cross reference queue and remove things oldest first when it gets to a certain size. Does this make any sense or is it just trying too hard?
Session may end up being stored out-of-process or in a database, which can make retrieving it expensive. You would likely be using a session database if your application is to be hosted in a server farm, as opposed to a single server. A server farm provides improved scalability and reliability, and it's a common deployment scenario. Have you thought about that?
Also, when you use Session that is not in-process, it ends up getting serialized to be sent out of process or to a database, and deserialized when retrieved, and you are effectively doing what you describe above:
... updates would be slower since I'd have to re-store the entire dictionary when it changed. ...
... since, even if you use individual session keys, the entire Session object for a user is serialized and deserialized together (all at once).
Cache, by contrast, would be in memory on a particular server in the farm, and therefore much more efficient than going out of process or to the database. However, something in the cache on one server might not be in the cache on another, so if a user's subsequent request is directed to another server in the farm, the cache on that server might not yet hold any of the user's items.
Nevertheless, I'd suggest you use Cache if you're caching for performance reasons.
p.s. Yes, you're trying too hard. Don't reinvent the wheel unless you really need to. :-)
It might be better to put your information into memcached for scalability.
I want to cache custom data in an ASP.NET application. I am putting lots of data into it, such as List<object> collections and other objects.
Is there a best practice for this? If I use static data and the w3wp.exe worker process dies or gets recycled, the cache will need to be filled again.
The database is also getting updated by other applications, so a thread would be needed to make sure the cache holds the latest data.
Update 1:
Just found this, which probably helps me:
http://www.codeproject.com/KB/web-cache/cachemanagementinaspnet.aspx?fid=229034&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=2818135#xx2818135xx
Update 2:
I am using DotNetNuke as the application ( :( ). I have enabled persistent caching and now the whole application feels sluggish.
For example, a MultiView takes about 3 seconds to swap views...
Update 3:
Strategies for Caching on the Web?
Linked to this, I am using the DotNetNuke caching method, which in turn uses the ASP.NET Cache object; it also has file-based caching.
I have a helper:
CachingProvider.Instance().Add( _
(label & "|") + key, _
newObject, _
Nothing, _
Cache.NoAbsoluteExpiration, _
Cache.NoSlidingExpiration, _
CacheItemPriority.NotRemovable, _
Nothing)
That call adds the objects to the cache. Is this correct? I want to keep them cached as long as possible. I have a thread which runs every x minutes to update the cache, but I have noticed that the cache is getting emptied; I check for an object "CacheFilled" in the cache to detect this.
As a test I've told the worker process not to recycle, etc., but it still seems to clear out the cache. I have also changed the DotNetNuke caching setting from "heavy" to "light", but I think that only affects module caching.
You are looking for either out of process caching or a distributed caching system of some sort, based upon your requirements. I recommend distributed caching, because it is very scalable and is dedicated to caching. Someone else had recommended Velocity, which we have been evaluating and thoroughly enjoying. We have written several caching providers that we can interchange while we are evaluating different distributed caching systems without having to rebuild. This will come in handy when we are load testing the various systems as part of the final evaluation.
In the past, our legacy application has been a random assortment of cached items. There have been DataTables, DataViews, Hashtables, Arrays, etc., and there was no logic to what was used at any given time. We have started to move to just caching our domain-object collections (which are POCOs). Using generic collections is nice, because we know that everything is stored the same way. It is very simple to run LINQ operations on them, and if we need a specialized "view" to be stored, the system is efficient enough that we can store a specific collection of objects.
We have also put an abstraction layer in place that brokers calls between the DAL and the caching model. Calls through this layer will check for a cache miss or cache hit. If there is a hit, it will return from the cache. If there is a miss, and the call should be cached, it will attempt to cache the data after retrieving it. The immediate benefit of this system is that in the event of a hardware or software failure on the machines dedicated to caching, we are still able to retrieve data from the database without a true outage. Of course, the site will perform more slowly in this case.
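What is described here is essentially the cache-aside pattern. A language-agnostic sketch of such a broker (written in Java purely for illustration; the interface names are made up, not the poster's actual code) might look like this:

import java.util.Optional;
import java.util.function.Supplier;

// Hypothetical interfaces standing in for the cache provider and the DAL call.
interface CacheProvider {
    Optional<Object> get(String key);
    void put(String key, Object value);
    boolean isAvailable();
}

public class CachingBroker {
    private final CacheProvider cache;

    public CachingBroker(CacheProvider cache) {
        this.cache = cache;
    }

    // Cache-aside: return on a hit; on a miss (or if the cache tier is down) fall
    // back to the data access layer, then try to populate the cache for next time.
    public Object fetch(String key, Supplier<Object> loadFromDal) {
        if (cache.isAvailable()) {
            Optional<Object> hit = cache.get(key);
            if (hit.isPresent()) {
                return hit.get();
            }
        }
        Object value = loadFromDal.get();   // still works if the caching machines fail
        if (cache.isAvailable() && value != null) {
            cache.put(key, value);
        }
        return value;
    }
}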
Another thing to consider, in regards to distributed caching systems, is that since they are out of process, you can have multiple applications use the same cache. There are some interesting possibilities there, involving sharing database between applications, real-time manipulation of data, etc.
Also have a look at the MS Enterprise Library Caching Application Block, which allows you to write custom expiration policies, custom stores, etc.
http://msdn.microsoft.com/en-us/library/cc309502.aspx
You can also check "Velocity" which is available at
http://code.msdn.microsoft.com/velocity
This will be useful if you wish to scale your application across servers...
There are lots of articles about the Cache object in ASP.NET and how to make it use SqlDependencies and other types of cache expirations. No need to write your own. And using the Cache is recommended over session or any of the other collections people used to cram lots of data into.
Cache and Session can lead to sluggish behaviour, but sometimes they're the right solutions: the rule of right tool for right job applies.
Personally, I've often created collections in pseudo-static singletons for the kind of role you describe (typically to avoid I/O overheads, e.g. storing a compiled XSL transform), but it's very important to keep in mind that that kind of cache is fragile. Design it to (a) file-watch or otherwise monitor what it's supposed to cache where appropriate, and (b) recreate/populate itself with use - it should expect to get flushed frequently.
Essentially I recommend it as a performance crutch, but don't rely on it for anything requiring real persistence.