From the doc
Add(CacheItem, CacheItemPolicy) : When overridden in a derived class, tries to insert a cache entry into the cache as a CacheItem instance, and adds details about how the entry should be evicted. [1]
-
Set(CacheItem, CacheItemPolicy) : When overridden in a derived class, inserts the cache entry into the cache as a CacheItem instance, specifying information about how the entry will be evicted. [2]
I see little difference in the wording ("tries to") and in the signatures (Set returns void, Add returns a Boolean), but I'm not sure which one I should use or whether there is really any difference between the two.
The main difference is that the Add() method tries to insert a cache entry without overwriting an existing entry that has the same key.
The Set() method, on the other hand, will overwrite an existing entry with the same key (if no entry with that key exists, Set() inserts the item as a new entry).
That covers the difference in functionality.
Syntactical difference:
The Add() method returns a Boolean: true if the insertion succeeded, false if the cache already contains an entry with the same key as item.
The Set() method has a void return type.
One last point: the internal implementation of Add() actually calls the corresponding overload of AddOrGetExisting():
public virtual bool Add(CacheItem item, CacheItemPolicy policy)
{
    return this.AddOrGetExisting(item, policy) == null;
}
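As an illustration only, here's a minimal sketch (assuming the System.Runtime.Caching MemoryCache implementation of ObjectCache) of how the two methods behave when the key already exists:
using System;
using System.Runtime.Caching;

class AddVsSetDemo
{
    static void Main()
    {
        ObjectCache cache = MemoryCache.Default;
        var policy = new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5) };

        // Add: inserts only if no entry with the same key exists.
        bool first = cache.Add(new CacheItem("key", "v1"), policy);   // true
        bool second = cache.Add(new CacheItem("key", "v2"), policy);  // false, "v1" is kept

        // Set: always inserts, overwriting any existing entry with the same key.
        cache.Set(new CacheItem("key", "v3"), policy);

        Console.WriteLine("{0} {1} {2}", first, second, cache.Get("key"));  // True False v3
    }
}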
Related
I used the AWS DynamoDBMapper Java class to build a repository class to support CRUD operations. In my unit test, I created an item, saved it to the DB, loaded it and then deleted it. Then I queried the DB with the primary key of the deleted item; the query returned an empty list, so everything looked correct. However, when I check the table in the AWS console the deleted item is still there, and another client on a different session can still find this item. What did I do wrong? Is there any other configuration or setup required to ensure the "hard delete" happened as expected? My API looks like this:
public void deleteObject(Object obj) {
    Object objToDelete = load(obj);
    delete(obj);
}

public Object load(Object obj) {
    return MAPPER.load(Object.class, obj.getId(),
            ConsistentReads.CONSISTENT.config());
}

private void save(Object obj) {
    MAPPER.save(obj, SaveBehavior.CLOBBER.config());
}

private void delete(Object obj) {
    MAPPER.delete(obj);
}
Any help/hint/tip is much appreciated.
DynamoDB is eventually consistent by default. Creating -> reading -> deleting in immediate succession will not always work as expected.
Eventually Consistent Reads (Default) – the eventual consistency
option maximizes your read throughput. However, an eventually
consistent read might not reflect the results of a recently completed
write. Consistency across all copies of data is usually reached within
a second. Repeating a read after a short time should return the
updated data.
Strongly Consistent Reads — in addition to eventual consistency,
Amazon DynamoDB also gives you the flexibility and control to request
a strongly consistent read if your application, or an element of your
application, requires it. A strongly consistent read returns a result
that reflects all writes that received a successful response prior to
the read.
I have a replicated cluster composed of several nodes (up to 30) on which a single Java process accesses the Coherence cache, and I use the map.invoke(key, agent) method for both creating and updating entries. Both the creation and the update are performed by setting the value in the process method.
Example (agent is an instance of a ConcreteEntryProcessor implementing the EntryProcessor interface):
map.invoke(key, agent);
which invokes the following code of the agent object:
public Object process(Entry entry) {
    if (entry.isPresent()) {
        // UPDATE: the entry already exists
        // ... some stuff which computes the new entry value ...
        entry.setValue(newValue, true);
        return newValue;
    } else {
        // CREATION: the entry does not exist yet
        // ... other stuff to determine the value ...
        entry.setValue(value, true);
        return value;
    }
}
I noticed that if the update is made by the node that created the entry I get good performance; if the update is made from a different node, performance decreases. It seems that there is some kind of ownership of the data.
Is there a way to force the execution of an agent on the local node, or to change the ownership of the data?
It all depends on the cache configuration. If you use a distributed (partitioned) cache, then there is indeed a kind of data ownership: the entry processor is invoked on the node that owns the given key.
As for your performance issues, I see two possibilities:
Performance of map.invoke(key, agent) decreases, but performance of EntryProcessor.process(entry) is stable. In that case your performance issues are probably caused by the serialization and network traffic needed to send the result of processing back to the node that called map.invoke(key, agent). If you don't need this value on that node, simply return null from your entry processor.
Performance of EntryProcessor.process(entry) decreases. In that case maybe your create/update logic needs some data from the node that called map.invoke(key, agent). It is again a serialization/network traffic issue, but without knowing the details of your particular logic it is hard to suggest a solution.
I have the following problem. I want to execute a policy that checks for the existence of a node and, after that, checks whether its value is greater than 0.
So let's say we have "xmlDoc" and I want to check whether the node "test" exists and whether the value of "test" is greater than 0.
<xmlDoc>
<test>5</test>
</xmlDoc>
When the node exists, there is no problem. When the node is missing, though, all hell breaks loose.
It is obvious why it crashes: it can't find the node "test", so it can't check its value.
My question: is it possible in the BizTalk BRE to check both the existence and the value of a node without it crashing?
There is the 'exists' predicate on the list of Conditions; however, this doesn't always work, since the value fact is also evaluated.
One way I've found to address this is by creating a Vocabulary item and adjusting the Selector to point to the element that may not exist, "test" in your case.
The XPath Field would then be the /text() node.
This way, if the Selector path returns null, the BRE knows the fact doesn't exist, so no rule that requires it will be evaluated.
If the 'does not exist' check is performed along with the value check, the BRE does not work as expected.
Solution:
The function below returns the node value, or an empty string if the node does not exist.
Use the return value of this function to perform the value check.
claim: the XML document.
path: the XPath to the node.
public static string GetXMLPathValue(TypedXmlDocument claim, string path)
{
    // Returns the node's inner XML, or an empty string if the node does not exist.
    var node = claim.Document.SelectSingleNode(path);
    return node != null ? node.InnerXml : string.Empty;
}
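If it helps, a second helper (the name IsGreaterThanZero is just illustrative, not part of the original post) can combine the existence check and the value check into a single Boolean fact for the rule:
public static bool IsGreaterThanZero(TypedXmlDocument claim, string path)
{
    // An empty string (node missing) or non-numeric content both yield false.
    string raw = GetXMLPathValue(claim, path);
    int value;
    return int.TryParse(raw, out value) && value > 0;
}
Calling it with the path to the "test" node then returns false both when the node is missing and when its value is 0 or less.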
We have a data driven ASP.NET website which has been written using the standard pattern for data caching (adapted here from MSDN):
public DataTable GetData()
{
    string key = "DataTable";
    object item = Cache[key] as DataTable;
    if (item == null)
    {
        item = GetDataFromSQL();
        Cache.Insert(key, item, null, DateTime.Now.AddSeconds(300), TimeSpan.Zero);
    }
    return (DataTable)item;
}
The trouble with this is that the call to GetDataFromSQL() is expensive and the use of the site is fairly high. So every five minutes, when the cache drops, the site becomes very 'sticky' while a lot of requests are waiting for the new data to be retrieved.
What we really want to happen is for the old data to remain current while new data is periodically reloaded in the background. (The fact that someone might therefore see data that is six minutes old isn't a big issue - the data isn't that time sensitive). This is something that I can write myself, but it would be useful to know if any alternative caching engines (I know names like Velocity, memcache) support this kind of scenario. Or am I missing some obvious trick with the standard ASP.NET data cache?
You should be able to use the CacheItemUpdateCallback delegate, which is the sixth parameter of the Cache.Insert overload that takes an update callback:
Cache.Insert(key, value, dependency, absoluteExpiration,
    slidingExpiration, onUpdateCallback);
The following should work:
Cache.Insert(key, item, null, DateTime.Now.AddSeconds(300),
Cache.NoSlidingExpiration, itemUpdateCallback);
private void itemUpdateCallback(string key, CacheItemUpdateReason reason,
    out object value, out CacheDependency dependency, out DateTime expiration,
    out TimeSpan slidingExpiration)
{
    // Do your SQL call here and store the result in 'value'.
    value = FunctionToGetYourData();
    expiration = DateTime.Now.AddSeconds(300);
    dependency = null;
    slidingExpiration = Cache.NoSlidingExpiration;
}
From MSDN:
When an object expires in the cache,
ASP.NET calls the
CacheItemUpdateCallback method with
the key for the cache item and the
reason you might want to update the
item. The remaining parameters of this
method are out parameters. You supply
the new cached item and optional
expiration and dependency values to
use when refreshing the cached item.
The update callback is not called if
the cached item is explicitly removed
by using a call to Remove().
If you want the cached item to be
removed from the cache, you must
return null in the expensiveObject
parameter. Otherwise, you return a
reference to the new cached data by
using the expensiveObject parameter.
If you do not specify expiration or
dependency values, the item will be
removed from the cache only when
memory is needed.
If the callback method throws an
exception, ASP.NET suppresses the
exception and removes the cached
value.
I haven't tested this, so you might have to tinker with it a bit, but it should give you the basic idea of what you're trying to accomplish.
I can see that there's a potential solution to this using AppFabric (the cache formerly known as Velocity) in that it allows you to lock a cached item so it can be updated. While an item is locked, ordinary (non-locking) Get requests still work as normal and return the cache's current copy of the item.
Doing it this way would also allow you to separate out your GetDataFromSQL method to a different process, say a Windows Service, that runs every five minutes, which should alleviate your 'sticky' site.
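A rough sketch of that locking pattern, assuming the AppFabric client API from Microsoft.ApplicationServer.Caching (the cache key and the GetDataFromSQL call are placeholders taken from the question, and the item is assumed to have been seeded into the cache already):
using System;
using Microsoft.ApplicationServer.Caching;

class CacheRefresher
{
    private readonly DataCache _cache = new DataCacheFactory().GetDefaultCache();

    // Run this from the separate process (e.g. a Windows Service) every five minutes.
    public void RefreshData()
    {
        DataCacheLockHandle lockHandle;

        // Lock the existing item; plain (non-locking) Get calls on the web servers
        // keep returning the current copy while the lock is held.
        _cache.GetAndLock("DataTable", TimeSpan.FromSeconds(30), out lockHandle);

        object fresh = GetDataFromSQL();   // the expensive call from the question
        _cache.PutAndUnlock("DataTable", fresh, lockHandle);
    }

    private object GetDataFromSQL() { /* expensive SQL work */ return new object(); }
}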
Or...
Rather than just caching the data for five minutes at a time regardless, why not use a SqlCacheDependency object when you put the data into the cache, so that it'll only be refreshed when the data actually changes. That way you can cache the data for longer periods, so you get better performance, and you'll always be showing the up-to-date data.
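A hedged sketch of that, assuming SQL cache notifications have already been enabled for the database and table (e.g. with aspnet_regsql), and with placeholder names for the web.config database entry and the table:
// Inside GetData(), replacing the purely time-based Insert:
var dependency = new SqlCacheDependency("MyDbEntry", "MyTable");
Cache.Insert(key, item, dependency,
             Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration);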
(BTW, a top tip for making your intention clearer when you're putting objects into the cache: the Cache has a NoSlidingExpiration (and a NoAbsoluteExpiration) constant available that's more readable than your TimeSpan.Zero.)
First, put the data you actually need in a lean class (also known as a POCO) instead of that DataTable hog.
Second, use cache-and-hash: when your time dependency expires you can spawn an async delegate to fetch new data, while your old data stays safe in a separate hash table (a Hashtable, not a Dictionary - Dictionary is not safe for multi-reader/single-writer threading).
Depending on the kind of data and the time/budget to restructure the SQL side, you could potentially fetch only things whose LastWrite is newer than your update window. You will need a two-step update (you have to copy the data from the hash-kept object into a new object - the stuff in the hash is strictly read-only for any use, or all hell will break loose).
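A minimal sketch of that approach (all names here are illustrative, not from the original post): readers always see the last complete snapshot in a Hashtable, and a single background writer swaps in a freshly built object.
using System.Collections;

public class SnapshotCache
{
    // Hashtable is documented as safe for multiple readers plus one writer,
    // which is exactly the pattern used here.
    private static readonly Hashtable _snapshots = new Hashtable();

    // Readers: always get the last complete, read-only snapshot.
    public static LeanData Current
    {
        get { return (LeanData)_snapshots["data"]; }
    }

    // Single writer: called from a timer or async delegate when the window expires.
    public static void Refresh()
    {
        LeanData fresh = LoadLeanDataFromSql();  // build a brand-new object; never mutate the old one
        _snapshots["data"] = fresh;              // reference swap, immediately visible to readers
    }

    private static LeanData LoadLeanDataFromSql()
    {
        // Placeholder for the SQL call that fills only the fields the pages need.
        return new LeanData();
    }
}

public class LeanData { /* only the fields you actually need (the POCO) */ }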
Oh and SqlCacheDependency is notorious for being unreliable and can make your system break into mad updates.
Is there a way to specify how long data is held in HttpContext.Cache?
You can specify it in the 4th parameter of Cache.Add():
public Object Add(
string key,
Object value,
CacheDependency dependencies,
DateTime absoluteExpiration, // After this DateTime, it will be removed from the cache
TimeSpan slidingExpiration,
CacheItemPriority priority,
CacheItemRemovedCallback onRemoveCallback
)
Edit:
If you access the cache via the indexer (i.e. Cache["Key"] = value), the Insert method that is called uses no expiration, so the item remains in the cache indefinitely.
Here is the code that is called when you use the indexer:
public void Insert(string key, object value)
{
this._cacheInternal.DoInsert(true, key, value, null, NoAbsoluteExpiration, NoSlidingExpiration, CacheItemPriority.Normal, null, true);
}
Use the Cache.Add method, such as:
HttpContext.Current.Cache.Add("mykey", someObj, null, Cache.NoAbsoluteExpiration, new TimeSpan(0, 15, 0), CacheItemPriority.Normal, null);
The above expires 15 minutes after the last time it was accessed. Alternatively, you can pass Cache.NoSlidingExpiration for this parameter and use a specific DateTime for the previous (absolute expiration) parameter.
Yes, there is a way to specify how long data is held in the cache, but neither of the previous two examples actually guarantees your item will stay in for the amount of time you pass in either of the two time-based parameters of the Add method (absolute or sliding expiration).
The cache is just a cache and its purpose is to speed things up. So you should not expect it to hold onto your data and always be prepared to go fetch it if it's not there.
As you probably know, you can have dependencies for the items, and they'll expire based on those even if the time has not expired. This is an easy concept, but there's another that is not so easy: the priority.
Based on the priority of your items, coupled with memory pressure, you can find yourself in a situation where you're caching data with good enough expiration times based on your calculations, but you don't get to use that data more than once, making your cache just overhead in that situation.
EDIT: Well, I forgot to specify THE actual way to really keep an item in for the amount of time you need, and that's a product of choosing the desired time-based expiration, no dependency at all, not manually removing it, and using the NotRemovable priority. This is also how in-proc session state is internally kept in the HttpRuntime cache.
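As a sketch of that combination (the 20-minute window is just an example), using the full Insert overload with no dependency, no callback, and the NotRemovable priority:
HttpRuntime.Cache.Insert(
    "mykey", someObj,
    null,                              // no dependency
    DateTime.UtcNow.AddMinutes(20),    // the absolute expiration you actually want
    Cache.NoSlidingExpiration,
    CacheItemPriority.NotRemovable,    // not evicted under memory pressure
    null);                             // no removed callback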