What happens if a user tries to read HttpContext.Current.Cache[key] while another user tries to remove that object with HttpContext.Current.Cache.Remove(key) at the same time?
Just think about hundreds of users reading from the cache while others try to remove cache objects at the same time. What happens, and is it thread-safe?
Is it possible to keep database-aware business objects in the cache?
The built-in ASP.NET Cache object (http://msdn.microsoft.com/en-us/library/system.web.caching.cache.aspx) is thread-safe, so insert/remove actions in multi-threaded environments are inherently safe.
Your primary requirement for putting any object in the cache is that it must be serializable. So yes, your db-aware business object can go in the cache.
If the code is unable to get the object, null is returned.
Why would you bother to cache an object if there is a chance of it being removed so frequently? It's better to set an expiration time and reload the object if it's no longer in the cache.
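For illustration, a minimal sketch of that expiration-based approach (the key name, the MyBusinessObject type, and the loader method are made up):

public MyBusinessObject GetItem()
{
    // Read the entry once into a local so a concurrent Remove cannot
    // null it out between the check and the use.
    var item = HttpContext.Current.Cache["myKey"] as MyBusinessObject;
    if (item == null)
    {
        item = LoadFromDatabase(); // hypothetical expensive load
        HttpContext.Current.Cache.Insert(
            "myKey",
            item,
            null,                          // no cache dependency
            DateTime.UtcNow.AddMinutes(5), // absolute expiration
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return item;
}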
Can you explain "DB-aware object"? Do you mean a SQL cache dependency, or just an object that holds information about a db connection?
EDIT:
Response to comment #3.
I think we are missing something here. Let me explain what I think you mean, and you can tell me if it's right.
1. UserA checks for an object in cache ("resultA") and does not find it.
2. UserA runs a query. Results are cached as "resultA" for 5 minutes.
3. UserB checks for an object in cache ("resultA") and does find it.
4. UserB uses the cached object "resultA".
If this is the case, then you don't need a SQL cache dependency.
Well, I have this code to populate the cache:
string cacheKey = GetCacheKey(filter, sort);
if (HttpContext.Current.Cache[cacheKey] == null)
{
    reader = base.ExecuteReader(SelectQuery);
    HttpContext.Current.Cache[cacheKey] =
        base.GetListByFilter(reader, filter, sort);
}
return HttpContext.Current.Cache[cacheKey] as List<CurrencyDepot>;
and when the table is updated, the cleanup code below runs:
private void CleanCache()
{
    IDictionaryEnumerator enumerator =
        HttpContext.Current.Cache.GetEnumerator();
    while (enumerator.MoveNext())
    {
        if (enumerator.Key.ToString().Contains(_TableName))
        {
            try {
                HttpContext.Current.Cache.Remove(enumerator.Key.ToString());
            } catch (Exception) {}
        }
    }
}
Does this usage cause any trouble?
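In particular, I notice the lookup reads the cache twice, so CleanCache could remove the entry between the null check and the final read, making the method return null. Would reading the entry once into a local variable, roughly like this, avoid that?

string cacheKey = GetCacheKey(filter, sort);
// Single read: a concurrent Remove can no longer turn a successful
// null check into a null return value.
var list = HttpContext.Current.Cache[cacheKey] as List<CurrencyDepot>;
if (list == null)
{
    reader = base.ExecuteReader(SelectQuery);
    list = base.GetListByFilter(reader, filter, sort);
    HttpContext.Current.Cache[cacheKey] = list;
}
return list;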
I am new to caching and am trying to understand how it works in general. Below is a code snippet from the ServiceStack website.
public object Get(CachedCustomers request)
{
    // Manually create the Uniform Resource Name "urn:customers".
    return base.RequestContext.ToOptimizedResultUsingCache(base.Cache, "urn:customers", () =>
    {
        // Resolve the service in order to get the customers.
        using (var service = this.ResolveService<CustomersService>())
            return service.Get(new Customers());
    });
}
public object Get(CachedCustomerDetails request)
{
    // Create the Uniform Resource Name "urn:customerdetails:{id}".
    var cacheKey = UrnId.Create<CustomerDetails>(request.Id);
    return base.RequestContext.ToOptimizedResultUsingCache(base.Cache, cacheKey, () =>
    {
        using (var service = this.ResolveService<CustomerDetailsService>())
        {
            return service.Get(new CustomerDetails { Id = request.Id });
        }
    });
}
My doubts are:
I've read that cached data is stored in RAM on the same/a distributed server. So how much data can it handle? In the first method, if the customer count is more than 1 million, doesn't it occupy too much memory?
In the general case, do we apply caching only to GET operations and invalidate when the data gets UPDATE'd?
Please suggest a tool to check the memory consumption of caching.
I think you can find the answers to your questions here: https://github.com/ServiceStack/ServiceStack/wiki/Caching
I've read that cached data is stored in RAM on same/distributed server...
There are several ways to 'persist' cached data. Again, see https://github.com/ServiceStack/ServiceStack/wiki/Caching. 'InMemory' is the option you seem to be questioning; the other options don't have the same impact on RAM.
In general case, do we apply caching only for GET operations and invalidate if it gets UPDATE'd.
In ServiceStack you can manually clear/invalidate the cache or use time-based expiration. If you manually clear the cache, I would recommend doing so on DELETEs and UPDATEs. You are free to choose how you manage/invalidate the cache; you just want to avoid serving stale data. As far as 'applying caching' goes, you would return cached data on GET operations, but your system can access cached data just like any other data store. Again, you just need to recognize that the cache may not have the most recent set of data.
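For example, an update service could remove the affected keys so the next GET repopulates them. A rough sketch against the snippets above (UpdateCustomer and its response are made-up DTOs, and the update itself is elided):

public object Put(UpdateCustomer request)
{
    // ... perform the actual customer update here ...

    // Invalidate both the list and the per-customer entry so the
    // next GET rebuilds them from the data store.
    base.Cache.Remove("urn:customers");
    base.Cache.Remove(UrnId.Create<CustomerDetails>(request.Id));
    return new UpdateCustomerResponse();
}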
My Flex app uses local SharedObjects. There have been incidents of the Flash cookie getting corrupted, for example due to a plugin crash. In this case SharedObject.getLocal will throw an exception (#2006).
My client wants the app to recover gracefully: if the cookie is corrupt, I should replace it with an empty one.
The problem is, if SharedObject.getLocal doesn't return an instance of SharedObject, I've nothing to call clear() on.
How can I delete or replace such a cookie?
Many thanks!
EDIT:
There isn't much code to show - I access the local cookie, and I can easily catch the exception. But how can I create a fresh shared object at the same location once I've caught the exception?
try {
    localStorage = SharedObject.getLocal("heywoodsApp");
} catch (err:Error) {
    // what do I do here?
}
The error is easily reproduced by damaging the binary content of a Flash cookie with an editor.
I'm not really sure why you'd be getting a range error, especially if you report that you can find it. My only guess for something like this is the possibility of crossing boundaries with respect to the cross-domain policy. Assuming IT has control over where the server is hosted, if the sub-domain ever changed, or even the access type (from standard to https), this can cause issues, especially if the application is ongoing (having been through several releases). I would find it rather hard to believe that you are trying to retrieve a named SO that has already been named by another application - essentially a name collision. In this regard many of us still use the reverse-dns style naming convention even on these things.
If you can catch the error, it should be relatively trivial to recover from: just declare the variable outside the scope of the try so it's accessible to the catch as well. [edit]: Since it's a static method, you may need to create a postfix to essentially start over with a new identifier.
var mySO:SharedObject;
....
catch (e:Error)
{
    mySO = SharedObject.getLocal('my.reversedns.so_name_temp_name');
    // might want to dispatch an error event or rethrow a specific exception
    // to alert the user their "preferences" were reset.
}
You need to test the size of the SharedObject and recreate it if it's 0. Also, always use flush() to write to the object. Here's a function we use to count the number of times our software is launched:
private function usageNumber():void {
    usage = SharedObject.getLocal("usage");
    if (usage.size > 0) {
        var usageStr:String = usage.data.usage;
        var usageNum:Number = parseInt(usageStr);
        usageNum = usageNum + 1;
        usageStr = usageNum.toString();
        usage.data.usage = usageStr;
        usage.flush();
        countService.send();
    } else {
        usage.data.usage = "1";
        usage.flush();
        countService.send();
    }
}
It's important to note that if the object isn't available it will automatically be recreated. That's the confusing part about SharedObjects.
All we're doing is declaring the variable globally:
public var usage:SharedObject;
And then calling it in the init() function:
usage = SharedObject.getLocal("usage");
If it's not present, then it gets created.
I've got quite a lot of code on my site that looks like this:
Item item;
if (Cache["foo"] != null)
{
    item = (Item)Cache["foo"];
}
else
{
    item = database.getItemFromDatabase();
    Cache.Insert("foo", item, null, DateTime.Now.AddDays(1), ...
}
One such instance of this has a rather expensive getItemFromDatabase method (which is the main reason it's cached). The problem I have is that with every release or restart of the application the cache is cleared, and then an army of users comes online and hits the above code, which kills our database server.
What is the typical method of dealing with these sorts of scenarios?
You could hook into the Application OnStart event in the global.asax file and call a method that loads the expensive database results on a separate thread when the application starts.
It may also be an idea to use a specialised class for accessing these properties, using a locking pattern to avoid multiple database calls when the initial value is null, as sketched below.
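A rough sketch of such a class (the names are illustrative, and GetItemFromDatabase stands in for your expensive call):

public static class ExpensiveItemCache
{
    private static readonly object _lock = new object();

    public static Item GetItem()
    {
        // Read once into a local so the entry cannot disappear between
        // the check and the return.
        var item = HttpRuntime.Cache["foo"] as Item;
        if (item == null)
        {
            lock (_lock)
            {
                // Re-check inside the lock: another request may have
                // populated the cache while this one was waiting.
                item = HttpRuntime.Cache["foo"] as Item;
                if (item == null)
                {
                    item = GetItemFromDatabase(); // hypothetical expensive call
                    HttpRuntime.Cache.Insert("foo", item, null,
                        DateTime.Now.AddDays(1),
                        System.Web.Caching.Cache.NoSlidingExpiration);
                }
            }
        }
        return item;
    }
}

This way only the first request after a restart pays the database cost; concurrent requests block briefly on the lock and then read the cached copy.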
I'm trying to work out where a lot of the memory in my app is going, and while profiling I'm noticing that any data objects loaded by NHibernate are hanging around once the request (this is ASP.NET), and therefore the session, has ended. Tracing it back, various things seem to be holding on to them, like the SingleTableEntityPersister and the StatefulPersistenceContext. I've disabled second-level caching for now, but they're still being held on to.
Any ideas?
The session is being correctly disposed:
if (session != null)
{
    if (session.Transaction != null && session.Transaction.IsActive)
    {
        session.Transaction.Rollback();
    }
    else
    {
        session.Flush();
    }
    session.Close();
    session.Dispose();
}
NHibernate tracks all changes that are made to objects; that means that if you do:
user.FirstName = "name";
it will make the appropriate update in the DB.
But to track this, NHibernate needs references to all your objects. To get untracked entities you can either use an IStatelessSession or remove the object from the session using the Evict method.
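A minimal sketch of both options (sessionFactory, User, and userId are placeholders):

// Option 1: a stateless session never tracks entities.
using (IStatelessSession stateless = sessionFactory.OpenStatelessSession())
{
    var user = stateless.Get<User>(userId); // not tracked, no dirty checking
}

// Option 2: evict an entity from a regular session once you're done with it.
var tracked = session.Get<User>(userId);
session.Evict(tracked); // the session drops its reference to the entity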
When the session is disposed it releases all the tracked entities, so check that the session is disposed properly and the transaction is closed.
I check a session object, and if it exists I call another method that uses that object indirectly. Although the second method accesses this object within a few nanoseconds, I was thinking of the situation where the object expires exactly between the two calls. Does the Session object extend its lifetime on every read access from code, preventing such a problem? If not, how do I solve it?
If you are going to ask why I don't pass the retrieved object from the first method to the second one: it's because I pass the ASP.NET Page object, which carries many other parameters inside it, to the second method. If I tried to pass each of them separately there would be many parameters, whereas now I just pass one Page object.
Don't worry, this won't happen.
If I understand your situation, it works sort of this way:
1. You access a certain page.
2. If the session is active, it immediately redirects to the second page or executes a certain method on the first page.
3. The second page/method uses the session.
You're afraid that the session will expire between execution of the first and second method/page.
Basically this is impossible, since your session timer was reset just before the first page started processing. So if the first page had an active session, your second page/method will have it as well (as long as processing finishes within 20 minutes - the default session timeout duration).
How is Session processed?
Session is processed by means of an HTTP module that runs on every request, before the page starts processing. This explains the behaviour. If you're not familiar with HTTP modules, I suggest you read a bit about the IHttpModule interface; a minimal sketch follows.
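For illustration, a skeleton module (the handler body is only a comment; the real session-state module does considerably more):

public class SessionAwareModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Runs on every request, after session state has been acquired
        // and before the page handler executes.
        context.PostAcquireRequestState += (sender, e) =>
        {
            // At this point HttpContext.Current.Session is available
            // (for handlers that use session state) and the session
            // timeout has effectively been reset for this user.
        };
    }

    public void Dispose() { }
}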
It's quite difficult to understand your question, IMHO, but I will try.
From what I understand, you're doing something like:
string helloWorld = string.Empty;
if (this.Session["myObject"] == null)
{
    // The object was removed from the session or the session expired.
    helloWorld = this.CreateNewMyObject();
}
else
{
    // Session still exists.
    helloWorld = this.Session["myObject"].ToString(); // <- What if the session expired just now?
}
or
// What if the session existed here...
if (this.Session["myObject"] == null)
{
    this.Session["myObject"] = this.CreateNewMyObject();
}
// ... but expired just there?
string helloWorld = this.Session["myObject"].ToString();
I thought that the Session object was managed by the same thread as the page request, which would mean that it is safe to check whether an object exists and then use it without a try/catch.
I was wrong:
For Cache objects you have to be aware of the fact that you’re dealing essentially with an object accessed across multiple threads
Source: ASP.NET Cache and Session State Storage
I was also wrong not to read carefully enough the answer by Robert Koritnik, which, in fact, clearly answers the question.
You are indeed warned that an object might be removed during a page request. But since the Session lifespan is tied to page requests, this means you must take into account the removal of session variables only if your request takes longer than the session timeout (see "How is Session processed?" in the answer by Robert Koritnik).
Of course, such a situation is very rare. But if in your case you are pretty sure that a page request can take longer than 20 minutes (the default session timeout), then yes, you must take into account that an object may be removed after you've checked that it exists but before you really use it.
In this situation you can obviously increase the session timeout or use a try/catch when accessing session objects. But IMHO, if a page request takes dozens of minutes, you should consider other alternatives, such as Windows services, to do the work.
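Alternatively, a defensive pattern that avoids the try/catch entirely is to read the value once into a local variable, so the existence check and the use cannot be separated by a removal (a sketch reusing the hypothetical "myObject"):

// Single read: the local keeps the value alive even if the session
// entry is removed immediately afterwards.
object myObject = this.Session["myObject"];
string helloWorld = myObject != null
    ? myObject.ToString()
    : this.CreateNewMyObject();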
I'm having difficulty understanding what the problem here is, but let me try again with reference to thread safety.
Thread safety issue
If this is a thread safety issue, you can always take a lock when creating a certain session object, so that parallel requests won't run into the problem of double-creating your object.
// obj is the shared object being created; objLock is a dedicated
// static object used only for locking.
if (obj == null)
{
    lock (objLock)
    {
        // Re-check inside the lock: another request may have created
        // the object while this one was waiting.
        if (obj == null)
        {
            obj = GenerateYourObject();
        }
    }
}
Check the lock documentation on MSDN if you've never used it before, and don't forget to check other web resources as well.