We are refactoring our site to use an external cache, and the first step was writing a custom OutputCacheProvider. We started with a simple provider that just wraps MemoryCache and ran into problems with the way we manage dependencies.
We have a custom OutputCacheAttribute that adds an extra key dependency so that a set of pages can be invalidated when certain entities change. To keep this feature I see a few options:
Manually remove the CachedVary that ASP.NET stores in the cache, assuming that its key is "a2" + query. This seems to work, but I'm not sure how reliable it is.
Add cache keys that contain an array of the pages that have to be evicted from the cache when the key is removed, or use the external cache's key-dependency feature if it has one. This should be enough to emulate the key dependency we use now, just in a more complex way.
Forget about it: set a short cache duration and let the pages expire on their own without worrying too much.
Do our own page caching and forget about the ASP.NET output cache; not very appealing.
I'm sure there are other ways. Any tips, experiences or recommendations?
I'm answering my own question with the solution we adopted, just for the record.
In our OutputCacheAttribute we add an empty cache object with a key that depends on the requested URL and some parameters. This will be used to invalidate a page externally.
Then, we also add another object with a key that depends on the current request and contains the previous cacheKey.
Finally, a static ValidationCallback is registered. The callback gets the value stored for the current request, which is the dependency key. If that value is not null, it then gets the value of the dependency itself; if the dependency is null, it has been evicted (an entity changed), so we set validationStatus to HttpValidationStatus.Invalid.
Some code to illustrate:
public override void OnResultExecuting(ResultExecutingContext filterContext)
{
    base.OnResultExecuting(filterContext);

    // Build dependencies
    BuildDependencies(paramsToDepend, filterContext.Controller, this.Duration);
}
private void BuildDependencies(IEnumerable<string> paramsToDepend, ControllerBase controller, int duration)
{
    string[] valuesToInclude = GetValuesToInclude(paramsToInclude, controller.ControllerContext);

    // Build the cache key for the current request
    var cacheKey = CacheKeyProvider.GetCacheKeyFor(controller, paramsToDepend);
    var cache = controller.ControllerContext.HttpContext.Cache;
    var cacheValue = cache.Get(cacheKey);
    if (cacheValue == null)
    {
        // The key is created if it doesn't exist yet
        Provider.Add(cacheKey, new object(), Context.CurrentDateTime.AddSeconds(duration).ToUniversalTime());
    }

    // Add the dependency
    Provider.Set(CachePrefix + controller.ControllerContext.HttpContext.Request.Path, cacheKey, Context.CurrentDateTime.AddSeconds(duration).ToUniversalTime());

    // Register the validation callback
    controller.ControllerContext.HttpContext.Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(ValidationCallback), null);
}
public static void ValidationCallback(HttpContext context, object data, ref HttpValidationStatus validationStatus)
{
    var provider = OutputCache.Providers[OutputCache.DefaultProviderName];
    var dependency = provider.Get(CachePrefix + context.Request.RawUrl) as string;
    if (dependency == null) return;

    var depValue = provider.Get(dependency);

    // If it's null, someone has invalidated the cache entry (an entity was modified)
    if (depValue == null)
    {
        validationStatus = HttpValidationStatus.Invalid;
    }
}
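A sketch of the invalidation side implied above (it isn't shown in the original code): when an entity changes, evicting the dependency key is enough, because ValidationCallback will then see null and mark every page that depends on it as invalid. The method name is hypothetical; the provider lookup mirrors the callback above.

public static void InvalidatePages(string dependencyCacheKey)
{
    var provider = OutputCache.Providers[OutputCache.DefaultProviderName];

    // Dependent pages fail validation on their next request and are regenerated
    provider.Remove(dependencyCacheKey);
}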
I've read multiple questions similar to this one but none are exactly my situation.
Using LINQ to SQL, I insert a new record and submit changes. Then, in the same web request, I pull that same record back, update it, and submit changes again. The update is not saved. The DataContext is the same across both of these operations.
Insert:
var transaction = _factory.CreateTransaction(siteId, userId, questionId, type, amount, transactionId, processor);

using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
{
    transaction.Amount = amount;
    _transactionRepository.Add(transaction);
    unitOfWork.Commit();
}
Select and Update:
ITransaction transaction = _transactionRepository.FindById(transactionId);
if (transaction == null) throw new Exception(Constants.ErrorCannotFindTransactionWithId.FormatWith(transactionId));

using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
{
    transaction.CrmId = crmId;
    transaction.UpdatedAt = SystemTime.Now();
    unitOfWork.Commit();
}
Here's the unit of work code:
public virtual void Commit()
{
    if (_isDisposed)
    {
        throw new ObjectDisposedException(GetType().Name);
    }

    _database.SubmitChanges();
}
I even went into the designer.cs file and put a breakpoint in the property that is being set but not updated. I stepped through and it entered and executed the setter code, so the entity should be getting "notified" of the change to this field:
public string CrmId
{
    get
    {
        return this._CrmId;
    }
    set
    {
        if ((this._CrmId != value))
        {
            this.OnCrmIdChanging(value);
            this.SendPropertyChanging();
            this._CrmId = value;
            this.SendPropertyChanged("CrmId");
            this.OnCrmIdChanged();
        }
    }
}
Other useful information:
ObjectTracking is enabled
No errors or exceptions when the second SubmitChanges is called (the update just silently fails)
SQL Profiler shows the insert and the select, but not the subsequent update statement; LINQ to SQL simply isn't generating it
There is only one database and one connection string, so the update is not going to another database
The table has a primary key.
I don't know what would cause LINQ to SQL to not issue the update command and not raise some kind of error. Perhaps the problem stems from using the same DataContext instance? I even refreshed the object from the database using the DataContext.Refresh method before it is pulled for the update, but that didn't help.
I have found what is likely to be the root cause. I am using Unity. The initial insert is performed in a service class with a PerWebRequest lifetime, while the select and update happen in a class with a Singleton lifetime. So my assumption that the DataContext instances are the same was incorrect.
So, in the class with the Singleton lifetime, I now get a fresh instance of the database repository, perform the update there, and the problem goes away.
I still don't know why the original code didn't work, and my approach is arguably more a workaround than a solution, but it solved my problem and will hopefully be useful to others.
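For what it's worth, a minimal sketch of the workaround, assuming Unity and reusing the repository/unit-of-work abstractions from the question (the container field, the service class name, the registration and the parameter types are guesses, not the original code):

// assumes using Microsoft.Practices.Unity;
public class CrmUpdateService // registered with a Singleton lifetime
{
    private readonly IUnityContainer _container;

    public CrmUpdateService(IUnityContainer container)
    {
        _container = container;
    }

    public void AttachCrmId(int transactionId, string crmId)
    {
        // Resolve a fresh repository (and therefore a fresh DataContext) for this call,
        // instead of reusing one captured when the singleton was constructed.
        var transactionRepository = _container.Resolve<ITransactionRepository>();

        ITransaction transaction = transactionRepository.FindById(transactionId);
        using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
        {
            transaction.CrmId = crmId;
            transaction.UpdatedAt = SystemTime.Now();
            unitOfWork.Commit();
        }
    }
}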
I'm still relatively new to .NET and ASP.NET MVC, and I have had a few occasions where it would be nice to store information retrieved from the DB temporarily so it can be used on a subsequent server request from the client. I have begun using the .NET Session to store this information, keyed off of a timestamp, and then retrieve the information using the timestamp when I hit the server again.
So a basic use case:
User clicks 'Query' button to gather information from the system.
In JS, generate a timestamp of the current time and pass it to the server with the request
On server, gather information from DB
On server, use unique timestamp from client as a key into the Session to store the response object.
Return response object to client
User clicks 'Generate Report' button (will format query results into Excel doc)
Pass the same timestamp from #2 down to the server again and use it to look up the query results stored in #4.
Generate report w/o additional DB hit.
This is the scheme I have begun to use whenever I need the Session as temporary storage. But generating a timestamp in JS isn't necessarily secure, and the whole thing feels a little... unstructured. Is there an existing design pattern I can use for this, or a more streamlined/secure approach? Any help would be appreciated.
Thanks.
You may take a look at TempData, which stores its data in Session. When you pull something out of TempData it is removed once the action finishes executing.
So if you put something into TempData in one action, it will live there across subsequent actions until it is read from TempData again.
You can also call TempData.Peek("key"), which keeps the value in TempData until you read it with TempData["key"] or call TempData.Remove("key").
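A minimal sketch of that flow (the controller, action names and the GetData/GenerateFileFromData helpers are placeholders for illustration):

public class ReportController : Controller
{
    public ActionResult RunQuery()
    {
        object reportData = GetData();           // hypothetical data access
        TempData["ReportData"] = reportData;     // survives until it is read again
        return View(reportData);
    }

    public FileContentResult GenerateReport()
    {
        // Peek reads without marking the entry for removal; reading TempData["ReportData"]
        // instead would remove it once this action completes.
        var reportData = TempData.Peek("ReportData");
        if (reportData == null)
            return null;                         // nothing stored for this user

        return GenerateFileFromData(reportData); // hypothetical Excel generation
    }
}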
Ok, I'm not sure I understand you correctly as the JS timestamp step seems superfluous.
But this is what I would do.
public static string SessionReportKey = "Reports";
public static string ReportIDString = "ReportID";

public Dictionary<string, object> SessionReportData
{
    get
    {
        return Session[SessionReportKey] == null ?
            new Dictionary<string, object>() :
            (Dictionary<string, object>)Session[SessionReportKey];
    }
    set
    {
        Session[SessionReportKey] = value;
    }
}

public ActionResult PreviewReport()
{
    // retrieve your data
    object reportData = GetData();

    // get an identifier
    string myGUID = Guid.NewGuid().ToString();

    // add the report to the session-backed dictionary and store it back
    var reports = SessionReportData;
    reports.Add(myGUID, reportData);
    SessionReportData = reports;

    // in your view, make a hyperlink to the PrintReport action with a
    // query string of [?ReportID=<guidvalue>]
    ViewData[ReportIDString] = myGUID;
    return View(reportData);
}

public FileContentResult PrintReport()
{
    string reportId = Request.QueryString[ReportIDString];
    if (string.IsNullOrEmpty(reportId) || !SessionReportData.ContainsKey(reportId))
    {
        // error: no report in session
        return null;
    }
    return GenerateFileFromData(SessionReportData[reportId]);
}
@In
Identity identity;
Boolean newValue = identity.hasPermission(target, action);
Any call to the above method also triggers a "select role from Role r" query issued by the underlying Seam engine. How do I set the query cache for this call as a query hint (e.g. the org.hibernate.cacheable flag) so that it doesn't get executed again?
Note: the Role information is never going to change, hence I view this as an unnecessary SQL call.
I am not into Hibernate, but as this question is still unanswered: we extended Seam's standard Identity class for several reasons. You might want to extend it as well to help you cache the results.
As this cache is session scoped, it has the possible benefit of being reloaded when the user logs off and on again - but whether that suits you depends on your requirements.
Best regards,
Alexander.
/**
 * Extended Identity to implement e.g. caching
 */
@Name("org.jboss.seam.security.identity")
@Scope(SESSION)
@Install(precedence = Install.APPLICATION)
@BypassInterceptors
@Startup
public class MyIdentity extends Identity {

    // session-scoped cache of permission checks
    private final Map<String, Boolean> permissionCache = new ConcurrentHashMap<String, Boolean>();

    @Override
    public boolean hasPermission(Object name, String action) {
        String key = name + ":" + action;
        Boolean cached = permissionCache.get(key);
        if (cached == null) {
            // not cached yet: ask the standard implementation and remember the result
            cached = super.hasPermission(name, action);
            permissionCache.put(key, cached);
        }
        return cached;
    }
}
I'm trying to create a caching class to cache some objects from my pages. The purpose is to use the ASP.NET framework's caching system, but abstracted into a separate class.
It seems that the cached values don't persist.
Any ideas what I'm doing wrong here? Is it possible at all to cache objects outside the Page itself?
EDIT: added the code:
Insert to cache
Cache c = new Cache();
c.Insert(userid.ToString(), DateTime.Now.AddSeconds(length), null, DateTime.Now.AddSeconds(length), Cache.NoSlidingExpiration,CacheItemPriority.High,null);
Get from the cache
DateTime expDeath = (DateTime)c.Get(userid.ToString());
I get null from c.Get, even though I know the key was inserted.
The code is in a different class than the page itself (the page uses it)
Thanks.
There are numerous ways you can store objects in ASP.NET
Page-level items -> Properties/fields on the page, which live for the page lifecycle within the request.
ViewState -> Stores items in serialised Base64 format, persisted across requests via PostBack. Controls (including the page itself - it is a control) can preserve their previous state by loading it from ViewState. This is what gives ASP.NET pages the appearance of being stateful.
HttpContext.Items -> A dictionary of items stored for the lifetime of the request.
Session -> Provides per-user storage across multiple requests. The session state mechanism supports several different modes:
InProc - Items are stored by the current process, which means that should the process terminate/recycle, the session data is lost.
SqlServer - Items are serialised and stored in a SQL Server database. Items must be serialisable.
StateServer - Items are serialised and stored in a separate process, the StateServer process. As with SqlServer, items must be serialisable.
Runtime -> Items stored in the runtime cache (HttpRuntime.Cache) remain for the lifetime of the current application. Should the application get recycled or stopped, the items are lost.
What type of data are you trying to store, and how do you believe it must be persisted?
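A rough sketch of what three of those options look like side by side (the key name, the LoadReport call and the expiration are made up for illustration):

public void StoreExamples(HttpContext context)
{
    var report = LoadReport();                     // hypothetical expensive call

    // Request-scoped: gone as soon as this request finishes
    context.Items["CurrentReport"] = report;

    // Per-user, across requests: lives for the user's session (mode set in web.config)
    context.Session["CurrentReport"] = report;

    // Application-wide: shared by all users until it expires or the app recycles
    context.Cache.Insert("CurrentReport", report, null,
        DateTime.UtcNow.AddMinutes(10), System.Web.Caching.Cache.NoSlidingExpiration);
}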
Right at the beginning of last year I wrote a blog post on a caching framework I had been writing, which allows me to do stuff like:
// Get the user.
public IUser GetUser(string username)
{
    // Check the cache to find the appropriate user; if the user hasn't been loaded,
    // call GetUserInternal to load the user and store it in the cache for future requests.
    return Cache<IUser>.Fetch(username, GetUserInternal);
}

// Get the actual implementation of the user.
private IUser GetUserInternal(string username)
{
    return new User(username);
}
That was nearly a year ago, and it has evolved a bit since then. You can read my blog post about it; let me know if that's of any use.
Your cache reference needs to be the same reference, accessible from everywhere in your code.
If you are newing up the Cache class every time, you are doing it wrong.
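To make that concrete with the code from the question, here is a sketch that goes through HttpRuntime.Cache (the single cache instance ASP.NET already maintains) instead of constructing a new Cache object; the wrapper class and the int userId parameter are assumptions:

public static class UserExpirationCache
{
    public static void Store(int userId, int lengthInSeconds)
    {
        HttpRuntime.Cache.Insert(
            userId.ToString(),
            DateTime.Now.AddSeconds(lengthInSeconds),   // the value being cached
            null,                                       // no dependencies
            DateTime.Now.AddSeconds(lengthInSeconds),   // absolute expiration
            System.Web.Caching.Cache.NoSlidingExpiration,
            System.Web.Caching.CacheItemPriority.High,
            null);
    }

    public static DateTime? Get(int userId)
    {
        // Returns null when the entry has expired or was never inserted
        return (DateTime?)HttpRuntime.Cache.Get(userId.ToString());
    }
}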
I have done almost the same thing, but with different code (and it works for me):
(CacheKeys is an enum)
using System;
using System.Configuration;
using System.Web;
using System.IO;

public static void SetCacheValue<T>(CacheKeys key, T value)
{
    RemoveCacheItem(key);
    HttpRuntime.Cache.Insert(key.ToString(), value, null,
                             DateTime.UtcNow.AddYears(1),
                             System.Web.Caching.Cache.NoSlidingExpiration);
}

public static void SetCacheValue<T>(CacheKeys key, T value, DateTime expiration)
{
    HttpRuntime.Cache.Insert(key.ToString(), value, null,
                             expiration,
                             System.Web.Caching.Cache.NoSlidingExpiration);
}

public static void SetCacheValue<T>(CacheKeys key, T value, TimeSpan slidingExpiration)
{
    HttpRuntime.Cache.Insert(key.ToString(), value, null,
                             System.Web.Caching.Cache.NoAbsoluteExpiration,
                             slidingExpiration);
}

public static T GetCacheValue<T>(CacheKeys key)
{
    try
    {
        T value = (T)HttpRuntime.Cache.Get(key.ToString());
        if (value == null)
            return default(T);
        else
            return value;
    }
    catch (NullReferenceException)
    {
        return default(T);
    }
}
I have an ASP.NET application with a lot of dynamic content. The content is the same for all users belonging to a particular client. To reduce the number of database hits required per request, I decided to cache client-level data. I created a static class ("ClientCache") to hold the data.
The most-often used method of the class is by far "GetClientData", which brings back a ClientData object containing all stored data for a particular client. ClientData is loaded lazily, though: if the requested client data is already cached, the caller gets the cached data; otherwise, the data is fetched, added to the cache and then returned to the caller.
Eventually I started getting intermittent crashes in the GetClientData method on the line where the ClientData object is added to the cache. Here's the method body:
public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
        _clients = new Dictionary<Guid, ClientData>();

    ClientData client;

    if (_clients.ContainsKey(fk_client))
    {
        client = _clients[fk_client];
    }
    else
    {
        client = new ClientData(fk_client);
        _clients.Add(fk_client, client);
    }

    return client;
}
The exception text is always something like "An object with the same key already exists."
Of course, I tried to write the code so that it just wasn't possible to add a client to the cache if it already existed.
At this point, I'm suspecting that I've got a race condition and the method is being executed twice concurrently, which could explain how the code would crash. What I'm confused about, though, is how the method could be executed twice concurrently at all. As far as I know, any ASP.NET application only ever fields one request at a time (that's why we can use HttpContext.Current).
So, is this bug likely a race condition that will require putting locks in critical sections? Or am I missing a more obvious bug?
If an ASP.NET application only handled one request at a time, all ASP.NET sites would be in serious trouble. ASP.NET can process dozens of requests concurrently (typically 25 per CPU core).
You should use the ASP.NET Cache instead of your own dictionary to store your objects. Operations on the cache are thread-safe.
Note that you need to be sure that read operations on the objects you store in the cache are themselves thread-safe; unfortunately, most .NET classes simply state that instance members aren't thread-safe without pointing out any that may be.
Edit:
A comment on this answer states:
Only atomic operations on the cache are thread safe. If you do something like check if a key exists and then add it, that is NOT thread safe and can cause the item to be overwritten.
It's worth pointing out that if we feel we need to make such an operation atomic, then the cache is probably not the right place for the resource.
I have quite a bit of code that does exactly what the comment describes. However, the resource being stored will be the same in both cases, so if an existing item occasionally gets overwritten, the only cost is that one thread unnecessarily generated a resource. The cost of this rare event is much less than the cost of trying to make the operation atomic on every access.
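To make that concrete, a minimal sketch of what GetClientData could look like on top of the ASP.NET cache, accepting the benign race described above (the key prefix and the expiration are arbitrary choices, not from the original code):

public static ClientData GetClientData(Guid fk_client)
{
    string key = "ClientData:" + fk_client;

    var client = (ClientData)HttpRuntime.Cache.Get(key);
    if (client == null)
    {
        // Two threads may both build a ClientData here and one insert will overwrite
        // the other; the only cost is one unnecessarily generated object.
        client = new ClientData(fk_client);
        HttpRuntime.Cache.Insert(key, client, null,
            DateTime.UtcNow.AddMinutes(30),
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return client;
}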
This is very easy to fix:
private static readonly object _clientsLock = new Object();

public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
    {
        lock (_clientsLock)
        {
            // Check again because another thread could have created a new
            // dictionary in-between the lock and this check
            if (_clients == null)
                _clients = new Dictionary<Guid, ClientData>();
        }
    }

    if (_clients.ContainsKey(fk_client))
    {
        // Don't need a lock here UNLESS there are also deletes. If there are
        // deletes, then a lock like the one below (in the else) is necessary
        return _clients[fk_client];
    }
    else
    {
        ClientData client = new ClientData(fk_client);
        lock (_clientsLock)
        {
            // Again, check because another thread could have added this
            // ClientData between the last ContainsKey check and this add
            if (!_clients.ContainsKey(fk_client))
                _clients.Add(fk_client, client);
        }
        return client;
    }
}
Keep in mind that whenever you mess with static classes, you have the potential for thread synchronization problems. If there is a static class-level collection of some kind (in this case _clients, the Dictionary object), there are DEFINITELY going to be thread synchronization issues to deal with.
Your code really does assume only one thread is in the function at a time.
This just simply won't be true in ASP.NET
If you insist on doing it this way, use a static semaphore to lock the area around this class.
You need thread safety while minimizing locking.
See double-checked locking (http://en.wikipedia.org/wiki/Double-checked_locking).
Written simply with TryGetValue:
public static object lockClientsSingleton = new object();

public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
    {
        lock (lockClientsSingleton)
        {
            if (_clients == null)
            {
                _clients = new Dictionary<Guid, ClientData>();
            }
        }
    }

    ClientData client;
    if (!_clients.TryGetValue(fk_client, out client))
    {
        lock (_clients)
        {
            if (!_clients.TryGetValue(fk_client, out client))
            {
                client = new ClientData(fk_client);
                _clients.Add(fk_client, client);
            }
        }
    }
    return client;
}