We're using a LINQ-to-SQL DataContext in a web application that provides read-only data to the application and is never updated (we've set ObjectTrackingEnabled = false to enforce this).
Since the data never changes (except for occasional config updates) it seems wasteful to be reloading it from SQL Server with a new DataContext for each web request.
We tried caching the DataContext in the Application object for all requests to use, but it was generating a lot of errors, and our research since shows this was a bad idea: a DataContext should be disposed of within the same unit of work, it isn't thread safe, etc.
So since the DataContext is meant to be a data access mechanism, not a data store, we need to be looking at caching the data that we get from it, not the context itself.
We'd prefer to do this with the entities and collections themselves so the code can be agnostic about whether it is dealing with cached or "fresh" data.
How can this be done safely?
First, I need to make sure that the entities and collections are fully loaded before I dispose of the DataContext. Is there a way to force a full load of everything from the database conveniently?
Second, I'm pretty sure that storing references to the entities and collections is a bad idea, because it will either
(a) cause the entities to be corrupted when the DataContext goes out of scope or
(b) prevent the DataContext from going out of scope
So should I clone the EntitySets and store them? If so, how? Or what's the go here?
This is not exactly an answer to your question, but I suggest avoiding caching on the web site side.
I would rather focus on optimizing database queries for faster and more efficient data retrieval.
Caching will:
not be scalable
need extra code for synchronization (I assume your data isn't completely static in the DB?)
be bug-prone because of that extra code
eat up your web server's memory quickly; the next thing you may end up addressing is a memory issue on your web server
not work very well when you need to load-balance your web site
[Edit]
If I needed to cache 5 MB of data, I would use the Cache object, probably with lazy loading. I would use a set of lightweight collections, like ReadOnlyCollection<T> and Collection<T>. I would probably also use ReadOnlyDictionary<TKey, TValue> for quick in-memory lookups. I would use LINQ-to-Objects to manipulate the collections.
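As a sketch of that approach (MyDataContext and MyType are placeholder names), combining Lazy<T> for one-time loading with ReadOnlyCollection<T> so callers cannot mutate the cached data:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

public static class ConfigCache
{
    // Lazy<T> is thread-safe by default, so the load runs exactly once.
    private static readonly Lazy<ReadOnlyCollection<MyType>> _items =
        new Lazy<ReadOnlyCollection<MyType>>(LoadFromDatabase);

    public static ReadOnlyCollection<MyType> Items
    {
        get { return _items.Value; }
    }

    private static ReadOnlyCollection<MyType> LoadFromDatabase()
    {
        using (var db = new MyDataContext())  // placeholder DataContext
        {
            db.ObjectTrackingEnabled = false;
            // ToList() forces full materialization before the context is disposed.
            return db.MyTypes.ToList().AsReadOnly();
        }
    }
}
```

LINQ-to-Objects queries against ConfigCache.Items then look the same as queries against the live table, which keeps the calling code agnostic about where the data came from.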
You want to cache the data retrieved from the DataContext rather than the DataContext object itself. I usually refactor commonly-retrieved data out into methods that implement silent caching, something like this (you may need to add thread-safety logic):
public class MyBusinessLayer {
    private List<MyType> _myTypeCache = null;
    public List<MyType> GetMyTypeList() {
        if (_myTypeCache == null) {
            _myTypeCache = // data retrieved from SQL server
        }
        return _myTypeCache;
    }
}
This is the simplest pattern that can be used and will cache for one web request. To cache for longer periods, store the contents in a longer-term storage, such as Application or Cache. For instance, to store in Application level data, use this kind of pattern.
public static List<MyType> GetMyTypeList() {
    if (Application["MyTypeCacheName"] == null) {
        Application["MyTypeCacheName"] = // data retrieved from SQL server
    }
    return (List<MyType>)Application["MyTypeCacheName"];
}
This would be for data that almost never changes, such as a static collection of status types to choose from in a DropDownList. For more volatile data, you can use the Cache with a timeout period, which should be selected based on how often the data changes. With the Cache, items can be invalidated manually with code if necessary, or with a dependency checker like SqlCacheDependency.
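A sketch of the Cache-based variant with an absolute timeout (the cache key, MyType, and the load call are placeholders):

```csharp
using System;
using System.Collections.Generic;
using System.Web;

public static List<MyType> GetMyTypeList()
{
    var cached = HttpRuntime.Cache["MyTypeCacheName"] as List<MyType>;
    if (cached == null)
    {
        cached = LoadMyTypesFromSql();  // placeholder for the actual query
        // Expire after 10 minutes; tune this to how often the data changes.
        HttpRuntime.Cache.Insert(
            "MyTypeCacheName",
            cached,
            null,                            // a SqlCacheDependency could go here
            DateTime.UtcNow.AddMinutes(10),  // absolute expiration
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return cached;
}
```

HttpRuntime.Cache is used rather than the Page's Cache property so the method can live in a class library.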
Hope this helps!
Related
We're writing a class we'll use in our ASP.NET site. This class will pull down some JSON using HttpClient and such, and use it to provide information to other clients.
Some of this information will change very infrequently and it doesn't make sense to query for it on each client request.
For that reason I'm thinking of making a static constructor in this new class for the slow-changing information and stashing the results in a few static member variables. That'll save us a few HTTP requests down the line, I think.
My question is, how long can I expect that information to be there before the class is recycled by ASP.Net and a new one comes into play, with the static constructor called once more? Is what I'm trying to do worth it? Are there better ways in ASP.Net to go about this?
I'm no expert on ASP.Net thread pooling or how it works and what objects get recycled and when.
Typical use of the new class (MyComponent, let's call it) would be as below, if that helps any.
//from mywebpage.aspx.cs:
var myComponent = new MyComponent();
myComponent.doStuff(); //etc etc.
//Method calls like the above may rely on some
//of the data we stored from the static constructor call.
Static fields last as long as the AppDomain. What you have in mind is a good strategy, but consider that the ASP.NET runtime may recycle the app pool, or someone may restart the web site/server.
As an extension to your idea, save the data locally (via a separate service dedicated to this or simply to the hard drive) and refresh this at specific intervals as required.
You will still use a static field in ASP.NET for storing the value, but you will acquire it from the above local service or disk... here I recommend a System.Lazy<T> with thread-safe instantiation and publication options (see the LazyThreadSafetyMode constructor documentation).
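A sketch of that combination (SlowChangingData and LoadFromServiceOrDisk are placeholder names):

```csharp
using System;
using System.Threading;

public class SlowChangingData { /* parsed JSON settings, etc. */ }

public class MyComponent
{
    // Loaded once per AppDomain; re-created only after an app-pool recycle or restart.
    private static readonly Lazy<SlowChangingData> _data =
        new Lazy<SlowChangingData>(
            LoadFromServiceOrDisk,  // hypothetical loader: local service or file on disk
            LazyThreadSafetyMode.ExecutionAndPublication);

    public void DoStuff()
    {
        var data = _data.Value;  // first caller pays the load cost; later callers get the cached value
        // ... use data ...
    }

    private static SlowChangingData LoadFromServiceOrDisk()
    {
        return new SlowChangingData();  // stand-in for the real fetch
    }
}
```

ExecutionAndPublication guarantees the loader runs on only one thread even under concurrent first access, which matters during a cold start when many requests arrive at once.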
I'm developing an app with VS2013, using EF6.02, and Web API 2. I'm using the ASP.NET SPA template, and creating a RESTful api against an entity framework data source backed by a sql server. (In development, this resides on the SQL Server local instance.)
I've got two API methods so far (one that just reads data, one that writes data), and I'm testing them by calling them in the javascript. When I only call a single method in my script, either one works perfectly. But if I call both in script (without waiting for either's callback to fire), I get bad results and different exceptions in the debugger. Some exceptions state that the save can't be completed because there are pending transactions. Another exception stated something about a conflict with other threads. And sometimes, the read operation fails with a null pointer exception when trying to read a result set.
"New transaction is not allowed because there are other threads running in the session."
This makes me question if I'm correctly getting a new DBContext per request. My code for this looks like:
static Startup()
{
context = new Data.SqlServer.AppDbContext();
...
}
and then whenever instantiating a unit of work, I access Startup.context.
I've tried to implement the unit of work pattern, and each request shares a single UOW object which has a single DBContext object.
My question: Do I have additional responsibility to ensure that web requests "play nicely" with each other? I hope that this is a problem that others have already dealt with. Perhaps the errors that I'm seeing are legitimate in the sense that if one user's data is being touched, it is temporarily in an invalid state, and if other requests come in at that exact moment, they indeed will fail (and I should code anticipating these failures). I guess that even if each request has its own DBContext, they still share the same underlying SQL data source, so perhaps that's causing issues.
I can try to put together a test case, but I get differing behavior depending on where I put breakpoints and how long I spend on them, reaffirming to me that this is timing related.
Thanks for any help or suggestions...
-Ben
Your problem is where you are setting your context. The Startup method runs when the entire application starts, so every request made will use the same context. This is per-application setup, not per-request setup. As for why you are getting the errors: Entity Framework is NOT thread-safe, and since IIS spawns many threads to handle concurrent requests, your single context is being used across multiple threads.
As for a solution, you can look into:
- Dependency Injection frameworks (such as Ninject or Unity)
- placing a using statement in your UnitOfWork classes:
using(var context = new Data.SqlServer.AppDbContext()){//do stuff}
- Or, I have seen people create a class that gets the context for the current request and stores it in HttpContext.Items (using a unique key so you can retrieve it easily from another class), so that the same context is reused within a request. Something like this:
public AppDbContext GetDbContext()
{
    var httpContext = HttpContext.Current;
    if (httpContext == null) return new AppDbContext();
    const string contextTypeKey = "AppDbContext";
    if (httpContext.Items[contextTypeKey] == null)
    {
        httpContext.Items.Add(contextTypeKey, new AppDbContext());
    }
    return httpContext.Items[contextTypeKey] as AppDbContext;
}
To use the above method, make a simple call var context = GetDbContext();
Note
We have used all of the above methods, but this note applies specifically to the third one. It seems to work well with two caveats. First, do not use it in a using statement, as that disposes the context and makes it unavailable to any other classes during the scope of the request. And secondly, ensure that you have a call in Application_EndRequest that actually disposes of it. We saw these little buggers hanging around in memory after the request ended, causing a huge spike in memory usage.
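For the Application_EndRequest cleanup mentioned above, a minimal sketch (assuming the same "AppDbContext" key used in GetDbContext) might look like:

```csharp
// In Global.asax.cs
protected void Application_EndRequest(object sender, EventArgs e)
{
    // Dispose the per-request context, if one was created for this request.
    var context = HttpContext.Current.Items["AppDbContext"] as AppDbContext;
    if (context != null)
    {
        context.Dispose();
        HttpContext.Current.Items.Remove("AppDbContext");
    }
}
```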
Our project is running on ASP.NET, we are using Entity Framework with LINQ (lambda syntax), and we need to prevent two requests from inserting into a table at the same time. I tried the ReaderWriterLock class, but it only works within one session (when multiple tabs are opened in the same browser), not across different browsers. I also read about creating the table with timestamps (not sure if that can solve our problem) or using transactions, but I don't know exactly how to use them in a web application with LINQ.
Can you tell me please how to handle this exclusive write access in ASP.NET?
The ReaderWriterLockSlim could be a good choice, but if you want every thread in the application to share the same lock, the ReaderWriterLockSlim must be a static member.
That is, your class should look like this:
public class Class1
{
private readonly static ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
}
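A sketch of how the static lock might wrap the insert (MyDbContext and MyEntity are placeholder names; the EF context itself is still created per operation):

```csharp
using System.Threading;

public class Class1
{
    private static readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public void Insert(MyEntity entity)
    {
        _lock.EnterWriteLock();
        try
        {
            using (var db = new MyDbContext())  // placeholder EF context
            {
                db.MyEntities.Add(entity);
                db.SaveChanges();
            }
        }
        finally
        {
            _lock.ExitWriteLock();  // always release, even if SaveChanges throws
        }
    }
}
```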
Important note
An application-layer lock only serializes your own application's threads, so that one thread at a time accesses the database. Other applications (a process outside the ASP.NET one, or another application in another application pool on the same IIS) can still read and write the database in parallel.
If you want a 100% effective solution, you must use database transactions. If SQL Server is the RDBMS, you can go for a transaction with Serializable isolation level:
Volatile data can be read but not modified, and no new data can be
added during the transaction.
Learn more here.
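For example, with System.Transactions a Serializable transaction might be set up like this (MyDbContext and MyEntity are placeholder names):

```csharp
using System;
using System.Transactions;

public void InsertSerializable(MyEntity newEntity)
{
    var options = new TransactionOptions
    {
        IsolationLevel = IsolationLevel.Serializable,
        Timeout = TimeSpan.FromSeconds(30)
    };

    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    using (var db = new MyDbContext())  // placeholder EF context
    {
        db.MyEntities.Add(newEntity);
        db.SaveChanges();
        scope.Complete();  // commits; an exception before this line rolls everything back
    }
}
```

Note that Serializable transactions can deadlock or time out under contention, so be prepared to retry the operation.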
Is there any way to use caching in ASP.NET other than the SQL Server second-level cache? As this is my first time working with caching, I'd like to see any approach with an example. I have found that NHibernate implements this, but we are using .netTiers as our application framework.
The Session cache seems to be the appropriate caching mechanism here. The Session cache is a fault-tolerant cache of objects.
Inserting an object
Session["Username"] = "Matt";
Reading an object
string username = (string)Session["Username"];
Removing an object
Session.Remove("Username");
I say fault-tolerant because if a value with the key you specify doesn't exist in the Session cache, it will not throw an exception; it will return null. You need to consider that when implementing your code.
One thing to note: if you are using SQL Server or State Server session state, the objects you put in the cache need to be serializable.
Memcached is also a very good way to go, as it is very flexible. It is a windows service that runs on any number of machines and your app can talk to the instances to store and retrieve from the cache. Good Article Here
I have an app that passes through a web service to access data in a database.
For performance purposes, I store all the app's parameters in cache; otherwise I would call the web service on each page request.
Some examples of these parameters are the number of search results to display, or which info should be displayed or not.
The parameters are stored in database because they are edited through a windows management application.
So here comes my question: since these parameters don't really have to expire (I currently store them for a couple of hours), would it be more efficient to store them in a static variable, like a singleton?
What do you think?
I don't think there'd be a noticeable performance difference in storing your parameters in the HttpCache versus a Singleton object. Either way, you need to load the parameters when the app starts up.
The advantage of using the HttpCache is that it is already built to handle an expiration and refresh, which I assume you would want. If you never want to refresh the parameters, then I suppose you could use a Singleton due to the simplicity.
The advantage of building your own custom class is that you can get some static typing for your parameters, since everything you fetch from HttpCache will be an object. However, it would be trivial to build your own wrapper for the HttpCache that will return a strongly typed object.
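Such a wrapper might look like this minimal sketch (TypedCache and GetOrAdd are names invented here):

```csharp
using System;
using System.Web;

public static class TypedCache
{
    // Returns a cached value of T, loading it with the supplied factory on a cache miss.
    public static T GetOrAdd<T>(string key, Func<T> factory, TimeSpan lifetime) where T : class
    {
        var value = HttpRuntime.Cache[key] as T;
        if (value == null)
        {
            value = factory();
            HttpRuntime.Cache.Insert(key, value, null,
                DateTime.UtcNow.Add(lifetime),
                System.Web.Caching.Cache.NoSlidingExpiration);
        }
        return value;
    }
}

// Usage: var parameters = TypedCache.GetOrAdd("AppParams", LoadParameters, TimeSpan.FromHours(2));
```

Callers get back a strongly typed object, and expiration and refresh are still handled by the HttpCache underneath.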