I have created a Registry class in .NET which is a singleton. Apparently this singleton behaves as if it were kept in the Cache (the singleton object is available to each session). Is this good practice, or should I add this singleton to the Cache?
Also, do I need to watch out for concurrency problems with the GetInstance() function?
namespace Edu3.Business.Registry
{
    public class ExamDTORegistry
    {
        private static ExamDTORegistry instance;
        private Dictionary<int, ExamDTO> examDTODictionary;

        private ExamDTORegistry()
        {
            examDTODictionary = new Dictionary<int, ExamDTO>();
        }

        public static ExamDTORegistry GetInstance()
        {
            if (instance == null)
            {
                instance = new ExamDTORegistry();
            }
            return instance;
        }
    }
}
Well, your GetInstance method certainly isn't thread-safe - if two threads call it at the same time, they may well end up with two different instances. I have a page on implementing the singleton pattern, if that helps.
Does your code rely on it being a singleton? Bear in mind that if the AppDomain is reloaded, you'll get a new instance anyway.
I don't really see there being much benefit in putting the object in the cache though. Is there anything you're thinking of in particular?
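For reference, a minimal sketch of a thread-safe variant of the GetInstance method above (assuming .NET 4.0 or later, where Lazy&lt;T&gt; is available):
public class ExamDTORegistry
{
    // Lazy<T> guarantees the factory delegate runs exactly once,
    // even if several threads call GetInstance at the same time
    private static readonly Lazy<ExamDTORegistry> instance =
        new Lazy<ExamDTORegistry>(() => new ExamDTORegistry());

    private readonly Dictionary<int, ExamDTO> examDTODictionary;

    private ExamDTORegistry()
    {
        examDTODictionary = new Dictionary<int, ExamDTO>();
    }

    public static ExamDTORegistry GetInstance()
    {
        return instance.Value;
    }
}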
Despite their presence in GoF, singletons are generally considered bad practice. Is there any reason why you wish to have only one instance?
HttpContext.Cache is available to all sessions, but items in the cache can be removed from memory when they expire or if there is memory pressure.
HttpContext.Application is also available to all sessions and is a nice place to store persistent, application-wide objects.
Since you've already created a singleton and it works, I don't see why you should use one of the built-in singleton collections instead, unless you need the extra functionality that Cache gives you.
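For reference, a minimal sketch of how the two built-in stores are accessed ("ExamRegistry" is just an illustrative key name, not from the original post):
// Cache entries can be evicted, so check for null and re-create on a miss
var registry = HttpContext.Current.Cache["ExamRegistry"] as ExamDTORegistry;
if (registry == null)
{
    registry = ExamDTORegistry.GetInstance();
    HttpContext.Current.Cache.Insert("ExamRegistry", registry);
}

// Application entries stay for the lifetime of the application domain
HttpContext.Current.Application["ExamRegistry"] = ExamDTORegistry.GetInstance();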
Not sure what you mean by cache... if you want this cached (as in, kept in memory so that you don't have to fetch it again from some data store), then yes, you can put it in the Cache and it will be global for all users. Session means per user, so I don't think that is what you want.
I think the original question spoke to which was preferred. If you have data that remains static or essentially immutable, then HTTP caching or the singleton pattern makes a lot of sense. If the singleton is loaded on application start-up, there is no threading issue at all: once the singleton is in place you will always receive the same instance you requested. The problem with a lot of the actual implementations I see is that people use both without fully thinking it out. Why should you expire immutable configuration data? I had one client that cached their data and still created ADO DB objects etc. every time they checked whether it was in the cache. Effectively both of these solutions will work for you, but to gain any positive effect, make sure you actually use the cache/singleton. In either case, if your data is not available, both should be refreshed at that moment.
I would make it like:
private static readonly ExamDTORegistry instance = new ExamDTORegistry();
Then you don't need to check for null, and it's thread safe.
Some background:
Working with:
.NET 4.5 (thinking of migrating to 4.5.1 if it's painless)
Web Forms
Entity Framework 5, Lazy Loading enabled
Context Per Request
IIS 8
Windows 2012 Datacenter
Point of concern: Memory Usage
On our current project, probably our first bigger one, we often read fairly large chunks of data coming from CSV imports that are likely to stay the same for very long periods of time.
Unless someone explicitly re-imports the CSV data, it is guaranteed to stay the same. This happens in more than one place in our project, and a similar approach is used for some regular documents that are often read by the users. We've decided to cache this data in the HttpRuntime cache.
It goes like this; we pull about 15,000 records consisting mostly of strings.
//myObject and related methods are placeholders
public static List<myObject> GetMyCachedObjects()
{
    if (CacheManager.Exists(KeyConstants.keyConstantForMyObject))
    {
        return CacheManager.Get(KeyConstants.keyConstantForMyObject) as List<myObject>;
    }
    else
    {
        List<myObject> myObjectList = framework.objectProvider.GetMyObjects();
        CacheManager.Add(KeyConstants.keyConstantForMyObject, myObjectList, true, 5000);
        return myObjectList;
    }
}
The data retrieving for the above method is very simple and looks like this:
public List<myObject> GetMyObjects()
{
    return context.myObjectsTable.AsNoTracking().ToList();
}
There are probably things to be said about the code structure, but that's not my concern at the moment.
I began profiling our project as soon as I saw high memory usage and found many parts where our code could be optimized. I had never faced 300 simultaneous users before, and our internal tests, done by ourselves, were not enough to surface the memory issues. I've found and fixed numerous memory leaks, but I'd like to understand some Entity Framework related unknowns.
Given the above example, and using ANTS Profiler, I've noticed that 'myObject' and other similar objects reference many System.Data.Entity.DynamicProxies.myObject instances; additionally, there are lots of EntityKeys holding on to integers. They don't take up much space individually, but their count is relatively high.
For instance, 124 instances of 'myObject' reference nearly 300 System.Data.Entity.DynamicProxies instances.
Usually it looks like this, whatever the object is:
Some cache entry, then the object I've cached (many of which, I now notice, were detached from the dbContext prior to caching), then the dynamic proxies, then the ObjectContext. I've no idea how to untie them.
My progress:
I did some research and found out that I might be caching something Entity Framework related together with those objects. I pulled them with AsNoTracking, but those DynamicProxies are still in memory and probably hold on to other things as well.
Important: I've observed some live instances of ObjectContext (74), slowly growing, but no instances of my unitOfWork which holds the dbContext. Those seem to be disposed properly on a per-request basis.
I know how to detach, attach, or modify the state of an entry in my dbContext, which is wrapped in a unitOfWork, and I often do it. However, that doesn't seem to be enough, or I am asking for the impossible.
Questions:
Basically, what am I doing wrong with my caching approach when it comes to Entity Framework?
Is the growing number of ObjectContexts in memory a concern? I know the cache will eventually expire, but I'm worried about open connections or anything else this context might be holding.
Should I be detaching everything from the context before inserting it into the cache?
If yes, what is the best approach? Especially with a List I cannot think of anything other than iterating over the collection and calling Detach one by one.
Bonus question: About 40% of the consumed memory is free (unallocated); I've no idea why .NET reserves so much free memory in advance.
You can try using a non-entity class with specific properties, populated via a Select projection.
public class MyObject2 {
    public int ID { get; set; }
    public string Name { get; set; }
}

public List<MyObject2> GetObjects() {
    return framework.provider.GetObjects().Select(
        x => new MyObject2 {
            ID = x.ID,
            Name = x.Name
        }).ToList();
}
Since you will be storing plain C# objects, you will not have to worry about dynamic proxies, and you will not have to call Detach on anything at all. You can also store only a few properties.
Even if you disable tracking, you will still see dynamic proxies, because EF uses a dynamic class derived from your class that stores extra metadata for the entity (relations, e.g. the names of foreign keys to other entities).
Steps to reduce memory here:
Re-new the context often.
Don't try to delete content from the context or set it to detached.
It hangs around like a fart in a phone box.
e.g. context = new MyContext();
But if possible you should be using
using (var context = new MyContext()) { .... }
// short-lived contexts are best practice
With your context you can set configuration options:
this.Configuration.LazyLoadingEnabled = false;
this.Configuration.ProxyCreationEnabled = false; //<<<<<<<<<<< THIS one
this.Configuration.AutoDetectChangesEnabled = false;
You can disable proxies if you still feel they are hogging memory.
But that may be unnecessary if you apply using to the context in the first place.
I would redesign the solution a bit:
You are storing all data as a single entry in cache
I would move this and have an entry per cache item.
You are using HTTPRuntime cache
I would use Appfabric Caching, also MS, also free.
Not sure where you are calling that code from
I would call it on application start, so all data is in memory when the user needs it (see the sketch after this list)
You are using Entity SQL
For this I would use an Entity Data Reader http://msdn.microsoft.com/en-us/library/system.data.entityclient.entitydatareader(v=vs.110).aspx
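A minimal sketch of the "load at application start" idea; MyObjectCache is a hypothetical class holding the static GetMyCachedObjects method shown in the question:
// Global.asax.cs
protected void Application_Start(object sender, EventArgs e)
{
    // force the initial load so the first visitor doesn't pay the cost
    MyObjectCache.GetMyCachedObjects();
}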
See also:
http://msdn.microsoft.com/en-us/data/hh949853.aspx
I need to cache a very small amount of data for a maximum of one hour for an ASP.NET web application (one instance). Obviously this needs to be thread-safe so I can access the cache from within my requests.
I want to do this "in process", and not use anything external.
What would be the easiest way to implement this?
You can use the Cache object ASP.NET provides you with.
You can create a property that returns the cached object if it exists, else retrieves it from the db.
private myClass myProp {
    get {
        if (Cache["Key1"] == null)
        {
            // LoadFromDb() is a placeholder for however you retrieve the value
            Cache.Add("Key1", LoadFromDb(), null, DateTime.Now.AddMinutes(60),
                      Cache.NoSlidingExpiration, CacheItemPriority.High, null);
        }
        return (myClass)Cache["Key1"];
    }
}
Use static variables. You could write a static cache class including your update logic (maximum of one hour) and store the retrieved data in a static member.
The class will persist in the app pool until it is recycled. This could be too often or too rarely for your use cases. But the caching ability should be fair enough.
For the thread-safety issues you could provide getter methods in this class and make use of the lock statement.
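A minimal sketch of that idea; the class and member names are hypothetical, List&lt;string&gt; is just an illustrative type, and the one-hour expiry comes from the question:
public static class OneHourCache
{
    private static readonly object syncRoot = new object();
    private static List<string> cachedData;
    private static DateTime loadedAtUtc;

    public static List<string> Get()
    {
        lock (syncRoot)
        {
            // reload when empty or older than the one-hour maximum
            if (cachedData == null || DateTime.UtcNow - loadedAtUtc > TimeSpan.FromHours(1))
            {
                cachedData = LoadFromStore();
                loadedAtUtc = DateTime.UtcNow;
            }
            return cachedData;
        }
    }

    private static List<string> LoadFromStore()
    {
        // placeholder: fetch the small data set from your database or service
        return new List<string>();
    }
}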
Problem Background
I have been thinking of ways to optimize the out of state storage of sessions within SQL server and a few I ran across are:
Disable session state on pages that do not require the session. Also, use read-only on pages that are not writing to the session.
In ASP.NET 4.0 use gzip compression option.
Try to keep the amount of data stored in the session to a minimum.
etc.
Right now, I have a single object (a class called SessionObject) stored in the session. The good news is that it is completely serializable.
Optimizing using protobuf-net
An additional way I thought I might optimize the storage of sessions would be to use protocol buffers (protobuf-net) serialization/deserialization instead of the standard BinaryFormatter. I understand I could have all of my objects implement ISerializable, but I'd like not to create DTOs or clutter up my Domain layer with serialize/deserialize logic.
Any suggestions using protobuf-net with session state SQL server mode would be great!
If the existing session-state code uses BinaryFormatter, then you can cheat by getting protobuf-net to act as an internal proxy for BinaryFormatter, by implementing ISerializable on your root object only:
[ProtoContract]
class SessionObject : ISerializable {
    public SessionObject() { }

    protected SessionObject(SerializationInfo info, StreamingContext context) {
        Serializer.Merge(info, this);
    }

    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context) {
        Serializer.Serialize(info, this);
    }

    [ProtoMember(1)]
    public string Foo { get; set; }
    ...
}
Notes:
only the root object needs to do this; any encapsulated objects will be handled automatically by protobuf-net
it will still add a little type metadata for the outermost object, but not much
you will need to decorate the members (and encapsulated types) accordingly (this is best done explicitly per member; there is an implicit "figure it out yourself" mode, but that is brittle if you add new members)
this will break existing state; changing the serialization mechanism is fundamentally a breaking change
If you want to ditch the type metadata from the root object, you would have to implement your own state provider (I think there is an example on MSDN);
advantage: smaller output
advantage: no need to implement ISerializable on the root object
disadvantage: you need to maintain your own state provider ;p
(all the other points raised above still apply)
Note also that the effectiveness of protobuf-net here will depend a bit on what the data is that you are storing. It should be smaller, but if you have a lot of overwhelmingly large strings it won't be much smaller, as protobuf still uses UTF-8 for strings.
If you do have lots of strings, you might consider additionally using gzip - I wrote a state provider for my last employer that tried gzip, and stored whichever (original or gzip) was the smallest - obviously with a few checks, for example:
don't gzip if it is smaller than [some value]
short-circuit the gzip compression early if the gzip ever exceeds the original
The above can be used in combination with protobuf-net quite happily - and if you are writing a state-provider anyway you can drop the ISerializable etc for maximum performance.
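A rough sketch of the "keep whichever is smaller" idea; the threshold value is arbitrary and not from the original answer, and a real provider would also need to record which form was stored:
// requires System.IO and System.IO.Compression
public static byte[] CompressIfSmaller(byte[] raw)
{
    const int minSizeToTry = 256;   // arbitrary "don't bother" threshold
    if (raw.Length < minSizeToTry)
        return raw;

    using (var ms = new MemoryStream())
    {
        using (var gzip = new GZipStream(ms, CompressionMode.Compress, true)) // leave ms open
        {
            gzip.Write(raw, 0, raw.Length);
        }
        // keep the compressed form only if it actually saved space
        return ms.Length < raw.Length ? ms.ToArray() : raw;
    }
}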
A final option, if you really want, would be for me to add a "compression mode" property, [ProtoContract(..., CompressionMode = ...)], which:
would only apply for the ISerializable usage (for technical reasons, it doesn't make sense to change the primary layout, but this scenario would be fine)
automatically applies gzip during serialization/deserialization of the above [perhaps with the same checks I mention above]
would mean you don't need to add your own state provider
However, this is something I'd only really want to apply for "v2" (I'm being pretty brutal about bugfix only in v1, so that I can keep things sane).
Let me know if that would be of interest.
I'm learning EF now and have a question regarding the ObjectContext:
Should I create instance of ObjectContext for every query (function) when I access the database?
Or it's better to create it once (singleton) and reuse it?
Before EF I was using the Enterprise Library Data Access Block and created a DataAccess instance for each data access function...
I think the most common way is to use it per request. Create it at the beginning, do what you need (most of the time these are operations that require a common ObjectContext), dispose of it at the end. Most DI frameworks support this scenario, but you can also use an HttpModule to create the context and place it in HttpContext.Current.Items. Here is a simple example:
public class MyEntitiesHttpModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += ApplicationBeginRequest;
        application.EndRequest += ApplicationEndRequest;
    }

    public void Dispose() { }

    private void ApplicationEndRequest(object sender, EventArgs e)
    {
        if (HttpContext.Current.Items["MyEntities"] != null)
            ((MyEntities)HttpContext.Current.Items["MyEntities"]).Dispose();
    }

    private static void ApplicationBeginRequest(Object source, EventArgs e)
    {
        var context = new MyEntities();
        HttpContext.Current.Items["MyEntities"] = context;
    }
}
Definitely for every query. It's a lightweight object so there's not much cost incurred creating one each time you need it.
Besides, the longer you keep an ObjectContext alive, the more cached objects it will contain as you run queries against it. This may cause memory problems. Therefore, having the ObjectContext as a singleton is a particularly bad idea. As your application is being used you load more and more entities in the singleton ObjectContext until finally you have the entire database in memory (unless you detach entities when you no longer need them).
And then there's a maintainability issue. One day you try to track down a bug but can't figure out where the data was loaded that caused it.
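In practice "for every query" usually means wrapping each unit of work in a using block; a minimal sketch, where MyEntities, Customer, Customers and CustomerID are placeholder names rather than anything from the question:
public Customer GetCustomer(int id)
{
    using (var context = new MyEntities())
    {
        // the context is disposed as soon as the query completes,
        // so it never accumulates tracked entities
        return context.Customers.FirstOrDefault(c => c.CustomerID == id);
    }
}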
Don't use a singleton... everyone using your app will share it, and all sorts of crazy things will happen when that object context is tracking entities.
I would add it as a private member
Like Luke says this question has been asked numerous times on SO.
For a web application, per request cycle seems to work best. Singleton is definitely a bad idea.
Per request works well because one web page has a User, maybe some Projects belonging to that user, maybe some Messages for that user. You want the same ObjectContext so you can go User.Messages to get them, maybe mark some messages as read, maybe add a Project and then either commit or abandon the whole object graph at the completion of the page cycle.
Late post here by 7 months. I am currently tackling this question in my app and I'm leaning towards the @LukLed solution of creating a singleton ObjectContext for the duration of my HttpRequest. For my architecture, I have several controls that go into building a page, and these controls all have their own data concerns that pull read-only data from the EF layer. It seems wasteful for them to each create and use their own ObjectContexts. Besides, there are a few situations where one control may pull data into the context that could be reused by other controls. For instance, in my masterpage, my header at the top of the page has user information that can be reused by the other controls on the page.
My only worry is that I may pull entities into the context that will affect the queries of other controls. I haven't seen that yet but don't know if I'm asking for trouble. I guess we'll see!
public class DBModel {
    private const string _PREFIX = "ObjectContext";

    // DBModel.GetInstance<EntityObject>();
    public static ObjectContext GetInstance<T>() {
        var key = CreateKey<T>();
        HttpContext.Current.Items[key] = HttpContext.Current.Items[key] ?? Activator.CreateInstance<T>();
        return HttpContext.Current.Items[key] as ObjectContext;
    }

    private static string CreateKey<T>() {
        return string.Format("{0}_{1}", _PREFIX, typeof(T).Name);
    }
}
EDIT: Duplicate of Should Entity Framework Context be Put into Using Statement?
I've been tossing around this idea for some time wondering what bad could happen by not properly disposing my object context and allowing it to die with the GC. Normally, I would shun this, but there is a valid reason to do it.
We are using partial classes. In those partial classes we expose properties that access FK objects. For example, let's say I have a Customer class with a CustomerType FK object. In the class, I would expose a CustomerTypeName property that does this:
public string CustomerTypeName {
    get {
        if (CustomerType == null) {
            CustomerTypeReference.Load();
        }
        return CustomerType.CustomerTypeName;
    }
}
This works out very handy if the original query did not do a .Include("CustomerType").
However, if I dispose the context, the above property no longer works. So... I guess this leads to a couple of questions:
1) If I never explicitly dispose of the context, what negatives will I see, if any?
2) Is there any other way to accomplish lazy loading in the above scenario and still dispose of the context?
In my answer to 'LINQ to SQL - where does your DataContext live?' we have the page as owner of the DataContext for the life of the page, and it is the page that properly disposes of the DataContext when the page is itself disposed of.
As #Chu points out it's a little dirty, but if you're going to use what is arguably a data transfer object directly in your UI, then your UI should control the lifetime of the DataContext.
Well, ObjectContexts that are left around indefinitely are fine, so long as you don't keep loading / adding lots of new objects.
Every object that is loaded or added will be tracked by the ObjectContext until it is disposed, so if you never dispose and you keep tracking more objects, it will just get bigger and bigger.
One option you could look at doing is using some utility method to either access some well known context or create a temporary context.
The key to this is using the EntityReference.EntityKey and making sure both entities are detached.
i.e.
this.CustomerType = Utility.GetDetachedObjectByKey<CustomerType>(
    this.CustomerTypeReference.EntityKey);
The basic implementation of GetDetachedObjectByKey is something like this:
public static T GetDetachedObjectByKey<T>(EntityKey key)
    where T : EntityObject
{
    using (MyContext ctx = new MyContext())
    {
        T t = ctx.GetObjectByKey(key) as T;
        ctx.Detach(t);
        return t;
    }
}
This will only work if the original object target is detached too. You could experiment with where the Context used by this method comes from.
Hope this helps
Alex
Why not keep the context around for the length of your screen?