Efficiently Determine if EF 4 POCO Already in ObjectSet

I'm trying EF 4 with POCOs on a small project for the first time. In my Repository implementation, I want to provide an AddOrUpdate method that adds a passed-in POCO to the repository if it's new, and otherwise does nothing (since the updated POCO will be saved when SaveChanges is called).
My first thought was to do this:
public void AddOrUpdate(Poco p)
{
    if (!Ctx.Pocos.Contains<Poco>(p))
    {
        Ctx.Pocos.AddObject(p);
    }
}
However, that results in a NotSupportedException, as documented under "Referencing Non-Scalar Variables Not Supported" (bonus question: why would that not be supported?).
Just removing the Contains check and always calling AddObject results in an InvalidOperationException:

An object with the same key already exists in the ObjectStateManager. The existing object is in the Unchanged state. An object can only be added to the ObjectStateManager again if it is in the added state.
So clearly EF 4 knows somewhere that this is a duplicate based on the key.
What's a clean, efficient way for the Repository to update Pocos for either a new or pre-existing object when AddOrUpdate is called so that the subsequent call to SaveChanges() will do the right thing?
I did consider carrying an isNew flag on the object itself, but I'm trying to take persistence ignorance as far as practical.

Take a look at the ObjectStateManager.TryGetObjectStateEntry method; it is well described in this StackOverflow question.
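For illustration, a minimal sketch of AddOrUpdate built on that method, assuming Ctx is the ObjectContext and "Pocos" is the entity set name from the question. (As to the bonus question: LINQ to Entities has to translate the query into SQL, and it can only parameterize scalar values, not a whole in-memory entity instance, hence the NotSupportedException.)

public void AddOrUpdate(Poco p)
{
    // Build the EntityKey that EF uses to track this entity.
    EntityKey key = Ctx.CreateEntityKey("Pocos", p);

    ObjectStateEntry entry;
    if (!Ctx.ObjectStateManager.TryGetObjectStateEntry(key, out entry))
    {
        // Not tracked yet, so it's new.
        Ctx.Pocos.AddObject(p);
    }
    // Otherwise it is already tracked, and SaveChanges will
    // persist any modifications on its own.
}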

Saving an entire one-to-many structure of transient objects in one query

In Short
I seem to have landed on a MAJOR anti-pattern of saving objects WAY too many times. I've read through the limited Objectify docs and can't seem to find the right pattern to use.
Details
I have multiple objects I want to store. They are all transient (they don't exist in the database yet) and they have a one-to-many relationship. I don't want to sit and call ofy().save() on every last object in my hierarchy.
In the following example, a Player has a List of Cards.
My Model:
@Entity
public class Player {
    @Id private Long id = null; // will be generated
    private List<Ref<Card>> cards = new ArrayList<Ref<Card>>();
    // getters and setters here
}

@Entity
public class Card {
    @Id private Long id = null; // will be generated
    // lots of other fields and getters and setters here
}
My Operation:
I need to create a new player and new card, with the player having a reference to the card in his List "cards."
IDEAL SOLUTION:
I would like to just create the player and card java objects, set their relationships, and pass them to Objectify to be saved. Like this:
Player player = new Player();
Card card = new Card();
player.getCards().add(Ref.create(card));
ofy().save().entity(player).now();
That will fail. The third line attempts to create a new Ref for the Card, which cannot be done because the Card doesn't have an id yet; one is only assigned once it has been persisted. It seems I must never associate an object with another until one of them has already been saved.
Current Crappy Solution
So, my solution must be to save the Card first, and then relate it to the Player, then save the player.
Player player = new Player();
Card card = new Card();
ofy().save().entity(card).now();
player.getCards().add(Ref.create(card));
ofy().save().entity(player).now();
This is insane. It seems reasonable at first, but my app is dealing with many more relationships than just this, and with this pattern my algorithm will be a spiderweb of checking for transient objects inside collections before saving the entity I'm actually concerned with.
There MUST be some way to tell Objectify to just SAVE all child/related entities along with the entity I've requested, and furthermore generate the Ids necessary instead of throwing an Exception at me.
Furthermore, I'll also need this sort of "recursive save" solution even when none of my objects are transient (i.e. they all have ids already). I can't waste my time iterating through collections, and then all the collections within those collections, saving them all. I'm going to need some way of telling Objectify to just SAVE THIS WHOLE HIERARCHY OF OBJECTS I just passed you.
I've been reading around this @Load annotation and I feel like maybe there's something in there I'm missing... I don't know. Need help. Documentation is slim.
UPDATED SOLUTION
For posterity -
Using the allocateId() method decouples ID generation from the database entirely, and you get a VERY clean pattern, particularly if you do as I did:
All database @Entity classes get a private constructor and a public static factory method for creating transient objects. This static factory method (createTransient()) always allocates a new ID. All client code can then use this method to acquire new transient objects, or the obvious Objectify load to acquire existing persisted instances. Simple. Done. Lovely.
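For illustration, a sketch of that factory pattern on the Card entity (ObjectifyService.factory().allocateId() is the real Objectify call; createTransient() is just the name this pattern uses):

import com.googlecode.objectify.ObjectifyService;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
public class Card {
    @Id private Long id;

    private Card() {} // no-arg constructor, used by Objectify when loading

    public static Card createTransient() {
        Card card = new Card();
        // Reserve a datastore id up front so the instance is never id-less.
        card.id = ObjectifyService.factory().allocateId(Card.class).getId();
        return card;
    }
}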
I recommend two things:
Allocate ids manually when you construct your objects using ObjectifyFactory.allocateId(). Do not use the "save with null autogenerates" feature. As you've noticed, it's a PITA to deal with entity objects that have null ids, so don't allow them to exist.
Use deferred saves: ofy().defer().save().entity(blah); You can save almost any number of things this way and they'll only get saved once, on commit (or on closing of the Objectify session). Deferring a save of the same entity multiple times produces only a single save.
This pattern of leaving ids null and filling them in on save is a holdover from the JPA days. It didn't work very well with JPA either; there were plenty of frustrating edge cases when dealing with entities missing ids (especially when you wanted to put them in maps or sets). The best solution is to simply guarantee that no entity is ever missing an id in the first place.
Note that you'll want to allocate the id in a custom constructor, not the no-args constructor that Objectify uses to build your entity on load. Allocating an id is cheap but still a call to the GAE service layer and you don't want to do this on every load.
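Putting the two recommendations together, the operation from the question flattens out to something like this (a sketch, assuming createTransient() factories like the one described in the update above):

Player player = Player.createTransient(); // ids already allocated...
Card card = Card.createTransient();
player.getCards().add(Ref.create(card)); // ...so Ref.create works before any save

// Both saves are queued and written once, when the session ends.
ofy().defer().save().entity(card);
ofy().defer().save().entity(player);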

Hashtable keys() vs keySet(): which is better

Just out of curiosity, I am asking which is the better method to use: Hashtable.keys() or Hashtable.keySet()? Either one would have been sufficient, so why have they provided two methods with different return types? Is there any performance drawback or benefit of one over the other?
keySet is there because
it returns a Set view of the keys contained in this Hashtable. The Set is backed by the Hashtable, so changes to the Hashtable are reflected in the Set, and vice-versa. The Set supports element removal (which removes the corresponding entry from the Hashtable), but not element addition.
And keys just returns an enumeration of the keys in this hashtable; changes made after obtaining the enumeration are not reflected in it.
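A small demonstration of the difference, using only standard java.util types:

import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Set;

public class KeysVsKeySet {
    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<>();
        table.put("a", 1);
        table.put("b", 2);

        // keySet() is a live view backed by the table...
        Set<String> keys = table.keySet();
        keys.remove("a");                           // ...so this removes the entry itself
        System.out.println(table.containsKey("a")); // prints: false
        // keys.add("c") would throw UnsupportedOperationException.

        // keys() is the legacy pre-Collections API: a one-shot cursor
        // over the keys, with no view semantics at all.
        Enumeration<String> e = table.keys();
        while (e.hasMoreElements()) {
            System.out.println(e.nextElement());    // prints: b
        }
    }
}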
Besides the functional difference mentioned by Rahul, Hashtable itself is an old artifact of earlier Java versions, retrofitted to implement the Map interface.
So keySet is a later construct required by the Map interface.
Additionally, if this is new code that you are writing, you should read the API details for this data structure at http://docs.oracle.com/javase/7/docs/api/java/util/Hashtable.html and consider following its guidance to use HashMap or the other later Collections classes instead.

Mocking DbEntityEntry.CurrentValues.SetValues() in EF4 CTP5 Code First

I am trying to use the DbEntityEntry.CurrentValues.SetValues() method to facilitate updating an existing entity with values from a non-entity DTO (see: http://blogs.msdn.com/b/adonet/archive/2011/01/30/using-dbcontext-in-ef-feature-ctp5-part-5-working-with-property-values.aspx)
I'm having trouble removing the dependency on DbEntityEntry though (for mocking, testing). Here is an example of what I would like to do:
var entity = dbSet.Find(dto.Id);
var entry = context.Entry(entity);
entry.CurrentValues.SetValues(dto);
context.SaveChanges();
I've also considered:
EntityType entity = new EntityType() { Id = dto.Id };
context.Set<EntityType>().Attach(entity);
var entry = context.Entry(entity);
entry.CurrentValues.SetValues(dto);
context.SaveChanges();
From what I've been able to find, both seem reasonable when working with an actual DbContext, but when I abstract the context to an IMyContext, I lose the ability to get a DbEntityEntry for an entity, and thus lose the SetValues option.
Is there any way to work around this issue, or do I need to bite the bullet and manually set modified properties on the entity from the DTO (potentially a lot of boilerplate for entities with many properties)?
(I'm fairly new to EF and this is my first StackOverflow question, so please be gentle)
If you have never used it before, this would be a great use case for AutoMapper (also available via NuGet). I am unaware of how to solve your IMyContext issue and would also resort to mapping the properties, but instead of doing so manually, I would let AutoMapper do the heavy lifting.
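For illustration, a minimal sketch of that approach using AutoMapper's classic static API (EntityDto is a placeholder name for the DTO type; dbSet and context are the names from the question):

// One-time configuration, e.g. at application startup.
Mapper.CreateMap<EntityDto, EntityType>();

// Map onto the existing tracked instance rather than creating a new one;
// the change tracker then sees the modified properties on SaveChanges.
var entity = dbSet.Find(dto.Id);
Mapper.Map(dto, entity);
context.SaveChanges();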

Keep a history of values for specific properties of EF entities

I have a requirement to keep a history of values of some fields in an EF4 ASP.NET MVC3 application. This just needs to be a log file of sorts, log the user, datetime, tablename, fieldname, oldvalue, newvalue.
Although it would be pretty easy to code this in various save routines, I'm wondering if I can get global coverage by wiring it into some sort of data annotation, so that I can perhaps declare
[KeepHistory()]
public string Surname { get; set; }
in my partial class (I'm using POCO but generated from a T4 template).
So Questions
1) Is this a bad idea? I'm basically proposing to side-effect changes to an entity, not directly referenced by the code, as a result of an annotation.
2) Can it be done ? Will I have access to the right context to tie up with my unit of work so that changes get saved or dropped with the context save?
3) Is there a better way?
4) If you suggest I do this, any pointers would be appreciated - everything I've read is for validation, which may not be the best starting point.
Actually, validation might be a good starting point. An ordinary attribute does not know which property or class it was assigned to, but a validation attribute gets called by the validation framework with all the necessary information. If you derive from the System.ComponentModel.DataAnnotations.ValidationAttribute class, you can override the IsValid(object, ValidationContext) method, which gives you the actual value of the property, the name of the property, and the container.
This might take a lot of work, since you need to get to the currently logged-in user, etc. I'm guessing that the .NET implementation provides some sort of caching of the specific attributes on an entity type, which would be a pain to implement by yourself.
Another way would be to use the ObjectStateManager exposed by your EF ObjectContext, which can provide you with the ObjectStateEntry objects for all entities in a given state. See the ObjectStateManager.GetObjectStateEntries(EntityState) method for more information about how to call it (and when). The ObjectStateEntry actually contains a record of the original and current values, which can be compared to find any changes made within the lifetime of the current ObjectContext.
You might consider using the ObjectStateManager to inject your custom logging behavior, and have this behavior decide, based on property attributes, which changes should be logged.
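For illustration, a rough sketch of that idea as a SaveChanges override on the EF4 ObjectContext (the [KeepHistory] filtering and the actual log write are stand-ins to fill in):

public override int SaveChanges(SaveOptions options)
{
    var modifiedEntries = ObjectStateManager
        .GetObjectStateEntries(EntityState.Modified);

    foreach (ObjectStateEntry entry in modifiedEntries)
    {
        foreach (string property in entry.GetModifiedProperties())
        {
            object oldValue = entry.OriginalValues[property];
            object newValue = entry.CurrentValues[property];
            // Here: check the entity property for [KeepHistory] via
            // reflection, then log user, datetime, table, field,
            // oldValue and newValue.
        }
    }
    return base.SaveChanges(options);
}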

Entity Framework: What negatives can I bump into by not disposing of my object context?

EDIT: Duplicate of Should Entity Framework Context be Put into Using Statement?
I've been tossing around this idea for some time, wondering what bad things could happen by not properly disposing of my object context and allowing it to die with the GC. Normally I would shun this, but there is a valid reason to do it.
We are using partial classes. In those partial classes we expose properties that access FK objects. For example, let's say I have a Customer class with a CustomerType FK object. In the class, I would expose a CustomerTypeName property that does this:
public string CustomerTypeName {
    get {
        if (CustomerType == null) {
            CustomerTypeReference.Load();
        }
        return CustomerType.CustomerTypeName;
    }
}
This works out to be very handy if the original query did not do a .Include("CustomerType").
However, if I dispose the context, the above property no longer works. So... I guess this leads to a couple of questions:
1) If I never explicitly dispose of the context, what negatives will I see, if any?
2) Is there any other way to accomplish lazy loading in the above scenario and still dispose of the context?
In my answer to 'LINQ to SQL - where does your DataContext live?' we have the page as owner of the DataContext for the life of the page, and it is the page that properly disposes of the DataContext when the page is itself disposed of.
As @Chu points out, it's a little dirty, but if you're going to use what is arguably a data transfer object directly in your UI, then your UI should control the lifetime of the DataContext.
Well, ObjectContexts that are left around indefinitely are fine, so long as you don't keep loading or adding lots of new objects.
Every object that is loaded or added will be tracked by the ObjectContext until it is disposed, so if you never dispose and you keep tracking more objects, it will just get bigger and bigger.
One option you could look at doing is using some utility method to either access some well known context or create a temporary context.
The key to this is using the EntityReference.EntityKey and making sure both entities are detached.
i.e.
this.CustomerType = Utility.GetDetachedObjectByKey<CustomerType>(
    this.CustomerTypeReference.EntityKey);
The basic implementation of GetDetachedObjectByKey is something like this:
public static T GetDetachedObjectByKey<T>(EntityKey key)
    where T : EntityObject
{
    using (MyContext ctx = new MyContext())
    {
        T t = ctx.GetObjectByKey(key) as T;
        ctx.Detach(t);
        return t;
    }
}
This will only work if the original target object is detached too. You could experiment with where the Context used by this method comes from.
Hope this helps
Alex
Why not keep the context around for the length of your screen?
