Often I need to look up certain rows from the database; a simple example would be creating a new user and referencing 'Mr' from a table defining the possible salutations.
Peppering the code with static references to a database value seems really bad, e.g.:
$em->getRepository('Salutations')->findOneBy(array('name' => 'Mr'));
So instead I create a constant in the Salutations entity, i.e.:
$em->getRepository('Salutations')->findOneBy(array('name' => Salutations::MR));
This at least limits the impact of some database changes to a single file, but it still does not seem ideal. Is there a better way to statically reference database values?
Constants are great for this if you want to save typing and avoid typos.
When I need to control the allowed values, e.g. to disallow "Misses", I use a ManyToOne association to another entity, e.g. Salutation.
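For illustration, a minimal sketch combining the two ideas (class, constant, and method names are assumptions based on the question):
// Sketch only: names are assumptions based on the question.
class Salutations
{
    const MR  = 'Mr';
    const MRS = 'Mrs';

    // ...id and name columns mapped as usual...
}

// One place to look the row up, referenced by constant instead of a bare string:
$salutation = $em->getRepository('Salutations')
                 ->findOneBy(array('name' => Salutations::MR));

// User holds a ManyToOne to Salutations, so only rows that actually exist
// in the table can be assigned:
$user->setSalutation($salutation);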
I have a use case where DynamoDB is running in production and I need to add a new attribute, IDUpdatedAt, which will also serve as the sort key for one of the GSIs.
I tried this out in a test environment: my application now adds new rows with IDUpdatedAt and that works fine, but what about the existing rows? How do I add the value for those?
New rows will no longer be added without IDUpdatedAt, but how will searches be affected for the older rows?
PS: IDUpdatedAt is used as a filter in the application, i.e. a user can search for a specific ID and get the results sorted by date. That's why IDUpdatedAt is also part of the GSI (as its sort key).
Please help.
You've got the right idea by adding the field to new items. After all, DynamoDB does not enforce a particular schema outside of the primary key.
This also happens to be a very useful feature, especially when defining a GSI on that attribute; if the attribute exists on the item, it ends up in the index! For example, imagine modeling an email inbox in DDB where each item represents an email. You could include an attribute 'is_read' and define a GSI using that attribute.
If the 'is_read' attribute exists on the item, it's in the index. Otherwise, it's not. A cool way to use GSIs to implement filtering.
Pretty neat stuff!
However, there is no way to retroactively update all items with a new attribute other than manually updating each item (or in batches). The equivalent in SQL databases is defining a new column. Unfortunately, an analogous operation in DDB does not exist.
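If it helps, here's a rough backfill sketch using the AWS SDK for PHP (v3 assumed; the table name, key attribute, and the way IDUpdatedAt is derived are all assumptions — adjust to your schema). It scans for items still missing the attribute and writes the value in place:
use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => 'latest']);

$params = [
    'TableName'        => 'MyTable',                           // assumed table name
    'FilterExpression' => 'attribute_not_exists(IDUpdatedAt)', // only items not backfilled yet
];

do {
    $result = $client->scan($params);
    foreach ($result['Items'] as $item) {
        $client->updateItem([
            'TableName'                 => 'MyTable',
            'Key'                       => ['Id' => $item['Id']], // use your table's full primary key here
            'UpdateExpression'          => 'SET IDUpdatedAt = :v',
            'ExpressionAttributeValues' => [
                // Assumption: IDUpdatedAt is "<Id>#<UpdatedAt>"; build it however your app does.
                ':v' => ['S' => $item['Id']['S'] . '#' . $item['UpdatedAt']['S']],
            ],
        ]);
    }
    if (!isset($result['LastEvaluatedKey'])) {
        break;
    }
    $params['ExclusiveStartKey'] = $result['LastEvaluatedKey'];
} while (true);
Until the backfill has run, the older items simply won't appear in that GSI at all (the sparse-index behaviour described above), so queries against the index will only return items that already have IDUpdatedAt.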
I can't believe this hasn't come up for other people, but I'm unable to find a solution.
Let's say I have two entity types, A and B with a one-to-many relationship. A has a collection of Bs.
The form for A has a CollectionType for the Bs, with a custom entry_type for B, allow_add and allow_delete set to true. When the form is created/populated/rendered, the Bs' fields are identified by their index in the collection. When the form is posted back, the fields are mapped back onto the B entities according to the index again.
What if the database decides to return the Bs in a different order in the meantime? Then the values get swapped around on the Bs! I can't have that, as other entities reference the Bs and they would suddenly change their meaning!
Even if the database doesn't change the order, the same issue appears when a B is deleted: the fields get shifted through the Bs and a different one is deleted! (OK, I'm not 100% certain this happens, as there is then a gap in the numbering of the posted fields.) I've found this similar question where it does happen when another one is created (Symfony CollectionType regards deletion+creation as a modification of an item), but that sort of drifted from the issue and there's no usable answer.
How do I make sure the form is updating the entities the user actually edited?
I already tried rendering the Bs' IDs as a HiddenType, but then the form rightfully complains that the ID has no setter. It would probably force an ID onto the wrong B anyway, and Doctrine doesn't like that. I suppose I could add the Bs as unmapped and copy the values over to the correct objects manually, but that would defeat a good chunk of Symfony's form system.
I've used CollectionType before, but not for entities that are referenced elsewhere. I would then delete all of the previous entities and create the collection anew from the posted data. But I can't do that now, can I?
Since Doctrine 2.1, it's possible to change how associations are indexed. This allows you to use the id as the collection key (as the field has to be unique):
@OneToMany(targetEntity="B", mappedBy="A", indexBy="id")
You might also need to enable orphanRemoval so that the data is actually removed instead of the relation just being set to null.
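For reference, a sketch of what the mapping on A could look like (entity and property names are assumptions from the question); indexBy keys the collection by each B's id, so the form indexes follow the database identity rather than the arbitrary row order:
use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\ORM\Mapping as ORM;

/** @ORM\Entity */
class A
{
    /**
     * mappedBy names the property on B that points back to A.
     *
     * @ORM\OneToMany(targetEntity="B", mappedBy="a", indexBy="id",
     *                cascade={"persist"}, orphanRemoval=true)
     */
    private $bs;

    public function __construct()
    {
        $this->bs = new ArrayCollection();
    }
}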
Iterations of this question have been asked in the past, but this presents unique challenges as it combines some of the issues in one larger problem.
I have an entity (User) that is used as the user class in my application, and another entity (UserExtra) in a one-to-one relationship with it. UserExtra's id is the same as the User's: the foreign key is the same as the primary key.
When the user object is loaded (say by $this->getUser() or by {{ app.user }}), the UserExtra data is also loaded through a join. The whole point of having two entities is so I don't have to load all the data at once.
I even tried defining a custom UserLoaderInterface/UserProviderInterface repository for User, making sure that refreshUser and loadUserByUsername would only load the User data (I'd like the UserExtra data to sit in a proxy unless I explicitly need it), but when Doctrine goes to hydrate the object, it issues an extra query to load the UserExtra data, thereby bypassing the proxy entirely.
Is there a way out of this?
There are several solutions for your issue:
1) Change the owning and inverse sides (http://developer.happyr.com/choose-owning-side-in-onetoone-relation) - I don't think that's right from a DB design perspective in every case.
2) In methods like find, findAll, etc., the inverse side of a OneToOne is joined automatically (it always behaves like fetch EAGER). In DQL, however, it does not behave like fetch EAGER, and that is what costs the additional queries. A possible solution is to always join the inverse entity explicitly (see the DQL sketch after this list).
3) If an alternative result format (e.g. getArrayResult()) is sufficient for some use cases, that could also avoid this problem.
4) Change the inverse side to a OneToMany - it just looks wrong, but it might work as a temporary workaround.
5) Force partial objects. No additional queries, but also no lazy loading: $query->setHint(Query::HINT_FORCE_PARTIAL_LOAD, true) - seems to me the only possible solution, but not without a price:
Partial objects are a little risky, because your entities do not behave normally. For example, if you do not specify in ->select() every association you are going to use, you can get errors because your object will not be complete; all associations not explicitly selected will be null.
6) Don't map the inverse side of the bidirectional OneToOne association and either use an explicit service or a more active-record-like approach - https://github.com/doctrine/doctrine2/pull/970#issuecomment-38383961 - and it looks like Doctrine closed that issue.
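To illustrate option 2, a minimal DQL sketch (entity and field names are assumptions) that selects the inverse side in the same query so no extra query is issued:
// Sketch only: entity/field names are assumptions.
$user = $em->createQuery(
        'SELECT u, e
         FROM AppBundle\Entity\User u
         LEFT JOIN u.extra e
         WHERE u.username = :username'
    )
    ->setParameter('username', $username)
    ->getOneOrNullResult();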
This question may also help you: one to one relation load
I'm having a problem updating a disconnected POCO model in an ASP.NET application.
Let's say we have the following model:
Users
Districts
Orders
A user can be responsible for 0 or more districts; an order belongs to a district, and a user can be the owner of an order.
When the user logs in, the user and the related districts are loaded. Later the user loads an order and sets himself as the owner of the order. The user (and related districts) and the order (and related district) are loaded in two different calls with two different DbContexts. When I save the order after the user has assigned himself to it, I get an exception saying that AcceptChanges cannot continue because the object's key values conflict with another object.
Which is not strange, since the same district can appear both in the list of districts the user is responsible for and on the order.
I've searched high and low for a solution to this problem, but the answers I have found seem to be:
Don't load the related entities of one of the objects; in my case that would be the districts of the user.
Don't assign the user to the order by using the objects, just set the foreign key id on the order object.
Use nHibernate since it apparently handles it.
I tried 1 and that works, but I feel this is wrong because I then either have to load the user without its districts before relating it to the order, or do a shallow clone. This is fine for the simple case here, but the problem is that in my case a district might appear several more times in the graph. It also seems pointless: I have the objects, so why not let me connect them and update the graph? The reason I need the entire graph for the order is that I need to display all the information to the user. So since I have all the objects, why should I need to either reload or shallow clone them to get this to work?
I tried using STE but ran into the same problem, since I cannot attach an object to a graph loaded by another context. So I am back at square one.
I would assume that this is a common problem in anything but tutorial code. Yet I cannot seem to find any good solution to this, which makes me think that either I fundamentally misunderstand how to use POCOs/EF or I suck at using Google to find an answer to this problem.
I've bought both of the "Programming Entity Framework" books from O'Reilly by Julia Lerman but cannot seem to find anything to solve my problem in those books either.
Is there anyone out there who can shed some light on how to handle graphs where some objects might be repeated and not necessarily loaded from the same context?
The reason why EF does not allow two entities with the same key to be attached to a context is that EF cannot know which one is "valid". For example: you could have two District objects in your object graph, both with key Id = 1, but with different Name property values. Which one represents the data that has to be saved to the database?
Now, you could say that it doesn't matter as long as both objects haven't changed; you just want to attach them to a context in state Unchanged, maybe to establish a relationship to another entity. It is true that in this special case the duplicates might not be a problem. But I think it is simply too complex to deal with all the situations and states the objects could be in to decide whether duplicate objects cause ambiguities or not.
Anyway, EF implements a strict identity mapping between object reference identity and key property values and simply doesn't allow more than one entity with a given key to be attached to a context.
I don't think there is a general solution for this kind of problem. I can only add a few more ideas in addition to the solutions in your question:
Attach the User to the context you are loading the order in:
context.Users.Attach(user); // attaches user AND user.Districts

var order = context.Orders.Include("District")
    .Single(o => o.Id == someOrderId);
// Because the user's Districts are already attached, no District with the same
// key will be loaded again: EF will use the already attached Districts to
// populate order.District, thus avoiding duplicate District objects.

order.Owner = user;
context.SaveChanges();
// it should work without exception
Attach only the entities you need to the context in order to perform this specific update:
using (var context = new MyContext())
{
    // orderId and userId are the known primary key values of the detached
    // order and user
    var order = new Order { Id = orderId };
    context.Orders.Attach(order);

    var user = new User { Id = userId };
    context.Users.Attach(user);

    order.Owner = user;
    context.SaveChanges();
}
This would be enough to update the Owner relationship. You would not need the whole object graph for this procedure; you only need the correct primary key values of the entities the relationship has to be created for. Of course it doesn't work that easily if you have more changes to save or don't know what exactly could have been changed.
Don't attach the object graph to the context at all. Instead, load new entities from the database that represent the object graph currently stored in the database, then update the loaded graph with your detached object graph and save the changes applied to the loaded (=attached) graph. An example of this procedure is shown here. It is a safe and very general pattern (though not generic), but it can become very complex for complex object graphs.
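A rough sketch of this approach with the DbContext API (names are assumptions; with an ObjectContext, ApplyCurrentValues plays a similar role). Only scalar values are copied here; relationship changes still have to be applied by hand:
using (var context = new MyContext())
{
    // Load the graph as it currently exists in the database...
    var attachedOrder = context.Orders
        .Include("District")
        .Single(o => o.Id == detachedOrder.Id);

    // ...copy the scalar values of the detached order onto the attached one...
    context.Entry(attachedOrder).CurrentValues.SetValues(detachedOrder);

    // ...fix up relationships manually if they changed, then save.
    context.SaveChanges();
}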
Traverse the object graph and replace the duplicate objects with a single unique one, for example just the first one with that type and key that you have found. You could build a dictionary of unique objects that you look up to replace the duplicates. An example is here.
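And a rough sketch of the dictionary idea, assuming District is the duplicated type and Id is its key:
// Replace duplicate District instances in the detached graph with the first
// instance seen for each key, so at most one object per key gets attached.
var uniqueDistricts = new Dictionary<int, District>();

Func<District, District> unique = d =>
{
    if (d == null) return null;
    District existing;
    if (uniqueDistricts.TryGetValue(d.Id, out existing))
        return existing;
    uniqueDistricts[d.Id] = d;
    return d;
};

order.District = unique(order.District);
user.Districts = user.Districts.Select(unique).ToList();

// Now objects that share a key also share a reference, and the graph can be
// attached to a context without the key conflict.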
I have a stored procedure that searches a view using full-text search.
I'm trying to map the results to an existing entity (Sales), where I mapped the column MivType to SaleType (as it makes more sense, and I want to keep db names away from my web site). The stored procedure is mapped to a Function Import, and I've set its ReturnType to Sales.
This works well as long as the entity's property names are the same as the field names.
Here's my problem: when I change the property's name, I get the following error after running the imported function:
The data reader is incompatible with the specified 'Model.Sale'. A member of the type, 'SaleType', does not have a corresponding column in the data reader with the same name.
I can fix this if I change the property 'SaleType' to 'MivType' on the entity, but why should I do that? Isn't that what the mapping is for?
This means I have to use the exact same names on the stored procedure and the entity, so in effect, the mapping is ignored (I have names like YzrName, MivYaad, etc, and I don't like it).
Is there a simple way around this? I don't want to use the db names on my application, and prefer not to change the stored procedure...
(I should mention I'm a beginner with the EF, so this can be a rookie mistake)
Thanks.
Well, the entity designer doesn't work very well; I generally try to do everything in the XML. In the XML there are three parts: the Storage model (a representation of the SQL database), the Conceptual model (a representation of your .NET objects), and the Conceptual-to-Storage Mapping.
It sounds like the error is in your Conceptual-to-Storage Mapping. You can keep the property name SaleType on the conceptual side, but the mapping must map to the correct names on both the conceptual and storage sides.
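For a function import, the column-to-property mapping goes in a FunctionImportMapping/ResultMapping element in the mapping section; here is a sketch using the names from the question (the function import and store function names are assumptions):
<!-- Sketch only: FunctionImportName/FunctionName are assumptions. -->
<FunctionImportMapping FunctionImportName="SearchSales"
                       FunctionName="Model.Store.SearchSales">
  <ResultMapping>
    <EntityTypeMapping TypeName="Model.Sale">
      <!-- conceptual property name on the left, data reader column on the right -->
      <ScalarProperty Name="SaleType" ColumnName="MivType" />
      <!-- ...map the remaining properties the same way... -->
    </EntityTypeMapping>
  </ResultMapping>
</FunctionImportMapping>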
Refer to MSDN; here are some articles:
http://msdn.microsoft.com/en-us/library/cc716731.aspx