I fetch a list of rows (Id, field1, f2, ...), do some computation, and store the result in an IList.
I now want to update all the values in this list to a table T. I am using Entity Framework and this needs to be a transaction.
Will it be fine if I open a TransactionScope and update using a stored proc, or is there a more efficient way to push multiple updates at once?
The SaveChanges method of ObjectContext is the gateway for persisting all changes made to entities to the database. When you call ObjectContext.SaveChanges(), it performs insert, update, or delete operations on the database based on the EntityState of the entities.
ObjectContext.SaveChanges();
Hope this helps.
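To make the whole batch atomic, SaveChanges can be wrapped in a TransactionScope. Note that SaveChanges already runs its statements in a single transaction by itself; the explicit scope is only needed when combining it with other work. A minimal sketch, where the context name, table name, and properties are assumptions:

```csharp
using System.Linq;
using System.Transactions;

// Sketch: apply the computed values and persist them in one transaction.
// "MyEntities", "T", "Id" and "Value" are assumed names - adjust to your model.
using (var scope = new TransactionScope())
using (var context = new MyEntities())
{
    foreach (var computed in computedResults) // the IList built earlier
    {
        var row = context.T.Single(r => r.Id == computed.Id);
        row.Value = computed.Value;           // entity state becomes Modified
    }

    context.SaveChanges(); // sends the UPDATE statements
    scope.Complete();      // commit; disposing without Complete() rolls back
}
```

Each update is still sent as a separate statement, so for very large batches a stored procedure can still be the faster option.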
Imagine this scenario:
I have an array of ids for some entities that have to be deleted from the database (i.e. a couple of external keys that identify a record in a third table) and an array of ids for some entities that have to be updated/inserted (based on some criteria that, at this moment, doesn't matter).
What can I do to delete those entities?
Load them from db (repository way)
Call delete() on the obtained objects
Call flush() on my entity manager
In that scenario I can make all my operations atomic, as I can update/insert other records before calling flush().
But why do I have to load some records from the db just to delete them? So I wrote my own DQL query (in the repo) and call it.
The problem is that if I call that function in my repo, the operation is executed immediately, so my atomicity can't be guaranteed.
So, how can I get over this obstacle while following the second "delete option"?
By using flush() you're making Doctrine start a transaction implicitly. It is also possible to use transactions explicitly, and that approach should solve your problem.
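A minimal sketch of the explicit approach with Doctrine's EntityManager; the entity class and variable names are assumptions:

```php
<?php
// Wrap the bulk DQL delete and the updates/inserts in one explicit
// transaction, so the DELETE is no longer committed on its own.
$em->getConnection()->beginTransaction();
try {
    // Bulk delete by id without loading the entities first.
    $em->createQuery('DELETE FROM App\Entity\Item i WHERE i.id IN (:ids)')
       ->setParameter('ids', $idsToDelete)
       ->execute();

    // ... update/insert the other entities here ...

    $em->flush();
    $em->getConnection()->commit();
} catch (\Exception $e) {
    $em->getConnection()->rollBack();
    throw $e;
}
```

Everything between beginTransaction() and commit() is now atomic, including the DQL delete that would otherwise execute immediately.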
My situation
I have a C# object which contains some lists. One of these lists is, for example, a list of tags, which is a list of C# "SystemTag" objects. I want to instantiate this object in the most efficient way.
In my database structure, I have the following tables:
dbObject - the table which contains some basic information about my c# object
dbTags - a list of all available tags
dbTagConnections - a list which has 2 fields: TagID and ObjectID (to make sure an object can have several tags)
(I have several other similar types of data)
This is how I do it now...
Retrieve my object from the DB using an ID
Send the DB object to an "Object factory" pattern, which then realises we have to get the tags (and other lists). It then sends a call to the DAL layer using the ID of our C# object
The DAL layer retrieves the data from the DB
This data is sent to a "TagFactory" pattern which converts it to tags
We are back to the Object Factory
This is really inefficient and results in many calls to the database. This is especially a problem as I have 4+ types of lists.
What have I tried?
I am not really good at SQL, but I've tried the following query:
SELECT * FROM dbObject p
LEFT JOIN dbTagConnection c on p.Id= c.PointId
LEFT JOIN dbTags t on c.TagId = t.dbTagId
WHERE ....
However, this retrieves as many rows as there are tag connections - so I don't see joins as a good way to do this.
Other info...
Using .NET Framework 4.0
Using LINQ to SQL (BLL and DAL layer with Factory patterns in the BLL to convert from DAL objects)
...
So - how do I solve this as efficiently as possible? :-) Thanks!
At first sight I don't see your current way of working as "inefficient" (with the information provided). I would replace the query:
SELECT * FROM dbObject p
LEFT JOIN dbTagConnection c on p.Id= c.PointId
LEFT JOIN dbTags t on c.TagId = t.dbTagId
WHERE ...
with two calls to the DAL's methods: first to retrieve the object's main data (1), and then one to get only the data of the related tags (2), so that your factory can fill up the object's tags list:
(1)
SELECT * FROM dbObject WHERE Id=@objectId
(2)
SELECT t.* FROM dbTags t
INNER JOIN dbTagConnection c ON c.TagId = t.dbTagId
INNER JOIN dbObject p ON p.Id = c.PointId
WHERE p.Id=@objectId
If you have many objects and the amount of data is small (meaning you are not going to manage big volumes), then I would look for an ORM-based solution such as Entity Framework.
I (still) feel comfortable writing SQL queries in the DAOs to keep all queries sent to the DB server under control, but in the end that is because in our situation it is a necessity. I don't see any inconvenience in having to query the database to recover, first, the object data (SELECT * FROM dbObject WHERE Id=@myId) and fill the object instance, and then query the DB again to recover all the satellite data you may need (the tags in your case).
You would have to be more specific about your scenario for us to provide more valuable recommendations. Hope this is useful to you anyway.
We used stored procedures that returned multiple resultsets, in a similar situation in a previous project using Java/MSSQL server/Plain JDBC.
The stored procedure takes the ID corresponding to the object to be retrieved and returns the row to build the primary object, followed by one result set for each one-to-many relationship with the primary object. This allowed us to build the object in its entirety in a single database interaction.
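A sketch of the JDBC side of that approach; the procedure name, connection URL, and variable names are assumptions:

```java
import java.sql.*;

// Call a procedure that returns the object row plus one result set per
// one-to-many relationship, all in a single round trip.
try (Connection con = DriverManager.getConnection(url);
     CallableStatement cs = con.prepareCall("{call GetObjectGraph(?)}")) {

    cs.setInt(1, objectId);
    cs.execute();

    // First result set: the primary object.
    try (ResultSet rs = cs.getResultSet()) {
        if (rs.next()) {
            // ... build the primary object from rs ...
        }
    }

    // Subsequent result sets: the child collections (tags, etc.).
    while (cs.getMoreResults()) {
        try (ResultSet rs = cs.getResultSet()) {
            while (rs.next()) {
                // ... build a child record and attach it to the object ...
            }
        }
    }
}
```

The key call is getMoreResults(), which advances the statement to the next result set returned by the procedure.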
Have you thought about using the entity framework? You would then interact with your database in the same way as you would interact with any other type of class in your application.
It's really simple to set up, and you create the relationships between your database tables in the entity designer - this gives you all the foreign keys you need to load related objects. If you already have all your keys set up in the database, the entity designer will use those instead. Creating all the objects is as simple as selecting 'Create model from database', and when you make changes to your database you simply right-click in the designer and choose 'Update model from database'.
The framework takes care of all the SQL for you, so you don't need to worry about that - in most cases.
A great starting place to get up and running with this would be here, and here
Once you have it all set up you can use LINQ to easily query the database.
You will find this a lot more efficient than going down the table adapter route (assuming that's what you're doing at the moment?)
Sorry if I missed something and you're already using this. :)
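Once the model is generated, loading an object together with its tags becomes a single query via eager loading. A sketch, where the context and entity-set names are assumptions based on the tables described:

```csharp
using System;
using System.Linq;

// Sketch: load one dbObject and its related tags in one database call.
// "MyEntities", "DbObjects", "TagConnections" and "Tag" are assumed names
// produced by the entity designer - adjust to your generated model.
using (var context = new MyEntities())
{
    var obj = context.DbObjects
                     .Include("TagConnections.Tag") // eager-load the tags
                     .Single(o => o.Id == objectId);

    foreach (var conn in obj.TagConnections)
    {
        Console.WriteLine(conn.Tag.Name);
    }
}
```

Include() tells Entity Framework to join in the related rows up front, instead of issuing one query per navigation property.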
As far as I can guess, your database already exists and you are familiar enough with SQL.
You might want to use a Micro ORM, like petapoco.
To use it, you have to write classes that match the tables you have in the database (there is a T4 generator to do this automatically with Visual Studio 2010). Then you can write wrappers to create richer business objects (you can use ValueInjecter to do this; it is the simplest I have ever used), or you can use them as they are.
PetaPoco handles insert/update operations, and it retrieves generated IDs automatically.
Because PetaPoco handles multiple relationships too, it seems to fit your requirements.
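A minimal sketch of what that looks like with PetaPoco; the connection string name and POCO classes are assumptions:

```csharp
// POCOs matching the tables (normally generated by the T4 templates).
public class DbObject { public int Id { get; set; } public string Name { get; set; } }
public class DbTag    { public int dbTagId { get; set; } public string Name { get; set; } }

// ...

var db = new PetaPoco.Database("connectionStringName");

// Load the object and its tags in two simple queries.
var obj  = db.Single<DbObject>("SELECT * FROM dbObject WHERE Id = @0", objectId);
var tags = db.Fetch<DbTag>(
    @"SELECT t.* FROM dbTags t
      INNER JOIN dbTagConnection c ON c.TagId = t.dbTagId
      WHERE c.PointId = @0", objectId);

// Inserts get the generated identity back automatically.
var tag = new DbTag { Name = "new-tag" };
db.Insert("dbTags", "dbTagId", tag); // tag.dbTagId is now populated
```

The @0-style parameters are PetaPoco's positional argument syntax; it maps result columns to the POCO properties by name.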
By Linq, I mean Entity Framework and Linq. A further question: if the SELECT query is the same but the ORDER BY clause is different, does Linq have to access the database, or do the in-memory entities have enough information for the new SELECT query with a different ORDER BY clause?
The short answer is no, once the Entity is created, the Entity itself contains all the information that will be used for data binding, unless you explicitly instruct the Entity to requery the underlying data store.
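In other words, once the entities are materialized you can re-sort them with LINQ to Objects without touching the database. A sketch, where the Customers entity set and its properties are assumed examples:

```csharp
using System.Linq;

// The first query hits the database; ToList() materializes the entities.
var customers = context.Customers
                       .Where(c => c.Country == "US")
                       .ToList();

// Re-ordering the already-loaded list runs entirely in memory -
// no new SQL is sent to the server.
var byName = customers.OrderBy(c => c.Name).ToList();
var byCity = customers.OrderByDescending(c => c.City).ToList();
```

Only if you apply OrderBy to the queryable (context.Customers) rather than the materialized list does a new query go to the data store.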
I have two tables:
asset: assetid (PK), empid (FK)
employee: empid (PK)
Now, I have a form to populate the asset table, but it can't because of the foreign key constraint. What should I do?
Thanks.
Foreign keys are created for a good reason: to prevent orphan rows, at a minimum. Create the corresponding parent row first, and then use the appropriate value as the foreign key value in the child table.
You should think about this update as a series of SQL statements, not just one statement. You'll process the statements in order of dependency; see the example below.
Asset
PK AssetID
AssetName
FK EmployeeID
etc...
Employee
PK EmployeeID
EmployeeName
etc...
If you want to "add" a new asset, you'll first need to know which employee it will be assigned to. If it will be assigned to a new employee, you'll need to add them first.
Here is an example of adding an asset named 'BOOK' for a new employee named 'Zach'.
DECLARE @EmployeeFK AS INT;
INSERT INTO Employee (EmployeeName) VALUES ('Zach');
SELECT @EmployeeFK = @@IDENTITY;
INSERT INTO Asset (AssetName, EmployeeID) VALUES ('BOOK', @EmployeeFK);
The important thing to notice above, is that we grab the new identity (aka: EmployeeID) assigned to 'Zach', so we can use it when we add the new asset.
If I understand you correctly, are you trying to build the data graph locally before persisting to the data? That is, create the parent and child records within the application and persist it all at once?
There are a couple of approaches to this. One approach people take is to use GUIDs as the unique identifiers for the data. That way you don't need to get the next ID from the database; you can just create the graph locally and persist the whole thing. There's been a long-running debate on this approach between the software and database camps, because while it makes a lot of sense in many ways (hit the database less often, maintain relationships before persisting, uniquely identify data across systems), it turns out to be a significant resource hit on the database.
Another approach is to use an ORM that will handle the persistence mapping for you. Something like NHibernate, for example. You would create your parent object and the child objects would just be properties on that. They wouldn't have any concept of foreign keys and IDs and such, they'd just be objects in code related by being set as properties on each other (such as a "blog post" object with a generic collection of "comment" objects, etc.). This graph would be handed off to the ORM which would use its knowledge of the mapping between the objects and the persistence to send it off to the database in the correct order, perhaps giving back the same object but with ID numbers populated.
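As a sketch of that ORM approach, following the blog-post example in the text (the classes are hypothetical, and the persistence call is NHibernate-style):

```csharp
using System.Collections.Generic;

// Hypothetical domain objects: no foreign keys or IDs managed by hand,
// just object references.
public class Comment
{
    public string Author { get; set; }
    public string Text { get; set; }
}

public class BlogPost
{
    public string Title { get; set; }
    public List<Comment> Comments = new List<Comment>();
}

// Build the whole graph locally...
// var post = new BlogPost { Title = "Hello" };
// post.Comments.Add(new Comment { Author = "Ann", Text = "First!" });
//
// ...then hand it to the ORM in one unit of work; the mapping layer
// works out the insert order and the generated IDs:
// using (var tx = session.BeginTransaction())
// {
//     session.Save(post);   // cascades to the comments via the mapping
//     tx.Commit();
// }
```

The point is that the code never touches an ID; the ORM's mapping (including any cascade rules) decides how the graph becomes rows.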
Or is this not what you're asking? It's a little unclear, to be honest.
I'm quite new to LINQ and was wondering what the best design is for inserting an [Order], subsequently getting the [Order].[ID], and using this to save some [OrderItems].
The ID column of the [Order] table is an Identity column.
I want to avoid calling db.SubmitChanges(), getting the newly inserted Order ID, and then having to call db.SubmitChanges() again when inserting the related Order Items - is this the only way to do this, though?
Thanks in advance.
If you have your LINQ to SQL classes set up correctly, then Order should have a collection of OrderItems.
You should be able to create an Order, add new OrderItems to the collection, and then call db.SubmitChanges() once, letting LINQ to SQL handle the ID issues.
You don't have to call it in two separate steps. LINQ takes care of that for you.
If you have your database modeled correctly, using a foreign key relationship between your Order and OrderItems tables, then the generated objects will also have a relationship: the Order will be composed of OrderItems. Specifically, in the case of a one-to-many relationship, your Order object will have a property, OrderItems, that will be a collection of OrderItem objects.
LINQ is smart enough to see the composition of your objects, and because it has knowledge of the type of constraint between your tables, it will insert all the records in the correct order with the appropriate IDs, recreating in the relational world the same entity modeled in your OOP code.
So, you only have to call SubmitChanges once.
db.Orders.InsertOnSubmit(myOrder);
db.SubmitChanges();
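Putting it together, a sketch of the single-submit pattern; the Order/OrderItem property names are assumptions and depend on your .dbml:

```csharp
using System;

// Build the parent and children in memory; no IDs are needed yet.
var order = new Order { OrderDate = DateTime.Now };
order.OrderItems.Add(new OrderItem { ProductId = 1, Quantity = 2 });
order.OrderItems.Add(new OrderItem { ProductId = 5, Quantity = 1 });

db.Orders.InsertOnSubmit(order); // the OrderItems go along with the Order

// One round of inserts: LINQ to SQL inserts the Order first, reads back
// the identity value, and uses it as the OrderItems' foreign key.
db.SubmitChanges();
```

After SubmitChanges() returns, order.ID and each OrderItem's foreign key column are populated with the database-generated values.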