Efficient way to load lists of objects from database to instantiate a single object - asp.net

My situation
I have a C# object which contains some lists. One of these lists is, for example, a list of tags: a list of C# "SystemTag" objects. I want to instantiate this object in the most efficient way.
In my database structure, I have the following tables:
dbObject - the table which contains some basic information about my C# object
dbTags - a list of all available tags
dbTagConnections - a table with 2 fields: TagID and ObjectID (so that an object can have several tags)
(I have several other similar types of data)
This is how I do it now...
Retrieve my object from the DB using an ID
Send the DB object to an "Object factory", which realizes we have to get the tags (and other lists) and then calls the DAL layer using the ID of our C# object
The DAL layer retrieves the data from the DB
This data is sent to a "TagFactory", which converts it to tags
We are back to the Object Factory
This is really inefficient, and we make many calls to the database. This is especially a problem as I have 4+ types of lists.
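Roughly, in code, the flow looks like this (a sketch with hypothetical names; note the extra round-trip per list type):

// Sketch of the current flow (hypothetical names): one DB call for the
// object itself, then one more call per list type - N+1 round-trips.
public MyObject GetById(int id)
{
    var dbObject = Dal.GetObject(id);                 // call 1
    var obj = ObjectFactory.Create(dbObject);
    obj.Tags = TagFactory.Create(Dal.GetTags(id));    // call 2
    // ...one further DAL call for each of the 4+ list types
    return obj;
}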
What have I tried?
I am not really good at SQL, but I've tried the following query:
SELECT * FROM dbObject p
LEFT JOIN dbTagConnection c on p.Id= c.PointId
LEFT JOIN dbTags t on c.TagId = t.dbTagId
WHERE ....
However, this retrieves as many rows as there are tag connections - so I don't see joins as a good way to do this.
Other info...
Using .NET Framework 4.0
Using LINQ to SQL (BLL and DAL layer with Factory patterns in the BLL to convert from DAL objects)
...
So - how do I solve this as efficient as possible? :-) Thanks!

At first sight I don't see your current way of working as "inefficient" (with the information provided). I would replace the code:
SELECT * FROM dbObject p
LEFT JOIN dbTagConnection c on p.Id= c.PointId
LEFT JOIN dbTags t on c.TagId = t.dbTagId
WHERE ...
with two calls to the DAL's methods: first to retrieve the object's main data (1), and then another to get only the data of the related tags (2), so that your factory can fill up the object's tag list:
(1)
SELECT * FROM dbObject WHERE Id = @objectId
(2)
SELECT t.* FROM dbTags t
INNER JOIN dbTagConnection c ON c.TagId = t.dbTagId
INNER JOIN dbObject p ON p.Id = c.PointId
WHERE p.Id = @objectId
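A minimal sketch of how the DAL could run query (2) and map the rows with plain ADO.NET (the SystemTag shape and the tag column names are assumptions):

using System.Collections.Generic;
using System.Data.SqlClient;

public List<SystemTag> GetTagsForObject(string connStr, int objectId)
{
    var tags = new List<SystemTag>();
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(
        @"SELECT t.* FROM dbTags t
          INNER JOIN dbTagConnection c ON c.TagId = t.dbTagId
          INNER JOIN dbObject p ON p.Id = c.PointId
          WHERE p.Id = @objectId", conn))
    {
        cmd.Parameters.AddWithValue("@objectId", objectId);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
                tags.Add(new SystemTag
                {
                    Id = (int)reader["dbTagId"],
                    Name = (string)reader["Name"]   // assumed tag column
                });
    }
    return tags;
}

This keeps all SQL in the DAL while the factory just assembles the lists.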
If you have many objects and the amount of data is small (meaning that you are not going to manage big volumes), then I would look for an ORM-based solution such as Entity Framework.
I (still) feel comfortable writing SQL queries in the DAOs so that all queries sent to the DB server stay under control, but in the end that is because in our situation it is a necessity. I don't see any inconvenience in having to query the database to recover, first, the object data (SELECT * FROM dbObject WHERE ID = @myId) and fill the object instance, and then query the DB again to recover all the satellite data you may need (the tags in your case).
You would have to be more specific about your scenario for us to provide more valuable recommendations. Hope this is useful to you anyway.

We used stored procedures that returned multiple result sets in a similar situation in a previous project using Java/MSSQL Server/plain JDBC.
The stored procedure takes the ID corresponding to the object to be retrieved and returns the row to build the primary object, followed by one result set for each one-to-many relationship with the primary object. This allowed us to build the object in its entirety in a single database interaction.
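The same technique carries straight over to the question's .NET stack; a minimal ADO.NET sketch (the procedure and factory names are hypothetical):

using System.Data;
using System.Data.SqlClient;

public MyObject LoadObjectGraph(string connStr, int objectId)
{
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("GetObjectGraph", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@objectId", objectId);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            reader.Read();                   // result set 1: the object row
            var obj = ObjectFactory.Create(reader);
            reader.NextResult();             // result set 2: its tags
            while (reader.Read())
                obj.Tags.Add(TagFactory.Create(reader));
            // further NextResult() calls, one per remaining child list...
            return obj;
        }
    }
}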

Have you thought about using the Entity Framework? You would then interact with your database in the same way as you would interact with any other type of class in your application.
It's really simple to set up, and you create the relationships between your database tables in the entity designer - this gives you all the foreign keys you need to load related objects. If you already have all your keys set up in the database, then the entity designer will use those instead. Creating all the objects is as simple as selecting 'Create model from database', and when you make changes to your database you simply right-click in the designer and choose 'Update model from database'.
The framework takes care of all the SQL for you, so you don't need to worry about that - in most cases.
A great starting place to get up and running with this would be here, and here
Once you have it all set up you can use LINQ to easily query the database.
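For the tags scenario in the question, that might look like this (a sketch; the context and navigation property names depend on your generated model):

using System.Collections.Generic;
using System.Linq;

public List<string> GetTagNames(int objectId)
{
    using (var ctx = new MyEntities())   // assumed generated context name
    {
        // Include() eager-loads the connections and their tags in one query
        var obj = ctx.dbObjects
                     .Include("dbTagConnections.dbTag")
                     .First(o => o.Id == objectId);
        return obj.dbTagConnections.Select(c => c.dbTag.Name).ToList();
    }
}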
You will find this a lot more efficient than going down the table adapter route (assuming that's what you're doing at the moment?)
Sorry if I missed something and you're already using this. :)

As far as I can guess, your database already exists and you are familiar enough with SQL.
You might want to use a micro-ORM, like PetaPoco.
To use it, you have to write classes that match the tables you have in the database (there are T4 generators to do this automatically with Visual Studio 2010). Then you can write wrappers to create richer business objects (you can use ValueInjecter to do it; it is the simplest one I have ever used), or you can use the classes as they are.
PetaPoco handles insert/update operations and retrieves generated IDs automatically.
Because PetaPoco handles multiple relationships too, it seems to fit your requirements.
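A quick sketch of the tags case from the question (the class shape and the connection-string name are assumptions):

using System.Collections.Generic;
using PetaPoco;

public class SystemTag    // matches the dbTags columns
{
    public int dbTagId { get; set; }
    public string Name { get; set; }   // assumed column
}

public List<SystemTag> GetTags(int objectId)
{
    var db = new Database("myConnectionStringName");  // name from web.config
    // @0 is PetaPoco's positional parameter placeholder
    return db.Fetch<SystemTag>(
        @"SELECT t.* FROM dbTags t
          INNER JOIN dbTagConnection c ON c.TagId = t.dbTagId
          WHERE c.ObjectID = @0", objectId);
}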

Related

Can SQLite return default values for non-existent columns instead of error?

I know how to use IFNULL to get default values for non-existent rows or null values, but for creating queries that are compatible with older schema versions, it would be nice to be able to do this:
Schema v1: CREATE TABLE Employee (Name TEXT, Phone TEXT)
Schema v2: CREATE TABLE Employee (Name TEXT, Phone TEXT, Address TEXT)
Theoretical backward compatible query:
SELECT Name, Phone, IFNULL(Address, '') FROM Employee
Obviously this doesn't work for a file created with schema v1. Is there some way to do this though?
There are two alternative workarounds, but both are rather annoying: either 1) update the old DB by adding the missing columns (which would start with null values), or 2) build the query code dynamically based on the schema version.
Create a temporary view that references a particular schema, substituting default values (or even transforming other data) for individual columns which differ between the base schemas.
SQLite views can even be made modifiable by defining appropriate triggers.
This still requires programming some conditional logic upon connection, but it would allow more uniform queries and interaction with different versions of the schema.
The suggested syntax would perhaps be convenient in some limited cases, but this approach is much more useful, since it can be expanded beyond simple "if column exists" Boolean operations and instead be used to perform dynamic transformation of one schema into another, perhaps joining tables and providing more advanced logic for updates of differing schemas, etc.
Pseudo code mixed with view definitions to demonstrate:
db <- Open database connection
db_schema <- determine schema version
If db_schema == 1 Then
db.execute( "CREATE VIEW temp.EmployeeX AS
SELECT Name, Phone, '' AS Address
FROM main.Employee;" )
Else If db_schema == 2 Then
db.execute( "CREATE VIEW temp.EmployeeX AS
SELECT Name, Phone, Address
FROM main.Employee;" )
End If
#Later in code
data <- db.getdata("SELECT Name, Address
FROM EmployeeX")
If you're really averse to conditional statements for the schema, this may still be annoying, but it would at least reduce or eliminate conditional statements throughout the code, ideally confining them to the connection logic at one location in the code.
You might further notice that this pattern is really what object-oriented programming is supposed to solve. There's no mention of the language in the question, but a well-designed object model could be created in a similar fashion so that all database access is done through a unified interface. The implementation details for different schemas are internal to different objects that derive (i.e. implement interfaces and/or inherit from base class) from a basic set of interfaces. Consider the language you're using to see if the problem could be solved this way.
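For example, in C# (a sketch; the names and the System.Data.SQLite provider are assumptions, since the question doesn't name a language):

using System.Collections.Generic;
using System.Data.SQLite;

public class Employee { public string Name, Phone, Address; }

// One implementation per schema version; callers only see the interface.
public interface IEmployeeStore
{
    IEnumerable<Employee> GetEmployees(SQLiteConnection conn);
}

public class EmployeeStoreV1 : IEmployeeStore
{
    public IEnumerable<Employee> GetEmployees(SQLiteConnection conn)
    {
        // v1 has no Address column; substitute a default, as in the view above.
        using (var cmd = new SQLiteCommand(
            "SELECT Name, Phone, '' AS Address FROM Employee", conn))
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
                yield return new Employee
                {
                    Name = reader.GetString(0),
                    Phone = reader.GetString(1),
                    Address = reader.GetString(2)
                };
    }
}

// EmployeeStoreV2 is identical except that it selects the real Address column;
// pick the implementation once, at connection time, based on the schema version.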

Object Mapping from stored procedure using the columnname attribute in EntityFramework CodeFirst

I have an existing database that I am working with using Entity Framework 6 Code First. A requirement, though, is that all work with the database has to be done via stored procedures. So I started out using the mapping that is new in 6.0:
modelBuilder.Entity<Post>().MapToStoredProcedures();
The issue is that this only supports mapping insert, update, and delete SPs, not select SPs, and I need to be able to use an SP to select. (I do not have access to edit any of the existing SPs.)
My POCOs have attributes on them specifying the column names to use, via the Column attribute. Apparently, though, the built-in mapping does not use those unless you are doing a direct selection on the table via a DbSet object.
Originally I had (which worked):
return (from c in DataContext.Current.AgeRanges orderby c.StartAge select c);
Then, to switch it to the SP, I tried (using the Database.SqlQuery call):
return DataContext.Current.Database.SqlQuery<AgeRange>("[DIV].[GetAgeRangesList]").AsQueryable();
This returned valid objects, but none of the columns marked with the Column attribute had anything in them.
Then I tried (thinking that, since it was against the actual DbSet object, I'd get the column mapping):
return DataContext.Current.AgeRanges.SqlQuery("[DIV].[GetAgeRangesList]").ToList().AsQueryable();
Nope; this instead gave me an error that one of the properties in the POCO object (one of the Column-attributed ones) was not found in the returned record set.
So the question is, in entity framework (or best solution outside of that) what is the best way to call a stored procedure and map the results to objects and have that mapping respect the column attribute on the properties?
I would even be willing to use an old school Table object and a SqlCommand object to fill it, if I had a fast easy way to then map the objects that respects the Column Attribute.
SqlQuery does not honor the Column attribute. If the names of the columns in the returned result set match the names of the properties of the entity, the properties will be set accordingly. Note, however, that SqlQuery does only a minimal amount of work (for instance, it does not support relationships via .Include), so you are limiting yourself if you decide to use stored procedures for queries.
Enhancing SqlQuery to use ColumnName attributes is being tracked here: https://entityframework.codeplex.com/workitem/233 - feel free to upvote this codeplex item.
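Until then, one workaround is to materialize into an intermediate DTO whose property names match the result set exactly, then project onto the mapped POCO. A sketch (the stored procedure's actual column names are unknown here, so AGE_RANGE_ID etc. are placeholders, as are the AgeRange property names):

using System.Linq;

public class AgeRangeRow   // property names must match the SP's columns
{
    public int AGE_RANGE_ID { get; set; }
    public int START_AGE { get; set; }
    public int END_AGE { get; set; }
}

public IQueryable<AgeRange> GetAgeRanges()
{
    return DataContext.Current.Database
        .SqlQuery<AgeRangeRow>("[DIV].[GetAgeRangesList]")
        .Select(r => new AgeRange   // project onto the Column-attributed POCO
        {
            Id = r.AGE_RANGE_ID,
            StartAge = r.START_AGE,
            EndAge = r.END_AGE
        })
        .ToList()
        .AsQueryable();
}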

Teradata: Is there a way to generate DDL from a view or select statement?

I am using a global application user account to access database A. This user account does not have permissions to modify database A's schema (ie, create tables, modify tables, etc). This user also has access to database B, but only views. I need to run SQL to feed data from a view in database B into a table in database A.
In a perfect world, I would be able to use this SQL:
create table database_a.mytable as (select * from database_b.myview) with no data
However, the user can't create tables in database A. If I could get the DDL of the select statement then I could log in under my personal account (which doesn't have any access to database B) and run the DDL in database A to create the table.
The only other option is to manually write the SQL, but I don't want to do that, especially since this view I am wanting to copy has many columns of varying data types and sizes.
Edit: I may be getting closer. I just experimented with this:
show (select * from database_b.myview)
However, it generated the DDL of every single table that is used in the view itself, as well as the definition for the view. This doesn't really help me, since I just want the schema of the select statement itself. In other words, I need what would be generated if I were to use the create table as statement mentioned above.
Edit for Rob: Perhaps "DDL" was the wrong term to use. Using show view db.myview just shows the definition of the view, not the schema it represents. In my above example of create table as, I show how you can create a table that mimics the schema of a result set returned in a select. It generates a DDL on the back end for creating a table and then executes that DDL to actually create the table. You can then say show table db.newtable and see the new table's DDL. I want to get that DDL directly from a select statement so that I can copy it, log out of the app account, into my personal account, and then execute the DDL to create the table.
This is only to save me the headache of having to type out the DDL manually by hand to save time and reduce typing errors, especially since the source view has so many columns. That said, I think hitting up the DBA or writing some snazzy stored procedure to do dynamic stuff would be a bit over the top for my needs. I think there has to be a way to get the DDL for creating a table schema directly from a select statement.
Generate DDL Statements for objects:
SHOW TABLE {DatabaseB}.{Table1};
SHOW VIEW {DatabaseB}.{View1};
Breakdown of columns in a view:
HELP VIEW {DatabaseB}.{View1};
However, without the ability to create the object in the target database DatabaseA, you don't have much leverage. Obviously, if the object already existed, INSERT INTO ... SELECT ... FROM DatabaseB.Table1 or MERGE INTO would be options that you have already explored.
Alternative Solution
Would it be possible to have a stored procedure created that dynamically creates the table based on the view name that is provided? The global application account would simply need the privilege to execute the procedure. Generally, the user creating the stored procedure would need the permissions to perform the actions contained within it. (You have some additional flexibility with this in Teradata 13.10.)
There are some caveats with this approach. You are attempting to materialize views that could reference anywhere from hundreds to billions of records. These aren't simple 1:1 views that are put on top of the target tables. Trying to determine the required space in the target database to materialize the view will be difficult. Performance can and will vary depending on the complexity of the view and the data volumes. This will not be a fast-path or data block optimized operation.
As a DBA, I would be concerned with this approach being taken on by a global application account without fully understanding the intent. I trust you have an open line of communication with the DBA(s) involved for supporting this system. I'm sure there are reasons for your madness that can't be disclosed here.
Possible Solution - VOLATILE TABLE
Unless the implicit privilege for CREATE TABLE has been revoked from the global application account this solution should work.
Volatile tables do not require perm space. Their table definitions persist for the duration of the session, and any data inserted into them relies on the spool space of the user who instantiated the table.
CREATE VOLATILE TABLE {Global Application UserID}.{TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
SHOW TABLE {Global Application UserID}.{TableA_Copy};
I opted to use a Teradata 13.10 feature called NO PRIMARY INDEX. By default, CREATE TABLE AS will take the first column of the SELECT statement and make it the PRIMARY INDEX of the table. This could lead to skewing and perm space issues in your testing depending on the data demographics. You can specify an explicit PRIMARY INDEX on your own as you understand the underlying data. (See the DDL manuals for details on the syntax if you're uncertain.)
The use of ON COMMIT PRESERVE ROWS for the intent of this example is probably extraneous. But in reality if you popped any data into that table for testing this clause would be beneficial in Teradata mode as the data would otherwise be lost immediately after the CREATE TABLE or any other data manipulation was performed against the volatile table.

Entity Framework - Making a column auto-increment

So I'm using Entity Framework, and we have a model for a table called TPM_PROJECTVERSIONNOTES. This table has a column called NOTEID, which is a number. Right now, when we create a new row, we get the next available number with this code:
note.NOTEID = (from n in context.TPM_PROJECTVERSIONNOTES
orderby n.NOTEID descending
select n.NOTEID).Max() + 1;
To me, this seems incredibly hacky (I mean, you have to do an entire SQL query just to get the next value). Plus, it's incredibly dangerous: it's not thread-safe or transaction-safe. I've already found 9 instances in the DB that have the same NOTEID! Good thing no one even thought to put a UNIQUE constraint on that column... sigh.
So anyway, I've added a new sequence to the database:
CREATE SEQUENCE TPM_PROJECTVERSIONNOTES_SEQ START WITH 732 INCREMENT BY 1;
Now my question:
How do I instruct Entity Framework to use TPM_PROJECTVERSIONNOTES_SEQ.nextval when inserting a row into this table? Basically, I just don't want to specify a NOTEID at all, and I want the framework to take care of it for me. It's been suggested that I use a trigger, but I think that is a bit hacky and would rather have Entity Framework just create the correct SQL in the first place. I'm using Oracle 11g for this.
While this may still fall into what you call the 'hacky' category, you can avoid using triggers to call nextval by using a stored procedure to handle the insert (the procedure calls nextval in lieu of a trigger). (I guess this could fall more into a TAPI/XAPI category.)
Check out the recent article
TECHNOLOGY: Oracle Data Provider for .NET
It explains (and contains samples showing) how to use a stored procedure to handle the insert, call the sequence, and map the result back in the ODP EF Beta.
This obviously does not make the ODP EF Beta generate the SQL for nextval, but it is an alternative. (Looking at this forum, it appears that most of the EF Oracle providers fall victim to this - devart etc.: https://forums.oracle.com/forums/thread.jspa?threadID=2184372)
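For completeness, the underlying idea can also be sketched without a full stored procedure: reference the sequence directly in a parameterized INSERT and read the generated key back with a RETURNING clause via ODP.NET. (This is a plain-ADO.NET sketch, not the EF mapping the question asks for; the non-key column names are hypothetical.)

using Oracle.DataAccess.Client;
using Oracle.DataAccess.Types;

public static decimal InsertNote(string connStr, int versionId, string text)
{
    using (var conn = new OracleConnection(connStr))
    using (var cmd = conn.CreateCommand())
    {
        conn.Open();
        cmd.BindByName = true;
        // The sequence supplies NOTEID; RETURNING hands it back without a
        // second query, so there is no Max()+1 race.
        cmd.CommandText =
            @"INSERT INTO TPM_PROJECTVERSIONNOTES (NOTEID, VERSIONID, NOTETEXT)
              VALUES (TPM_PROJECTVERSIONNOTES_SEQ.NEXTVAL, :versionId, :noteText)
              RETURNING NOTEID INTO :newId";
        cmd.Parameters.Add("versionId", OracleDbType.Int32).Value = versionId;
        cmd.Parameters.Add("noteText", OracleDbType.Varchar2).Value = text;
        var newId = cmd.Parameters.Add("newId", OracleDbType.Decimal);
        newId.Direction = System.Data.ParameterDirection.Output;
        cmd.ExecuteNonQuery();
        return ((OracleDecimal)newId.Value).Value;
    }
}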

insert data from a asp.net form to a sql database with foreign key constraints

I have two tables:
Asset: AssetID (PK), EmpID (FK)
Employee: EmpID (PK)
Now, I have a form to populate the asset table, but it can't because of the foreign key constraint. What should I do?
Thanks, Tk
Foreign keys are created for a good reason: to prevent orphan rows, at a minimum. Create the corresponding parent first, and then use the appropriate value as the foreign key value in the child table.
You should think about this update as a series of SQL statements, not just one statement. You'll process the statements in order of dependency; see the example.
Asset
PK AssetID
AssetName
FK EmployeeID
etc...
Employee
PK EmployeeID
EmployeeName
etc...
If you want to "add" a new asset, you'll first need to know which employee it will be assigned to. If it will be assigned to a new employee, you'll need to add them first.
Here is an example of adding an asset named 'BOOK' for a new employee named 'Zach'.
DECLARE @EmployeeFK AS INT;
INSERT INTO Employee (EmployeeName) VALUES ('Zach');
SELECT @EmployeeFK = @@IDENTITY;
INSERT INTO Asset (AssetName, EmployeeID) VALUES ('BOOK', @EmployeeFK);
The important thing to notice above is that we grab the new identity (aka EmployeeID) assigned to 'Zach', so we can use it when we add the new asset.
If I understand you correctly, you are trying to build the data graph locally before persisting it to the database? That is, create the parent and child records within the application and persist it all at once?
There are a couple of approaches to this. One approach is to use GUIDs as the unique identifiers for the data. That way you don't need to get the next ID from the database; you can just create the graph locally and persist the whole thing. There's been a debate on this approach between software and database people for a long time, because while it makes a lot of sense in many ways (hit the database less often, maintain relationships before persisting, uniquely identify data across systems), it turns out to be a significant resource hit on the database.
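A minimal sketch of the GUID approach (table and property names taken from the example above):

using System;

public class Employee { public Guid EmployeeID; public string EmployeeName; }
public class Asset    { public Guid AssetID; public string AssetName; public Guid EmployeeID; }

public static class Example
{
    public static void Main()
    {
        // Keys are generated in code, so the parent/child graph can be
        // wired up completely before anything is sent to the database.
        var employee = new Employee { EmployeeID = Guid.NewGuid(), EmployeeName = "Zach" };
        var asset = new Asset
        {
            AssetID = Guid.NewGuid(),
            AssetName = "BOOK",
            EmployeeID = employee.EmployeeID
        };
        // Persist both rows in one transaction; no identity round-trip needed.
    }
}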
Another approach is to use an ORM that will handle the persistence mapping for you. Something like NHibernate, for example. You would create your parent object and the child objects would just be properties on that. They wouldn't have any concept of foreign keys and IDs and such, they'd just be objects in code related by being set as properties on each other (such as a "blog post" object with a generic collection of "comment" objects, etc.). This graph would be handed off to the ORM which would use its knowledge of the mapping between the objects and the persistence to send it off to the database in the correct order, perhaps giving back the same object but with ID numbers populated.
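A sketch of what such a graph looks like in code (illustrative names; the ORM mapping, e.g. an NHibernate mapping with cascades, is what turns this into the right INSERTs):

using System.Collections.Generic;

public class BlogPost
{
    public string Title { get; set; }
    public IList<Comment> Comments { get; private set; }
    public BlogPost() { Comments = new List<Comment>(); }
}

public class Comment
{
    public string Text { get; set; }
}

// Usage: build the graph, then hand the root to the ORM session.
// var post = new BlogPost { Title = "Hello" };
// post.Comments.Add(new Comment { Text = "First!" });
// session.Save(post);   // cascades to the comments, per the mapping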
Or is this not what you're asking? It's a little unclear, to be honest.
