In an n-tiered application where you are using custom entities, how do you find yourself handling data needed from lookup tables? Do you create entities for each of these lookup tables or employ some other strategy?
For example, I have a "Ratings" lookup table that will be used to populate a dropdown list. Would you create a Rating object with a RatingId and Rating property and pass that to your UI, or is there a more efficient way to go about it?
Appreciate your thoughts.
I'd suggest that the solution will depend on how often the lookup data changes, whether or not it needs to be editable, and whether or not you're enforcing referential integrity at the database. I think it makes the schema more understandable if you put each lookup type into a separate table.
I generally don't create entities for each lookup table, but instead will load most of the common lookups into structures that are easily reused by the application. For an ASP.NET app, for example, I'll create hashtables or ordered dictionaries which can easily be bound to most web controls.
And, horror of horrors, I sometimes create a singleton to manage access to all these lookups, which can be stored as static variables or in the cache, depending on requirements.
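For illustration, here's a minimal sketch of that kind of singleton lookup cache (the class name and the hard-coded rating values are made up; a real version would load them from the data access layer, or keep them in the ASP.NET cache instead of a static):

// Minimal sketch of a singleton lookup cache (names and values are hypothetical).
// Each lookup type is exposed as an ordered dictionary that can be bound
// directly to a DropDownList or similar control.
using System.Collections.Specialized;

public sealed class LookupCache
{
    private static readonly LookupCache instance = new LookupCache();
    private readonly OrderedDictionary ratings = new OrderedDictionary();

    public static LookupCache Instance
    {
        get { return instance; }
    }

    // One dictionary per lookup type (Ratings, Statuses, ...).
    public OrderedDictionary Ratings
    {
        get { return ratings; }
    }

    private LookupCache()
    {
        // Stubbed out here; in a real app these rows would come from the DAL.
        ratings.Add(1, "Poor");
        ratings.Add(2, "Average");
        ratings.Add(3, "Excellent");
    }
}

Binding is then just a matter of setting a control's DataSource to LookupCache.Instance.Ratings (with DataValueField = "Key" and DataTextField = "Value") and calling DataBind().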
We separate the different lookup types into different objects. It seems like a little more work up front, but it gives us the ability to change each individual object when we need to, such as adding extra information to one of them.
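Roughly (the names here are invented), each lookup type gets its own small class, so adding a property to one doesn't touch the others:

// Sketch of per-type lookup objects; class and property names are hypothetical.
public class Rating
{
    public int RatingId { get; set; }
    public string Description { get; set; }
}

public class Status
{
    public int StatusId { get; set; }
    public string Description { get; set; }
    public bool IsActive { get; set; }   // extra information added later affects only Status
}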
I have a Core Data model with something like 20 entities. I want all entities to have common attributes. For example, all of them have a creation date attribute.
I therefore introduced a common entity containing all the common attributes, and all the other entities inherit from this common entity.
This is fine and works well, but then, all entities end up in one single SQLite table (which is rather logical).
I was wondering if there is any clear drawback to this?
For example, when going live with 1000+ objects of each entity, would the (single) table become so huge that serious performance problems could arise?
This question has been asked before:
Core Data entity inheritance --> limitations?
Core data performances: when all entities inherit from the same parent entity
Core Data inheritance vs no inheritance
Also keep in mind that when you want to check the SQLite file for debugging purposes, separate tables are easier to examine.
I would use a common NSManagedObject subclass instead of a parent entity.
Don't worry about this. From Core Data documentation:
https://developer.apple.com/library/tvos/documentation/Cocoa/Conceptual/CoreData/Performance.html
... The SQLite store can scale to terabyte-sized databases with billions of rows, tables, and columns. Unless your entities themselves have very large attributes or large numbers of properties, 10,000 objects is considered a fairly small size for a data set.
What is far more important is that if you are doing any heavy operations, like fetching a lot of objects or parsing objects based on some JSON from a web service, you do this off the main thread. This is not very hard to do: look into parent/child managed object contexts and how they can be used with private and main queue concurrency types. Many good blog posts about this subject exist all over the web.
I've been working on a project with one base entity, around 20 sub-entities, and easily 50k instances overall for over two years now. We've never had performance problems with selects, inserts or updates.
The keys to using Core Data inheritance with large data sets are
optimized fetch requests (tune the predicate, exclude irrelevant properties, prefetch relationships, omit sub-entities, set a fetchLimit, use the dictionary result type or count requests if sufficient, etc.)
batch saves (meaning not saving the MOC after every insert, etc.)
setting up proper indices (they can speed up selects a lot)
structuring your UI appropriately so you won't have to load and display many thousands of objects in one view controller
We do not even use parent/child managedObjectContexts or private queues (which introduce a lot of extra complexity of their own) when importing JSON, as our data model and mapping code is so highly optimized that the UI doesn't noticeably flicker or hang when importing a few thousand objects.
I'm creating an ASP.NET application (SQL Server backend) that allows the user (a business) to create their own tables and fields. They will all be child tables of a parent table (non-dynamic) and have proper PK/FK relationships (default fields when the table is created).
However, I don't like my current method of updating/inserting and selecting the fields. I was going to create an SP that was passed the proper keys and table names and then have it return the proper SQL statement. I'm thinking that it might make more sense to just pass the name/value pairs of fields/values and have an SP actually process them. Is this the best way to do it? If so, I'm not good with SPs, so any examples of how to do it would help.
I don't have a lot of experience with the EAV model, but it does sound like it might be a good idea for implementing what you're trying to achieve. However, if you already have a system in place, an overhaul could be very expensive.
If the queries you're making against the user tables are basic CRUD operations, what about just creating CRUD stored procs for each table? E.g. -
Table:
acme_orders
Stored Procs:
acme_orders_insert
acme_orders_update
acme_orders_select
acme_orders_delete
... [other necessary procs]
I have no idea what the business needs are for these tables, but I imagine that whatever you're doing currently could be translated into doing the same thing with stored procs.
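For completeness, calling one of those per-table procs from the ASP.NET side is plain ADO.NET; this is only a sketch, and the parameter names (@CustomerId, @Total) and the assumption that the proc returns the new id are hypothetical:

// Minimal sketch of calling a per-table CRUD proc from application code.
// Proc parameters and return value are assumptions, not part of the question.
using System;
using System.Data;
using System.Data.SqlClient;

public static class AcmeOrders
{
    public static int Insert(string connectionString, int customerId, decimal total)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("acme_orders_insert", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@CustomerId", customerId);
            cmd.Parameters.AddWithValue("@Total", total);

            conn.Open();
            // Assumes the proc SELECTs the newly created order id.
            return Convert.ToInt32(cmd.ExecuteScalar());
        }
    }
}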
"I was going to create an SP that was passed the proper keys and table names, then have it return the proper SQL statement. I'm thinking that it might make more sense to just pass the name/value pairs of fields/values and have an SP actually process them."
Assuming you mean the proc would generate and then execute the SQL (sometimes known as dynamic SQL), this can work, but it will probably perform more slowly than the static, compiled SQL in normal procs.
I have an ASP.NET data entry application that is used by multiple clients. The application consists of multiple data entry modules that are common to all clients.
I now have multiple clients that want their own custom module added which will typically consist of a dozen or so data points. Some values will be text, others numeric, some will be dropdown selections, etc.
I'm in need of suggestions for handling the data model for this. I have two thoughts on how to handle it. The first would be to create a new table for each new module for each client. This is pretty clean, but I don't particularly like it. My other thought is to have one table with columns for each custom data point for each client. This table would end up with a lot of columns and a lot of NULL values. I don't really like either solution and suspect there's a better way to do this, so any feedback you have will be appreciated.
I'm using SQL Server 2008.
As always with these questions, "it depends".
The dreaded key-value table.
This approach relies on a table which lists the fields and their values as individual records.
CustomFields(clientId int, fieldName sysname, fieldValue varbinary)
Benefits:
Infinitely flexible
Easy to implement
Easy to index
Non-existing values take no space
Disadvantage:
Showing a list of all records with their complete field lists requires a very dirty query (see the sketch below)
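If the flat view is only needed in application code rather than in SQL, one way to sidestep that query is to reassemble the rows on the client. Here is a minimal sketch against the CustomFields table above, treating fieldValue as text for simplicity:

// Sketch: pivoting CustomFields(clientId, fieldName, fieldValue) rows back into
// one field-name -> value dictionary per client. Casting the varbinary value to
// nvarchar is a simplification for the example.
using System.Collections.Generic;
using System.Data.SqlClient;

public static class CustomFieldLoader
{
    public static Dictionary<int, Dictionary<string, string>> LoadAll(string connectionString)
    {
        var result = new Dictionary<int, Dictionary<string, string>>();

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT clientId, fieldName, CAST(fieldValue AS nvarchar(max)) FROM CustomFields", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    int clientId = reader.GetInt32(0);
                    Dictionary<string, string> fields;
                    if (!result.TryGetValue(clientId, out fields))
                    {
                        fields = new Dictionary<string, string>();
                        result[clientId] = fields;
                    }
                    fields[reader.GetString(1)] = reader.GetString(2);
                }
            }
        }
        return result;
    }
}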
The Microsoft way
The Microsoft way of solving this kind of problem is "sparse columns" (introduced in SQL Server 2008).
Benefits:
Blessed by the people who design SQL Server
Records can be queried without having to apply fancy pivots
Fields without data don't take space on disk
Disadvantages:
Many technical restrictions
A new field requires a schema change (DDL)
The xml tax
You can add an xml field to the table which will be used to store all the "extra" fields.
Benefits:
Unlimited flexibility
Can be indexed
Storage efficient (when it fits in a page)
With some XPath gymnastics the fields can be included in a flat recordset (or pulled apart in application code, as in the sketch below)
Schema can be enforced with XML schema collections
Disadvantages:
It's not clearly visible what's in the field
XQuery support in SQL Server has gaps, which sometimes makes getting your data out a real nightmare
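Here is the client-side sketch referred to above: pulling the extra fields out with LINQ to XML instead of XQuery. The table, column and element names (ClientRecords, ExtraFields, field/name) are assumptions for the example:

// Sketch: reading the "extra" fields from an xml column in application code.
// Assumes xml shaped like <fields><field name="Rating">A</field></fields>.
using System;
using System.Data.SqlClient;
using System.Xml.Linq;

public static class ExtraFieldDumper
{
    public static void DumpAll(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT clientId, CAST(ExtraFields AS nvarchar(max)) FROM ClientRecords", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    var doc = XElement.Parse(reader.GetString(1));
                    foreach (var field in doc.Elements("field"))
                    {
                        Console.WriteLine("{0}: {1} = {2}",
                            reader.GetInt32(0),
                            (string)field.Attribute("name"),
                            field.Value);
                    }
                }
            }
        }
    }
}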
There are maybe more solutions, but to me these are the main contenders. Which one to choose:
Key-value seems appropriate when the number of extra fields is limited (say, no more than 10-20 or so).
Sparse columns are more suitable for data with many properties that are filled in infrequently. They sound more appropriate when you can have many extra fields.
An xml column is very flexible but a pain to query. It's appropriate for solutions that write rarely and query rarely, i.e. don't run aggregates etc. on the data stored in this field.
I'd suggest you go with the first option you described. I wouldn't overthink it. The second option you outlined would be a bad idea in my opinion.
If there are fields common to all the modules you're adding to the system, you should consider keeping those in a single table and then having other tables, with the fields specific to a particular module, related back to the primary key in the common table. This is basically table inheritance (http://www.sqlteam.com/article/implementing-table-inheritance-in-sql-server); it centralizes the common module data and makes it easier to query across modules.
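In class terms (all names below are invented), that layout looks roughly like a common record plus a module-specific record sharing the same key:

// Hypothetical sketch of the table-inheritance layout mirrored as entities:
// ModuleEntry maps to the common table; each module-specific table (here
// AcmeInspectionEntry) uses the same id as both its PK and an FK back to it.
using System;

public class ModuleEntry
{
    public int ModuleEntryId { get; set; }   // PK of the common table
    public int ClientId { get; set; }
    public DateTime CreatedOn { get; set; }
}

public class AcmeInspectionEntry
{
    public int ModuleEntryId { get; set; }   // PK here, FK to ModuleEntry
    public string InspectorName { get; set; }
    public decimal MeterReading { get; set; }
}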
I want to use ASP.NET Dynamic Data for my next project, but there is a problem I can't manage to solve. In the database we manage authorization on a per-row basis. For example, no user is permitted to see all rows of the Contracts table, so there is a many-to-many relationship between Contracts and Users. Every time Dynamic Data performs a select to show all Contracts, it has to look into the ContractUsers junction table to see which contracts the current user is permitted to see (filtered by the UserID, which will be stored in a session variable). Of course these junction tables should be invisible to the users.
By default Dynamic Data returns all rows of a table, so is it possible to customize this behaviour for every query the user performs?
I want to use Dynamic Data together with LINQ to SQL, but if this task would be much easier to accomplish using Entity Framework I would look into that too.
Thanks for your help and time.
Implementing such a solution in Dynamic Data will probably require the creation of a custom Entity Template; not really easy, but once done it will not require the creation of custom pages, just the editing of the page templates.
I think it will be really useful to check the excellent work on Dynamic Data done by S.J. Naughton and presented on his blog.
Greetings, F.
You should not use Dynamic Data here, because you need full control over querying and will have to write all the LINQ queries manually to add your data-level security. If you still insist on Dynamic Data, be aware that you will still write most of the pages yourself and will only use the dynamic templates. You will have to manually define every data source and correctly pass a where condition to filter results based on the logged-in user.
In addition, LINQ to SQL is not able to hide the junction table, and Entity Framework can do that only if the junction table contains just the two FKs for the many-to-many relation. If the table contains any other column you want to use in the application, you will have to map it like any other entity, and Dynamic Data will show it as one.
Dynamic Data is a technology for quickly creating simple applications where you need to provide access to a database through a web interface, but what you describe is not a simple scenario. You need per-record authorization, which can differ among entity types.
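For what it's worth, the per-user filter you would end up writing by hand looks roughly like this in LINQ to SQL. The Contract and ContractUser names and the UserID property come from the question's description; the placeholder classes below only stand in for the generated LINQ to SQL types:

// Rough sketch of the row-level filter, not a drop-in implementation.
using System.Collections.Generic;
using System.Linq;

// Placeholder shapes standing in for the generated LINQ to SQL classes.
public class ContractUser { public int UserID { get; set; } }
public class Contract { public List<ContractUser> ContractUsers { get; set; } }
public class MyDataContext { public IQueryable<Contract> Contracts { get; set; } }

public static class ContractSecurity
{
    public static IQueryable<Contract> VisibleContracts(MyDataContext db, int currentUserId)
    {
        // Only contracts that have a row in the ContractUsers junction table
        // for the currently logged-in user.
        return db.Contracts
                 .Where(c => c.ContractUsers.Any(cu => cu.UserID == currentUserId));
    }
}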
I am developing an ASP.NET 2.0 website. I have created data access and business logic layers. Now in the presentation layer I am returning data from the business layer as a dataset.
My question is whether to use a dataset or object collection (for example a Category object representing the Category table in my database). I have defined all classes that are mapped to database tables (Common objects). But there are situations where I need all of the records from the category table in the presentation layer. I am just confused. What should I do?
You don't want to return datasets, you want to return objects.
Generally when you have a data access layer and a business logic layer, you will also want an entity layer. The entity layer is an in-memory representation of the database result set. If you return one row from the database, you load one entity object. If you return more than one row, you load an entity for each row and return a collection of entities to be consumed by the presentation layer.
If you're using .NET 2.0 and above, for example, you can create generic collections of the entity type and easily bind different types of controls to these collections.
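A minimal sketch using the Category table mentioned in the question (the CategoryId and Name column names are assumed):

// Sketch: a Category entity plus a data access method that maps each row of the
// result set to one entity and returns the collection to the presentation layer.
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class Category
{
    private int categoryId;
    private string name;

    public int CategoryId
    {
        get { return categoryId; }
        set { categoryId = value; }
    }

    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

public static class CategoryRepository
{
    public static List<Category> GetAll(string connectionString)
    {
        List<Category> categories = new List<Category>();

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT CategoryId, Name FROM Category", conn))
        {
            conn.Open();
            using (IDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Category c = new Category();
                    c.CategoryId = reader.GetInt32(0);
                    c.Name = reader.GetString(1);
                    categories.Add(c);
                }
            }
        }
        return categories;
    }
}

The resulting List<Category> can then be bound to a GridView or DropDownList just like a DataTable.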
Hope this is helpful for you.
I would recommend returning objects, or IEnumerable/IList etc. of objects.
Depending on your DB access, you can populate a list of Category objects manually, or very quickly using something like LINQ to SQL or the ADO.NET Entity Framework, and cache them if required.
I've used both methods depending on the situation. If you only need to display the data from a table in a grid, the dataset (or DataTable) method isn't terrible, because if fields are added to the table they will automatically appear in the grid... assuming you are auto-populating the grid's columns. I look at this as more of a quick-and-dirty method.
I would not return datasets at all. You are tightly coupling your service to your database. In the future, when your database evolves, you will be forced to change everything that uses the datasets. You want to create an abstraction layer between the database and the service.
I'd go with an object/collection solution. So if you are returning one row from a table you have one object, and if you are returning several you'd use a generic collection. A generic collection will also avoid a boxing/unboxing hit.
[edit]seems I was beaten to it[/edit]
You should create an entity layer, with classes representing each table in the database, and then return lists of those classes.