How to set up a data model for a customizable application - asp.net

I have an ASP.NET data entry application that is used by multiple clients. The application consists of multiple data entry modules that are common to all clients.
I now have multiple clients that want their own custom module added which will typically consist of a dozen or so data points. Some values will be text, others numeric, some will be dropdown selections, etc.
I'm in need of suggestions for handling the data model for this. I have two thoughts on how to handle it. The first would be to create a new table for each new module for each client. This is pretty clean, but I don't particularly like it. My other thought is to have one table with columns for each custom data point for each client. This table would end up with a lot of columns and a lot of NULL values. I don't really like either solution and suspect there's a better way to do this, so any feedback you have will be appreciated.
I'm using SQL Server 2008.

As always with these questions, "it depends".
The dreaded key-value table.
This approach relies on a table which lists the fields and their values as individual records.
CustomFields(clientId int, fieldName sysname, fieldValue varbinary)
Benefits:
Infinitely flexible
Easy to implement
Easy to index
Non-existing values take no space
Disadvantage:
Showing a list of all records with their complete field list requires a very dirty pivot-style query (see the sketch below)
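To make that concrete, here is a minimal sketch of the approach; the record and field names are made up, and I'm using nvarchar instead of varbinary purely for readability:

CREATE TABLE CustomFields (
    clientId   int           NOT NULL,
    recordId   int           NOT NULL,   -- the data entry record the value belongs to
    fieldName  sysname       NOT NULL,
    fieldValue nvarchar(400) NULL,
    CONSTRAINT PK_CustomFields PRIMARY KEY (clientId, recordId, fieldName)
);

-- The "dirty" part: flattening the rows back into columns requires a pivot
SELECT recordId,
       MAX(CASE WHEN fieldName = 'Weight'    THEN fieldValue END) AS [Weight],
       MAX(CASE WHEN fieldName = 'Fragrance' THEN fieldValue END) AS Fragrance
FROM CustomFields
WHERE clientId = 42
GROUP BY recordId;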
The Microsoft way
The Microsoft way of handling this kind of problem is "sparse columns" (introduced in SQL Server 2008).
Benefits:
Blessed by the people who design SQL Server
records can be queried without having to apply fancy pivots
Fields without data don't take space on disk
Disadvantages:
Many technical restrictions
A new field requires DDL
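For illustration, a sparse-column table could look something like the sketch below (table and column names are made up); note that adding a data point for a new client means an ALTER TABLE:

CREATE TABLE ModuleData (
    recordId  int IDENTITY(1,1) PRIMARY KEY,
    clientId  int NOT NULL,
    -- client-specific data points; empty values take no space on disk
    Weight    decimal(10,2) SPARSE NULL,
    Fragrance nvarchar(50)  SPARSE NULL,
    -- optional column set that exposes all sparse columns as one xml value
    AllCustomFields xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);

-- A new field requires DDL:
ALTER TABLE ModuleData ADD ShelfLifeDays int SPARSE NULL;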
The xml tax
You can add an xml field to the table which will be used to store all the "extra" fields.
Benefits:
unlimited flexibility
can be indexed
storage efficient (when it fits in a page)
With some xpath gymnastics the fields can be included in a flat recordset.
schema can be enforced with schema collections
Disadvantages:
It's not clearly visible what's in the field
XQuery support in SQL Server has gaps, which sometimes makes getting your data out a real nightmare
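A rough sketch of what the xpath gymnastics look like; the table, column and element names here are assumptions:

-- One xml column holds all the "extra" fields for a record
ALTER TABLE DataEntries ADD ExtraFields xml NULL;

-- Flattening the xml into a normal recordset
SELECT e.EntryId,
       e.ExtraFields.value('(/fields/field[@name="Weight"]/text())[1]',    'decimal(10,2)') AS [Weight],
       e.ExtraFields.value('(/fields/field[@name="Fragrance"]/text())[1]', 'nvarchar(50)')  AS Fragrance
FROM DataEntries e;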
There are maybe more solutions, but to me these are the main contenders. Which one to choose:
Key-value seems appropriate when the number of extra fields is limited (say, no more than 10-20 or so).
Sparse columns are more suitable for data with many properties that are filled out infrequently. They sound more appropriate when you can have many extra fields.
An xml column is very flexible, but a pain to query. It's appropriate for solutions that write rarely and query rarely, i.e. don't run aggregates etc. on the data stored in this field.

I'd suggest you go with the first option you described. I wouldn't over think it. The second option you outlined would be a bad idea in my opinion.
If there are fields common to all the modules you're adding to the system, you should consider keeping those in a single table, then have other tables with the fields specific to a particular module relate back to the primary key in the common table. This is basically table inheritance (http://www.sqlteam.com/article/implementing-table-inheritance-in-sql-server) and will centralize the common module data and make it easier to query across modules.
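A minimal sketch of that layout, with made-up module and column names:

-- Common data shared by every module
CREATE TABLE ModuleEntries (
    EntryId   int IDENTITY(1,1) PRIMARY KEY,
    ClientId  int      NOT NULL,
    EnteredOn datetime NOT NULL DEFAULT GETDATE()
);

-- Fields specific to one client's custom module, keyed back to the common table
CREATE TABLE AcmeFragranceModuleEntries (
    EntryId   int NOT NULL PRIMARY KEY
                  REFERENCES ModuleEntries (EntryId),
    Weight    decimal(10,2) NULL,
    Fragrance nvarchar(50)  NULL
);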

Related

SQL Server inserting lots of data from ASP.NET?

I have this application, where there is a parent-child table structure, and customers can order products. The whole structure is quite complex to post here, but suffice to say there is one Order table and one OrderDetails table for storing the orders. Currently what we are doing is INSERT one record into the Order table, and then, for each item the customer added, insert each item in a loop into the OrderDetails table. The solution is not scalable for obvious reasons. It works fine for 100 or so items, but if the user goes over 1000 items, or 1000 qty of an item or so, one can start to notice the unresponsiveness of the application.
There are a couple of solutions that come to mind, but I am not sure which one would scale well. One is to use BulkInsert from my asp.net application to insert into the OrderDetails table. The second is to generate XML and then pass that to a SQL proc and extract/insert the data into the OrderDetails table from that XML, but that has the associated overhead of the memory consumed by the generated XML. I know I could benchmark and see for myself what would suit my application best, but I would like to know which strategy is the most common and which would scale better compared to the other. Also, if there is another technique I could use instead of these two that would be better performance-wise (I know performance is a subjective word, but let me narrow it down to speed), I could use that. Which is generally used the most? What do you use in your application?
You could consider exploring the option of using a table valued parameter in the database. You will have to create a table type object, whose structure will mimic that of the OrderDetails table. The stored proc for inserting the data will accept an input parameter of this type (such parameters are always READONLY).
In your server side code, you can construct a DataTable object containing all the Order Details data, which will be mapped to the input parameter of the stored proc. Ensure that the order of columns in the DataTable object exactly matches the order in the table valued parameter. Upon executing the query, all the data will be inserted in one shot. This will save you from looping for each row of data that is there, and will also prevent the overhead of XML parsing. This approach though will involve passing an entire object over the network.
You can read more about it here: MSDN Table-Valued Parameters
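Here's a rough sketch of the database side of this approach; the column names are assumptions based on a typical OrderDetails table:

-- Table type that mirrors the OrderDetails structure
CREATE TYPE OrderDetailsType AS TABLE (
    ProductId int           NOT NULL,
    Quantity  int           NOT NULL,
    UnitPrice decimal(10,2) NOT NULL
);
GO

-- Stored proc that receives the whole detail set in one READONLY parameter
CREATE PROCEDURE InsertOrderDetails
    @OrderId int,
    @Details OrderDetailsType READONLY
AS
BEGIN
    INSERT INTO OrderDetails (OrderId, ProductId, Quantity, UnitPrice)
    SELECT @OrderId, ProductId, Quantity, UnitPrice
    FROM @Details;
END

On the ASP.NET side, the DataTable is passed as a SqlParameter with SqlDbType.Structured (and TypeName set to the table type), so the whole detail set goes to the server in a single round trip.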
1000 items for an order does seem quite excessive!
Would it be feasible to introduce a limit of 100 items per order into the business logic of the application?

Riak solution for querying data by books or unique pages

Consider a set of data called Library, which contains a set of Books and each book contains a set of Pages.
Let's say you are using Riak to store this data, and you need to be able to access the data in two possible ways:
- Query for a particular page (with a unique id)
- Query for all pages in a particular book (with a unique name)
Additionally, you need to be able to easily update and delete pages of a particular Book.
What would be the best way to accomplish this in Riak?
Obviously Riak Search will do the trick, but it may be inefficient for what I am trying to do. I am wondering if it makes sense to set up buckets where each bucket would be a Book (which would make for potentially millions of "Book" buckets). Maybe that is a bad idea...
Can this be accomplished with secondary indexes?
I am trying to keep this simple...
I am new to Riak and I am trying to find the best way to accomplish something that is probably relatively simple. I would appreciate any help from the Stack Overflow community. Thanks!
A common way to model master-detail relationships in Riak is to have the master record contain a list of detail record IDs, possibly together with some information about the detail record that may be useful when deciding which detail records to retrieve.
In your example, you could have two buckets called 'books' and 'pages'. The master record in the 'books' bucket will contain metadata and information about the book as a whole, together with a list of the pages that are included in the book. Each entry in this list would contain the ID of the 'pages' record holding the page data as well as the corresponding page number. If you e.g. wanted to be able to query by chapter, you could also add information about which chapters a certain page belongs to.
The 'pages' bucket would contain the text of the page and possibly links to images and other media data that are included on that page. This data could be stored in yet another bucket.
In order to get a specific page or a range of pages, one would first retrieve the master record from the 'books' bucket and then, based on the contents of that record, retrieve the appropriate pages. Even though this requires several GET operations, they are all direct lookups based on keys, which is the most efficient and scalable way to retrieve data from Riak, so it will perform and scale well.
This approach also makes it simple to change the order of pages and/or chapters as only the master record needs to be updated. Adding, deleting or modifying pages would however require both the master record as well as one or more detail records to be updated, added or deleted.
You can most certainly also solve this problem by adding secondary indexes to the objects and querying based on those. Secondary index queries in Riak do however have to include processing on a covering set (generally ring size / n_val) of partitions in order to fulfil the request, and therefore put a bit more load on the system and generally result in higher latencies than retrieving a single object containing keys through a direct key lookup (which only needs to involve the partitions where the object is actually stored).
Although maintaining a separate object containing indexes adds a bit of extra work when inserting or deleting pages/entries, this approach will generally result in more efficient reads, as only direct key lookups are required. If your application is heavy on reads, it probably makes sense to use this approach, while secondary indexes could be more efficient for a write heavy application as inserts and modifications are made cheaper at the expense of more expensive reads. You can however always add secondary indexes just in case in order to keep your options open.
In cases like this I would usually recommend performing some benchmarks to test the solutions and check which solution best matches your particular performance and scaling requirements.
The most efficient way would be to store the whole book as one object, and duplicate its pages as separate objects.
Pros:
You will be able to select any object by its key (the cheapest operation in Riak is a key-value query)
Any query will have predictable latency
This is the natural way of storing data for Riak
Cons:
If you need to update any page you must update the whole book, and then the page. As Riak doesn't have atomic operations, you must think about how to recover from any failure situation (like this: the book was updated, but the page was not).
Riak is about availability and predictable latency, so if you use something like 2i (secondary indexes) to collect results, query time becomes unpredictable and will grow with the number of pages

How to implement gapless, user-friendly IDs in NHibernate?

I'm designing an application where my Order objects need to have a sequential and user-friendly Id field. I'm avoiding the HiLo algorithm because of the rather large gaps it produces (see here). Naturally, Guid values would make my corporate users go bananas. I'm also avoiding Oracle sequences because of their major disadvantages:
(From: NHibernate POID Generators revealed)
Post insert generators, as the name suggest, assigns the id’s after the entity is stored in the database. A select statement is executed against database. They have many drawbacks, and in my opinion they must be used only on brownfield projects. Those generators are what WE DO NOT SUGGEST as NH Team.
Some of the drawbacks are the following:
Unit Of Work is broken with the use of those strategies. It doesn’t matter if you’re using FlushMode.Commit, each Save results in an insert statement against DB. As a best practice, we should defer insertions to the commit, but using a post insert generator makes it commit on save (which is what UoW doesn’t do).
Those strategies nullify batcher, you can’t take the advantage of sending multiple queries at once (as it must go to database at the time of Save).
Any ideas/experience on implementing user-friendly IDs without major gaps between them?
Edit:
User-friendly Id fields are ones my corporate users can memorize, discuss, and even reference in phone conversations about a particular Order by its code, e.g. "I'm calling to know why order #1625 was denied."
The Id doesn't need to be strictly gapless, but I am worried that my users would get confused when they see gaps like 100, 201, 305. For my older projects, I currently implement NHibernate using Oracle sequences, which occasionally lose a few values when exceptions are thrown but still keep a rather tidy order. The downside is that they break the Unit of Work, which results in additional hits to the database for every Save command, with or without Session.Flush.
One option would be to keep a key-table that simply stores an incrementing value. This can introduce a few problems, namely possible locking issues as well as additional hits to the database.
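Since you're on Oracle, a minimal sketch of such a key table might look like this (names are made up); the row lock taken by the UPDATE is what serializes concurrent callers, which is exactly the locking concern mentioned above:

CREATE TABLE order_number_keys (
    key_name   VARCHAR2(30) PRIMARY KEY,
    next_value NUMBER(10)   NOT NULL
);

-- Grab the next user-friendly order number; the value is read and incremented
-- in a single statement, and the row stays locked until the transaction commits
UPDATE order_number_keys
SET next_value = next_value + 1
WHERE key_name = 'ORDER'
RETURNING next_value INTO :new_order_id;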
Another option might be to refine what you mean by "User-friendly Id". This could consist of a combination of a Date/Time and a customer-specific sequence (or including the customer id as well). Also, your order id does not necessarily have to be the actual key on the table. There is nothing to say that you can't use a surrogate key with a separate "calculated" column which represents the order id.
The bottom-line is that it sounds like you want to use a surrogate key, but have the benefits of a natural key. It can be very difficult to have it both ways and a lot comes down to how you actually plan on using the data, how users interpret the data, and personal preference.

Entity Attribute Value (EAV) vs. XML Column for New Product Attributes

I have an existing, mature schema to which we need to add some new Product attributes. For example, we have Products.Flavor, and now need to add new attributes such as Weight, Fragrance, etc. Rather than continue to widen the Products table, I am considering a couple of other options. First is a new Attributes table, which will effectively be a property bag for arbitrary attributes, and a ProductsAttributes table to store the mappings (and values) for a particular product's attributes. This is the Entity-Attribute-Value (EAV) pattern, as I've come to understand it. The other option is to add a new column to the Products table called Attributes, which is of type XML. Here, we can arbitrarily add attributes to any product instance without adding new tables.
What are the pros/cons to each approach? I'm using SQL Server 2008 and ASP.NET 4.0.
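For reference, a rough sketch of the two options being weighed, using the names from the question (the column types, and the assumption that Products has a ProductId primary key, are mine):

-- Option 1: EAV-style property bag
CREATE TABLE Attributes (
    AttributeId int IDENTITY(1,1) PRIMARY KEY,
    Name        nvarchar(100) NOT NULL
);

CREATE TABLE ProductsAttributes (
    ProductId   int NOT NULL REFERENCES Products (ProductId),
    AttributeId int NOT NULL REFERENCES Attributes (AttributeId),
    Value       nvarchar(max) NULL,
    PRIMARY KEY (ProductId, AttributeId)
);

-- Option 2: a single xml column on the existing table
ALTER TABLE Products ADD Attributes xml NULL;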
This is (imho) one of the classic database design issues. Call it "attribute creep", perhaps, as once you start, there will always be another attribute or property to add. The key decision is: do you store the data within the database using the basic tools provided by the database (tables and columns) to structure and format the data, or do you store the data in some other fashion (XML and name/value pairs being the most common alternatives)? Simply put, if you store the data in a form other than that supported by the DBMS, then you lose the power of the DBMS to manage, maintain, and work with that data. This is not much of a problem if you only need to store it as "blob data" (dump it all in, pump it all out), but once you have to seek, sort, or filter by this data, it can get very ugly very fast.
With that said, I do have strong opinions on name/value pairs and XML, but alas, none are positive. If you do have to store your data this way, and yes, it can be an entirely valid business/design decision, then I would recommend looking long and hard at how the data you need to store in the database will be used and accessed in the future. Weigh the pros and cons of each methodology in light of how it will be used, and pick the one that's easiest to manage and maintain. (Don't pick the one that's easiest to implement; you'll be supporting it for a lot longer than you'll be writing it.)
(It's long, but the "RLH" essay is a classic example of name/value pairs run amok.)
(Oh, and if you're using it, look into SQL Server 2008's "Sparse Columns" option. Doesn't sound like what you need, but you never know.)

Add new columns in an ASP.NET application

I am facing this question in a new little project:
The system to be built will allow the user to add new columns to a table in the system, and then the user will be able to maintain the data. I think there are two ways to implement this:
1) Create a few tables, including a "columns" table with "columnName", "columnValue", "datatype", etc. to store the column definitions, another table "XXCoumn" to store the values of the columns (entered by the user), and use a stored procedure to query/update the column data.
2) Create the column in the table schema when the user enters a new column; then maintaining the table data works just as normal.
Which way do you guys reckon is better? Or do you have any other suggestions?
Some additional info: the data volume is small, and I need to create reports.
Any good recommendations would require a much better understanding of your requirements, but here are some comments on the options you mentioned, as well as some additional thoughts.
1) Entity-Attribute-Value (EAV) Design: This is the option you describe where you have a table that has columns for ColumnName, Type and Value. This option has the advantage of being able to accommodate unlimited new columns easily, but I have found it to be painful when the time comes to retrieve meaningful data. For example, say you have rows in this EAV table for {Color, varchar}{Red, Green, Blue} and {Size, varchar}{Small, Medium, Large}. If you want to find all the small green items, you need something like this (untested SQL of course):
SELECT *
FROM ITEMS
WHERE ITEMID IN (SELECT ITEMID
                 FROM ITEM_ATTRIBUTES ATT
                      INNER JOIN ITEM_VALUES VLS ON ATT.AttributeID = VLS.AttributeID
                 WHERE ATT.ColumnName = 'Color' AND VLS.Value = 'Green')
  AND ITEMID IN (SELECT ITEMID
                 FROM ITEM_ATTRIBUTES ATT
                      INNER JOIN ITEM_VALUES VLS ON ATT.AttributeID = VLS.AttributeID
                 WHERE ATT.ColumnName = 'Size' AND VLS.Value = 'Small')
Contrast this with having actual columns on the items table for color and size:
SELECT *
FROM ITEMS
WHERE COLOR = 'Green' AND SIZE = 'Small'
In addition, you will have a difficult time maintaining data integrity, if that is important for this app (and it is almost always important, even when you are told otherwise). In the example above, you will need to implement extra logic if "Color" should be limited to Blue, Green, and Red. Also, you will need to implement even more logic if certain colors only come in certain sizes (for example, blue items are only available in small and medium).
2) User-Defined Columns: Just giving the user the ability to add additional columns to the table has the advantage of making data retrieval simpler, but all the data integrity issues remain. Also, your app usually requires extra logic to deal with the unknown columns.
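If you go with option 2, the column creation typically ends up as dynamic DDL along these lines (assuming SQL Server; the table and column names are only illustrative):

-- Add a user-defined column at runtime; QUOTENAME guards the identifier
DECLARE @columnName sysname;
DECLARE @sql nvarchar(max);
SET @columnName = N'Fragrance';
SET @sql = N'ALTER TABLE dbo.Items ADD ' + QUOTENAME(@columnName) + N' nvarchar(100) NULL;';
EXEC sp_executesql @sql;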
3) Pre-Existing Custom Columns: I have worked with a few apps, such as CRMs, that provide a dozen or more columns already in place for user definition. Basically, the designers put in columns like "Text1","Text2","Text3","Number1","Number2", etc. The users then provide header and description information for these columns, and that is what the app uses for display purposes. This model has the advantage of easy data retrieval, as well as a pre-defined DB schema which should simplify app logic. Data integrity issues remain, however. The other obvious downside is that you will run out of pre-defined columns, which is what you are usually trying to avoid with this type of solution.
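And a rough sketch of option 3, where the generic columns exist up front and a metadata table describes how each one is being used (all names invented):

CREATE TABLE Items (
    ItemId  int IDENTITY(1,1) PRIMARY KEY,
    Text1   nvarchar(255) NULL,
    Text2   nvarchar(255) NULL,
    Number1 decimal(18,4) NULL,
    Number2 decimal(18,4) NULL
);

-- Header/description metadata the app uses for display purposes
CREATE TABLE CustomColumnDefinitions (
    ColumnName  sysname       NOT NULL PRIMARY KEY,   -- 'Text1', 'Number1', ...
    DisplayName nvarchar(100) NOT NULL,
    Description nvarchar(255) NULL
);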
As with most design issues, there are tradeoffs to each solution. My experience has been that while many users/clients say they want solutions like these, in reality they are simply trying to ensure they don't get trapped with an app that can't grow with their needs. I have found that there are actually very few places where a design like this is needed. I can almost always create a design that addresses the expansion desires of the client without putting them into the role of database designer.
"The system to be built will allow user to add new columns to a table in the system..."
Really - that's the user story? Sounds like you've already made up your mind on the solution, to me.
Whether it's a good idea or not to allow a user to extend schemas is pretty context dependent. I'd have little problem in an admin-like, limited use way. But it'd be a horribly bad idea in a MySpace type way. I suspect your situation lies somewhere between those 2 extremes.
Extending the schema would lead to much more efficient queries - as you could add indexes and such - but it does impose some relational rules on your users. Also, the extension would (probably) lock the entire table, and concurrent edits would need to be dealt with.
If this is centrally hosted by you, I would suggest NOT allowing user-input data to change the schema of the database (i.e. drive the creation of new tables).
Rather, you may want to look into using XML fields in SQL to store variable field names and data, or a more generic table structure... this technique works pretty well if we're not talking crazy amounts of data...
Is it possible you're looking at your solution sideways? It sounds like you need a mapping table (sort of like your #1). You have a table, say "objects" for example, a table called "properties" which holds what you're calling columns and then a table that holds the values, so it just has object_id, property_id, value.
To put it in a smarter way than I said it, take a look at the Entity-Attribute-Value model.
