MVC3 Best solution to manage multiple lookup tables - asp.net

I'm having a hard time finding the right solution to manage multiple modifiable lookup tables (more than 40), most of them with the same structure. I'm using the repository approach but I can't make it work. Does anyone have a working example?

Take a look at my repository pattern for EF4.1/4.2; it allows you to easily connect to an EF4.1 database and query it throughout your solution.
http://blog.staticvoid.co.nz/2011/10/staticvoid-repository-pattern-nuget.html
A working sample application is also available here:
http://dl.dropbox.com/u/37129059/StaticVoid.Repository.Demo.zip
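Whichever library you pick, the key to keeping 40-odd near-identical lookup tables manageable is to code against a single generic repository over a shared shape. Below is a minimal sketch of that idea, assuming EF 4.1 code-first; the ILookupItem, Country, Currency and LookupRepository names are illustrative, not part of the package above:

    using System.Data.Entity;
    using System.Linq;

    // Hypothetical shape shared by all the lookup tables (Id + Name).
    public interface ILookupItem
    {
        int Id { get; set; }
        string Name { get; set; }
    }

    public class Country : ILookupItem { public int Id { get; set; } public string Name { get; set; } }
    public class Currency : ILookupItem { public int Id { get; set; } public string Name { get; set; } }

    // One generic repository covers every lookup table with this structure.
    public class LookupRepository<T> where T : class, ILookupItem
    {
        private readonly DbContext _context;

        public LookupRepository(DbContext context)
        {
            _context = context;
        }

        public IQueryable<T> GetAll()
        {
            return _context.Set<T>().OrderBy(x => x.Name);
        }

        public T Find(int id)
        {
            return _context.Set<T>().Find(id);
        }

        public void Add(T item)
        {
            _context.Set<T>().Add(item);
            _context.SaveChanges();
        }

        public void Delete(int id)
        {
            var item = Find(id);
            if (item != null)
            {
                _context.Set<T>().Remove(item);
                _context.SaveChanges();
            }
        }
    }

A single admin controller and view written against LookupRepository<T> can then be reused for every one of the lookup types.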

A tool that worked great for me is a T4 code-generation template that generates enums out of lookup tables. It can save you a lot of work and keeps your source code up to date when you add new items to the lookup tables.
I also use a DB with ~30 lookup tables, and it was easy to set up, modify and use (even though I did not know much about T4 templates before).
http://erraticdev.blogspot.com/2011/01/generate-enum-of-database-lookup-table.html
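To give an idea of the generated output, a hypothetical Status lookup table might come out of such a template as an enum like this, which you then use instead of magic numbers:

    // Hypothetical output of the T4 template for a Status lookup table;
    // member values match the primary keys of the rows in the table.
    public enum Status
    {
        Active = 1,
        Suspended = 2,
        Closed = 3
    }

    // Usage inside application code: compare against the enum instead of a magic number.
    // bool isActive = order.StatusId == (int)Status.Active;

The template is re-run whenever rows are added, so the enum never drifts out of sync with the table.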

Related

Database table/field mapping in SilverStripe, integrating additional database

I'm well aware of the standard SilverStripe data structure and table/field naming conventions. But how do you integrate SilverStripe with a pre-existing database? Is there any way to map existing tables/fields with a different naming convention so they are usable by the SilverStripe ORM and DataObjects? Also, is it possible to use the ORM with two different databases?
In a recent project I had the same issue, and I solved it by creating views in the SS database over the CRM database, in order to present the data to SilverStripe in the way it likes. Obviously I also created DataObjects mapping the data, so no dev/build is needed. It's not an easy way to do it, but if you're lucky and the second database's logic is similar to SS logic it's a feasible task.
Now I have a CRM that writes data into its database with its own logic, and SS reads it through the views as if the data were its own DataObjects.
Good luck :)
I am afraid that, as far as I know, the answer to both questions is no.
I guess the best option would be to write an importer that connects to the old database, fetches the data, and then creates SilverStripe objects for it.
If you have to run both systems at the same time it will become tricky. The first thing I would consider here would probably be a REST API between the two systems, but I'm not sure how well that would work out.

Use an Excel template and update it programmatically

I found this tool, but I wonder if it is still the right way nowadays with .NET 4.0, or are there any straightforward out-of-the-box alternatives?
I just need to add columns and update Excel content programmatically. There are many ways, but I need to keep the original document as a template. The link above explains exactly what the requirements are and why they created the "ExcelPackage" library.
From a quick look at the link you provided, it seems it will in fact keep the original template intact and just return a populated version of that template. This is a pretty common way to create and populate Excel documents using Open XML, since it helps minimize the amount of code you have to write. If you did not specify the layout, styles, formats, etc. in a template, you would be forced to define them in code, and that could lead to some bloated code. Overall, a project like this, or using the Open XML SDK 2.0 to create the documents, is the way to go.
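For reference, with the Open XML SDK 2.0 the template approach boils down to copying the template file and appending rows to the copy; a rough sketch (file names, sheet layout and values are placeholders, and the first worksheet part is assumed to be the one you want):

    using System.IO;
    using System.Linq;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Spreadsheet;

    static class ReportBuilder
    {
        // Copies the template workbook and appends one data row to its first sheet part.
        public static void Fill()
        {
            // Keep the original template untouched; work on a copy.
            File.Copy("ReportTemplate.xlsx", "Report.xlsx", true);

            using (var document = SpreadsheetDocument.Open("Report.xlsx", true))
            {
                WorksheetPart sheetPart = document.WorkbookPart.WorksheetParts.First();
                SheetData sheetData = sheetPart.Worksheet.GetFirstChild<SheetData>();

                // Append one row; styles and layout defined in the template are preserved.
                var row = new Row();
                row.Append(
                    new Cell { DataType = CellValues.String, CellValue = new CellValue("Widget") },
                    new Cell { DataType = CellValues.Number, CellValue = new CellValue("42") });
                sheetData.Append(row);

                sheetPart.Worksheet.Save();
            }
        }
    }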

Which one to use? EAV or Blobs in the database?

I am currently working to rework the data system of our application. Basically, it is designed so that people can add all the custom fields they want, with only a few constant/always-there fields.
Our current design is giving us plenty of maintenance problems. What we do is dynamically (at runtime) add a column to the database for each field. We have to have a meta table and other cruft to maintain all of these dynamic columns.
Now we are looking at EAV, but it doesn't seem much better. Basically, we have many different types of fields, so there would be StringValues, IntegerValues, etc. tables... which makes things that much worse.
I am wondering if using JSON or XML blobs in the database may be a better solution, specifically because in most use cases, when we retrieve anything out of these tables, we need the entire row. The problem is that we need to be able to create reports for this data as well. No solution really makes custom queries look easy, and searching across such a blob database will surely be a performance nightmare when reports are run.
Each "row" needs to have anywhere from about 15 to 100 (possibly more) attributes/columns associated with it.
We are using SQL Server 2008 and our application interfacing with the database is a C# web application (so, ASP.NET).
What do you think? Use EAV, or blobs, or something else entirely? (Also, yes, I know a schema-free database like MongoDB would be awesome here, but I can't convince my boss to use it.)
What about the xml datatype? Advanced querying is possible against this type.
We've used the xml type with good success. We do most of our heavy lifting at the code level, using LINQ to parse out values. Our schema is somewhat fixed, so that may not be an option for you.
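For the code-level parsing, LINQ to XML makes pulling the custom fields out of an xml column fairly painless; a small sketch, assuming a hypothetical blob shape like <fields><field name="Color">Red</field></fields>:

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    static class CustomFieldParser
    {
        // Turns the xml column contents into a name/value dictionary.
        public static Dictionary<string, string> Parse(string xmlBlob)
        {
            return XElement.Parse(xmlBlob)
                .Elements("field")
                .ToDictionary(
                    f => (string)f.Attribute("name"),
                    f => (string)f);
        }
    }

On the SQL side the same blob can still be reached with the xml type's query() and value() methods when a report needs to filter on a single field.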
One interesting feature of SQL Server is the sql_variant type. It's fully supported in .NET and quite easy to use. The advantage is that you don't need to create StringValue, IntValue, etc. columns, just one Value column that can contain all the simple types.
This very specific type favors the EAV option, IMHO.
It has some drawbacks though (sorting, distinct selects, etc.). So if you want to use it, make sure you read all the documentation and understand its limits.
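To illustrate, a minimal ADO.NET sketch of an EAV table with a single sql_variant Value column (the CustomFieldValues table and its columns are hypothetical):

    using System.Data;
    using System.Data.SqlClient;

    static class CustomFieldStore
    {
        // Hypothetical table: CustomFieldValues(EntityId int, FieldName nvarchar(100), Value sql_variant)
        public static void SaveValue(SqlConnection connection, int entityId, string fieldName, object value)
        {
            using (var command = new SqlCommand(
                "INSERT INTO CustomFieldValues (EntityId, FieldName, Value) VALUES (@entityId, @fieldName, @value)",
                connection))
            {
                command.Parameters.AddWithValue("@entityId", entityId);
                command.Parameters.AddWithValue("@fieldName", fieldName);
                // SqlDbType.Variant maps to sql_variant, so ints, strings, dates, etc.
                // all flow through the same Value column.
                command.Parameters.Add("@value", SqlDbType.Variant).Value = value;
                command.ExecuteNonQuery();
            }
        }
    }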
Create a table with your known columns and "X" sparse columns using sequential names such as DataColumn0001, DataColumn0002, etc. When there is a definition for a new column, just rename one of the spare columns and start inserting data. The great advantage of sparse columns is that they are indexable.
More info at this link.
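For illustration, a rough sketch of what such a table might look like, with the DDL kept in a string so it can be run from the application (table and column names are placeholders):

    using System.Data.SqlClient;

    static class SparseTableSetup
    {
        // Pre-creates a pool of nullable SPARSE columns; as field definitions arrive,
        // a spare column is renamed (e.g. with sp_rename) and data is inserted into it.
        private const string CreateTableSql = @"
            CREATE TABLE CustomData (
                Id             INT IDENTITY PRIMARY KEY,
                EntityId       INT NOT NULL,
                DataColumn0001 NVARCHAR(255) SPARSE NULL,
                DataColumn0002 NVARCHAR(255) SPARSE NULL,
                DataColumn0003 NVARCHAR(255) SPARSE NULL
                -- ...continue up to the number of custom columns you expect to need
            );";

        public static void Create(SqlConnection connection)
        {
            using (var command = new SqlCommand(CreateTableSql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }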
What you're doing is fighting a database that doesn't support your data model. You should work with a medium that meets your needs, which includes NoSQL databases such as RavenDB, MongoDB, DocumentDB or CouchBase, or Postgres on the RDBMS side, to name several.
You are using the tool in a capacity it was not designed for, and one in which it actively limits your chances of success. NoSQL database solutions frequently use JSON as the underlying storage because JSON is inherently schemaless. Want to add a property? Sure, go ahead. Want to add a whole sub-collection? Sure, go ahead. NoSQL databases were created, in part, specifically to remove the rigid schema requirements of an RDBMS.
2015 Edit: Postgres now natively supports JSON, so it is a viable option among RDBMSs. My answer is still correct that you need to use the correct tool for the problem. It is a polyglot persistence world.

How do I do this in Drupal?

I'm currently evaluating Drupal to see if we can use it to replace our framework. My problem is that I have these legacy tables which I want to try to reflect in Drupal. It involves a join table. There's quite a lot of this kind of relationship in our existing web app, so I am looking for possible ways to solve it.
Thank you for your insight!
There are several ways to do this, and it's hard to know which is best with no context about what you're actually doing with the data, but here are some options:
One way to do this is to make a content type representing each table (using CCK) with the foreign keys represented by type-specific node reference fields. Doing everything as nodes gives you a bunch of prebuilt functionality around nodes, but has a bit of overhead you may want to avoid.
Another option is to leave your database just like it is now. Drupal can do direct database queries, or you can use Data to expose your tables to Views.
Another option, if those referenced tables really only have 1 non-ID field, is to do the project_companies_assignments as nodes and do the other 3 as taxonomies. But this won't work if those are really more complex entities, and wouldn't be very flexible if they might become more complex.
What about using hook_views_api and exposing your legacy tables in hook_views_data? I tried something like this myself - not sure if that is what you want...
Try it and let me know if it works for you.
http://drupalwalla.blogspot.com/2011/09/how-do-you-expose-your-legacy-database.html
Going with Views and CCK, optionally with the additional Data module, has one huge disadvantage: it comes with complexity.
My preferred alternative is to write your own module. Drupal offers little help with database abstraction; it does not come with a proper ORM or the like. But with some simple CRUD functions for the data in the database, a few simple forms in front, and a menu callback with some pages to present the data, you can quite often get your data model working much faster than by going the route of the overly complex, often poorly documented CCK and Views modules. KISS.

(LINQ-To-SQL) Creating classes first, database tables second, how to auto-connect the two?

I'm creating a data model first using the LINQ-To-SQL graphical designer by using right-click->Add->Class. My idea is that I'll set up everything first using test repositories, design the entire website, then as a final step, create a database using the LINQ-To-SQL classes as a model for the database tables and relationships. My reasoning is that it's easy to edit the classes, but hard to modify DB tables (especially if there's already data in them), so by doing the database part last, it becomes much easier to design the structure.
My question is, is there an automatic way to link the two once I have the DB tables created? I know you can manually fill out the class properties for the LINQ-To-SQL entities but this is pretty cumbersome if you have a lot of tables to deal with. The other option is to delete your manually-created classes and drag the tables from the database into the designer to auto-generate the classes, but I'm not sure if this is the best way of doing it.
LINQ to SQL is intended to be a relatively thin ORM layer over a database. While you can of course just add properties to a data context and use them as a sort of mock, you are correct: it isn't really easy to work with.
Instead of relying solely on the LINQ to SQL generated classes to give you freedom from the database implementation, you may want to look into the repository design pattern. It allows a clean separation between your database, your domain model, and your middle tier; I have used it on two projects now and have been able to (for the most part) build everything top-down, leaving the actual database for last. Below is a link to a good tutorial on the pattern (better than I could scribble down here).
https://web.archive.org/web/20110503184234/http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/10/08/the-repository-pattern.aspx
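As a rough illustration of the idea (not the exact code from the linked article), a small repository abstraction with an in-memory test implementation and a LINQ to SQL implementation could look like this:

    using System.Collections.Generic;
    using System.Linq;

    // The site is written against IRepository<T>, so a fake can stand in
    // until the real database exists.
    public interface IRepository<T> where T : class
    {
        IQueryable<T> GetAll();
        void Add(T entity);
        void Remove(T entity);
    }

    // Test repository used while designing the site, before any tables exist.
    public class InMemoryRepository<T> : IRepository<T> where T : class
    {
        private readonly List<T> _items = new List<T>();

        public IQueryable<T> GetAll() { return _items.AsQueryable(); }
        public void Add(T entity) { _items.Add(entity); }
        public void Remove(T entity) { _items.Remove(entity); }
    }

    // LINQ to SQL implementation swapped in once the database has been created.
    public class LinqToSqlRepository<T> : IRepository<T> where T : class
    {
        private readonly System.Data.Linq.DataContext _context;

        public LinqToSqlRepository(System.Data.Linq.DataContext context)
        {
            _context = context;
        }

        public IQueryable<T> GetAll() { return _context.GetTable<T>(); }
        public void Add(T entity) { _context.GetTable<T>().InsertOnSubmit(entity); _context.SubmitChanges(); }
        public void Remove(T entity) { _context.GetTable<T>().DeleteOnSubmit(entity); _context.SubmitChanges(); }
    }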
Depending on your database permissions, you may call your DataContext's DeleteDatabase() and CreateDatabase() methods as an ungraceful way of resyncing your classes and tables. This is not much of an option once you have actual data in the database, but it does work while you are still in the development stages.
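For completeness, a small development-only sketch of that resync (MyDataContext stands in for your designer-generated DataContext, and every bit of existing data is lost):

    static class DevDatabaseReset
    {
        // Development-only: drop and recreate the database from the current L2S model.
        public static void Recreate(string connectionString)
        {
            using (var context = new MyDataContext(connectionString))
            {
                if (context.DatabaseExists())
                {
                    context.DeleteDatabase();
                }
                context.CreateDatabase();
            }
        }
    }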
Take a look at my add-in (which you can download from http://www.huagati.com/dbmltools/ , free 45-day trial licenses are also available from the same site).
It can generate SQL DDL diff scripts containing only the statements needed to bring your database up to date with what has changed in the L2S model (e.g. add missing columns, missing tables, missing FKs, etc.), instead of the out-of-the-box L2S support for recreating the entire DB from scratch.
It also supports syncing the other way; updating the model from the database.
