Using Code First with an existing database can be quite a challenge, as things don't match up with the conventions. I still don't know what all the conventions are, but there you go.
Now, if I were to remove all conventions, would I still need to map foreign keys, many-to-many relationships, and so on myself? Are there any problems in doing so?
Any suggestions?
If you remove all conventions you will have to map almost everything with the fluent API. Any automatic detection of foreign keys, primary keys, etc. is done by conventions, and so is the translation of data annotation attributes into mapping.
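To make that concrete, here is a minimal sketch (the entity, table, and column names are invented for illustration) of removing one convention and supplying the key and foreign-key mapping explicitly with the fluent API:

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Data.Entity.ModelConfiguration.Conventions;

    public class Customer
    {
        public int CustomerId { get; set; }
        public ICollection<Order> Orders { get; set; }
    }

    public class Order
    {
        public int OrderId { get; set; }
        public int CustomerId { get; set; }
        public Customer Customer { get; set; }
    }

    public class LegacyContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
        public DbSet<Order> Orders { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Conventions are removed one at a time; there is no single
            // "remove everything" switch in the released fluent API.
            modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();

            // With conventions out of the way, table, key, and FK mappings
            // have to be spelled out by hand.
            modelBuilder.Entity<Order>().ToTable("ORDER_HEADER");
            modelBuilder.Entity<Order>().HasKey(o => o.OrderId);
            modelBuilder.Entity<Order>()
                .HasRequired(o => o.Customer)
                .WithMany(c => c.Orders)
                .HasForeignKey(o => o.CustomerId);
        }
    }

The more conventions you strip away, the more of this explicit mapping you end up writing for every entity.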
Would it be possible to create a similar entity based on another one? For example, what if I'd like to have user-specific tables that are based on one entity? Without any ORM I would just create the same table with a different prefix and run the queries against the table with the specific prefix.
I'm not sure how to tackle the problem with Symfony 2.5 and Doctrine, and I just can't find a concrete example anywhere, but it seems like the solution might involve the Doctrine Event Manager and the loadClassMetadata event. I just can't make sense of the documentation.
Without exactly knowing how your schema looks or what you're trying to achieve, it's hard to give a precise answer. But let's try:
If you have two entities which share a common set of properties but differ in others, you basically do the typical OOP inheritance thing: you create an abstract parent class with the common stuff and two children with their specific properties.
In Doctrine, there are different inheritance strategies. Read about them at http://doctrine-orm.readthedocs.org/en/latest/reference/inheritance-mapping.html
Each of them has its pros and cons. Basically, you can choose whether you want everything in a single table or split across several tables. Set up a test case and check what works better for you.
Note: The class properties in an abstract superclass (no matter which strategy) must always be private.
I am currently working to rework the data system of our application. Basically, it is designed so that people can add all the custom fields they want, with only a few constant/always-there fields.
Our current design is giving us plenty of maintenance problems. What we do is dynamically (at runtime) add a column to the database for each field. We have to maintain a meta table and other cruft to keep track of all these dynamic columns.
Now we are looking at EAV, but it doesn't seem much better. Basically, we have many different types of fields, so there would be StringValues, IntegerValues, etc. tables... which makes things that much worse.
I am wondering if using JSON or XML blobs in the database may be a better solution, specifically because in most use cases, when we retrieve anything out of these tables, we need the entire row. The problem is that we need to be able to create reports for this data as well. No solution really makes custom queries look easy, and searching across such a blob database will surely be a performance nightmare when reports are run.
Each "row" needs to have anywhere from about 15 to 100 (possibly more) attributes/columns associated with it.
We are using SQL Server 2008, and our application interfacing with the database is a C# web application (so, ASP.NET).
What do you think? Use EAV or blobs or something else entirely? (Also, yes, I know a schema-free database like MongoDB would be awesome here, but I can't convince my boss to use it.)
What about the xml datatype? Advanced querying is possible against this type.
We've used the xml type with good success. We do most of our heavy lifting at the code level, using LINQ to XML to parse out values. Our schema is somewhat fixed, so that may not be an option for you.
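As a rough sketch of that code-level approach (the table, column, and element names here are invented for illustration):

    using System;
    using System.Data.SqlClient;
    using System.Linq;
    using System.Xml.Linq;

    // Assumed schema: CustomRows(Id INT, Fields XML), where Fields holds
    // <fields><field name="Color">Red</field><field name="Age">42</field></fields>
    var connectionString = "...";  // your connection string here

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT Fields FROM CustomRows WHERE Id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", 1);
        conn.Open();

        // SqlClient hands the xml column back as a string.
        var doc = XDocument.Parse((string)cmd.ExecuteScalar());

        // LINQ to XML pulls individual field values back out of the blob.
        var color = doc.Root.Elements("field")
            .Where(f => (string)f.Attribute("name") == "Color")
            .Select(f => f.Value)
            .FirstOrDefault();

        Console.WriteLine(color);
    }

For server-side filtering you would lean on the xml type's value() and exist() methods instead, at some cost in query readability.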
One interesting feature of SQL Server is the sql_variant type. It's fully supported in .NET and quite easy to use. The advantage is that you don't need to create StringValue, IntValue, etc. columns; a single Value column can contain all the simple types.
This very specific type favors the EAV option, IMHO.
It has some drawbacks, though (sorting, distinct selects, etc.). So if you want to use it, make sure you read all the documentation and understand its limits.
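For what it's worth, a small sketch of how that single Value column looks from .NET (the EAV table here is invented; sql_variant simply surfaces as object in ADO.NET):

    using System.Data;
    using System.Data.SqlClient;

    // Assumed table: AttributeValues(RowId INT, Name NVARCHAR(100), Value SQL_VARIANT)
    var connectionString = "...";  // your connection string here

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO AttributeValues (RowId, Name, Value) VALUES (@row, @name, @value)", conn))
    {
        cmd.Parameters.AddWithValue("@row", 1);
        cmd.Parameters.AddWithValue("@name", "Age");

        // SqlDbType.Variant lets the same column carry ints, strings, dates, ...
        cmd.Parameters.Add("@value", SqlDbType.Variant).Value = 42;

        conn.Open();
        cmd.ExecuteNonQuery();
    }
    // Reading it back, SqlDataReader.GetValue() returns the boxed CLR type (an int here).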
Create a table with your known columns plus "X" sparse columns with sequential names such as DataColumn0001, DataColumn0002, etc. When a new column is defined, just rename one of the spares and start inserting data. The great advantage of sparse columns is that they are indexable.
More info at this link.
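Sketched out (all names here are invented), the idea looks roughly like this:

    using System.Data.SqlClient;

    var connectionString = "...";  // your connection string here

    // A fixed pool of sparse placeholder columns; NULLs in sparse columns
    // cost almost no storage, and the columns can be indexed.
    const string createSql = @"
        CREATE TABLE CustomData (
            Id   INT IDENTITY PRIMARY KEY,
            Name NVARCHAR(200) NOT NULL,             -- a known, always-there column
            DataColumn0001 NVARCHAR(400) SPARSE NULL,
            DataColumn0002 NVARCHAR(400) SPARSE NULL
        );";

    // When a user defines a new field, claim the next free placeholder by renaming it.
    const string renameSql =
        "EXEC sp_rename 'CustomData.DataColumn0001', 'FavoriteColor', 'COLUMN';";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(createSql, conn))
    {
        conn.Open();
        cmd.ExecuteNonQuery();
        cmd.CommandText = renameSql;
        cmd.ExecuteNonQuery();
    }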
What you're doing is fighting a database that doesn't support your data model. You should work with a medium that meets your needs, such as the NoSQL databases RavenDB, MongoDB, DocumentDB, and CouchBase, or Postgres on the RDBMS side, to name several.
You are using the tool in a capacity it was never designed for, one in which it actively limits your chances of success. NoSQL database solutions frequently use JSON as the underlying storage format because JSON is inherently schemaless. Want to add a property? Go ahead. Want to add a whole sub-collection? Go ahead. NoSQL databases were created, in part, specifically to remove the rigid schema requirements of RDBMSs.
2015 edit: Postgres now natively supports JSON, so it is a viable option on the RDBMS side. My answer still stands: use the correct tool for the problem. It is a polyglot-persistence world.
I'm currently evaluating Drupal to see if we can use it to replace our framework. My problem is that I have legacy tables which I want to reflect in Drupal, and they involve a join table. There's quite a lot of this kind of relationship in our existing web app, so I am looking for possible ways to solve it.
Thank you for your insight!
There are several ways to do this, and it's hard to know which is best with no context about what you're actually doing with the data, but here are some options:
One way to do this is to make a content type representing each table (using CCK) with the foreign keys represented by type-specific node reference fields. Doing everything as nodes gives you a bunch of prebuilt functionality around nodes, but has a bit of overhead you may want to avoid.
Another option is to leave your database just as it is now. Drupal can do direct database queries, or you can use the Data module to expose your tables to Views.
Another option, if those referenced tables really have only one non-ID field, is to model project_companies_assignments as nodes and the other three as taxonomies. But this won't work if they are really more complex entities, and it wouldn't be very flexible if they might become more complex.
What about using hook_views_api and exposing your legacy tables in hook_views_data? I tried something like this myself; not sure if that is what you want...
Try it and let me know if it works for you.
http://drupalwalla.blogspot.com/2011/09/how-do-you-expose-your-legacy-database.html
Going with Views and CCK, optionally with the additional Data module, has one huge disadvantage: it comes with complexity.
My preferred alternative is to write your own module. Drupal offers little help with database abstraction; it does not come with a proper ORM or the like. But with some simple CRUD functions for the data in the database, a few simple forms in front, and a menu callback with some pages to present the data, you can quite often get your data model worked out much faster than by going the route of the overly complex, often poorly documented CCK and Views modules. KISS.
I am using ASP.NET and C#.
I have some classes. Several of them have the same methods: insert, update, and delete.
Each insert inserts different data into a different table (likewise for update and delete). What type of pattern can be applied to this kind of class?
Please suggest.
Patterns are things you should identify in your code from usage. Choosing a design pattern and then applying it is perhaps a classic mistake.
Use a pattern where a pattern is required.
Uhm... Generics?
Otherwise this sounds like the Template Method pattern.
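If it helps, a minimal sketch of that shape, combining generics with Template Method (the table, entity, and member names are all invented):

    using System.Data.SqlClient;

    // The base class fixes the skeleton of each operation (connect, execute,
    // clean up); subclasses fill in only their table-specific parts.
    public abstract class TableWriter<T>
    {
        private readonly string _connectionString;

        protected TableWriter(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void Insert(T entity)
        {
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = InsertSql();
                AddInsertParameters(cmd, entity);  // the subclass "hook"
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        protected abstract string InsertSql();
        protected abstract void AddInsertParameters(SqlCommand cmd, T entity);
    }

    public class Customer
    {
        public string Name { get; set; }
    }

    public class CustomerWriter : TableWriter<Customer>
    {
        public CustomerWriter(string connectionString) : base(connectionString) { }

        protected override string InsertSql()
        {
            return "INSERT INTO Customers (Name) VALUES (@name)";
        }

        protected override void AddInsertParameters(SqlCommand cmd, Customer entity)
        {
            cmd.Parameters.AddWithValue("@name", entity.Name);
        }
    }

Update and delete would follow the same skeleton-plus-hook split.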
While I agree wholeheartedly with Mitch Wheat, based on your description I do believe you're already using a pattern without being fully aware of it.
Based on your description, I believe you are talking about the Table Gateway pattern. Each class represents access to a specific table, so you can perform your CRUD operations through a class. This offers a very tight binding of your model to your database logic, which can sometimes be useful (and more often than not can become very restrictive).
BTW: I am making the assumption here that when you say "each insert will insert different data to different table (same for update and delete)", you actually meant "each class".
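For reference, the rough shape of such a gateway (all names invented): one class per table, with all of that table's SQL kept behind its methods:

    using System.Data.SqlClient;

    public class CustomerGateway
    {
        private readonly string _connectionString;

        public CustomerGateway(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void Insert(string name)
        {
            Execute("INSERT INTO Customers (Name) VALUES (@p0)", name);
        }

        public void Update(int id, string name)
        {
            Execute("UPDATE Customers SET Name = @p0 WHERE Id = @p1", name, id);
        }

        public void Delete(int id)
        {
            Execute("DELETE FROM Customers WHERE Id = @p0", id);
        }

        // Shared plumbing: parameterize and run a single statement.
        private void Execute(string sql, params object[] args)
        {
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                for (var i = 0; i < args.Length; i++)
                    cmd.Parameters.AddWithValue("@p" + i, args[i]);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }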
I'm considering using SubSonic to create and access an SQLite database.
Not sure yet which flavor fits me better, though I tend to prefer the SimpleRepository approach.
Indeed, I don't expect my DB to do much more than store my objects and support basic querying.
I've been through the docs, but there are still a few points that are unclear to me or that I'd like confirmation on:
1/ Does "3.0 Migrations" fully support SQLite?
2/ Using SimpleRepository, is the auto-migration feature equivalent to the 'regular' migration feature or does it support only a subset of it (apart from the incremental aspect) ?
3/ In particular, how can one specify a foreign key, as can be done with Migration.CreateForeignKey(TableColumn oneTable, TableColumn manyTable)?
I would love a [SubSonicForeignKey(Table, Column)] attribute to flag a property as such, helping with relationship navigation and also indexing the column.
I suppose I'm dreaming, and the best solution I've found so far is the one described in this post:
http://www.frozenmountain.com/blog/post/Automatic-Foreign-Objects-in-SubSonic3-SimpleRepository.aspx
4/ But this still doesn't address the missing index. So, to the SubSonic team: any chance of seeing a [SubSonicIndex] attribute some day?
Thanks!