Database table/field mapping in SilverStripe, integrating an additional database

I'm well aware of the standard SilverStripe Data Structure and table/field naming conventions. But how do you integrate SilverStripe with a pre-existing database? Is there any way to map existing tables/fields with a different naming convention to be useable by the SilverStripe ORM and DataObjects? Also, is it possible to use the ORM with two different databases?

In a recent project I had the same issue, and I solved it by creating views in the SS database over the CRM database, in order to present the data to SilverStripe in the way it likes. Obviously I also created DataObjects mapping the data, so no dev/build is needed. It's not an easy way to do it, but if you're lucky and the second database's logic is similar to SS's logic, it's a feasible task.
Now I have a CRM that writes data into its own database with its own logic, and SS that reads it through views as if it were its own DataObjects.
Good luck :)

I am afraid that, as far as I know, the answer to both questions is no.
I guess the best option would be to write an importer that connects to the old database, fetches the data, and then creates SilverStripe objects for it.
If you have to run both systems at the same time, it will become tricky. The first thing I would consider here would probably be a REST API between the two systems, but I'm not sure how well that would work out.

Related

Meteor Approach to Collections Considering Joins Aren't Supported by Core Yet?

I'm building my project's main functionality right now, so this is a big decision for the project; I want an efficient and scalable solution. I use different APIs to fetch users' products, ultimately feeding one collection that displays product information in a table, with a possible merge by SKU/TITLE across the different sources.
I have thought of two approaches (in both, we add Meteor.userId() to every insert so each user has their own products):
1) Give each API its own collection and fetch its products into it. After (or in the middle of) the API query, where I insert into sourceXProducts, also run the merge-by-SKU logic and add only the fields I need to the main usersProducts collection. We keep the sourceXProducts collections around, so if we ever need anything we didn't include in usersProducts we can query for it there; we basically keep all the information possible (because it can come in handy).
source1Products = new Meteor.Collection('source1Products');
source2Products = new Meteor.Collection('source2Products');
usersProducts = new Meteor.Collection('usersProducts');
Pros: Honestly, I'm not sure. It keeps things organized, and from the way I learned Meteor it seems to be a common pattern.
Cons: Meteor collection joins are not supported in core yet, so I would have to use a package such as meteor-publish-composite, which seems good but might hurt performance.
2) Create one collection and just insert everything the API response has, plus an additional apiSource field, so we can select products by user and by API.
usersProducts = new Meteor.Collection('usersProducts');
Pros: No joins, and possibly better performance.
Cons: Less organized, and it can become a very large collection; maybe that's not good for MongoDB.
3) Your ideas? :)
First, you should improve the question. You do not tell us anything precise about your schema. What are the entities you have, what types of relations are there, and what types of joins do you think you will be doing? How often will you be doing them?
Second, you should rethink your schema and think in terms of a non-relational database. I see many people coming from the SQL world who then simply design their schema in the same way. Wrong. MongoDB is not SQL, and you should not try to just reuse the things you learned there. You should start using features like subdocuments and arrays, which can solve many of the basic things you would do in SQL with joins. So, knowing your schema would help us help you design it. For example, see this answer, and especially the comments, for a discussion of a similar type of question to the one you are asking here.
Third, have you evaluated the various solutions that exist out there? There are many, but you have not shown us that you tried any of them or how they worked for you. What were their pros and cons, for you and your project?
Fourth, if you are too lazy to evaluate, you can just use peerlibrary:peerdb and peerlibrary:related. They are simply perfect. You should trust me. I am their author.

Is it possible to map an entity to the result of a stored procedure in Entity Framework?

I'm attempting to set up Entity Framework using Code First on an existing database. The database isn't in great shape (poor naming conventions, and some constraints are needed). The application I'm building is an MVC app. I have "Model", "Repository", and "Web" (MVC) tiers.
To set up EF and map my model objects (POCO objects), is it required that I match my objects to the database tables? Can I, instead, use my own stored procedures for CRUD operations? Would it help if I used WCF Data Services?
I'm planning on using fluent configurations to map my objects, but I'm having some issues due to the database schema. I'm now considering redesigning the database just to get EF to map correctly. Any suggestions would be greatly appreciated!
Looks awfully similar to this question:
Does Entity Framework Code First support stored procedures?
The answers there may be helpful to you, as well as the discussion surrounding why Code First and stored procedures don't make a whole lot of sense together.
Wow, Julie Lerman answered!
Yes, it's possible, but not particularly easy. In particular, you must call context.Database.SqlQuery<T>() where T is the entity type you want to return, and your SQL must match up to that entity type. Otherwise, you will have to massage a result set into a type.
In general, you will have to do most of this manually, although you could probably figure out a T4 template to generate it for you, or maybe use some other tool like CodeSmith.
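To make that concrete, here is a rough sketch of the SqlQuery<T> approach, assuming a hypothetical Customer entity and a GetCustomersByRegion stored procedure (both invented for illustration); the columns the procedure returns must line up with the entity's properties:

using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Region { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    // SqlQuery<T> materializes the result set into Customer objects, as long
    // as the column names returned by the procedure match the property names.
    public List<Customer> GetCustomersByRegion(string region)
    {
        return Database
            .SqlQuery<Customer>(
                "EXEC GetCustomersByRegion @region",
                new SqlParameter("@region", region))
            .ToList();
    }
}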

MySql's Connector/Net with MVC 3

I'd like to get some information about using MySQL alongside ASP.NET (particularly MVC 3). From what I've found and experienced, it doesn't seem as customizable in terms of the Membership and User classes that come with ASP.NET, especially when it comes to validation or registration.
For example, after configuring my web.config file to use MySQL, I found that, although a fair number of tables were auto-generated for me to use, I wasn't able to change their names. Because of this, it seemed as though if I were to change a column name, or add a column to a table, it wouldn't quite work with the system, since everything has been pre-built.
Yet with ADO.NET/Entity Framework, it appears that I might actually have more freedom in how I go about building my websites using MS SQL. Is this true? Is MySQL just not meant for ASP.NET, despite the fact that you can install and use it at your leisure? Or does it just require more work to get everything working, so you have to reinvent the wheel by creating your own database classes and validation tools?
I'm not trying to bash either MySQL or MS SQL; I'm simply looking for a good analysis of the topic, as Google hasn't helped me much in this area.
This is more an issue with the default providers, and one of the many reasons why the first thing I did when I learned about them was to try to make my own. (To be clear, creating your own from scratch does require a fair amount of work, but there are a few good tutorials out there that can give you a quick start.)
[It'd make all our lives easier if the .NET framework used interfaces for the providers rather than a base class...]
To be clear, the big thing with the auto-generated providers is that the sprocs they use require the specified names; if you want to change the table names, you'll have to update all the sprocs as well. (This is true for any custom provider you may choose to build or use.)
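To give a feel for the "make your own" route, here is a minimal, hypothetical sketch of the credential check a custom provider would wrap. In a real provider this logic sits inside a MembershipProvider subclass with all of its abstract members overridden; the table layout, hashing scheme, and connection string below are assumptions, not the stock provider's schema:

using System;
using System.Security.Cryptography;
using System.Text;
using MySql.Data.MySqlClient; // Connector/Net

public static class CustomCredentialCheck
{
    // The whole point of a custom provider: this queries whatever table
    // layout you chose, not the auto-generated tables and sprocs.
    public static bool ValidateUser(string username, string password)
    {
        const string connString =
            "Server=localhost;Database=app;Uid=app;Pwd=secret;"; // placeholder
        using (var conn = new MySqlConnection(connString))
        using (var cmd = new MySqlCommand(
            "SELECT password_hash FROM members WHERE username = @u", conn))
        {
            cmd.Parameters.AddWithValue("@u", username);
            conn.Open();
            var storedHash = cmd.ExecuteScalar() as string;
            return storedHash != null && Hash(password) == storedHash;
        }
    }

    // Illustrative only -- a production provider should use a salted,
    // iterated scheme such as PBKDF2 rather than bare SHA-256.
    private static string Hash(string password)
    {
        using (var sha = SHA256.Create())
            return Convert.ToBase64String(
                sha.ComputeHash(Encoding.UTF8.GetBytes(password)));
    }
}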

Which one to use? EAV or Blobs in the database?

I am currently working to rework the data system of our application. Basically, it is designed so that people can add all the custom fields they want, with only a few constant, always-there fields.
Our current design is giving us plenty of maintenance problems. What we do is dynamically (at runtime) add a column to the database for each field. We have to have a meta table and other cruft to maintain all of these dynamic columns.
Now we are looking at EAV, but it doesn't seem much better. Basically, we have many different types of fields, so there would be StringValues, IntegerValues, etc. tables... which makes things that much worse.
I am wondering if using JSON or XML blobs in the database may be a better solution, specifically because in most use cases, when we retrieve anything out of these tables, we need the entire row. The problem is that we need to be able to create reports for this data as well. No solution really makes custom queries easy, and searching across such a blob database will surely be a performance nightmare when reports are run.
Each "row" needs to have anywhere from about 15 to 100 (possibly more) attributes/columns associated with it.
We are using SQL Server 2008, and our application interfacing with the database is a C# web application (so, ASP.NET).
What do you think? Use EAV, or blobs, or something else entirely? (Also, yes, I know a schema-free database like MongoDB would be awesome here, but I can't convince my boss to use it.)
What about the xml datatype? Advanced querying is possible against this type.
We've used the xml type with good success. We do most of our heavy lifting at the code level, using LINQ to parse out values. Our schema is somewhat fixed, so that may not be an option for you.
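As a rough illustration of that "heavy lifting in code" pattern, the sketch below pulls an xml column out of SQL Server and parses it with LINQ to XML; the Items table, its Attributes column, and the element layout are all assumptions:

using System;
using System.Data.SqlClient;
using System.Xml.Linq;

class XmlColumnDemo
{
    static void Main()
    {
        const string connString =
            "Server=.;Database=AppDb;Integrated Security=true;"; // placeholder
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(
            "SELECT Attributes FROM Items WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", 42);
            conn.Open();
            var xml = (string)cmd.ExecuteScalar(); // xml column arrives as a string

            // Parse out the custom fields in code rather than in the database.
            var doc = XDocument.Parse(xml);
            foreach (var field in doc.Root.Elements("field"))
                Console.WriteLine(
                    "{0} = {1}", (string)field.Attribute("name"), field.Value);
        }
    }
}

(Server-side XQuery via the xml type's value() and exist() methods is also available if you need the database itself to do the filtering.)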
One interesting feature of SQL Server is the sql_variant type. It's fully supported in .NET and quite easy to use. The advantage is that you don't need to create StringValue, IntValue, etc. columns, just one Value column that can contain all the simple types.
This very specific type favors the EAV option, IMHO.
It has some drawbacks, though (sorting, distinct selects, etc.). So if you want to use it, make sure you read all the documentation and understand its limits.
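A small sketch of what the single-Value-column read looks like from .NET, assuming a hypothetical Attributes(EntityId, Name, Value) table where Value is sql_variant; ADO.NET hands each variant back with its underlying CLR type:

using System;
using System.Data.SqlClient;

class SqlVariantDemo
{
    static void Main()
    {
        const string connString =
            "Server=.;Database=AppDb;Integrated Security=true;"; // placeholder
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(
            "SELECT Name, Value FROM Attributes WHERE EntityId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", 42);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                {
                    // GetValue returns int, string, DateTime, etc., depending
                    // on what was stored in the sql_variant column.
                    object value = reader.GetValue(1);
                    Console.WriteLine("{0} = {1} ({2})",
                        reader.GetString(0), value, value.GetType().Name);
                }
        }
    }
}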
Create a table with your known columns and "X" sparse columns, using sequential names such as DataColumn0001, DataColumn0002, etc. When a new column is defined, just rename the next unused one and start inserting data. The great advantage of sparse columns is that they are indexable.
More info at this link.
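For illustration, a layout along those lines might look like the following; the table name, column names, and types are assumptions, and the DDL is wrapped in C# so it can be run from the application:

using System.Data.SqlClient;

class SparseColumnSetup
{
    // Known columns plus pre-allocated, indexable sparse slots.
    const string CreateSql = @"
        CREATE TABLE CustomRecords (
            Id             INT IDENTITY PRIMARY KEY,
            Title          NVARCHAR(200) NOT NULL,
            DataColumn0001 NVARCHAR(400) SPARSE NULL,
            DataColumn0002 NVARCHAR(400) SPARSE NULL
        );";

    // When a user defines a new field, repurpose the next unused slot.
    const string RenameSql =
        "EXEC sp_rename 'CustomRecords.DataColumn0001', 'WarrantyMonths', 'COLUMN';";

    static void Main()
    {
        const string connString =
            "Server=.;Database=AppDb;Integrated Security=true;"; // placeholder
        using (var conn = new SqlConnection(connString))
        {
            conn.Open();
            new SqlCommand(CreateSql, conn).ExecuteNonQuery();
            new SqlCommand(RenameSql, conn).ExecuteNonQuery();
        }
    }
}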
What you're doing is STUPID with a database that doesn't support your data model. You should work with a medium that meets your needs, which includes NoSQL databases such as RavenDB, MongoDB, DocumentDB, or Couchbase, or Postgres on the RDBMS side, to name several.
You are using the tool in a capacity it was not designed for, one in which it actively limits your chances of success. NoSQL solutions frequently use JSON as the underlying storage because JSON is inherently schemaless. Want to add a property? Sure, go ahead. Want to add a whole sub-collection? Sure, go ahead. NoSQL databases were created, in part, specifically to remove the rigid schema requirements of RDBMSs.
2015 Edit: Postgres now natively supports JSON, so it is a viable RDBMS option here. My answer still stands: use the correct tool for the problem. It is a polyglot persistence world.
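Since the question is C#/ASP.NET, here is a hedged sketch of the Postgres JSON route from .NET using the Npgsql driver; the records table and its jsonb attrs column are assumptions:

using System;
using Npgsql;

class JsonbQueryDemo
{
    static void Main()
    {
        const string connString =
            "Host=localhost;Database=app;Username=app;Password=secret"; // placeholder
        using (var conn = new NpgsqlConnection(connString))
        using (var cmd = new NpgsqlCommand(
            // ->> extracts a field as text; @> tests JSON containment and
            // can be served by a GIN index on the attrs column.
            @"SELECT id, attrs->>'title' FROM records
              WHERE attrs @> '{""color"": ""red""}'", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0}: {1}",
                        reader.GetInt32(0), reader.GetString(1));
        }
    }
}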

How do I do this in Drupal?

I'm currently evaluating Drupal to see if we can use it to replace our framework. My problem is that I have these legacy tables which I want to reflect in Drupal. They involve a join table, and there are quite a lot of relationships like this in our existing web app, so I am looking for possible ways to handle it.
Thank you for your insight!
There are several ways to do this, and it's hard to know which is best with no context about what you're actually doing with the data, but here are some options:
One way to do this is to make a content type representing each table (using CCK) with the foreign keys represented by type-specific node reference fields. Doing everything as nodes gives you a bunch of prebuilt functionality around nodes, but has a bit of overhead you may want to avoid.
Another option is to leave your database just as it is now. Drupal can do direct database queries, or you can use the Data module to expose your tables to Views.
Another option, if those referenced tables really only have one non-ID field, is to do the project_companies_assignments as nodes and the other three as taxonomies. But this won't work if they are really more complex entities, and it wouldn't be very flexible if they might become more complex.
What about using hook_views_api and exposing your legacy tables in hook_views_data? I tried something like this myself; not sure if that is what you want...
Try it and let me know if it works for you.
http://drupalwalla.blogspot.com/2011/09/how-do-you-expose-your-legacy-database.html
Going with Views and CCK, optionally with the additional Data module, has one huge disadvantage: complexity.
My preferred alternative is to write your own module. Drupal offers little help with database abstraction; it does not come with a proper ORM or the like. But with some simple CRUD functions for the data in the database, a few simple forms in front, and a menu callback with some pages to present the data, you can quite often get your data model worked out much faster than by going the route of the overly complex, often poorly documented CCK and Views modules. KISS.
