How do I do this in Drupal?

I'm currently evaluating Drupal to see if we can use it to replace our framework. My problem is that I have these legacy tables which I want to try to reflect in Drupal. It involves a join table. There's quite a lot of this kind of relationship in our existing web app, so I am looking for possible ways to solve it.
Thank you for your insight!

There are several ways to do this, and it's hard to know which is best with no context about what you're actually doing with the data, but here are some options:
One way to do this is to make a content type representing each table (using CCK) with the foreign keys represented by type-specific node reference fields. Doing everything as nodes gives you a bunch of prebuilt functionality around nodes, but has a bit of overhead you may want to avoid.
Another option is to leave your database just like it is now. Drupal can do direct database queries, or you can use the Data module to expose your tables to Views (there's a rough sketch of the direct-query route after these options).
Another option, if those referenced tables really only have 1 non-ID field, is to do the project_companies_assignments as nodes and do the other 3 as taxonomies. But this won't work if those are really more complex entities, and wouldn't be very flexible if they might become more complex.
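For the direct-query route, here is a minimal sketch, assuming Drupal 6 (the CCK era), a custom module called mymodule, and the project_companies_assignments table mentioned above; names are only illustrative:

<?php
// Fetch all assignment rows for one project from the legacy table.
// {} lets Drupal apply its table prefix, %d is the D6 placeholder for integers.
function mymodule_assignments_for_project($project_id) {
  $assignments = array();
  $result = db_query('SELECT * FROM {project_companies_assignments} WHERE project_id = %d', $project_id);
  while ($row = db_fetch_object($result)) {
    $assignments[] = $row;
  }
  return $assignments;
}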

What about implementing hook_views_api and exposing your legacy tables through hook_views_data? I tried something like this myself - not sure if that is what you want; there's a rough sketch below the link.
Try it and let me know if it works for you.
http://drupalwalla.blogspot.com/2011/09/how-do-you-expose-your-legacy-database.html
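For what it's worth, the shape of those hooks looks roughly like this. This is a sketch assuming a Views 2 / Drupal 6 setup, a module called mymodule, and hypothetical column names; adjust the handlers to your actual column types:

<?php
// Tell Views which API version this module targets.
function mymodule_views_api() {
  return array('api' => 2);
}

// Describe the legacy table to Views so it can be used as a base table.
function mymodule_views_data() {
  $data = array();
  $data['project_companies_assignments']['table']['group'] = t('Legacy data');
  $data['project_companies_assignments']['table']['base'] = array(
    'field' => 'id',
    'title' => t('Project/company assignments'),
  );
  // One entry per column you want available as a field/filter/sort.
  $data['project_companies_assignments']['project_id'] = array(
    'title' => t('Project ID'),
    'help' => t('Foreign key to the projects table.'),
    'field' => array('handler' => 'views_handler_field_numeric'),
    'filter' => array('handler' => 'views_handler_filter_numeric'),
    'sort' => array('handler' => 'views_handler_sort'),
  );
  return $data;
}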

Going with Views and CCK, optionally with the additional Data module, has one huge disadvantage: complexity.
My preferred alternative is to write your own module. Drupal offers little help with regard to database abstraction; it does not come with a proper ORM or the like. But with some simple CRUD functions for the data in the database, a few simple forms in front, and a menu callback with some pages to present the data, you can quite often get your data model worked out much faster than by going the route of the overly complex, often poorly documented CCK and Views modules. KISS.
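As an illustration of how little is involved, here is a bare-bones sketch of such a module, assuming Drupal 6 and the same hypothetical assignments table; the forms and the rest of the CRUD functions are left out:

<?php
// Register a page at /assignments.
function mymodule_menu() {
  $items['assignments'] = array(
    'title' => 'Assignments',
    'page callback' => 'mymodule_assignments_page',
    'access arguments' => array('access content'),
    'type' => MENU_NORMAL_ITEM,
  );
  return $items;
}

// Page callback: list the legacy rows in a simple table.
function mymodule_assignments_page() {
  $rows = array();
  $result = db_query('SELECT project_id, company_id FROM {project_companies_assignments}');
  while ($row = db_fetch_array($result)) {
    $rows[] = $row;
  }
  return theme('table', array(t('Project'), t('Company')), $rows);
}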

Related

Meteor Approach to Collections, Considering Joins Aren't Supported by Core Yet?

I'm building my project's main functionality right now, so this is a big decision for the project; I want an efficient and scalable solution. I use different APIs to fetch users' products, ultimately feeding one collection that displays product information in a table, with products possibly merged by SKU/TITLE across the different sources.
I have thought of two approaches (in both, we add Meteor.userId() to each insert so every user has their own products):
1) Create a collection per API and fetch that API's products into it. During or after the API query, as I insert into sourceXProducts, I also merge products by SKU and add only the fields I need to the main usersProducts collection. That way the sourceXProducts collections keep all the raw information, and if we ever need something we didn't copy into usersProducts, we can still query it and get it (because it can come in handy).
source1Products = new Meteor.Collection('source1Products');
source2Products = new Meteor.Collection('source2Products');
usersProducts = new Meteor.Collection('usersProducts');
Pros: Honestly I'm not sure. It keeps things organized, and from the way I learned Meteor this pattern seems to be used a lot.
Cons: Meteor collection joins are not supported in core yet, so I would have to use a package such as meteor-publish-composite, which seems good but might hurt performance.
2) Create one collection, insert everything the API response contains, and add an apiSource field so we can select products from user X and API X.
usersProducts = new Meteor.Collection('usersProducts');
Pros: No joins, possibly better performance
Cons: Less organized, and it can become a very large collection, which may not be good for MongoDB.
3) Your ideas? :)
First, you should improve the question. You do not tell us anything precise about your schema. What entities do you have, what types of relations exist between them, and what types of joins do you think you will be doing? How often will you be doing them?
Second, you should rethink your schema and think in terms of a non-relational database. I see many people coming from the SQL world who then simply design their schema the same way. Wrong. MongoDB is not SQL, and you should not try to simply reuse what you learned there. You should start using features like subdocuments and arrays, which can solve many of the basic things you would do with joins in SQL. So, knowing your schema would help us help you design it. For example, see this answer, and especially the comments, for a discussion of a similar type of question.
Third, have you evaluated the various solutions that exist out there? There are many, but you have not shown us that you tried any of them or how they worked for you. What were their pros and cons for you and your project?
Fourth, if you do not want to evaluate them yourself, you can just use peerlibrary:peerdb and peerlibrary:related. They are simply perfect. You should trust me. I am their author.

Database table/field mapping in SilverStripe, integrating additional database

I'm well aware of the standard SilverStripe data structure and table/field naming conventions. But how do you integrate SilverStripe with a pre-existing database? Is there any way to map existing tables/fields with a different naming convention to be usable by the SilverStripe ORM and DataObjects? Also, is it possible to use the ORM with two different databases?
In a recent project I had the same issue, and I solved it by creating views in the SS database over the CRM database, in order to present the data to SilverStripe in the shape it expects. Obviously I also created DataObjects mapping the data, so no dev/build is needed. It's not an easy way to do it, but if you're lucky and the second database's logic is similar to SS's logic, it's a feasible task.
Now I have a CRM that writes data into its database with its own logic, and SS that reads it through views as if it were its own DataObjects.
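To make that concrete, here is a minimal sketch of the SilverStripe side, assuming SilverStripe 3 and a view named CrmContact in the SS database whose columns already follow SilverStripe's conventions; all names here are hypothetical:

<?php
// Hypothetical DataObject read through a view called "CrmContact"; the view
// renames the CRM's columns to match these fields and must also expose an ID
// column (and ideally ClassName/Created/LastEdited) for the ORM to work with.
class CrmContact extends DataObject {
    private static $db = array(
        'FirstName' => 'Varchar(100)',
        'Surname'   => 'Varchar(100)',
        'Email'     => 'Varchar(254)',
    );
}

// Usage: CrmContact::get()->filter('Surname', 'Smith');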
Good luck :)
I am afraid that, as far as I know, the answer to both questions is no.
I guess the best option would be to write an importer that connects to the old database, fetches the data, and then creates SilverStripe objects for it.
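A rough sketch of such an importer, assuming SilverStripe 3, a legacy MySQL database reachable over PDO, and a hypothetical Contact DataObject (all names made up):

<?php
// One-off import task, run via /dev/tasks/LegacyImportTask or sake.
class LegacyImportTask extends BuildTask {
    protected $title = 'Import contacts from the legacy database';

    public function run($request) {
        // Connect straight to the old database, outside the SS ORM.
        $legacy = new PDO('mysql:host=localhost;dbname=legacy_db', 'user', 'pass');
        foreach ($legacy->query('SELECT name, email FROM old_contacts') as $row) {
            // Copy each legacy row into a SilverStripe DataObject.
            $contact = new Contact();
            $contact->Name  = $row['name'];
            $contact->Email = $row['email'];
            $contact->write();
        }
    }
}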
If you have to run both systems at the same time, it will become tricky. The first thing I would consider here would probably be a REST API between the two systems, but I'm not sure how well that would work out.

Which one to use? EAV or Blobs in the database?

I am currently working to rework the data system of our application. Basically, it is designed so that people can add all the custom fields they want, with only a few constant/always-there fields.
Our current design is giving us plenty of maintenance problems. What we do is dynamically (at runtime) add a column to the database for each field. We have to have a meta table and other cruft to maintain all of these dynamic columns.
Now we are looking at EAV, but it doesn't seem much better. Basically, we have many different types of fields, so there would be StringValues, IntegerValues, etc. tables... which makes things that much worse.
I am wondering if using JSON or XML blobs in the database may be a better solution, specifically because in most use cases, when we retrieve anything out of these tables, we need the entire row. The problem is that we need to be able to create reports for this data as well. No solution really makes custom queries look easy, and searching across such a blob database will surely be a performance nightmare when reports are run.
Each "row" needs to have anywhere from about 15 to 100(possibly more) attributes/columns associated with it.
We are using SQL Server 2008 and our application interfacing with the database is a C# web application(so, ASP.Net).
what do you think? Use EAV or blobs or something else entirely? (Also, yes, I know a schema free database like MongoDB would be awesome here, but I can't convince my boss to use it)
What about the xml datatype? Advanced querying is possible against this type.
We've used the xml type with good success. We do most of our heavy lifting at the code level using LINQ to parse out values. Our schema is somewhat fixed, so that may not be an option for you.
One interesting feature of SQL Server is the sql_variant type. It's fully supported in .NET and quite easy to use. The advantage is that you don't need to create StringValue, IntValue, etc. columns, just one Value column that can contain all the simple types.
This very specific type favors the EAV option, IMHO.
It has some drawbacks though (sorting, distinct selects, etc.). So if you want to use it, make sure you read all the documentation and understand its limits.
Create a table with your known columns and "X" sparse columns, using sequential names such as DataColumn0001, DataColumn0002, etc. When a new column is defined, just rename one of the sparse columns and start inserting data. The great advantage of sparse columns is that they are indexable.
More info at this link.
What you're doing is STUPID with a database that doesn't support your data type. You should work with a medium that meets your needs, which includes NoSQL databases such as RavenDB, MongoDB, DocumentDB, or CouchBase, or Postgres on the RDBMS side, to name several.
You are using the tool in a capacity it was not designed for, and one in which it actively limits your chances of success. NoSQL database solutions frequently use JSON as the underlying storage because JSON is inherently schemaless. Want to add a property? Sure, go ahead. Want to add a whole sub-collection? Sure, go ahead. NoSQL databases were created in part specifically to remove the rigid schema requirements of an RDBMS.
2015 Edit: Postgres now natively supports JSON. This is a viable option for an RDBMS. My answer is still correct that you need to use the correct tool for the problem. It is a polyglot persistence world.

Module Multi-instance in OrchardCMS

Assuming I have a contacts Orchard module which manages contacts,
can I have two instances like so:
mysite.com/WorkContacts/...
mySite.com/HomeContacts/....
and have the data partitioned by instance/location type, etc.?
I assume it should be possible, but I want to be sure before I dig any deeper.
It's not possible by default (although I'm not saying impossible at all).
Each module has its unique, hardcoded Id, which prevents multi-instancing of modules by design. There are also many other reasons why it wouldn't be a good idea...
Achieving such behavior is possible of course, but in slightly different way. As Orchard is mainly about content, you are free to build your own, different content types for different contact types from existing parts and fields. And then you're free to create instances of those. It's described very well here.
HTH
This would probably be better asked over on the Orchard sites.
If you look at the blogs functionality, you can have multiple blogs; following a similar pattern of code, you could have multiple instances of the contacts module.
The paths /WorkContacts, /HomeContacts, etc. would be set through Orchard's routing functionality.
I think what you're looking for might be the multi-tenancy module, available from the gallery. The only difference from what you describe is that the instances would need different server names rather than subfolders like you described.
Then again, it's not quite clear whether you want to separate just the data for that module (in which case the suggestion to model it after the blog module is a good one) or for the whole site (that would be multi-tenancy).

When not to use a Drupal node?

I've recently created a very simple CRUD table where the user stores some data. For the data, I created a custom node. The functionality works great for creating, editing, and deleting data in the CRUD table using the basic node functionality (I'm actually amazed how fast and easy it was to program the basic functionality with proper access controls using only a tiny bit of code)....
Since the data isn't meant to be treated the same way as 'content' such as a blog post (no title, no body, no comments, no revisions, shouldn't show up on the ?q=node page, no previews, no teasers, etc.)... I find that I'm spending most of my time 'turning off' and modifying the stuff that Drupal does automatically for nodes.
I know it's a matter of taste, but where should one draw the line on what should be treated as a node and what shouldn't? In other words, would it be better to program this from scratch without using nodes?
Using nodes for custom data has quite a few additional benefits besides easy edit/update/delete functionality:
possible categorization via taxonomy
implicit 'ownership' via author tracking
implicit tracking of creation/modification time
basic access control by default, expandable by a huge selection of modules
flexible query generation/listing/filtering via views
possible ad hoc extensions/annotations via CCK fields
possible definition of workflows, actions and the like
a huge number of hooks to programmatically intercept/adjust almost every usage aspect/scenario
commenting, voting, rating and tons of other functionality provided by all contributed modules that work on/with nodes ...
Given all this, I'd say you need a very good reason to not use nodes to store data in Drupal. Nodes are simply the fundamental building blocks for just about everything in the Drupal ecosystem, and the overhead of removing some unwanted default 'features' seems pretty small in comparison to the gains.
That said, one possible reason/argument for handling data separately from the node system might be if that data is directly aimed at annotating other nodes (think taxonomy). But since you can easily reference nodes from other nodes (with lots of different options on how to do this), the argument is not that strong.
Another (much stronger) argument would be data integrity - Drupal is not very strong (to put it politely) concerning normalized, relational data storage, referential integrity, transaction handling and other related topics. If you have requirements in that direction, you might have no choice but to skip the node concept and create and maintain a separate data island within the system on your own.
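If you do go that route, the usual starting point is a hook_schema() implementation in your module's .install file, so Drupal's Schema API manages the table for you. A minimal, hypothetical example (in Drupal 6 you also need hook_install()/hook_uninstall() calling drupal_install_schema()/drupal_uninstall_schema(); in Drupal 7 this happens automatically):

<?php
// mymodule.install -- defines a standalone table that lives outside the node system.
function mymodule_schema() {
  $schema['mymodule_measurements'] = array(
    'description' => 'Raw measurement rows managed outside the node system.',
    'fields' => array(
      'id' => array('type' => 'serial', 'unsigned' => TRUE, 'not null' => TRUE),
      'uid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'default' => 0),
      'value' => array('type' => 'numeric', 'precision' => 10, 'scale' => 2, 'not null' => TRUE),
      'created' => array('type' => 'int', 'not null' => TRUE, 'default' => 0),
    ),
    'primary key' => array('id'),
    'indexes' => array('uid' => array('uid')),
  );
  return $schema;
}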
It also helps to remember that a node doesn't need to be public. Some nodes are private/internal and can be controlled further with access controls. The way you are doing it, whatever you're doing, puts all the scalability and extensibility work on your shoulders.
I would probably approach it with CCK/Taxonomy depending on what I was doing. That way, I get the added benefit of Views/Panels/etc module integration without writing any additional code.
