When not to use a Drupal node?

I've recently created a very simple CRUD table where the user stores some data. For the data, I created a custom node type. The functionality works great for creating, editing, and deleting data in the CRUD table using the basic node functionality (I'm actually amazed how fast and easy it was to program the basic functionality with proper access controls using only a tiny bit of code).
Since the data isn't meant to be treated the same way as 'content' such as a blog post (no title, no body, no comments, no revisions, shouldn't show up on the ?q=node page, no previews, no teasers, etc.), I find that I'm spending most of my time 'turning off' and modifying the stuff that Drupal does automatically for nodes.
I know it's a matter of taste, but where should one draw the line on what should be treated as a node and what shouldn't? In other words, would it be better to program this stuff from scratch without using nodes?

Using nodes for custom data has quite some additional benefits besides easy edit/update/delete functionality:
possible categorization via taxonomy
implicit 'ownership' via author tracking
implicit tracking of creation/modification time
basic access control by default, expandable by a huge selection of modules
flexible query generation/listing/filtering via views
possible ad hoc extensions/annotations via CCK fields
possible definition of workflows, actions and the like
a huge number of hooks to programmatically intercept/adjust almost every usage aspect/scenario
commenting, voting, rating and tons of other functionality provided by the many contributed modules that work on/with nodes ...
Given all this, I'd say you need a very good reason to not use nodes to store data in Drupal. Nodes are simply the fundamental building blocks for just about everything in the Drupal ecosystem, and the overhead of removing some unwanted default 'features' seems pretty small in comparison to the gains.
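To put that overhead in perspective, here is a small sketch of how a few of those defaults can be switched off; it assumes Drupal 6, a hypothetical node type 'mydata' and a module 'mymodule', none of which come from the original question:
/**
 * Implementation of hook_install(): set sensible defaults for the custom type.
 */
function mymodule_install() {
  // Published only - no promotion to the /node front page, no revisions by default.
  variable_set('node_options_mydata', array('status'));
  // Disable commenting for this type (0 = COMMENT_NODE_DISABLED).
  variable_set('comment_mydata', 0);
}

/**
 * Implementation of hook_form_alter(): drop the preview button from the node form.
 */
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'mydata_node_form') {
    unset($form['buttons']['preview']);
  }
}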
That said, one possible reason/argument to handle data separately from the node system might be if that data is directly aimed at annotating other nodes (think taxonomy). But since you can easily reference nodes from other nodes (with lots of different options on how to do this), that argument is not too strong.
Another (much stronger) argument would be data integrity - Drupal is not very strong (to put it politely) concerning normalized, relational data storage, referential integrity, transaction handling and other related topics. If you have requirements in that direction, you might have no choice but to skip the node concept and create and maintain a separate data island within the system on your own.

It also helps to remember that a node doesn't need to be public. Nodes can be private/internal and locked down further with access controls. If you build this yourself instead, all the scalability and future extension work lands on your shoulders.
I would probably approach it with CCK/Taxonomy depending on what I was doing. That way, I get the added benefit of Views/Panels/etc module integration without writing any additional code.

Related

How do I create a feed that transfers nodes between Drupal 7 websites?

I have a Drupal 7 website with content types like "events" and "news".
I would like for nodes of these content types to be automatically imported into other websites.
I played with Feeds and XPath on the 'client' websites and Views RSS fields on the 'server' side, but I realized that there would be problems with content type fields like files... Any suggestions? I would like to be able to create new views for this content in the other websites.
P.S. The content types will be identical between the websites (but they don't have to, if your solution includes something else).
You'll probably have more success with Services and Content Distribution. RSS feeds are not well suited to transferring semantic data; they are highly focused on lists of articles and typically lack information such as "event-start-date".
Services lets you expose the nodes on the server Drupal site as, for example, RESTful JSON. The client-side Drupal site can then use Services and Content Distribution to import nodes from that server.
That said, the Services suite plugs into Views, and is really heavy, large and complex. If you are allergic to large and complex projects (like I am), you may be better off writing simple modules:
events-service: a 20-odd-line module that grabs the events from the database and presents them as JSON (see the sketch after this list).
news-service: a 10-odd-line module that fetches a list of news nodes and presents them as JSON.
events-client: a small module (~400-800 lines?) that consumes that JSON at the given URL and turns it into nodes. It keeps a register of UUIDs next to the node table, so that an upstream change updates the associated node instead of re-creating it.
news-client: a small module. Same as above.
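As an illustration, a minimal sketch of what such an events-service module could look like in Drupal 7 (the path, the 'event' node type and the module name are assumptions, not something from the question):
/**
 * Implements hook_menu(): expose the feed at /events.json.
 */
function events_service_menu() {
  $items['events.json'] = array(
    'page callback' => 'events_service_feed',
    'access arguments' => array('access content'),
    'type' => MENU_CALLBACK,
  );
  return $items;
}

/**
 * Page callback: grab the published events from the database, print them as JSON.
 */
function events_service_feed() {
  // Keep the query minimal; add whatever columns or fields your clients need.
  $events = db_query("SELECT nid, title, created FROM {node} WHERE type = :type AND status = 1 ORDER BY created DESC", array(':type' => 'event'))->fetchAll();
  drupal_json_output($events);
}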
Writing such modules is very rewarding, because instead of fighting with poorly documented Views plugins, complex layers around Services and such, you have full control and full understanding. It also allows for much better tuning and performance.
The one large downside is that Drupal (more specifically, CCK or Fields) dictates the database and its structure. There will be a point when some tiny config change on your site breaks your module's SQL query: all of a sudden you are blasting SQL errors because Drupal decided to rename or move some table, column or reference.
Maybe you can just share data by creating XML/JSON (server side) that will be consumed by the client side.
Services is a good way to go, but I find it complex for simple stuff.
What you can do is create views that output XML/JSON... You can do this with preprocess functions in your module/template file.
Then the client side (maybe on cron) takes the XML/JSON and creates nodes programmatically.
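A rough sketch of that client side in Drupal 7 (the feed URL, node type and module name are made up; a real module would also track a UUID per item, as described in the answer above, instead of always creating new nodes):
/**
 * Implements hook_cron(): fetch the JSON feed and create event nodes from it.
 */
function events_client_cron() {
  $response = drupal_http_request('http://example.com/events.json');
  if ($response->code != 200) {
    return;
  }
  // node_object_prepare() lives in node.pages.inc.
  module_load_include('inc', 'node', 'node.pages');
  foreach (drupal_json_decode($response->data) as $item) {
    $node = new stdClass();
    $node->type = 'event';
    $node->language = LANGUAGE_NONE;
    node_object_prepare($node);
    $node->title = $item['title'];
    $node->uid = 1; // or map to a dedicated import user
    node_save($node);
  }
}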

What are the rules for deciding when a new catalog should be created?

I'd like to learn about using catalogs correctly.
I have about 30 useful content types, about 50 indexes in catalog.xml, and about 45 metadata columns. Just three types account for most of the site's data, and I may need millions of these. I've been reading, and there's lots to do, but I want to have the basic configuration right before I begin all that.
This page told me that any non-default indexes should not be added to the portal_catalog. I've even read people explaining how removing one or two of the default indexes makes a performance difference.
My question is: what are the rules for dividing up the indexes into different catalogs, and for selecting which catalog(s) index which type(s)?
So far I have created one additional catalog, used to hold all the indexes for my 'site-setup' objects (which I have stopped indexing in portal_catalog). The site-setup objects are queried very often but modified more rarely than others, so I thought it was correct to separate them from objects that are reindexed more often. I'm not sure if that's the main consideration, though.
Another similar question (a good example of the kind of thing I want to solve): how would you handle something like secondary workflow review_state variables? I give each workflow's review_state variable an index (and search on them quite often), but some of my workflows are only used on just a few types. (my most prolific objects have secondary workflows...)
I'd be very grateful for advice!
Campbell
This won't cover everything, but I'll bring up some points...
Anything not in the portal_catalog won't work with collections, the folder_contents view, the getFolderContents method, search, portlet collections, related items (I think) and anything else that assumes you're using the portal_catalog.
I like to use an additional catalog when I need to be able to query the data but it only affects a sub-set of the content objects.
Use collective.indexing to speed up indexing operations.
Mount the catalogs on their own mount points so you can cache them differently from the rest of the site (so you can cache the whole catalog). Then you can even serve the catalogs from a dedicated ZEO server.
Also, if your content doesn't have to be cataloged by the portal_catalog (with all the constraints listed), you may even want to think about whether you need it as a full-fledged (Archetypes|Dexterity) type in the first place. You can use the slimmer repoze.catalog to catalog arbitrary objects (which could be very simple data) for whatever your purpose is and get even more performance. Or better yet, look into Solr for VERY good indexing performance.
One more thing: depending on the type of data you're storing, you could even look into using a relational database as a data store. But I don't know what kind of queries, indexes, data, etc. you have...
30 different types seems like a lot but I don't know what your use case is. Care to share? Perhaps there is a better way to do it.

node_load or direct query?

What rule of thumb do you use for deciding to use node_load() or just writing a direct db_query()?
In a situation I'm looking at right now I need to get some node data and resolve data on two nodereference fields. So that would be 3 calls to node_load(). At some point here, would it be more efficient to construct the query with joins directly?
This is for use in a self contained module that won't be distributed or used anywhere else, so I don't believe I need to worry about subverting node modification hooks (or do I?).
Edit:
Thinking about my question more, node_load() is only really applicable when you have one node to grab (and then maybe drilling down further into nodereferences as in my example). But as soon as you need to return more than one node based on some criteria, you're pretty much forced to use db_query(), right? Does Drupal have any abstracted API for writing queries like this?
Not a full answer (not sure myself), just some hints.
node_load() uses a static cache (in Drupal 7, you can even use the Entity Cache module to make it a permanent cache). If the nodes you are loading are used a second time on the same page, that call is free.
Querying CCK tables is tricky. The schema structure can change completely based on configuration, for example depending on whether a field holds a single value or multiple values.
The reasoning behind using API methods over direct DB calls is that they provide a database abstraction layer, so your app can move between supported database engines, and they let your app gracefully handle any schema changes (however unlikely) that core or a module may make to the tables in question. It's also likely easier, as Berdir says, for CCK fields and nodereference fields, but that depends on whether you are more confident with the Drupal API and PHP or with MySQL... the payoff of doing it the Drupal way is increased future productivity and understanding of the codebase and what is possible :)
Oh, and my rule of thumb is: do it the Drupal way if at all possible ('possible' varying with time/cost/performance/whatever requirements).
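For instance, a rough sketch of 'the Drupal way' for the scenario in the question (Drupal 6; the field and type names are made up):
// Load the main node; node_load() results are statically cached for the page.
$node = node_load($nid);

// Resolve the two nodereference fields with two more (cached) loads.
$first = node_load($node->field_ref_one[0]['nid']);
$second = node_load($node->field_ref_two[0]['nid']);

// When you need several nodes matching some criteria, query for the IDs
// and let node_load() do the rest (there is no EntityFieldQuery in Drupal 6):
$result = db_query("SELECT nid FROM {node} WHERE type = '%s' AND status = 1", 'my_type');
while ($row = db_fetch_object($result)) {
  $nodes[$row->nid] = node_load($row->nid);
}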

EF4 ASP.NET - Managing Entity Edits between HTTP Posts and Rollback

I am struggling with the following use-case:
User amends an existing order. The order is complex - lots of related 'entities' (addresses, post options, suppliers, makes, models, various items etc.) - and the edits happen across multiple HTTP POSTs.
User wants to discard the changes.
--
I have an order entity and as the user is editing this I am making various changes to the entity associations e.g changing order.address, order.items.add(item)...
In a single post this is fine, but across posts I don't know how best to store state. If I store the entities then I cannot save the changes, as they span different data contexts. I have read that it is bad practice to store the data context in session state, i.e. a long-lived context. I can't save changes after each edit/post because then I cannot roll back (?). I really would like to work with the entities during the editing process rather than doing one big save at the end (taking UI settings and applying them in one chunk).
This must be a pretty common problem - it's driving me mad. Any help really appreciated.
Cheers!
We have a similar problem where we are building a complex business object through a multi-page wizard.
Instead of creating a partially complete business object at each step of the wizard, we create a dedicated wizard object that looks pretty similar to the business object and populate it through the wizard. At each step, the wizard object is saved into the database. At the end the user can accept it, and it is converted to a real business object and becomes visible to everyone else, or they can bin it and no-one else ever knows it existed.
If this kind of approach is not suitable, I suspect you're looking at some kind of difference tracking, at either the entity or database level. Neither is simple to implement, work with or manage in a system. The former would involve calculating and storing n changes to the entities and developing an algorithm to undo them; the latter depends on your RDBMS, but might include versioned rows or similar.
Yes, it's a pretty common scenario for us. In most cases we use the MVC approach. Even without actual ASP.NET MVC projects, we use similar ViewModels with our Views/Pages/Scenarios etc., where there is no direct/single entity mapping to the business layer (in other words, Business.Entities). This is pretty much like using DTOs.
It is always easy to use disconnected EF: we retrieve data and discard the context, then transform the entities into ViewModels/DTOs if necessary. When you need to persist changes, all you have to do is create a new context, find the latest entity instance and apply the changes.
The Views/Pages/Controllers will manage these ViewModels/DTOs. Tracking changed and deleted content can be done by introducing a HistoryList<T> (you can extend List<T> to implement this).
Once done, a Controller/Workflow/Component can observe the ViewModel/DTO and make the necessary changes to your entities, using a new context to retrieve and persist them.
It involves a bit of coding, and I would say it's not a perfect solution since it has its own pros and cons.
/KP

Best way to save extra data for user in Drupal 6

I am developing a site that saves non-visible user data upon login (e.g. an external ID on another site). We are going to create/save this data as soon as the account is created.
I could see us saving data using
the content profile module (already in use on our side)
the profile module
the data column in the user table
creating our own table
I feel like #1 is probably the most logical place; however, creating a node within a module does not seem to be a trivial thing.
#3 feels like a typical way to solve this, but just having a bunch of serialized data in a catch-all field does not feel like the best design.
Any suggestions?
IMO, each option has its pros and cons, and you should be the one to make the final call, given that you are the only one who knows what your project is about, what the critical points of the project are, what the expected typical usage pattern is, what resources are available, etc...
If I was totally free to chose, my personal favourites would be option #4, #1 and #5 (wait! #5? Yes: see below!). My guiding principles in making the choice would be:
Keep it clean
Keep it simple
Make it extensible
#1 - The content profile module
This would be a clean solution in that you would make it easier for developers to maintain your code, as all the alterations to the user would pass through the same channel, and it would be easier to track down problems or add new functionality.
I do not find it particularly simple as it requires you to interact with the custom API of that module.
As for extensibility, that depends on how well the Content Profile module API is designed. The temptation could be to simply use the tables created by said module for your purpose, bypassing the APIs, but that exposes you to the possibility that one day, during a critical security update applied in a hurry, the entire system will break down because the schema has changed...
#4 - Creating your own table
This would be a clean solution because you could design your table (and your module) to do exactly what you need, and you could create your own API to be used by other modules. On the other hand, you would introduce yet another piece of code altering the registration process, and this might make it more difficult for devs to track problems and expand the system in a consistent way.
This would be very simple to implement code-wise. The DB design would benefit too: the tables would be very easy to inspect and query. Creating a new handler for Views is dead easy in most cases: 4 times out of 5 you simply use one of the prototype objects shipping with Views.
This would be extremely easy to extend, of course. Once you have created the module for one field, you can manage as many as you want, mostly by copy/pasting the code from one field to another (or inheriting from the same ancestor if you go OOP).
I understand you are already knowledgeable about Drupal, but if you need a pointer on how to do that, I gave some direction in this other answer.
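(For what it's worth, the table itself is only a small hook_schema() implementation in the module's .install file; here is a sketch with made-up names:)
/**
 * Implementation of hook_schema() in mymodule.install (Drupal 6).
 */
function mymodule_schema() {
  $schema['mymodule_user_data'] = array(
    'description' => 'Extra, non-visible data stored per user.',
    'fields' => array(
      'uid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'default' => 0),
      'external_id' => array('type' => 'varchar', 'length' => 64, 'not null' => TRUE, 'default' => ''),
    ),
    'primary key' => array('uid'),
  );
  return $schema;
}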
#5 - Creating your own table and porting already existing fields there
That would essentially be #4 minus the drawback of scattering functionality across various modules... Of course, if you are already managing 200 fields the other way this is not a viable option, but if you are early in your design, you could consider this.
In my experience nearly every project that requires system integration (meaning: synchronising data for the same user in multiple systems) has custom needs for user registration, and I found this solution the one that suits my need best for two reasons:
I found I reuse a lot of the custom code I wrote from project to project.
It's the most flexible way to integrate data with the other system (in some cases I even split the user's data into two custom tables managed by the same module: one that contains custom fields used by Drupal only, and one that contains the "non-visible fields", as you call them). I find this very handy in a lot of scenarios, as it makes it very easy to inspect and manipulate the data of the two systems separately.
HTH!
If you're already using the Content Profile module, then I'd really suggest continuing to use it and attach the field to it. You're saving the data when the account is created, and creating a node for the user at the same time isn't that hard. Really.
$node = new stdClass();
$node->type = 'profile'; // machine name of your content profile node type
$node->title = $user->name; // what I'd use, or you can have node auto title handle the title for you. Up to you!
$node->field_hidden_field[0]['value'] = '$*#$f82hff';
$node->uid = $user->uid;
node_save($node);
Bob's your uncle!
I would go for option 3. Eventually, even those modules will store the data in the database against a user, so you could save it directly yourself, probably more efficiently than they do.
Of course, depending on the size of the data, you can decide whether to add a new column to the users table or to create a new table for your data with the user's ID as the foreign key.
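For completeness, a sketch of option #3 in Drupal 6 (the key name is made up): user_save() serializes any keys it does not recognise into the users.data column, so storing the value takes almost no code.
// Store the external ID in the serialized users.data column.
user_save($account, array('external_site_id' => $external_id));
// After the next user_load(), it is available as $account->external_site_id.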
