Combining databases in Drupal

So here's the scenario:
I am building a series of event websites that need to be separate Drupal-6 installations on the same server. Each uses the same architecture but different URLs and themes.
What I'd like the sites to share is content in two content types, teachers and sponsors: there would be one pool of all the teachers and sponsors that each individual event site could then pull from by means of nodequeues, and new teacher and sponsor nodes could be created and edited from within any of the Drupal installations.
It would also be convenient to share a user table as well, but not absolutely necessary.
Any ideas?

Does this help you?
The comments in settings.php explain how to use $db_prefix to share tables:
* To provide prefixes for specific tables, set $db_prefix as an array.
* The array's keys are the table names and the values are the prefixes.
* The 'default' element holds the prefix for any tables not specified
* elsewhere in the array. Example:
*
*   $db_prefix = array(
*     'default'  => 'main_',
*     'users'    => 'shared_',
*     'sessions' => 'shared_',
*     'role'     => 'shared_',
*     'authmap'  => 'shared_',
*   );
Another way, although I haven't verified that it works, is to create one database per Drupal installation and define in each database a VIEW (e.g. install1_users, install2_users) that refers to a single shared table, shared_users, in a shared database. A carefully constructed view should be updatable just like a normal table.
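If you want to experiment with that approach, the view definition would look something like this; the database and table names here are only illustrative:

```sql
-- Illustrative names: "install1" is one per-site database,
-- "shared" is the database holding the canonical users table.
CREATE VIEW install1.users AS
  SELECT * FROM shared.shared_users;
```

In MySQL, a view over a single table with no joins, aggregates, or DISTINCT is updatable, so INSERTs and UPDATEs against install1.users would write through to shared.shared_users.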

Sounds like a job for the Domain Access project, a suite of modules that provide tools for running a group of affiliated sites from one Drupal installation and a single shared database. The module allows you to share users, content, and configurations across a number of sites. By default, these sites share all tables in your Drupal installation but the Domain Prefix module allows for selective, dynamic table prefixing for advanced users.
IMHO this is much better than rolling a custom solution which poses considerable complexity and risk of not being able to upgrade your site. See Share tables across instances (not recommended).

There are a couple of ways of doing this.
If it's okay to share a single database, you can use prefixing as follows:
$db_prefix = array(
  'default'  => 'main_',
  'users'    => 'shared_',
  'sessions' => 'shared_',
  'role'     => 'shared_',
  'authmap'  => 'shared_',
);
However, keep in mind that there is a hard limit to the number of tables that a MySQL database can hold. According to this thread, it's 1792. If you suspect that you will reach this limit, you can use this hack/bug.
$db_prefix = array(
  'default'  => '',
  'users'    => 'maindb.',
  'sessions' => 'maindb.',
  'role'     => 'maindb.',
  'authmap'  => 'maindb.',
);
where maindb is another shared database that contains the data that you need. This is not best practice, but it works in Drupal 6 (haven't tested in D7).

Related

Doctrine Entity with fetch="EAGER" causing N+1 queries in dropdown

I have two entities, Client and Contact, with a one-to-many relationship between them: Clients have many Contacts. The association is annotated with fetch="EAGER", and it's reasonable to say that Contact information will always be needed when viewing a Client (not true in all cases, but a fair generalisation).
However, rather than always running a single query by automatically joining to Contacts when viewing Clients, I'm observing some behaviour that seems strange to me, specifically when displaying a list of Clients in a Choice field on a form. Reviewing the Doctrine log output, instead of a single query whose results populate the list of Client names, one query gets all the Clients and then a separate query fetches the data for each Contact, one per Contact. In this case the Contact data is not in fact needed, and I've successfully prevented the behaviour by simply removing `fetch="EAGER"`. This seems contrary to the purpose of eager fetching; it's actually having the opposite effect.
When used to populate a Choice field, why is Contact data not being fetched along with the Client info using a single query as expected?
I've followed the code through the framework to watch the queries get fired one by one in the Doctrine hydration code, but this hasn't made it clear to me why it's happening. My pet theory involves something complicated to do with the Choice field "interrupting" what Doctrine would typically do and forcing it to fetch the additional data at a later stage than normal, but I've nothing much to back that up.
Client form field
$form->add(
    'client',
    'entity',
    [
        'class'        => 'AppBundle:Client',
        'choice_label' => 'name',
    ]
);
Client Contact assoc
/**
 * @ORM\OneToMany(targetEntity="Contact", mappedBy="client", fetch="EAGER", cascade={"persist", "remove"})
 */
private $contacts;
Have you tried adding a query_builder to your EntityType field?
Example
$form
    ->add('client', EntityType::class, [
        'class'         => Client::class,
        'choice_label'  => 'name',
        'query_builder' => function (EntityRepository $entityRepository) {
            return $entityRepository->createQueryBuilder('this');
        },
    ]);
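Note that a query_builder by itself issues the same SQL; to actually collapse the contact loading into one query, the builder needs to fetch-join the association. A sketch under the question's entity and property names (the aliases and the join are my own, not from the question):

```php
$form
    ->add('client', EntityType::class, [
        'class'         => Client::class,
        'choice_label'  => 'name',
        'query_builder' => function (EntityRepository $repository) {
            // Fetch-join the contacts so they are hydrated in the same
            // query as the clients, instead of one extra query per Client.
            return $repository->createQueryBuilder('client')
                ->leftJoin('client.contacts', 'contact')
                ->addSelect('contact');
        },
    ]);
```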

Create custom column in post table in wordpress

I want to add two custom fields to the wp_posts table; I need them for my plugin.
To enable these fields, I have changed a WordPress core file:
wordpress/wp-admin/post.php
$data = compact( array( 'post_author', 'post_date', 'post_date_gmt', 'post_content', 'post_content_filtered', 'post_title', 'post_excerpt', 'post_status', 'post_type', 'comment_status', 'ping_status', 'post_password', 'post_name', 'to_ping', 'pinged', 'post_modified', 'post_modified_gmt', 'post_parent', 'menu_order', 'post_mime_type', 'guid' ) );
Here I have added the two fields that I want.
Now I want to make this installable (I have added the two fields manually here).
How can I do that from within a plugin?
I have read one post:
http://wp.tutsplus.com/tutorials/creative-coding/add-a-custom-column-in-posts-and-custom-post-types-admin-screen/
There, hooks are used in the theme's functions.php, but I want to do it in the plugin itself.
I am using WordPress 3.6.
If anything is still unclear, please comment and I will update.
As pointed out in the question comments, you should never edit WP core files (reason: they get overwritten on updates), and you should never modify WP tables (that can cause crashes on updates).
Since you are developing a plugin, you have a few options for the database setup:
1) You can use existing database tables
* the postmeta table is usually the right place for per-post fields
2) If for any reason you can't use post meta, create your own table
* add the 2 columns that you need plus a post ID column; this way things will run smoothly
PS: You can use all WordPress functions in your plugin; just search for them in the Codex to see how to use them.
Also check this info about creating database tables with plugins.
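If you go the custom-table route, the usual pattern is to create the table on plugin activation with dbDelta(). A sketch; the table and column names are just examples, not anything from the question:

```php
register_activation_hook( __FILE__, 'myplugin_install' );

function myplugin_install() {
    global $wpdb;

    $table_name      = $wpdb->prefix . 'myplugin_post_data'; // example name
    $charset_collate = $wpdb->get_charset_collate();

    // dbDelta() is picky about formatting: one column per line,
    // two spaces after "PRIMARY KEY", no backticks around names.
    $sql = "CREATE TABLE $table_name (
        id bigint(20) unsigned NOT NULL AUTO_INCREMENT,
        post_id bigint(20) unsigned NOT NULL,
        my_field_one varchar(255) NOT NULL DEFAULT '',
        my_field_two varchar(255) NOT NULL DEFAULT '',
        PRIMARY KEY  (id),
        KEY post_id (post_id)
    ) $charset_collate;";

    require_once ABSPATH . 'wp-admin/includes/upgrade.php';
    dbDelta( $sql );
}
```

dbDelta() also handles later schema changes: run it again with an updated CREATE TABLE statement and it alters the existing table to match.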

Assigning post-specific roles to users in WordPress

I am designing a plugin that needs to assign users to a particular post (of a custom type).
For instance, I have a custom post type: ClassifiedFile
For posts of that type, I will need somewhere in the interface to assign users to it, each having different capabilities. In that case, that could be:
Reviewers assigned to a particular classified file can read it and mark it as approved
Readers assigned to a particular classified file can only read it
Of course, readers and reviewers do not have access to classified files other than the ones they are allowed to see.
Managers who can assign Reviewers and Readers to classified files
PluginAdmin who can assign Managers to classified files
Ideally, the solution should lend to efficient requests of the type:
I want to list all classified files a user can read (be it as reviewer or as reader).
So far :
I have stored a few particular properties in the meta data of the post (such as the approval status).
I have created custom capabilities: "plugin-admin", "manage-file-users", "approve-file" and "read-file"
I provide custom roles for these: a plugin admin role ("plugin-admin"), File Group Manager ("manage-file-users"), File Reviewer ("approve-file", "read-file") and File Reader ("read-file")
I must say I am struggling to find a nice way to address the listing and storing of privileges per classified file. Ideally I'd like to avoid having to create a separate DB table, but if that is the way to do it then I'll do that.
Assigning roles was not much a problem, the question was more a matter of where and how to store the information.
I have decided to do that:
I define groups (which are in fact WP_Role objects) which are common to all ClassifiedFile
For each ClassifiedFile, I store a meta value which is an array of strings containing the role and id of a user. For instance: [ reviewer|50|, reader|123|, reader|13| ]
I can then query all ClassifiedFile of a particular user using the criteria:
'meta_query' => array( array(
    'key'     => 'users',
    'value'   => '|' . $user_id . '|',
    'compare' => 'LIKE',
) )
Or using a group:
'meta_query' => array( array(
    'key'     => 'users',
    'value'   => $group . '|' . $user_id . '|',
    'compare' => '=',
) )
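For reference, the queries above line up naturally with storing one meta row per assignment (add_post_meta() without the $unique flag) rather than a single serialized array; the '=' comparison in particular assumes that. The function name and value format below are my own convention, only add_post_meta() is core WordPress:

```php
// Store one meta row per assignment, e.g. "reviewer|50|" or "reader|123|".
// The surrounding "|" delimiters let a LIKE match on "|50|" avoid
// accidentally matching user id 5 inside 50, or 50 inside 150.
function myplugin_assign_user( $post_id, $role, $user_id ) {
    add_post_meta( $post_id, 'users', $role . '|' . $user_id . '|' );
}

myplugin_assign_user( $classified_file_id, 'reviewer', 50 );
myplugin_assign_user( $classified_file_id, 'reader', 123 );
```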

SQL Server to CSV to Wordpress - is it possible?

I have a custom made CMS system with a relatively small number (~1000) of articles inside my SQL Server database.
I want to migrate the whole site to WordPress. Is it possible, and if so, how, to export the data (one table, Article) from SQL Server to a CSV file and then import it into WordPress?
Thanks!
update:
The structure of table Article looks like this:
ID_Article - int (pk)
Title - nvarchar(max)
Summary - nvarchar(max)
Contents - nvarchar(max)
Date - datetime
ID_Author - int (fk)
Image - nvarchar(max)
Promoted - bit
Hidden - bit
There are also categories, done with an associative CategoryToArticle table. The only things that need to be moved are Contents (optionally merged with Summary), Date, and Title (it would be cool if the title were merged with the author's name: "John Doe: My Article title."). Everything can be categorised as "Archive" or something like that; the image and the other flags can be dropped completely.
You could use automation software such as BlogSenseWP to import the CSV items as new posts, but you would have limited control over the dating, and tagging would have to be auto-regenerated using the Yahoo Tags API (included in the software's features). Categorization might be tricky as well, but there are keyword-based auto-categorization filters that could help you get close.
Otherwise you might want to investigate a custom PHP script that imports the desired CSV fields into variables and then loops through them, adding them to the WordPress database using WordPress functions.
Here is a custom way of adding a post to wordpress:
$permalink_name = sanitize_title_with_dashes( $title );
$post = array(
    'post_author'    => $author_id,
    'post_category'  => $cat_id,
    'post_content'   => $description,
    'post_date'      => $date_placeholder,
    'post_date_gmt'  => $gmt_date,
    'post_name'      => $permalink_name,
    'post_status'    => 'publish',
    'post_title'     => $title,
    'post_type'      => 'post',
    'tags_input'     => $tags,
    // Note: 'original_source' is not a core wp_insert_post() field;
    // store custom values like this with add_post_meta() instead.
    'original_source' => $link,
);
$post_id = wp_insert_post( $post, $wp_error );
You would have to include the wp-config.php file at the top of this script to load the WordPress code environment.
The above is a crude summary; it would take a thorough understanding of PHP to fill in the omitted code and complete the concept script.
Use the SQLCMD command line tool. You can look at this question to see how to export data into a CSV.
It is possible. However, it heavily depends on the structure of your current table: whether you have tags, categories, different post statuses (published, pending, etc.).
One possible way would be to write a script to read from your database, and drop the same data into the wordpress tables.
MSSQL -> CSV -> MySQL is also possible. You'll just have to read the CSV and dump the data into MySQL.
Maybe if you can give your table structure, we can give you a better way.
This might help you. First create a table in the MySQL DB:
CREATE TABLE Genesis (
  id INT(10),
  title varchar(255),
  description text,
  date timestamp,
  PRIMARY KEY (id)
);
Then use some PHP code to update the WordPress data.
source -
http://web-design101.com/createawebsite/featured-articles/insert-wordpress-posts-through-mysql

migrating node references

I am working on a project to migrate a website from ASP.NET to a Drupal architecture, but the site content is very hierarchical and has a lot of references between entities.
For example: each content item belongs to a category, and each category belongs to a section. There may even be another level of hierarchy.
I am planning to make use of migrate module for migrating the database content and linking the migrated nodes via a node reference field.
But I am getting stuck with the Migrate module, as I can't find a way to migrate the node reference field anywhere.
Can anyone help me out with this?
Actually, it doesn't seem to be that hard .. in 2012. Yes, you have to keep track of source IDs versus import IDs, but the Migrate module does this for you, in a neat little table. You can join that table in your source query and update the node reference field with the nid of the referenced node. Of course, the referenced nodes should already have been imported; if they weren't, you can run an 'update' later and the referenced nids get entered based on the later imports too. In practice:
$query = Database::getConnection('default', 'mysourcedb')
    ->select('mysourcetable', 'source')
    ->fields('source', array(
        'id',
        'title',
        'whatever',
        'rel_rec_id',
    ));
$query->leftJoin('migrate_map_relimport','relmap','relmap.sourceid1=source.rel_rec_id');
$query->addField('relmap','destid1','rel_node_id');
The code above assumes you have a 'mysourcedb' with a 'mysourcetable' in it that refers to a 'rel_rec_id', and that there's another import called RelImport that imports the rel table that rel_rec_id is referring to; it should already have run (or will run before you run an additional update). Do a migrate-status once you have the RelImport class to make sure the table exists.
To be able to make joins to the 'migrate_map_relimport' table, make sure map tables are written to the source database, not the drupal database. This is not always necessary, but here it is:
$this->map = new MigrateSQLMap(
    $this->machineName,
    array(
        'id' => array(
            'type' => 'int',
            'unsigned' => true,
            'not null' => true,
            'alias' => 'source',
        ),
    ),
    MigrateDestinationNode::getKeySchema(),
    'mysourcedb' // connection to use to write map tables
);
and finally, assign the retrieved rel_node_id to your node reference:
$this->addFieldMapping( 'field_rel_node', 'rel_node_id' );
Yay, it is rocket science .. YMMV
As far as I know, you will not be able to do this entirely within the migrate module. You'll have to run a few queries directly in MySQL.
Essentially, you'll have to create an extra field in each content type to house its legacy ID, plus an additional field for each legacy reference (in addition to the actual nodereference field). Load all the data into the content types, leaving the nodereference field empty. Then, once all of the entities are loaded, you run MySQL queries to populate the nodereference fields based on the legacy IDs and legacy reference fields. Once that's done, you can safely delete these legacy fields.
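The MySQL step looks roughly like this in Drupal 6, where CCK stores single-value fields in per-type content_type_* tables; every name below (the content type "article" and the fields field_ref, field_legacy_id, field_legacy_ref) is a placeholder for your own names:

```sql
-- Placeholder schema: content_type_article holds the CCK columns
-- field_ref_nid (the nodereference), field_legacy_id_value (this node's
-- old ID) and field_legacy_ref_value (the old ID it pointed at).
UPDATE content_type_article AS target
JOIN content_type_article AS source
  ON target.field_legacy_ref_value = source.field_legacy_id_value
SET target.field_ref_nid = source.nid;
```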
Not the most elegant solution, but I've used it many times.
Caveats:
This is for Drupal 6; Drupal 7's fields implementation is totally different AFAIK.
For very large migrations, you might want to do your legacy field deletions through MySQL.
You might also take a look at the code for the Content Migrate module (more information at https://drupal.org/node/1144136). It's for migrating D6 Content Construction Kit (CCK) content to D7 Fields and it's been integrated with the References module.
It won't do what you need out of the box, as you're coming from an ASP.net site instead of a D6 site, but it may provide some clues.
