How to make ExpressionEngine use SQLite

I’m using EE for the first time. I design/develop on two machines, so I use private github repos for all my client work. I’d like to run this EE project on SQLite locally while I’m developing and designing, and then switch it to MySQL once I’m ready to deploy to the staging environment for client review. That way, I can track DB changes easily between machines, as there is a lot of content on this particular site, and exporting/importing DB tables every time I switch machines (three or more times a day) is getting old.
I realize I could use the client’s MySQL DB remotely, but would rather keep everything local, if at all possible. So, how can I run EE with SQLite?
This question (or something similar) was asked almost three years ago here. The accepted answer hints at possible support in EE v2.0+ (via CodeIgniter’s then-new database abstraction), but I can’t find any recent information (official or otherwise) on how to make it happen.

In short: no. Active Record is not universally used throughout EE, and especially not in third-party add-ons. In fact, the documentation for the Database class does not mention Active Record at all.
I'd suggest just installing MySQL locally.

I don't think you'll have much luck with non-MySQL DBs — MySQL statements are hard-coded into EE, despite the EE 2.0 promises of a fully-fledged CI app with various DB connectors.
I think your best bet might be to find a centralized host for your dev DB. EE is especially painful to work with on multiple machines because of the DB — the whole thing is really built with "one server, one developer" as the primary use case.
You can simplify it a bit by keeping your templates saved as files, and rigging your settings to be server-agnostic. I also keep little scripts around to quickly dump/import the DB when I have to, which makes moving the site around in a git repo a little easier. A few details here: http://www.viget.com/inspire/expressionengine-on-multiple-machines/
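For what it's worth, here is a minimal sketch of the kind of dump/import script I mean. Everything in it (database name, credentials, dump path) is a placeholder for your own setup:

```php
<?php
// db-sync.php: sketch of a "little script" for moving the DB through a git repo.
// All names below (database, credentials, dump path) are placeholders.
$db   = 'ee_site';
$user = 'root';
$pass = 'secret';
$dump = dirname(__FILE__) . '/db/ee_site.sql';

$mode = isset($argv[1]) ? $argv[1] : 'dump';
if ($mode === 'dump') {
    // Snapshot the local DB into the repo before committing and switching machines.
    passthru(sprintf('mysqldump -u%s -p%s %s > %s',
        escapeshellarg($user), escapeshellarg($pass),
        escapeshellarg($db), escapeshellarg($dump)));
} else {
    // On the other machine, rebuild the local DB from the committed snapshot.
    passthru(sprintf('mysql -u%s -p%s %s < %s',
        escapeshellarg($user), escapeshellarg($pass),
        escapeshellarg($db), escapeshellarg($dump)));
}
```

Run `php db-sync.php dump` before you push, and `php db-sync.php import` after you pull on the other machine.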

Related

What is the best way to merge development database with production?

We develop WordPress websites and we have returning customers who request revisions to their website that require us to develop in a development environment before publishing to production (live). The challenge has always been how to merge two WordPress databases that are out of sync but have mutually new content that must be retained.
This content ranges from Pages, Posts and Comments to Plugin settings and other subtle information. So we export both databases and use FileMerge, for Mac, to display the differences and manually merge them. This would be fine if the WordPress databases didn't contain so much meta information that is otherwise human-unreadable (examples below).
My question is whether someone out there has a recommendation on how to solve this problem. Essentially, if we could just omit the items below (assuming that's safe to do?) then we would be perfectly happy continuing to use FileMerge. But with those items it feels like we're decrypting the world's longest password.
FYI WP Migrate DB (including Pro) is not capable of this. We spoke with the authors.
Thanks.
The following is a list of database records that I think could be omitted without causing any conflicts in the database.
- _transient_random_seed
- active_plugins "a" value (which I believe represents the number of active Plugins)
- _site_transient_timeout_browser
- recently_edited
- db_version
- cron
- _transient_timeout_feed_mod
- _transient_feed_mod
- rewrite_rules
- Plugin Versions
- _edit_locks
I've been looking at two potential solutions to this problem; however, both have the limitation of requiring PHP exec() on the development/staging server. So hosts like WP Engine (which we're using) won't work with them.
If this isn't a limitation for you then check out https://revisr.io/ or http://versionpress.net/
Revisr is open source and VersionPress is currently using a paid Early Access Program to support development. Both use git to manage all aspects of WordPress syncing from the WP-Admin.
Try MySQL Data Compare from Red Gate. I use the MSSQL version myself and it's one of my favorite development tools. You can select exactly which tables and even the individual records you want to sync. Now for the bad news: it only runs on Windows and it is not cheap.
If I weren't already developing on a Windows machine myself, I would buy one just for this program.
http://www.red-gate.com/products/mysql/mysql-data-compare/
I'm looking for the same. The closest I got was WP Stagecoach, which is one of the few things out there that actually merges data! The downside is you create a dev site from the live site, which means you have to do all your changes on there... you can then merge back with the live site (data and files).
It means you can't work locally and have to do it on the Stagecoach servers.
Also, one thing to keep an eye on is Datahawk; there's no timeframe for when they will release it, but it's from the WP Migrate DB Pro guys... I've been told by the developer it will handle merges, but how that turns out I don't know.

Can I install WAMP on Microsoft Azure (Bizspark account)?

I have got a Bizspark account from Microsoft and they are providing a basic Azure account. I have been told that it can run PHP; however, I would like to use a more tested solution like WAMP. On top of that, I want to place a quite heavy WordPress / BuddyPress installation (that I hope will bring a lot of traffic :)
Has anyone done something similar to this? If so, what is your experience / pitfalls etc.?
Thanks
Stelios
Yes, you can do this. At the end of the day you are just using Windows Server, so anything that installs there will install in the cloud as well. I have done this myself for hosting WordPress in Windows Azure.
However, there are some pitfalls here. Mostly the pitfalls are around the M (MySQL). To set up MySQL in Windows Azure is not really that hard, but you have several considerations on how to make sure it is always available. You can:
- Set up a single instance of MySQL in a role and store the DB on local disk (this is a bad idea).
- Set up a single instance of MySQL in a role and store the DB on a drive (blob-backed storage).
- Set up two instances of MySQL that each point to a shared drive (hot failover). Only one will be able to mount the drive at a time, so you have reliability and failover, but only a single instance working for you at any moment.
- Set up one MySQL writer on a drive, and multiple readers on snapshots of that drive. Put in some logic via connection strings to make sure writes go only to the writer and reads to the others. Snapshot every X minutes to update the readers.
- Set up multiple instances of MySQL and use the native replication features (each storing to local disk), and rely on that if you lose an instance.
There are probably more permutations, but the gist of the problem is how you scale out MySQL to be available and reliable. In Windows Azure, you don't get to rely on the fact that the local disk will always be around or that you will always have the same instance. In fact, you can guarantee that your instances will be down for some period of time each month and eventually, given enough time, you will lose the local disk.
Overall, with multiple instances however, you can guarantee they won't be down simultaneously (to the service SLA level at least). So, you need to make sure MySQL works with multiple instances (or live with single instance downtime) and that your data is backed by blob storage to guarantee it is persisted.
Or you can scrap all that crap and just use SQL Azure, which solves all those problems. So it becomes WASP. SQL Azure can also be more economical for smaller DBs.
Ditto.
Installing MySQL on an Azure role is not a good idea for plenty of reasons, most notably (lack of) scalability and reliability. (That's just for deploying on Azure; MySQL itself is great.)
To set it up remotely and reliably, you're going to need a dedicated instance, which will run you at least $40 a month; going with SQL Azure is $10/GB, or free if you get an introductory offer or Bizspark.
If you're just looking to play around with a single-instance app, I'd suggest you rather use SQLite or some other in-memory DB; it'll be a lot less painful.

Good way to make changes to production database / source code

I'm interested to find out what would be the good way to make changes to production database and source code in web application (ASP.NET, SQL Server 2008).
A little bit more detail: we develop on local machines, and then we need to transfer the code and database changes to production (pretty much a standard story).
At the moment we do it in the evening: we change the database directly from Management Studio on the production server, and then just overwrite the existing ASP.NET code (copy/paste).
You're talking about release management. What you're asking about is a big subject with a LOT of different answers. The best solution for you is not something we can tell you. There are trade-offs to consider.
For example, what you're describing is a very basic release management process that would be considered an "immature" process. It does not take into account rollback plans, versioning, separation of concerns, proper testing, or any of a hundred other factors that a "mature" release management process involves.
A mature process is very good, but if you don't have the resources, it's not feasible.
To get to the point, I don't think your question can be answered fully here. I'd suggest starting to research "change management", "release management", "Application Lifecycle Management", and "Application Development Lifecycle". I'll have a few good starter links for you in a minute.
Just a forewarning, though: you are asking a question that's going to open your eyes and your world in ways you probably haven't considered. There are things like automated builds to consider, and tools to do it for you (high-priced, free, and everything in between).
http://en.wikipedia.org/wiki/Release_management
http://en.wikipedia.org/wiki/Application_lifecycle_management
A few simple options for JUST what you're asking about can be found here:
http://msdn.microsoft.com/en-us/library/7hd4c0x3(VS.80).aspx
Also, since you talked about source code without mentioning which source control you're using, I need to say... if you're not already using source control, you need to. You'll wonder how you ever lived without it once you start using it.
Depends on whether it's the first deployment of a new app, or an update to the app.
For small updates, record all your database changes as SQL scripts. You must strictly enforce that all changes to development are applied as SQL scripts. Put the scripts in source control. Deploy the update by running the scripts on production.
For new apps you may have thousands of scripts, and you can't run them individually; consolidating them into a master script takes too much time (although you still want to check EVERY script into source control). In this case, you reach a milestone in development, then FREEZE the development database and declare it a baseline. Use the database tools to generate master script(s). Deploy production by running those script(s). Manually create data scripts for your lookup tables to keep that data separate from junk dev data.
Avoid a database copy. Avoid changing things by hand through the GUI. Scripts are the way. How you go about collecting the scripts, consolidating them into master scripts, generating the scripts, etc. is another story.
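To make the small-update approach concrete, here is a hedged sketch of the convention (the file name, table, and column are invented for illustration): each change is one numbered script, checked into source control, and production is updated by running every script with a number higher than the last one applied there.

```sql
-- 0042_add_customer_email.sql
-- One change per script; never edit a script after it has run anywhere.
ALTER TABLE dbo.Customer
    ADD Email NVARCHAR(256) NULL;
GO
```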

Deploying changes on a living Drupal site

I really like Drupal somehow. But what disturbs me most is that I can't figure out a clear way of doing deployment. Drupal stores a lot of stuff inside the database (Views, CCK, Workflow, triggers, etc.) that needs to be updated.
I've seen some modules that could be used for this task (e.g. Features), but I'm not sure if they are sufficient. Also, they are only for Drupal 6, and I currently have to work on a Drupal 5 site where upgrading is not yet an option.
Any ideas?
This is a weakness. Drupal doesn't have the developer tools built in that make development and deployment easy like Rails does (for example). One problem is that Drupal isn't natively aware of its environment. Secondly, there are too many different methods and modules that require special care. It can get very confusing. But things are getting better with drush and drush make.
I'm assuming here that you have a development environment on your local machine and a live or staging server you upload to.
The first thing you have to do is work out how to get your database fixture and your code back and forth between your server and your development environment very quickly. You need to make this procedure as painless as possible so you can keep different versions of your site in sync without much effort. This will mean you will hopefully be able to manage less change every time you deploy. Hopefully...
Moving the database around isn't too hard. You could use phpMyAdmin or mysqldump, but the Backup and Migrate module is my favorite tool.
Uploading code from your local repository or site can be done in a few ways. If you use a version control system like git, you can commit on your local machine and check out again on the staging server. There are also special deployment tools like Capistrano you should take a look at (if you know this stuff already, it may benefit others to read). If you're using FTP, you should probably try something different.
If you're working with a site that is still in production, you can afford to make small incremental changes to your local site, then repeat them on the live site and download the new version of the database when your changes are in place. This does mean you double-handle the database, but it can be a safe way of doing things. It keeps both your databases closer to each other and minimises risk.
You can also export Views and move them to your server, either in your code or by importing them into your live site. There is a hack to get around deploying CCK changes here: http://www.tinpixel.com/node/53. It works OK but cannot truly manage changes like rollbacks. (Respect to the guy who wrote that.)
You can also use hook_update_N to capture changes and then run update.php to apply them. I worked on a D5 site with dozens of developers and this was the only way to keep things moving forward. This may be a good option if your site is live or if you need all database schema changes captured in a version control system (so you can roll back).
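A minimal sketch of such an update hook, for a hypothetical module called mymodule (the table and column are invented; this is the Drupal 5/6 style of update function, which lives in the module's .install file):

```php
<?php
// mymodule.install: hypothetical example of capturing a schema change as a
// numbered update hook. update.php runs every hook whose number is higher
// than the last one it recorded for the module, so changes replay in order.
function mymodule_update_1() {
  $ret = array();
  // update_sql() runs the query and returns a success/query pair that
  // update.php reports back after the batch finishes.
  $ret[] = update_sql("ALTER TABLE {mymodule_items} ADD COLUMN promoted INT NOT NULL DEFAULT 0");
  return $ret;
}
```

Because the hook is just code, it goes into version control alongside everything else, which is exactly what makes this approach workable with many developers.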
Also: take a look at drush and drush make. These tools can be of great benefit. I can't remember how much support there is for D5, though.
One final method of dealing with this is not to use CCK or Views (and to use hook updates instead). But this is really only suitable for enterprise sites where you have big developer resources. This may seem like a strange suggestion, but it can negate this whole problem completely.
Sorry I could not give you a clear answer; one does not exist yet. You'll end up finding your own rhythm once you get into it. Just keep backups of your database so you can roll back to them easily enough.

Should we have separate database instance for each developer?

What is the best way to develop a database-based application? We can take two approaches.
One common database for all the developers.
A separate database for each developer.
What are the pros and cons of each? And which one is the better way?
Edit: More than one developer is supposed to update the database, and we already have SQL Server 2005 Express on each developer machine.
Edit: Most of us are suggesting a common database. However, suppose one of the devs has modified the code and the database schema, and has not committed the code changes, but the schema changes have gone to the common database. Won't that possibly break the other developers' code?
Both -
I like a single database that changes are tested on before going live, or going to a 'formal' test environment. This is your developers' sanity check; it stays up to date with the live system and it makes sure they always consider each other's changes. The rule should be that changes don't go on here if they might break something else.
A database per developer is great (even essential) when more than one developer is making updates. It allows them all the development flexibility they want without breaking things for other developers.
The key is to have a process for moving database changes from development through to your live system, and stick to your process.
Shared database
- Simpler
- Fewer cases of "it works on my machine"
- Forces integration
- Issues are found quickly (fail fast)

Individual databases
- Never affect other developers; but this is also a bad thing, in continuous integration
We use a shared development database and it works out nicely. Our schema rarely changes in a way that makes it backwards incompatible, but occasionally a design change will occur before we go live, and we simply ask the other developers to update.
We do have separate development application (web) servers, but they share the same database. Our developers do have the option to use their own database, as they know how to set this up, and will do that on occasion, but only temporarily. The norm, for us, is to share the database.
Thought I'd throw this out there, but why not let every developer host their own instance of SQL Server Developer on their desktops and then have a shared server for each of the other environments (development, QA, and prod)? I think even the basic MSDN that comes with Visual Studio Pro (if you opt for it) includes a license for SQL Server Developer.
The developer can work on their desktop without impacting the others and then you can have them move the code to the next shared environment as you see fit (at will, with daily/weekly builds, etc.).
EDIT:
I should add that the desktop instance allows developers to do things that DBAs often restrict on shared environments. This includes database creation, backup/restore, Profiler, etc. These things are not essential, but they allow the developer to become so much more productive while reducing the demands they make against your DBAs.
The shared environment is completely necessary for testing - I would not recommend going from desktop to production. But you can add so much by allowing the developers to have 100% control over a given database environment (including isolation from others) with a relatively minor cost.
Depends on your development, testing and maintenance cycles. Also on the size and location of the development team (and of course organization). If you support several versions of the database you might need even more environments.
In the real world, I have found the following approach rather satisfying:
- a single central database/application for testing purposes, which periodically gets the changes by various developers merged into it
- local copies for development (so you are free to drop and reload the whole database)
- upgrade scripts maintained for any changes to the schema, auxiliary data, and sample data sets
Here are some further points:
If two developers (two teams) are working on changes that can affect each other then they should complete their tasks independently and then integrate/merge and test. For this it is much better to have separate development environments (unless they have to work together in which case I consider them to be a part of the same team; still they can work on their own copies of the database and share it if necessary)
If they are working on changes that do not influence each other, they can work on the main server, or on their own local copies of the database.
So, developing on the local copy has all the benefits with no risk in a general case (when you support multiple versions of the system and maintain upgrade scripts anyway).
Still, it is great if you can share test cases, so the ability to dump/restore the database easily and quickly is a big plus.
EDIT:
All of the above assumes that having a copy of the whole system on the local machine for testing purposes is feasible (size, performance, licenses, etc.).
I would opt for solution #1: one common database for all the developers.

Pros
- Less expensive for the infrastructure;
- Only one dump is required when it's time to refresh the development database;
- Everyone develops with the same data, so it closely represents the production environment.

Cons
- If one developer performs a bad operation, this could impact a larger number of developers.

As for solution #2, one independent database for each of the developers:

Pros
- This could be useful for new feature development, when development requires isolation.

Cons
- More expensive for the company (infrastructure, licences...);
- Multiplication of problems caused by overly eager isolation of development environments (works in the developer's environment, but is not integrated);
- Multiplication of dumps by the DBAs of the same copy from the production environment.

Considering the above, I would recommend, depending on your company size:
- One database for development;
- One database for integration testing;
- One database for acceptance tests;
- One for new feature development that will perhaps require integration tests.

If your company doesn't require integration tests, then go with acceptance tests; this step is crucial before going to production.
One per developer plus a continuous integration and build server to run unit and integration tests. That gives you the best of both worlds.
Having all developers modify a single dev database quickly becomes less productive once the amount of database change reaches a certain level, because it forces a developer to deploy changes to the shared database before he is ready to check in, which means other parts of the code line may break unnecessarily.
Simple answer:
Have one development database, and if the developers want their own, they can just run their own instance on their own machines. Just be sure to test/publish on the shared one.
We do both:
We use code generation where I'm at and our database is generated as well. So we have an instance on each developer's box where the database is generated. Then we use the scripts that are generated to apply the changes to a central test database. If that goes well we apply the changes to the production database during a release.
What's nice with this approach is that when our "source of truth" is checked in to source control, all the database changes are automatically distributed to the other developers when they rebase and regenerate. It works well for us.
The best way is a single database on the Test/QA server and one database (probably on the developer's local computer) for each developer (so 10 developers work with 10 + 1 databases).
This is the same approach as for general development: each developer has their own copy of the source code on their local machine.
Also, the multiple-database approach simplifies keeping the database schema in a version control system. We are keeping database creation scripts in SVN.
We are using the approach described here:
http://www.sqlaccessories.com/Howto/Version_Control.aspx
You might also want to look at Refactoring Databases. Aside from discussing database changes, the author includes discussions on going from development to production in a way that reduces risk.
Why on earth would you want a separate database for every developer?
Have one common database for all; that way the table structure is consistent and the SQL statements are as well.
The biggest problems with developers having their own databases are:

First, it is unlikely to be the size of the real production database (if you take all the databases we need to work with here, they would take up several hundred gigabytes of space; I don't have that available on my machine). This causes bad code to be written that will never work on a large database, for performance reasons. SQL code should never be written against a data set significantly smaller than the one on prod.

Second, developers who use their own database create problems when they spend a long time developing something and then find out only after they merge with a real database that it affects something else. You find this stuff much faster when you share the environment, so in the end there is less wasted development time.

Third, developers working on related things need to know about the changes you are making, because your changes will affect theirs.
When you know you are going to affect others, I think you tend to be more careful about what you do, which is a plus in my book.
Now, the shared database server should have what we call a scratch database: a place where people can create and test table changes. So if they are doing something that might need to drop and recreate a table (which should be a rare case!), they can test the process first by copying the table to the scratch database and running their process there, and then changing the real database when they are sure it works. We also often copy a backup table to the scratch database before testing a particular change, so we can easily recreate the old data if it goes bad.
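By way of illustration, that backup step might look something like this (the Scratch database and table names are hypothetical; the syntax is SQL Server's SELECT INTO):

```sql
-- Copy the table into the scratch database before testing a risky change,
-- so the old data can be recreated easily if the change goes bad.
SELECT *
INTO   Scratch.dbo.Orders_backup
FROM   Production.dbo.Orders;
```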
I see no advantages at all to using individual databases.
