I want to create a Docker image for WordPress that produces a ready-to-use WP instance: installation already completed, theme and plugins activated, and other configuration applied.
I was exploring WP-CLI to do that as part of the Dockerfile build, but it turned out that this is a stateful process (it requires an existing MySQL database to already be in place), which is impossible during a Docker build.
I'm thinking about writing a bash script that brings up a Docker WP-CLI container and a plain Ubuntu/PHP container. The WP-CLI container then installs and sets up the WP instance on the Ubuntu/PHP container, which becomes the de facto WP container, and finally the script shuts the WP-CLI container down.
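Roughly, the script I have in mind looks like this (the image tags, credentials, the shared ./html directory and the crude sleep are placeholders, not a tested setup):
#!/usr/bin/env bash
set -e
# Temporary network plus database and WordPress containers.
docker network create wp-setup
docker run -d --name wp-db --network wp-setup \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=wordpress mysql:8
docker run -d --name wp --network wp-setup -v "$PWD/html":/var/www/html \
  -e WORDPRESS_DB_HOST=wp-db -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=secret -e WORDPRESS_DB_NAME=wordpress \
  wordpress:php8.2-apache
sleep 30   # crude wait for MySQL to come up and wp-config.php to be generated
# Throwaway WP-CLI container working on the same files and database.
docker run --rm --network wp-setup -v "$PWD/html":/var/www/html \
  -e WORDPRESS_DB_HOST=wp-db -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=secret -e WORDPRESS_DB_NAME=wordpress \
  wordpress:cli wp core install --url="http://localhost:8080" --title="Demo" \
  --admin_user="admin" --admin_password="admin" \
  --admin_email="admin@example.com" --skip-email
# Tear the helpers down; ./html now holds a fully installed tree you can COPY into an image.
docker rm -f wp wp-db
docker network rm wp-setup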
I'm not sure if this is a good solution; do you have better advice for this use case?
It's a broad and opinion-based question. I think you basically need to figure out three things:
Find and choose a Composer template to manage your dependencies. For example https://roots.io/bedrock/.
WP-CLI commands. I see you are already at it.
Find a way to export and import your WordPress config. For example https://wordpress.org/plugins/wp-cfm/
A basic setup routine could then look something like this:
# Download WordPress core and plugins.
composer install -n --no-dev
# Default DB name and user.
wp config create --dbname="test" --dbuser="root" --dbpass="" --dbhost="127.0.0.1"
# Install a basic site. The title doesn't matter.
wp core install --url="blog.localhost" --title="Lorem ipsum dolor sit amet" --admin_user="admin" --admin_password="admin" --admin_email="webmaster@example.com" --skip-email
# Language.
wp language core install de_DE
# Activate WP-CFM and import default conf.
wp plugin activate wp-cfm
wp config pull default
You don't necessarily need Composer; you can download WordPress and plugins with just WP-CLI as well. The advantage of Composer is the versioning, and that you only need a composer.json and composer.lock file to keep your repo clean. A single composer install then downloads everything for you and your colleagues. Another big advantage is that vendor/ can be cached during builds, keyed for example on the composer.lock checksum; this can shorten build time a lot.
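To make that concrete, the Composer side of such a setup could be as small as this (the Bedrock starter and the wpackagist package name are just examples):
# One-time project setup based on the Bedrock template.
composer create-project roots/bedrock my-site
cd my-site
# Plugins and themes from wordpress.org are available as Composer packages via wpackagist.org.
composer require wpackagist-plugin/wp-cfm
# In CI, a reproducible install from composer.lock (and a cacheable vendor/ directory).
composer install -n --no-dev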
As an alternative to the sample routine above, you could also try importing an existing database backup and then just updating the base URL and importing the updated config. That would skip the WordPress install step and give you dummy content to, for example, target automated tests at.
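With WP-CLI, that variant might look roughly like this (the dump file name and URLs are placeholders):
# Import a seed database instead of running the installer.
wp db import seed.sql
# Point the imported content at the current environment's base URL.
wp search-replace "https://old.example.com" "http://blog.localhost" --all-tables
# Re-apply the tracked configuration bundle on top of the dump.
wp config pull default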
For the continuous integration and deployment of websites, I am using the pipeline described below.
But for many CMSes like WordPress, PrestaShop, Magento and others, the configuration of the website and the installation of plugins are done in the back office of the deployed website.
For now, I am building the Docker image on top of the CMS base image and replacing the whole /var/www/html directory with the files from GitHub. Kubernetes then deploys the containers and attaches a database and persistent storage.
This breaks my pipeline: imagine that someone installs and configures a plugin in the back office, and then someone else modifies a file and pushes it to GitHub. The GitHub repo has no record that a plugin was installed and will build and deploy a new image without it.
How can I integrate all the modifications done in the back office into my GitHub repository?
The solution we use is an override of the DB class.
So we monitor a number of tables (configuration, module, hook, etc.) and store every query against them in an SQL file.
During a commit, we therefore also have a .sql file with the actions to perform on the database side.
Once deployed, either you execute the SQL manually, or a script detects that new SQL files are present and executes them.
In this way we are always up to date.
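A minimal sketch of that deploy-side step could look like this (the file layout, database name and bookkeeping table are assumptions; credentials are expected in ~/.my.cnf):
#!/usr/bin/env bash
# Execute every tracked .sql file that has not been applied yet.
for f in deploy/sql/*.sql; do
  applied=$(mysql -N -e "SELECT COUNT(*) FROM applied_sql WHERE file = '$f'" shopdb)
  if [ "$applied" -eq 0 ]; then
    mysql shopdb < "$f"
    mysql -e "INSERT INTO applied_sql (file) VALUES ('$f')" shopdb
  fi
done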
We developed this solution in the form of PrestaShop modules that track all the actions.
Regards
My (by no means ideal) working solution:
Create a plugins folder outside Docker and symlink that folder into the dockerized /wp-content/plugins
Recreate the above in production
Installing a new plugin then doesn't break the CI flow, but it does require two independent installations and configurations whenever you (or the dev team) need to install something new.
So you basically treat the plugin files the same way as you already treat the DB.
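In Docker terms, the shared plugins folder is essentially a bind mount along these lines (paths and image are placeholders):
# Keep plugin files on the host, outside the image.
mkdir -p /srv/wp-plugins
docker run -d --name wp \
  -v /srv/wp-plugins:/var/www/html/wp-content/plugins \
  wordpress:latest
# The same directory is recreated and mounted on production, so plugins
# live outside the CI-built image just like the database already does.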
I'm working on figuring out how to deploy a WordPress site from Bitbucket Pipelines to AWS Elastic Beanstalk. It all makes sense except for one thing: I want the repository to contain only the theme and plugin files.
I do not want the full WP directory being deployed each time. Media would be handled via an S3 bucket and the DB would use RDS.
What is the best way to get WP installed but only have the theme and plugins deploy through Pipelines? And when I want to update to the latest version of WP, how would that work?
Or am I going about this wrong?
The simple solution, and best practice in my humble opinion, is to repo the entire WordPress installation, including the WordPress core, and all your custom themes and plugins.
Having the entire installation in one repo solves many problems: you can tag and release versions, and you can install all the software locally with a simple git clone.
Regarding the file system, definitely consider EFS instead of S3. It is much more reliable and easier to mount in a Linux-based system. Make sure to set the file path environment variable so you can point WordPress to the files. You will want to mount this outside of the software file tree.
I have been running this kind of setup for 3 years with no problems. We do releases through the code deploy service daily. Very straightforward and easy to maintain.
To upgrade WordPress, just check out the current version from the repo, then apply the upgrade release, do comparisons, test, commit, and release.
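As a rough sketch, a core upgrade in a whole-site repo could then be done like this (the version number and branch name are placeholders):
git checkout -b upgrade/wp-6.5
# Overwrite only the core files; wp-content/ is left untouched.
wp core download --version=6.5 --force --skip-content
git diff --stat          # review what changed
git add -A
git commit -m "Upgrade WordPress core to 6.5"
# Test, then merge/tag and let the usual deploy release it.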
I'm a little new to Composer, so this is just a general question. I've been developing a WP plugin, and I'm pulling in a few libraries via Composer. I've uploaded the plugin to a server and I'm having problems. Am I required to run composer install on the server as well as on my localhost?
If you prefer to manage your dependencies with composer during development, you can do so.
But none of the WP workflows (neither the base installation, nor plugins) use composer. WordPress simply expects a folder with your code, it doesn't care about any of the internals, as long as you follow some simple rules.
If your plugin is public, you will have to submit it to the WordPress SVN, but there's no such thing as a build process. Also, WP plugin users will usually neither be interested in nor able to execute Composer.
It is up to you if you create your own build process before committing to the WP SVN, or if you create your plugin in a way that it can run from your development code.
However, if you “build” your code before committing it to WP SVN (e.g. creating cache files, removing development-only dependencies etc.), you could run into discussions with people who insist on getting the original sources, too.
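One common way out, sketched below, is to run Composer only at build time and ship the resulting vendor/ directory with the plugin (the directory names are assumptions):
# Install runtime dependencies only, with an optimized autoloader.
composer install --no-dev --optimize-autoloader
# Copy the plugin, including vendor/, into the SVN working copy for release.
rsync -a --delete --exclude='.git' --exclude='tests/' ./ ../svn-checkout/trunk/
# End users install the packaged plugin and never have to run Composer themselves.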
I have a couple of working Drupal installations on OpenShift. I prefer OpenShift for testing and development.
However, of late, when I try to install a new instance of Drupal, the 'php' folder is missing, and as a result editing any files is a real nightmare. I decided to create a 'php' folder and build my Drupal from there.
The challenge, however, has been that with every update I push, my settings.php file is deleted and I have to fix it via SSH just to get it working. This is a real bother, and I am looking for a better way to work with Drupal in peace on OpenShift.
The QuickStart has changed to automatically deploy a Drupal instance in your data dir and symlink the changes. This means at create time you get the latest Drupal and you can configure and deploy Drupal live. If you prefer the old model, just copy the php folder off the gear and check it in to your source repo. The hooks (which you can also change) won't deploy Drupal if you have a php folder - meaning your base Drupal still works.
We made this change so that folks creating a new instance got something running immediately that was up to date with security patches (and so you can install modules to the server directly). settings.php lives in your data directory and is symlinked in rather than being copied.
You can continue to use the old Drupal-example repo as well.
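Copying the auto-deployed Drupal off the gear and checking it in could look roughly like this (the SSH address and data-directory path are placeholders; the exact path on your gear may differ):
# Pull the deployed Drupal out of the gear's data directory into the repo's php/ folder.
rsync -avz UUID@app-namespace.rhcloud.com:app-root/data/php/ ./php/
git add php
git commit -m "Check in Drupal so the hooks stop redeploying it"
git push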
I have been version controlling my WordPress site with Git and pushing to GitHub for quite some time. I develop locally, push to GitHub, and pull from GitHub to my production server.
I would like to scrap my current WordPress core on my local environment and replace it with a fresh new copy to then be pushed to GitHub and pulled to my production server.
My question is: am I going to run into any sort of tracking errors with Git by replacing my WordPress core? Any other suggestions for me when I do this?
I handle this by adding the WordPress core as a subrepository (a Git submodule). WordPress maintains a mirror of their SVN repository on GitHub. You can then easily update to a new version by going into the subdirectory containing the WordPress core files and checking out the tag for the current version:
git fetch --tags
git checkout 3.3.2
The repo also includes beta releases, if you prefer those.
Here's the guide that I used to set up this process: Install and Manage WordPress with Git
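For reference, the initial setup of that submodule could look like this (the mirror URL is the official GitHub mirror; the path and version tag are just examples):
git submodule add https://github.com/WordPress/WordPress.git wordpress
cd wordpress
git fetch --tags
git checkout 3.3.2
cd ..
git add .gitmodules wordpress
git commit -m "Pin WordPress core at 3.3.2"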
I understood you're talking about updating WordPress. Note that, as documented in the update instructions, during an update you should:
check the requirements
take a backup of your database contents
disable the plugins you have
You should do all of this regardless. Also, be sure to update your database content, too, if needed.
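With WP-CLI, those preparation steps might look roughly like this (a sketch, not the official upgrade procedure):
wp cli info                           # check the PHP/MySQL versions against the requirements
wp db export backup-$(date +%F).sql   # back up the database contents
wp plugin deactivate --all            # disable plugins before the upgrade
# ...perform the upgrade, re-activate the plugins, and update the database schema if needed:
wp core update-db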
WordPress provides instructions on updating using SVN, and according to them it is a perfectly fine approach. I see no major difference in doing essentially the same thing with Git, provided you can tell exactly which files are appropriate to store in version control and which aren't.