I'm trying to get an installation of WordPress running in Kubernetes, and also to have the option of running the same configuration locally in Minikube. I want to use the standard Docker image of WordPress: https://hub.docker.com/_/wordpress/.
I'm having trouble making sure that the plugins and templates stay in sync, though. The Docker container exposes a volume at /var/www/html, where the WordPress installation, as well as my plugins, will live.
Assuming I do the development on Minikube, along with the installation of plugins etc., how do I handle moving Persistent Volumes between my local cluster and the target cluster? Should I just reinstall WordPress every time the Pod is scaled?
You can follow the Writing Portable Configuration guide (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#writing-portable-configuration) for persistent volumes if you are planning to migrate them to a different cluster.
In a real production scenario you would want to use a standard tool to back up and migrate persistent volumes between clusters. Velero is such a tool.
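To illustrate what that portable-configuration guide means in practice: reference storage only through a PersistentVolumeClaim and let each cluster supply its default StorageClass, so the same manifest binds to hostPath-style storage on Minikube and to a cloud disk on the target cluster. A minimal sketch (the claim name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  # storageClassName deliberately omitted: each cluster's default provisioner is used

The WordPress pod then mounts the claim at /var/www/html via a persistentVolumeClaim volume, and only the claim name travels between clusters.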
I just installed the bitnami/wordpress image using Helm. Is it possible to sync it with Git, so that when I change some files in the Git repository the Kubernetes pods are updated?
I mean updating the WordPress source code, because I'm modifying plugins in the wp-content/plugins directory.
You can use ArgoCD or Flux to automate this type of GitOps workflow. Check their documentation; they are powerful and popular tools for GitOps on Kubernetes.
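As a rough sketch of what that looks like with ArgoCD (the repository URL, path, and namespaces below are placeholders you would adapt): you commit your manifests to Git and declare an Application that keeps the cluster in sync with the repository:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wordpress
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/wp-deploy   # hypothetical repo holding your manifests
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: wordpress
  syncPolicy:
    automated: {}   # apply changes from Git without manual syncs

Note that this syncs Kubernetes manifests; to sync the files under wp-content themselves, something like the git-sync sidecar described below is a better fit.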
A possible solution is to use git-sync in a sidecar container. It will periodically pull files down from a repository and copy them to a volume.
Here is a sample manifest which uses git-sync to update the content hosted on a simple nginx web server:
https://github.com/nigelpoulton/ps-vols-and-pods/blob/master/Multi-container-Pods/sidecar.yml
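For reference, here is a minimal sketch of the same idea applied to WordPress plugins (the repository URL is hypothetical, and the flag names follow the git-sync v3 series; check the flags for the release you use):

apiVersion: v1
kind: Pod
metadata:
  name: wordpress-git-sync
spec:
  containers:
    - name: wordpress
      image: wordpress
      volumeMounts:
        - name: plugins
          mountPath: /var/www/html/wp-content/plugins
    # sidecar: polls the repo and refreshes the shared volume
    - name: git-sync
      image: registry.k8s.io/git-sync/git-sync:v3.6.2
      args:
        - --repo=https://github.com/your-org/wp-plugins
        - --branch=main
        - --wait=60
        - --root=/plugins
      volumeMounts:
        - name: plugins
          mountPath: /plugins
  volumes:
    - name: plugins
      emptyDir: {}

git-sync checks the repository out under its --root and updates a symlink atomically on each pull, so the WordPress container always sees a consistent tree.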
One way I managed it (although possibly a rookie way) was through GitHub Actions.
Here's an example of mine.
And here are the official docs from Docker on configuring this with GitHub Actions.
You basically want to tell GitHub Actions to rebuild and push your image, and then tell your cluster to refresh, like so:
If you're using kubectl to manage your cluster, check whether your version supports kubectl rollout restart. You can use it to force any deployment to restart and smoothly recreate its pods (it also re-pulls the underlying image), e.g.:
kubectl rollout restart deployment/my_deployment
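For context, a bare-bones sketch of such a workflow (this assumes Docker Hub, with DOCKERHUB_USERNAME, DOCKERHUB_TOKEN, and KUBECONFIG stored as repository secrets; the image and deployment names are placeholders):

name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # log in and rebuild/push the image
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: your-user/your-image:latest
      # tell the cluster to recreate pods, re-pulling the image
      - name: Rollout restart
        run: |
          echo "${{ secrets.KUBECONFIG }}" > kubeconfig
          kubectl --kubeconfig kubeconfig rollout restart deployment/my_deployment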
I've been playing with Google Cloud, trying to figure out the most cost-effective way to host multiple low-traffic WordPress websites.
With Bitnami, it seems that for every new WordPress instance I have to provision a new virtual machine. I also tried Google's click-to-deploy WordPress setup, and it forced me to provision a cluster with 3 VMs.
Each of the new VMs costs money, so I'm wondering whether there's a way to do something similar to shared Linux hosting, where I could host multiple WordPress instances on a single virtual machine.
You can use the Bitnami WordPress Multisite stack, which allows multiple sites to run on one server.
If you don't want to use the Bitnami Multisite solution, you can also install multiple WordPress apps on the same server without installing multiple databases or web servers. Bitnami provides modules to install on top of an already installed stack (normally a LAMP stack), and the WordPress module allows you to set the name of the blog you want to create.
The module can be downloaded from the link below, and you will need to run the following commands on the instance (these commands download the current version):
wget https://bitnami.com/redirect/to/269995/bitnami-wordpress-4.9.8-0-module-linux-x64-installer.run
chmod a+x bitnami-wordpress-4.9.8-0-module-linux-x64-installer.run
sudo ./bitnami-wordpress-4.9.8-0-module-linux-x64-installer.run --wordpress_instance_name NEW_BLOG_NAME
Once you have the module installed, you will be able to access it through http://localhost/NEW_BLOG_NAME.
More info in the Bitnami documentation
https://docs.bitnami.com/installer/apps/wordpress/configuration/install-several-wordpress-modules/
I found the following post, which explains how it might be done.
http://designhack.slashlab.net/en/how-to-setup-multiple-wordpress-without-multisite-ft-bitnami/
Make sure to back up important data before you start.
It is possible to set up multiple websites using Bitnami, but I recommend keeping every site separate, to avoid database confusion and to make it easier to extend the functionality of each website.
https://bitnami.com/stack/wordpress-multisite
I'm using a single VM per domain to avoid confusion with DNS.
Forgive me if this is an obvious question, but I am trying to figure out the best way of handling autoscaling of EC2 instances running WordPress such that their themes and plugins (along with their associated configurations) are preserved.
I am already able to decouple the data and content layers via RDS and S3, respectively, but I am struggling with how to preserve the themes and plugins through an EC2 instance autoscaling event.
My EC2 instances are configured as follows:
EC2 bootstrap script installs WordPress onto blank Amazon Linux AMI
EC2 runs behind an ELB
Database is on RDS
Web content is on S3 (using W3 Total Cache plugin)
Plugins/themes are installed on the local EC2 filesystem
To preserve themes and/or plugins through an EC2 autoscaling event, I could:
Install the themes/plugins I need first, then upload the /wp-content/plugins and /wp-content/themes dirs to S3, downloading them automatically via the bootstrap script each time an EC2 instance restarts (a sketch of this step appears after this list). DISADVANTAGES: need to update S3 every time I make a config change; not all plugins are installed neatly within these subdirs; and changes to one instance don't flow to all (need to restart the cluster every time a change is made).
Install the themes/plugins I need first, then take an AMI snapshot of the entire instance. Use this AMI as a template when launching new instances. DISADVANTAGES: need to update the AMI every time I make a config change (seems tiresome), and changes to one instance don't flow to all.
Create symbolic links for the /wp-content/plugins and /wp-content/themes dirs, pointing to an EFS filesystem that is mounted on all EC2 instances. DISADVANTAGES: EFS can be a bit slow, and not all plugins are installed entirely within these subdirs.
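For what it's worth, the bootstrap step of Option 1 is only a couple of lines in the user-data script (the bucket name and file ownership are placeholders; this assumes the instance profile can read the bucket):

#!/bin/bash
# restore themes and plugins from S3 on every instance launch
aws s3 sync s3://my-wp-assets/wp-content/plugins /var/www/html/wp-content/plugins
aws s3 sync s3://my-wp-assets/wp-content/themes /var/www/html/wp-content/themes
chown -R apache:apache /var/www/html/wp-content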
Does anybody have any experience with this? Am I over-engineering it? Perhaps the theme/plugin files don't really change much throughout the lifespan of a WordPress blog (i.e., once you're set up, you don't really find yourself changing them much), in which case maybe Option 1 (zip to S3 and download via the bootstrap script) is the best option for me, and Option 3 (EFS) is over-engineered.
I would love to get your take on this if you have experience with this conundrum!
Thanks in advance!
You can take a look at this link:
https://cloudonaut.io/wordpress-on-aws-smooth-and-pain-free/
It provides a CloudFormation template that sets up an ASG backed by EFS, installs WordPress and some plugins there, uses RDS for the database, sets up CloudFront as a CDN, and adds a few other goodies.
I tweaked their template for our use case: added an extra ASG with spot instances, replaced all the VPC stuff with references to my VPC template, and tweaked the LaunchConfig so it automatically sets up the S3 Offload plugin with a bucket created in my template. It also automatically sets up the certificate for the ELB, among other goodies.
I thought that would be the end of it and that I would be able to forget about WordPress and leave another team to work on it. Wrong. They complained it felt sluggish, and some plugins failed to install with timeouts (installing them manually using wp-cli took way too long, one of them up to two and a half minutes).
So here are my 2 cents: set up RDS and CloudFront, use a reserved instance for WordPress, and offload your static assets to S3 using a plugin. Once the site is completely set up, bake an AMI or take a snapshot of your EBS volume, and set alarms so that if the instance breaks down you can quickly spin up a new one from your AMI/snapshot.
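Baking the AMI itself is a single CLI call once the instance is set up (the instance ID and image name below are placeholders):

# create an AMI from the configured instance; drop --no-reboot for a fully consistent image
aws ec2 create-image --instance-id i-0123456789abcdef0 --name wordpress-golden --no-reboot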
Either that, or have your dev team bake AMIs from a dev environment so you can set up an ASG in your production environment with those.
I got to a point where I believe trying to set up WordPress for a non-dev team (meaning, they can install/upload plugins and themes from the browser) in an ASG is just madness without support from a dev team (baking AMIs, updating stacks). You could, of course, automate all of this. You could, of course, develop a full new site using anything else. Your call.
/rant
I have an app running on my own DigitalOcean VM that I'm playing around with to figure out how to run a Meteor production server. I deployed it with meteor build, but now I'm a bit unsure about how to push updates. If I build a new tarball on my own machine, I will lose file references that my users have made to files in bundle/uploads, because the remote filesystem isn't incorporated into my local project. I can imagine some hacky ways to work around this, but besides hosting the files on S3 or another third-party server, is there any way to "hot code push" into the deployed app without needing to move files around on my server?
Am I crazy for wondering what the Meteor equivalent of git push/pull is in production, or just ignorant?
You can use dokku (https://github.com/progrium/dokku). DigitalOcean allows you to create an instance pre-installed with dokku too.
Once you've set up your SSH keys and set the environment variables ROOT_URL, PORT, and MONGO_URL, you can add that server as a Git remote and simply git push to it.
Dokku will automatically build up the Meteor app and have it running, and keep it up to date whenever you git push.
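For reference, the whole flow is just a handful of commands (the host and app name are placeholders):

# on the dokku host: create the app and set the variables Meteor needs
dokku apps:create myapp
dokku config:set myapp ROOT_URL=https://example.com PORT=80 MONGO_URL=mongodb://localhost/myapp

# on your machine: add the host as a git remote and deploy
git remote add dokku dokku@example.com:myapp
git push dokku master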
I find Dokku very convenient. There are also Flynn and Deis, which can do the same in a multi-tenant environment with many more options.
Just one thing to keep in mind with this: push the maintainers of the repo to keep the Node version in the buildpack up to date. Meteor is a bit overzealous when it comes to using the latest version of Node and refusing older ones.
Meteor does lack a bit in this department. I can't remember where I heard this, but I believe they intend to add this very popular Meteor deployment package to their library. Short of switching to a more compatible host, I'm not aware of any better solutions.
The Setup:
I'm setting up a WordPress-powered application using Elastic Beanstalk on Amazon Web Services. All development is done locally under a MAMP apache2/php5 server environment, with a Git repository controlling the entire application root.
Deployment Workflow:
After committing any code changes (edits, new plugins, etc.) to the repo, the application is deployed using the AWS EB CLI's eb deploy command, which pushes the latest version out to any running EC2 instances managed by Elastic Beanstalk.
My Issue:
Sometimes the code changes aren't exactly syncing up between my development and production environments, and I'm not sure how to overcome this, especially when trying to install and set up plugins like W3 Total Cache or WP Super Cache.
Since my local environment doesn't have things like a memcached server installed, but my production environment does (ElastiCache), I'm unable to save the proper settings file and deploy it for use in my production environment. These plugins won't let me select the needed services because they see them as unavailable...
It seems I can only get W3 Total Cache to work if I install it directly onto a live production environment, which seems like a bad idea.
Given the above:
Am I going about deployments the wrong way?
Should plugins like W3 Total Cache be installed and configured on local development environments and pushed to production environments?
I cannot comment on the issues specific to Elastic Beanstalk, but based on experience I can make a suggestion about the second part of your issue statement:
You are better off running a development environment that mirrors your production environment as closely as possible. I suggest that you convert from MAMP to a VM environment like VirtualBox. You might want to check out puphpet.com for help in getting it set up. It requires some startup effort, but gives you an environment similar to or the same as your production servers. For example, you could run memcached yourself so you could actually test it with W3 Total Cache.
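For instance, assuming a Debian/Ubuntu guest (an assumption; package names vary by distro, and php5-memcached matches the php5-era stack described above):

# install and start a local memcached for W3 Total Cache to talk to
sudo apt-get install -y memcached php5-memcached
sudo service memcached start   # listens on 127.0.0.1:11211 by default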
As for your second question, just installing a plugin in the production environment without testing it beforehand has obvious risks (but then again clients do that all the time). I would prefer to test first. To a certain extent it probably depends on how critical it is if the site experiences downtime or weirdness.
I would suggest you create another environment on Beanstalk.
It's easy, fast, and in your case more reliable than a VM, because it lets you test your deployment process as well.
I usually have three environments for every website, each on its own branch. If your configuration differs between environments (URL and database access, for example), just store your wp-config.php and other config files in S3 (you may not want production passwords in your Git repository); through .ebextensions you can download them into your website automatically.
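As a sketch of that .ebextensions step (the bucket and path are placeholders; this assumes the instance profile can read the bucket):

# .ebextensions/wp-config.config
container_commands:
  01_fetch_wp_config:
    command: aws s3 cp s3://my-config-bucket/production/wp-config.php wp-config.php

container_commands run from the application staging directory before the new version goes live, so the downloaded file ships inside the deployed site.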
I use AWS Beanstalk that way for 16 websites, some of them WordPress. All with autoscaling, and able to handle thousands of simultaneous users.
Don't hesitate to ask me for further details.