I just installed the bitnami/wordpress image using Helm. Is it possible to sync it with Git, so that when I change some files in the Git repository the Kubernetes pods are updated?
I mean updating the WordPress source code, because I'm modifying plugins in the wp-content/plugins directory.
You can use Argo CD or Flux to automate this type of GitOps workflow. Check their documentation; they are pretty powerful and popular tools for GitOps on Kubernetes.
A possible solution is to use git-sync in a sidecar container. It will periodically pull files down from a repository and copy them to a volume.
Here is a sample manifest which uses git-sync to update the content hosted on a simple nginx web server:
https://github.com/nigelpoulton/ps-vols-and-pods/blob/master/Multi-container-Pods/sidecar.yml
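An abbreviated sketch of that pattern is below; the repository URL and mount paths are placeholders, and the git-sync image tag and GIT_SYNC_* variable names should be checked against the git-sync version you use:

apiVersion: v1
kind: Pod
metadata:
  name: git-sync-sidecar
spec:
  volumes:
    - name: content                # shared between the web server and the sidecar
      emptyDir: {}
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: content
          mountPath: /usr/share/nginx      # synced repo ends up at /usr/share/nginx/html
    - name: git-sync               # sidecar that periodically pulls the repository
      image: registry.k8s.io/git-sync/git-sync:v3.6.5
      env:
        - name: GIT_SYNC_REPO
          value: https://github.com/your-org/your-content.git   # placeholder repo
        - name: GIT_SYNC_DEST
          value: html
      volumeMounts:
        - name: content
          mountPath: /tmp/git      # git-sync's default root; it checks out into /tmp/git/html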
One way I managed it (although possibly a rookie way) was through GitHub Actions.
Here's an example of mine
And here are the official docs from Docker on configuring this with GitHub Actions.
You basically want to tell GitHub Actions to rebuild and push your image, and then tell your cluster to refresh, like so:
If you're using kubectl to manage your cluster, check whether your version supports kubectl rollout restart. You can use it to force any deployment to restart and smoothly recreate your pods (it also re-pulls the supporting image).
e.g.: kubectl rollout restart deployment/my-deployment
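Roughly, such a workflow could look like the sketch below; the registry, image name, secrets and deployment name are placeholders, and it assumes the runner already has credentials for your registry and cluster:

name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Rebuild and push the image (registry, credentials and image name are placeholders)
      - name: Build and push
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          docker build -t your-registry/your-wordpress:latest .
          docker push your-registry/your-wordpress:latest
      # Tell the cluster to recreate the pods (assumes kubectl is already configured)
      - name: Restart deployment
        run: kubectl rollout restart deployment/my-deployment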
I'm trying to get an installation of WordPress running in Kubernetes, as well as have the option of running the same configuration locally in Minikube. I want to use the standard Docker image of WordPress: https://hub.docker.com/_/wordpress/.
I'm having trouble with making sure that the plugins and templates stay in sync, though. The Docker container exposes a volume at /var/www/html. The WordPress installation, as well as my plugins, will live there.
Assuming I do the development on Minikube, along with the installation of plugins etc., how do I handle moving the Persistent Volumes between my local cluster and the target cluster? Should I just reinstall WordPress every time the Pod is scaled?
You can follow the Writing Portable Configuration guide (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#writing-portable-configuration) for persistent volumes if you are planning to migrate them to a different cluster.
In a real production scenario you would want to use a standard tool to back up and migrate persistent volumes between clusters. Velero is one such tool that enables you to achieve that.
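A minimal sketch of that workflow with Velero, assuming it is already installed in both clusters and pointed at shared object storage (the namespace and backup name are placeholders):

$ velero backup create wordpress-backup --include-namespaces wordpress
$ # switch your kubectl context to the target cluster, then:
$ velero restore create --from-backup wordpress-backup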
I'm new to Git and I'm working with WordPress themes.
I was always using an FTP client to push every small change to my remote server... I mean, sometimes it was just one line of code to check a CSS change. It was easy and nice, but there is always the problem of reverting changes, and since I'm learning Git, I want to change that.
I've found two ways to do it:
git-ftp
I've tried to connect my local repository to GitHub, with the intention of automatically pulling changes to my remote server from GitHub (it's not working yet, I need to configure it better).
BUT do I have to commit every single small change? Because I can't just save a file and check the changes with Browsersync on a second monitor; I would have to commit so many times. Also, which way will be better for me? Maybe there are other, better ways?
I really want to improve my workflow, but it looks like that's not easy, or I'm doing something wrong? I know things like WP-CLI, webpack and gulp exist, but I often create small websites, and I would probably spend more time configuring those tools than creating the theme. I also thought about working on localhost, but I really think I'm overcomplicating things and my job.
Really sorry if this is the wrong section, but I'm new on Stack Overflow - hey! I will be really thankful if you can help me, because I think I need the knowledge of someone experienced.
I'm not sure I'll be helpful, but I'll try:
First, even for a small project, I always prefer to install a local environment for testing. It avoids risks on your remote server!
You can take a look here: https://make.wordpress.org/core/handbook/tutorials/installing-a-local-server/
Then, if you have SSH access to your server, maybe you can configure it to push directly from your local environment to your remote server. Here is a simple tutorial: https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
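The general idea in that kind of setup is usually a bare repository on the server plus a post-receive hook that checks the pushed code out into the web root. A rough sketch, where the paths, user and server names are placeholders:

On the server:
$ git init --bare ~/site.git
$ cat > ~/site.git/hooks/post-receive <<'EOF'
#!/bin/sh
# check the pushed code out into the live web root (path is a placeholder)
git --work-tree=/var/www/html --git-dir=$HOME/site.git checkout -f
EOF
$ chmod +x ~/site.git/hooks/post-receive

On your machine:
$ git remote add live user@your-server:site.git
$ git push live master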
It depends on what remote system or VPS you are using.
It could be GCP, AWS, DigitalOcean, or WordPress's own hosting.
It looks like you are using WordPress hosting for your website.
If so, you might use WP-CLI to log in to the server.
① As for the frequent testing and updating, it is a good idea to copy the remote project to your localhost. Run your web app using WampServer, create a new repository on GitHub, and connect it with your local folder.
Then you could use Git to version-control your code: pull and push, stash, or whatever.
And after testing, you could periodically upload the specific files or folders to the remote server via FTP or SFTP.
② Another way is to install Git Bash or the Git software on your server side.
It depends on the OS you are using, whether it is Windows or Linux.
$ add-apt-repository ppa:git-core/ppa
$ apt update; apt install git
and create a new user and add it to the sudo group,
then create a repository on your server side and link it to the GitHub remote repository, roughly as sketched below.
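The user name, paths and repository URL in this sketch are placeholders, and the exact commands depend on your distribution:

$ adduser deployer                      # create the new user
$ usermod -aG sudo deployer             # add it to the sudo group
$ su - deployer
$ cd /var/www/html
$ git init                              # or git clone your GitHub repository here instead
$ git remote add origin https://github.com/you/your-theme.git   # placeholder URL
$ git pull origin master                # pull whatever you pushed to GitHub from your machine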
I am not sure whether the second way would work.
I recommend you try the first method.
Hope this helps. Happy coding.
Suppose we already have:
Existing infrastructure with a few instances behind a load balancer
Existing GitHub account and an application already deployed on the instances
How can I achieve the following using AWS CodeDeploy?
We have multiple commits since the last pull to the production servers; how can we do a git pull on multiple production instances using CodeDeploy?
Pull out an instance from load balancer
git pull
restart/reload server
Add the instance to the load balancer again
Kindly suggest.
Thanks in advance.
CodeDeploy gives you the option of deploying an application directly from GitHub.
If you need to build your code before deploying, or you are not willing to introduce an appspec file into the GitHub repo, you can create a separate deployable bundle and put all the commands to pull/build, attach/detach from the load balancer, etc., in the hook scripts.
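For reference, such a bundle usually contains an appspec.yml that points at those hook scripts. A minimal sketch, where the destination path and script names are placeholders:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app                 # placeholder install path
hooks:
  ApplicationStop:
    - location: scripts/deregister_from_elb.sh   # pull the instance out of the load balancer
      timeout: 300
  AfterInstall:
    - location: scripts/restart_server.sh        # restart/reload the application server
      timeout: 300
  ApplicationStart:
    - location: scripts/register_with_elb.sh     # add the instance back to the load balancer
      timeout: 300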
In case you are using AWS Elastic Load Balancing, we have some sample scripts you can borrow from:
I hope this helps you get set up.
Thanks,
Amartya Datta Gupta
I currently have a pipeline that I use to build reports in R and publish them with Jekyll. I keep my files under version control on GitHub, and that's been working great so far.
Recently I began thinking about how I might take R, Ruby and Jekyll and build a docker image that any of my coworkers could download and run the same report without having all of the packages and gems set up on their computer. I looked at Docker Hub and found that the automated builds for git commits were a very interesting feature.
I want to build an image that I could use to run this configuration, keep it under version control as well, and keep it up to date on Docker Hub. How does something like this work?
If I just kept my current setup, I could add a Dockerfile to my repo and Docker Hub would build my image for me; I just think it would be interesting to run my work on the same image.
Any thoughts on how a pipeline like this may work?
The Docker Hub build service should work (https://docs.docker.com/docker-hub/builds/). You can also consider using GitLab CI or Travis CI (GitLab will be useful for private projects; it also provides a private Docker registry).
You should have two Dockerfiles: one with all the dependencies, and a second, very minimalistic one for the reports (builds will be much faster). Something like:
FROM base_image:0.1
COPY . /reports
WORKDIR /reports
# placeholder: replace with your actual Jekyll build command
RUN replace-with-required-jekyll-magic
The Dockerfile above should be in your reports repository.
In a second repository you can create the base image with all the tools, plus nginx or something similar for serving static files. Make sure that the nginx www root is set to /reports. If you need to update the tools, just update the base_image tag in the Dockerfile for the reports.
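A minimal sketch of what that base image could look like, assuming a Debian-based Ruby image; the package list, gem choice and nginx config path are assumptions you would adjust to your actual toolchain:

FROM ruby:2.6
# R for building the reports, nginx for serving the static output
RUN apt-get update && apt-get install -y r-base nginx && rm -rf /var/lib/apt/lists/*
# Jekyll (and any other gems your reports need)
RUN gem install jekyll
# point nginx's www root at /reports (the default site config path may differ)
RUN sed -i 's|root /var/www/html;|root /reports;|' /etc/nginx/sites-available/default
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]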
I have an app running on my own DigitalOcean VM that I'm trying to play around with to figure out how to run a Meteor production server. I deployed it with meteor build, but now I'm a bit unsure about how to push updates. If I build a new tarball on my own machine, I will lose file references that my users have made to files in bundle/uploads, because the remote filesystem isn't incorporated into my local project. I can imagine some hacky ways to work around this, but besides hosting the files on S3 or another third-party server, is there any way to "hot code push" into the deployed app without needing to move files around on my server?
Am I crazy for wondering what the meteor equivalent of git push/pull is in production, or just ignorant?
You can use Dokku (https://github.com/progrium/dokku). DigitalOcean also allows you to create an instance pre-installed with Dokku.
Once you've set up your SSH keys and set the environment variables ROOT_URL, PORT and MONGO_URL, you can add that server as a Git remote and simply git push to it.
Dokku will automatically build up the Meteor app and have it running, and keep it up to date whenever you git push.
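Roughly, the flow looks like the sketch below; the app name, server address and URLs are placeholders:

On the Dokku server:
$ dokku apps:create my-meteor-app
$ dokku config:set my-meteor-app ROOT_URL=https://example.com PORT=80 MONGO_URL=mongodb://...

On your machine:
$ git remote add dokku dokku@your-server:my-meteor-app
$ git push dokku master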
I find Dokku very convenient. There are also Flynn and Deis, which are able to do the same in a multi-tenant environment with many more options.
Just one thing to keep in mind with this: push the people who own the repo to keep the Node version in the buildpack up to date. Meteor is a bit overzealous when it comes to requiring the latest version of Node and refusing older versions.
Meteor does lack a bit in this department. I can't remember where I may have heard this, but I believe they intend to add this very popular Meteor deployment package to their library. Short of switching to a more compatible host, I'm not aware of any better solutions.