So I'm trying to deploy the same app on multiple dokku server instances.
So far I just found the standard way of adding the dokku remote like so:
git remote add dokku dokku@dokku.me:ruby-getting-started
I do not think I can add another remote like:
git remote add server2 dokku@dokku2.me:ruby-getting-started
Right?
So how would I approach this?
Thanks in advance!
Related
I just installed the bitnami/wordpress image using Helm. Is it possible to sync it with git, so that when I change some files in that git repository the Kubernetes pods get updated?
I mean updating the WordPress source code, because I'm modifying plugins in the wp-content/plugins dir.
You can use ArgoCD or Flux to automate this type of GitOps workflow. Check their documentation. They are pretty powerful and popular for GitOps on Kubernetes.
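To give a rough idea of what that looks like with Argo CD, an Application resource that watches a git repository and keeps the cluster in sync might be sketched like this (the repo URL, paths and names below are placeholders, not taken from your setup):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wordpress
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/wordpress-config.git   # placeholder repository
    targetRevision: main
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: wordpress
  syncPolicy:
    automated:        # auto-sync: apply changes as soon as they land in git
      prune: true
      selfHeal: true

Note that Argo CD syncs Kubernetes manifests from the repository; for the plugin files inside wp-content you would still need something that gets files into the running pod, for example a git-sync sidecar.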
A possible solution is to use git-sync in a sidecar container. It will periodically pull files down from a repository and copy them to a volume.
Here is a sample manifest which uses git-sync to update the content hosted on a simple nginx web server:
https://github.com/nigelpoulton/ps-vols-and-pods/blob/master/Multi-container-Pods/sidecar.yml
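In that spirit, a minimal sketch of an nginx pod with a git-sync sidecar sharing a volume might look like this (flag names differ between git-sync versions, and the image tag and repository URL are only examples):

apiVersion: v1
kind: Pod
metadata:
  name: git-sync-demo
spec:
  volumes:
    - name: html
      emptyDir: {}
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
    - name: git-sync                                   # sidecar that keeps the volume in sync
      image: registry.k8s.io/git-sync/git-sync:v4.2.3  # example tag, pick a current one
      args:
        - --repo=https://github.com/your-org/your-content.git   # placeholder repository
        - --ref=main
        - --period=60s
        - --root=/tmp/git
      volumeMounts:
        - name: html
          mountPath: /tmp/git

git-sync checks the repository out into a subdirectory (with a symlink to the current revision) under its --root, so the web server's document root usually has to point at that subdirectory rather than the volume root.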
One way I managed it (although possibly a rookie way) was through GitHub Actions.
Here's an example of mine
And here are the official docs from Docker on configuring it with GitHub Actions.
You basically want to tell GitHub Actions to rebuild and push your image and then tell your cluster to refresh like so:
If you're using kubectl to manage your cluster check if your version supports kubectl rollout restart. You can use it to force any deployment to restart and smoothly recreate your pods (it also re-pulls the supporting image).
e.g.: kubectl rollout restart deployment/my_deployment
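Putting the two halves together, a stripped-down workflow might look roughly like this; the registry credentials, image name, deployment name and the way the kubeconfig reaches the runner are all placeholders you would adapt (docker/login-action and docker/build-push-action are the official actions the Docker docs describe):

name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: your-user/your-app:latest              # placeholder image name
      # assumes the cluster's kubeconfig is stored as a repository secret
      - run: |
          echo "${{ secrets.KUBECONFIG }}" > kubeconfig
          kubectl --kubeconfig kubeconfig rollout restart deployment/my_deployment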
I am trying to find a guide or documentation that discusses best practices for setting up GitLab CI/CD to auto-deploy a web server (nginx) on CentOS or any Linux. Setting up the CI/CD as the root user is easy, but I don't like the idea of having a root key in GitLab.
If I create a 'gitlab' user and assign it to the same group as nginx, I am stuck because I can't chown -R nginx the folders and files once everything deploys. So what are my options here? I suppose I could add the SSH key as the nginx user, but that seems odd.
Are there any decent ways to do this?
Ideally, you would:
connect as nginx directly to perform the installation
not manage the private/public key through GitLab, but through a deployment tool like Ansible (see "How to use GitLab and Ansible to create infrastructure as code")
That way there is no chown to do, and the keys are managed in Ansible, which knows how to connect to the target machines.
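As a rough sketch of that idea, a small Ansible play that connects as the nginx user and copies the built site over could look like this (the host group, source path and document root are assumptions, not from your setup):

- hosts: webservers
  remote_user: nginx                  # connect directly as the service user, so files end up owned by nginx
  tasks:
    - name: Deploy the built site
      ansible.builtin.copy:
        src: ./public/                # placeholder: build output on the runner
        dest: /usr/share/nginx/html/

Because the connection already runs as nginx, the deployed files get the right owner without any chown step.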
Is there a way to connect to a Caché instance (csession) remotely?
Let's say InterSystems Caché is running in a container, and I want to use csession on the remote server from my local machine. Is there a way (without direct ssh) to reach the Caché instance?
I'm looking for an alternative way of these steps:
1- scp the cache script into the box
2- ssh into the box
3- run csession on the box
Any comments are really appreciated
You could use telnet (encrypted), but this wouldn't allow you to load scripts that are local to your machine.
One way would be to have your scripts in a git repository and add the loading of them into your instance as a post-receive hook.
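A minimal sketch of such a hook, assuming the repository lives on the same box as the instance; the instance name, paths and loader class below are hypothetical, and the exact csession invocation depends on your version and security setup:

#!/bin/sh
# post-receive hook on the server that hosts the Caché instance
WORK_TREE=/opt/cache-scripts
GIT_DIR=/opt/repos/cache-scripts.git

# check the pushed files out into a working directory
git --work-tree="$WORK_TREE" --git-dir="$GIT_DIR" checkout -f

# hand them to the instance; "CACHE" and MyApp.Loader are placeholders
csession CACHE "##class(MyApp.Loader).LoadDir(\"$WORK_TREE\")"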
You might consider using https://intersystems-ru.github.io/webterminal/.
That is "web-based terminal for InterSystems Caché".
Suppose we already have:
Existing infrastructure with a few instances behind a load balancer
An existing GitHub account and an application already deployed on the instances
How can I achieve the following using AWS CodeDeploy?
We have multiple commits since the last pull to the production servers; how can we achieve a git pull on multiple production instances using CodeDeploy?
Pull an instance out of the load balancer
git pull
restart/reload the server
Add the instance back to the load balancer
Kindly suggest.
Thanks in advance.
CodeDeploy gives you the option of deploying an application directly from GitHub.
If you need to build your code before deploying, or you are not willing to introduce an appspec file into the GitHub repo, you can create a different deployable bundle and put all the commands to pull / build, attach / detach from the load balancer, etc. in the hook scripts.
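For orientation, the bundle's appspec.yml might look roughly like this, with the hook scripts (names here are placeholders) doing the deregister, restart and re-register work:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app                   # placeholder path on the instance
hooks:
  BeforeInstall:
    - location: scripts/deregister_from_elb.sh     # take the instance out of the load balancer
      timeout: 300
  ApplicationStart:
    - location: scripts/restart_server.sh          # restart/reload the app server
      timeout: 300
  ValidateService:
    - location: scripts/register_with_elb.sh       # put the instance back behind the load balancer
      timeout: 300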
In case you are using AWS Elastic Load Balancing, we have some sample scripts you can borrow from:
I hope this helps you get set up.
Thanks,
Amartya Datta Gupta
Is there a way to run a command on the Puppet master that will bring the changes to the hosts right away?
I have used various scripts with crontab to schedule my tasks, and the changes apply whenever I have specified, but I am trying to learn whether there is a way I can just run the command on the Puppet master and, boom, the changes land on the hosts (clients) immediately.
Let's say I want to change a password in the "config.properties" file on my 5 hosts. What would be the best way to do it from the master without scheduling?
At work we
push git-versioned control repos to a puppetmaster using a Jenkins job,
then we use r10k to retrieve the control repo's Puppet module & Hiera dependencies,
then we connect remotely over SSH to each node we want to update and run the relevant "puppet apply" command
It works smoothly.
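The last step is essentially a loop of this kind; the host names and manifest path are placeholders for whatever fits your setup:

# placeholder host names and manifest path
for host in web01 web02; do
  ssh "$host" 'sudo puppet apply /etc/puppetlabs/code/environments/production/manifests/site.pp'
done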
I've solved that problem in my infrastructure by using Puppet for Configuration Management and SaltStack for orchestration.
I have the Puppet Agent apply a SaltStack module to automatically configure each node as a minion of my Salt Master (which is also my Puppet Master), then I just SSH into my master server and tell SaltStack to run the Puppet Agent on nodes that match my criteria.
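From the master that boils down to a one-liner along these lines (the target expression is just an example, and Salt also ships a puppet execution module if you prefer puppet.run over cmd.run):

# run the Puppet agent on every minion whose id starts with "web"
salt 'web*' cmd.run 'puppet agent --test'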
There are several SaltStack modules on the Puppet Forge.
You could certainly use other tools, such as RunDeck, or even Puppet's own MCollective, but I personally found them to be more complicated to work with.