I tried to add:
mypack:
  pkg:
    - installed
    - pkgs:
      - mercurial
      - git
  cmd.run:
    - name: 'mkdir -p /opt/mypack'
  cmd.run: 'hg pull -u -R /opt/mypack || hg clone -R /opt https://...'
  cmd.run: 'ln -s /opt/mypack/etc/init.d/xxx /etc/init.d/xxx'
But for some reason the state seems to execute (the packages install), yet the commands are not executed, or at least not all of them.
I need a solution to run multiple commands and to fail the deployment if any of these fails.
I know that I could write a bash script and include this bash script, but I was looking for a solution that would work with only the YAML file.
You want this:
cmd-test:
  cmd.run:
    - name: |
        mkdir /tmp/foo
        chown dan /tmp/foo
        chgrp www-data /tmp/foo
        chmod 2751 /tmp/foo
        touch /tmp/foo/bar
Or this, which I would prefer, where the script is downloaded from the master:
cmd-test:
  cmd.script:
    - source: salt://foo/bar.sh
    - cwd: /where/to/run
    - user: fred
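For the original question, where any failing command should fail the deployment, the downloaded script can enable strict error handling. A minimal sketch of what salt://foo/bar.sh might contain (the commands are the ones from the question; the repository URL is left elided as in the original):

#!/usr/bin/env bash
# Abort on the first failing command so the cmd.script state is marked as failed.
set -euo pipefail

mkdir -p /opt/mypack
hg pull -u -R /opt/mypack || hg clone -R /opt https://...
ln -sfn /opt/mypack/etc/init.d/xxx /etc/init.d/xxx   # -f so reruns do not fail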
In addition to the above (better) suggestions, you can do this:
cmd-test:
  cmd.run:
    - names:
      - mkdir -p /opt/mypack
      - hg pull -u -R /opt/mypack || hg clone -R /opt https://...
      - ln -s /opt/mypack/etc/init.d/xxx /etc/init.d/xxx
For reasons I don't understand yet (I'm a Salt novice), the names are iterated in reverse order, so the commands are executed backwards.
You can do as Dan pointed out, using the pipe or a cmd.script state. But it should be noted that you have some syntax problems in your original post. Each new state needs a name arg; you can't just put the command after the colon:
mypack:
  pkg:
    - installed
    - pkgs:
      - mercurial
      - git
  cmd.run:
    - name: 'my first command'
  cmd.run:
    - name: 'my second command'
However, that actually may fail as well, because I don't think you can put multiple of the same state underneath a single ID. So you may have to split them out like this:
first:
  cmd.run:
    - name: 'my first command'

second:
  cmd.run:
    - name: 'my second command'
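If the order between the two split-out states matters, a require on the second one enforces it. A minimal sketch building on the states above:

first:
  cmd.run:
    - name: 'my first command'

second:
  cmd.run:
    - name: 'my second command'
    - require:
      - cmd: first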
As one of the users pointed out above, this works in the proper order (Salt 3000.2):
install_borg:
  cmd.run:
    - names:
      - cd /tmp
      - wget https://github.com/borgbackup/borg/releases/download/1.1.15/borg-linux64
      - mv borg-linux64 /usr/local/bin/borg
      - chmod u+x /usr/local/bin/borg
      - chown root:root /usr/local/bin/borg
      - ln -s /usr/local/bin/borg /usr/bin/borg
    - unless: test -f /usr/bin/borg
Apologies in advance as I'm not very confident writing GitLab pipelines. I have a pair of public and private keys, encrypted and committed to the GitLab repo. I have introduced a new stage into my pipeline in order to decrypt the keys and deploy.
decryption:
  stage: decryption
  allow_failure: false
  before_script:
    - mkdir -p ~/.ssh
    - eval $(ssh-agent -s)
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - chmod 660 ./keys/vault_password.txt
    - echo $ANSIBLE_VAULT_PASSWORD > ./keys/vault_password.txt
    - chmod 660 ./keys/private.key
    - chmod 660 ./keys/public.key
    - ansible-vault decrypt --vault-password-file ./keys/vault_password.txt ./keys/private.key
    - ansible-vault decrypt --vault-password-file ./keys/vault_password.txt ./keys/public.key
    - echo "$(cat ./keys/private.key)"
    - echo "$(cat ./keys/public.key)"
  artifacts:
    untracked: true
My next stage is build.
build:
  stage: build
  allow_failure: false
  dependencies:
    - decryption
  script:
    - rm -rf vendor/drupal/coder
    - composer install
    - ./vendor/bin/robo ci:build
    - ls -la vendor/drupal/coder
    - echo "$(cat ./keys/private.key)"
    - echo "$(cat ./keys/public.key)"
  artifacts:
    name: "mycompany_build_{$CI_COMMIT_SHA}"
    expire_in: '1 week'
    paths:
      - ./build
When I echo the keys in the decryption stage I can see the decrypted keys. But when I access the keys in the build stage like below, it shows me the encrypted files. I'm just trying to see whether I can access the decrypted files at the build stage, so that I can then pass these keys on to be deployed. So clearly something is not correct with the pipeline.
- echo "$(cat ./keys/private.key)"
- echo "$(cat ./keys/public.key)"
Maybe the way I have written my pipeline needs to be changed in order to pass the changed (untracked) public.key and private.key into the build stage, and possibly to the deploy stage as well.
Could someone please point me in the right direction on this? Do I have to change something in the artifacts? How can I do that? Thanks in advance.
I don't know too much about GitLab CI, but I think you are not referencing the decrypted file properly. In the decryption step you should save the decrypted value and then use it in the build step; the way you are doing it now, the build step references the file itself, and the file is not decrypted there. Decrypt in the decryption step and save the decrypted value to use later.
I'm not sure if this will work, but maybe you can get the idea:
Decrypt:
decryption:
  stage: decryption
  allow_failure: false
  before_script:
    - mkdir -p ~/.ssh
    - eval $(ssh-agent -s)
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - chmod 660 ./keys/vault_password.txt
    - echo $ANSIBLE_VAULT_PASSWORD > ./keys/vault_password.txt
    - chmod 660 ./keys/private.key
    - chmod 660 ./keys/public.key
    - ansible-vault decrypt --vault-password-file ./keys/vault_password.txt ./keys/private.key
    - ansible-vault decrypt --vault-password-file ./keys/vault_password.txt ./keys/public.key
    - echo "private_key_value=$(cat ./keys/private.key)"
    - echo "public_key_value=$(cat ./keys/public.key)"
  artifacts:
    untracked: true
And then the build step:
build:
  stage: build
  allow_failure: false
  dependencies:
    - decryption
  script:
    - rm -rf vendor/drupal/coder
    - composer install
    - ./vendor/bin/robo ci:build
    - ls -la vendor/drupal/coder
    - echo $private_key_value
    - echo $public_key_value
  artifacts:
    name: "mycompany_build_{$CI_COMMIT_SHA}"
    expire_in: '1 week'
    paths:
      - ./build
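Note that a value echoed in one job does not automatically become a variable in a later job. If the goal is simply to have the decrypted files available in the build stage, one possible approach (a sketch, assuming the keys stay under ./keys) is to declare them explicitly as artifact paths in the decryption job, so the build job downloads them over its fresh checkout:

decryption:
  # ... same stage, before_script and script as above ...
  artifacts:
    paths:
      - keys/private.key
      - keys/public.key

Keep in mind that artifacts are stored by GitLab and can be downloaded by anyone with access to the project, so exposing decrypted private keys this way is a security trade-off.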
After a long search I did not find the answer.
There is an Ansible playbook:
- name: myscript
  hosts: myhost
  tasks:
    - name: myscript
      docker_container:
        name: myscript
        image: myimage
        detach: false
        working_dir: "/opt/R/project"
        command: Rscript $(find ./*_Modules -iname *_Script.R)
This command works: Rscript ./01_Modules/02_Script.R
This command does NOT work: Rscript $(find ./*_Modules -iname *_Script.R). It treats $(find not as a command substitution, but as a path.
At the same time, this line runs successfully in Linux and finds the script.
How do I pass full-fledged Linux commands, with && and similar shell features, to command?
Here is a simplified version of your problem:
- name: Create a test container
  docker_container:
    name: test
    image: busybox
    command: ls |grep var && echo 'it doesn\'t work!'
Output:
ls: |grep: No such file or directory
ls: &&: No such file or directory
ls: echo: No such file or directory
ls: it fails: No such file or directory
var:
spool
www
If I wrap it in quotes and use /bin/sh -c:
- name: Create a test container
  docker_container:
    name: test
    image: busybox
    command: /bin/sh -c "ls |grep var && echo 'it works!'"
Output:
var
it works!
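Applied to the playbook from the question, the same wrapping would look something like this (a sketch; the names and paths are the ones from the question):

- name: myscript
  hosts: myhost
  tasks:
    - name: myscript
      docker_container:
        name: myscript
        image: myimage
        detach: false
        working_dir: "/opt/R/project"
        # Run through a shell so the $(find ...) substitution is evaluated inside the container.
        command: /bin/sh -c "Rscript $(find ./*_Modules -iname '*_Script.R')"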
So I'm having a problem setting up a WordPress site on EB. I got the EFS to mount correctly on wp-content/uploads/wpfiles (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-hawordpress-tutorial.html), however this only allows the pages to be stored and not the plugins. Is it possible to mount the entire wp-content folder onto EFS? I've tried and so far failed.
I'm not sure if this issue was resolved and just passed silently. I'm having the same issue as you, but with a different error. My knowledge is fairly limited, so take what I say with a grain of salt: according to what I saw in your log, the problem is that your instance can't see the server. I think your EB application may be getting deployed in a different Availability Zone than your EFS. What I mean is that maybe you have mount targets for AZs a, b and d, and your EB is getting deployed in AZ c. I hope this helps.
I tried a different approach (it basically does the same thing, but I'm manually linking each of the subfolders instead of the wp-content folder). For it to work I deleted the original folders inside /var/app/ondeck (which eventually gets copied to /var/app/current/, the folder that is served). Of course, once this is done your WordPress won't work since it doesn't have any themes; the solution here is to quickly log in to the EC2 instance on which your Elastic Beanstalk app is running and manually copy the contents to the mounted EFS (in my case the /wpfiles folder). To connect to the EC2 instance (you can find the instance ID under your EB health configuration) you can follow this link, and to mount your EFS you can follow this link. Of course, if the config works you won't have to mount it, since it will already be mounted, though empty. Here is the content of my config file:
option_settings:
  aws:elasticbeanstalk:application:environment:
    EFS_NAME: '`{"Ref" : "FileSystem"}`'
    MOUNT_DIRECTORY: '/wpfiles'
    REGION: '`{"Ref": "AWS::Region"}`'

packages:
  yum:
    nfs-utils: []
    jq: []

files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      # Resolve the environment settings first, then create the mount point.
      EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.REGION')
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      MOUNT_DIRECTORY=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.MOUNT_DIRECTORY')
      mkdir -p $MOUNT_DIRECTORY
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME.efs.${EFS_REGION}.amazonaws.com:/ $MOUNT_DIRECTORY || true
      mkdir -p $MOUNT_DIRECTORY/uploads
      mkdir -p $MOUNT_DIRECTORY/plugins
      mkdir -p $MOUNT_DIRECTORY/themes
      chown webapp:webapp -R $MOUNT_DIRECTORY/uploads
      chown webapp:webapp -R $MOUNT_DIRECTORY/plugins
      chown webapp:webapp -R $MOUNT_DIRECTORY/themes

commands:
  01_mount:
    command: "/tmp/mount-efs.sh"

container_commands:
  01-rm-wp-content-uploads:
    command: rm -rf /var/app/ondeck/wp-content/uploads && rm -rf /var/app/ondeck/wp-content/plugins && rm -rf /var/app/ondeck/wp-content/themes
  02-symlink-uploads:
    command: ln -snf $MOUNT_DIRECTORY/uploads /var/app/ondeck/wp-content/uploads && ln -snf $MOUNT_DIRECTORY/plugins /var/app/ondeck/wp-content/plugins && ln -snf $MOUNT_DIRECTORY/themes /var/app/ondeck/wp-content/themes
I'm using another config file to create my EFS, as in here. In case you have already created your EFS, you must change EFS_NAME: '`{"Ref" : "FileSystem"}`' to EFS_NAME: id_of_your_EFS.
I hope this helps user3738338.
You can do it following this link: https://github.com/aws-samples/eb-php-wordpress/blob/master/.ebextensions/efs-mount.config
Just note that it uses uploads; you can change it to wp-content, as sketched below.
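A hedged sketch of that change, assuming an EFS mount point exposed through a MOUNT_DIRECTORY variable like the one in the config shown earlier in this thread:

container_commands:
  01-rm-wp-content:
    command: rm -rf /var/app/ondeck/wp-content
  02-symlink-wp-content:
    command: ln -snf $MOUNT_DIRECTORY /var/app/ondeck/wp-content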
I have a question about SaltStack variables.
I want to set a folder name, something like:
{% set exim4_folder = salt['cmd.run']('ls /tmp | grep exim4') %}
but the folder I am trying to read is not available until the state below has run:
download_source_code:
  cmd.run:
    - cwd: /tmp
    - names:
      - apt-get -y source exim4
      - apt-get -y build-dep exim4
Is there a way to tell salt to run that assignment after I run "download_source_code"?
The problem you're going to run into here is that all the Jinja sections of your SLS file are evaluated before any of the YAML Salt states are evaluated.
So your 'ls /tmp | grep exim4' will always be executed before your download_source_code state is executed.
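One way around this (a sketch, not part of the original answer; the state ID is made up) is to move the lookup out of Jinja and into the command itself, so it runs on the minion at execution time, after download_source_code:

build_exim4:
  cmd.run:
    - cwd: /tmp
    - name: |
        # Resolve the folder at run time, once download_source_code has created it.
        exim4_folder=$(ls /tmp | grep exim4 | head -n 1)
        echo "Using source folder: /tmp/$exim4_folder"
    - require:
      - cmd: download_source_code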
I'm getting my feet wet with SaltStack. I've made my first state (a Vim installer with a static configuration) and I'm working on my second one.
Unfortunately, there isn't an Ubuntu package for the application I'd like my state to install. I will have to build the application myself. Is there a "best practice" for doing "configure-make-install" type installations with Salt? Or should I just use cmd?
In particular, if I was doing it by hand, I would do something along the lines of:
wget -c http://example.com/foo-3.4.3.tar.gz
tar xzf foo-3.4.3.tar.gz
cd foo-3.4.3
./configure --prefix=$PREFIX && make && make install
There are state modules to abstract the first two lines, if you wish.
file.managed: http://docs.saltstack.com/ref/states/all/salt.states.file.html
archive.extracted: http://docs.saltstack.com/ref/states/all/salt.states.archive.html
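A minimal sketch of what that could look like for the tarball from the question (the source_hash is a placeholder to fill in, and the exact options vary slightly between Salt versions):

extract-foo:
  archive.extracted:
    - name: /tmp
    - source: http://example.com/foo-3.4.3.tar.gz
    - source_hash: sha256=<checksum of the tarball>
    - archive_format: tar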
But you could also just run the commands on the target minion(s).
install-foo:
  cmd.run:
    - name: |
        cd /tmp
        wget -c http://example.com/foo-3.4.3.tar.gz
        tar xzf foo-3.4.3.tar.gz
        cd foo-3.4.3
        ./configure --prefix=/usr/local
        make
        make install
    - cwd: /tmp
    - shell: /bin/bash
    - timeout: 300
    - unless: test -x /usr/local/bin/foo
Just make sure to include an unless argument to make the script idempotent.
Alternatively, distribute a bash script to the minion and execute. See:
How can I execute multiple commands using Salt Stack?
As for best practice? I would recommend using fpm to create a .deb or .rpm package and install that. At the very least, copy that tarball to the salt master and don't rely on external resources to be there three years from now.
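For illustration, a possible fpm invocation after staging the install into a scratch directory (a sketch; it assumes fpm is available and that the project's Makefile honours DESTDIR):

# Build, stage into a throwaway root, then package that tree as a .deb.
cd /tmp/foo-3.4.3
./configure --prefix=/usr/local && make
make install DESTDIR=/tmp/foo-root
fpm -s dir -t deb -n foo -v 3.4.3 -C /tmp/foo-root .

The resulting package can then be copied to the Salt master and installed with a pkg.installed state via its sources argument.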
Let's assume foo-3.4.3.tar.gz is checked into GitHub. Here is an approach that you might pursue in your state file:
git:
  pkg.installed

https://github.com/nomen/foo.git:
  git.latest:
    - rev: master
    - target: /tmp/foo
    - user: nomen
    - require:
      - pkg: git

foo_deployed:
  cmd.run:
    - cwd: /tmp/foo
    - user: nomen
    - name: |
        ./configure --prefix=/usr/local
        make
        make install
    - require:
      - git: https://github.com/nomen/foo.git
Your configuration prefix location could be passed as a salt pillar. If the build process is more complicated, you may consider writing a custom state.
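A sketch of how that pillar could be wired into the build state above (the pillar key foo:prefix is just an example name):

{% set prefix = salt['pillar.get']('foo:prefix', '/usr/local') %}

foo_deployed:
  cmd.run:
    - cwd: /tmp/foo
    - user: nomen
    - name: |
        ./configure --prefix={{ prefix }}
        make
        make install
    - require:
      - git: https://github.com/nomen/foo.git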