I need to encrypt a file using ansible-vault. I would like to perform the encryption only if the file is not already encrypted by ansible vault. I am trying to use this task in my Ansible playbook:
- local_action: command ansible-vault encrypt path/to/file
  when: <file is not already encrypted by ansible-vault>
Is there a logic to use in the conditional statement that will check if a file is already encrypted by ansible-vault?
There are likely myriad ways to do it, most having little to do with Ansible or Ansible Vault itself. Here's one:
- local_action: shell head -1 {{ file }} | grep -v -q \$ANSIBLE_VAULT && ansible-vault encrypt {{ file }}
You'll also need --vault-password-file, otherwise Ansible will stop processing and wait at the password prompt.
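Outside a playbook, the same header test can be sketched as a small shell function (the file names below are throwaway examples for illustration):

```shell
# Vault-encrypted files start with a "$ANSIBLE_VAULT;..." header line.
is_vault_encrypted() {
  head -n1 "$1" | grep -q '^\$ANSIBLE_VAULT'
}

# Demo with two throwaway files (hypothetical names and contents):
printf '$ANSIBLE_VAULT;1.1;AES256\n616263\n' > /tmp/vaulted.txt
printf 'plain text\n' > /tmp/plain.txt
is_vault_encrypted /tmp/vaulted.txt && echo "vaulted.txt: encrypted"
is_vault_encrypted /tmp/plain.txt || echo "plain.txt: not encrypted"
```

In a real script you would run ansible-vault encrypt only when the function fails.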
I have a Unix shell script that creates and transfers files from one path to another (either on the same server or to a different one).
Files are then transferred into this folder from either the same or a different server.
I'm unable to identify a method to verify the transfer with an md5 or other checksum from within a script. Usually I take the checksums of the source and destination folders and compare them manually.
Please advise.
In your script you can insert a line like this:
sha1sum <list of files> >files.sha1
to generate a file with the SHA-1 sums. Then you transfer all the files (including the file with the hashes) to the target place, for example:
scp /path/* user@host:location
and then execute (via ssh, for example) this to check the SHA-1 hashes of the files on the target:
ssh user@host "cd location; sha1sum -c files.sha1"
This is all just an example; you should tune it for your environment.
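The flow can be demoed entirely locally (cp stands in for scp, and the directories and file names are made up):

```shell
# Create a source tree with a sha1 manifest, "transfer" it, then verify.
set -e
mkdir -p /tmp/src /tmp/dst
printf 'hello\n' > /tmp/src/a.txt
printf 'world\n' > /tmp/src/b.txt
( cd /tmp/src && sha1sum a.txt b.txt > files.sha1 )  # record hashes
cp /tmp/src/* /tmp/dst/                  # stand-in for scp to the remote host
( cd /tmp/dst && sha1sum -c files.sha1 ) # prints "a.txt: OK" / "b.txt: OK"
```

Note that the manifest uses relative names, so verification must run from inside the destination directory, just as the ssh example above does with cd.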
In Solaris you can use the digest command:
(cd location; digest -a sha1 *) >/directory/hash1
scp /path/* user@host:location
ssh user@host "cd location; digest -a sha1 *" >/directory/hash2
diff /directory/hash1 /directory/hash2
(the last command compares the hashes from the local and remote sides; both listings are generated from inside the directory so the file names match)
I am trying to mount a repository with server config files (think nginx, mysql, etc) inside my salt fileserver in order to be able to distribute these files to my minions (Without having to do a checkout of the full repository on all my minions).
If I've understood correctly, all gitfs_remotes will be 'flattened' into one filesystem structure (I can confirm this when I run salt-run fileserver.file_list).
What worries me is that, as far as I know, this 'config file only' repository is now also being searched by Salt for state modules.
Is there some way to either:
- designate a gitfs mount as 'don't search for SLS files', or
- mount the actual salt state repository (which contains my top.sls and state modules) under a subdirectory of the salt fileserver and point Salt to the top.sls therein?
I am open to the possibility that this is the wrong approach entirely, of course; my only requirement is that the server config files (again: nginx, mysql, etc.) live in a separate repository, and that the entire highstate (state modules, top file) lives in git.
master config:
fileserver_backend:
  - gitfs

gitfs_remotes:
  - git@github.com:MyOrg/salt-configs.git
  - git@github.com:MyOrg/server-config-files.git:
    - mountpoint: config-files
Have you considered storing your configuration file in a pillar?
For example:
HostFiles:
  LinuxBasic: |
    192.168.1.1 server1
    192.168.1.2 server2
And then in your state file, when you want to render the hostfile:
LinuxBasicHostFile:
  file.managed:
    - name: /etc/hosts
    - contents_pillar: HostFiles:LinuxBasic
(contents_pillar takes the colon-separated pillar path as a plain string; no Jinja braces are needed)
You could also GPG that file if it was sensitive using the keys on your Salt master's server:
$ cat nginx.hostfile | sudo gpg --armor --batch --trust-model always --encrypt --homedir <salthomdir> -r <keyname>
Paste the output of that into your pillar:
HostFiles:
  LinuxBasic: |
    -----BEGIN PGP MESSAGE-----
    Xks383...a bunch of encrypted text...BjAs0
    -----END PGP MESSAGE-----
And inform your salt master that HostFiles contains GPG encrypted content in your master.conf, or better yet, in a local conf file in /etc/salt/master.d/decrypt.conf:
decrypt_pillar:
  - 'HostFiles': gpg
I encrypted a bunch of files (certificates) using the following script
for i in $(find . -type f); do ansible-vault encrypt $i --vault-password-file ~/.vault && echo $i encrypted ; done
During rsyncing I run something like this
- name: Copy letsencrypt files
  synchronize:
    src: "{{ path }}/letsencrypt/"
    dest: /etc/letsencrypt/
    rsync_path: "sudo rsync"
    rsync_opts:
      - "--delete"
      - "--checksum"
      - "-a"
  notify:
    - Reload Nginx
The problem I've faced is that the synced files remain encrypted at the destination. I thought Ansible was smart enough to detect that a file is vault-encrypted and decrypt it on the fly, as it does for me here:
- name: Copy deploy private key
  copy:
    content: "{{ private_key_content }}"
    dest: /home/deploy/.ssh/id_rsa
    owner: deploy
    group: deploy
    mode: "0600"
  no_log: true
Back to the earlier question, how do I make sure the files in the folder/files are decrypted before rsyncing?
Edit:
I tried using the copy module, since it is encryption-aware, but it seems to hang. I noticed some issues with the copy module and directories on the Ansible GitHub tracker, so I am back to synchronize.
I also tried the with_fileglob approach but that flattens the directory structure.
Edit 2:
I got encryption and decryption to work with the copy module, but it's horribly slow.
There is already an open issue on the Ansible tracker (https://github.com/ansible/ansible/issues/45161), and the conclusion is:
"Synchronize is a wrapper around rsync; I doubt that you can hook into the process like that. You might want to implement a custom module doing this, or use something which supports it."
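One workaround consistent with that advice, sketched here as an untested assumption (the staging path, vault password file, and directory layout are placeholders): stage a decrypted copy of the tree on the control host, then point synchronize at the staging directory instead of the vaulted originals.

```yaml
# Untested sketch, not a drop-in task; adjust paths to your setup.
- name: Stage a decrypted copy of the tree (runs on the control host)
  delegate_to: localhost
  shell: >
    rsync -a {{ path }}/letsencrypt/ /tmp/le-stage/ &&
    grep -rl '^\$ANSIBLE_VAULT' /tmp/le-stage |
    xargs -r ansible-vault decrypt --vault-password-file ~/.vault

- name: Copy letsencrypt files
  synchronize:
    src: /tmp/le-stage/
    dest: /etc/letsencrypt/
    rsync_path: "sudo rsync"
```

The grep step only feeds files that actually carry the vault header to ansible-vault, so mixed trees of encrypted and plain files are handled.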
I need to execute a script on another minion. The best solution seems to be Peer Publishing, but the only documentation I have been able to find only shows how to do it via CLI.
How can I define the following in a module?
salt-call system.example.com publish.publish '*' cmd.run './script_to_run'
You want the salt.client.Caller() API.
#!/usr/bin/env python
import salt.client

salt_call = salt.client.Caller()
salt_call.function('publish.publish', 'web001',
                   'cmd.run', 'logger "publish.publish success"')
You have to run the above as the salt user (usually root).
Then scoot over to web001 and confirm the message is in /var/log/syslog. Worked for me.
The syntax for the .sls file:
salt-call publish.publish \* cmd.run 'cd /<directory> && ./script_to_run.sh':
  cmd.run
Alternative syntax:
execute script on other minion:
  cmd.run:
    - name: salt-call publish.publish \* cmd.run 'cd /<directory> && ./script_to_run.sh'
What I specifically did (I needed to execute a command, but only if a published command executed successfully; which command to publish depends on the role of the minion):
execute script:
  cmd.run:
    - name: <some shell command here>
    - cwd: /<directory>
    - require:
      - file: <some file here>
{% if 'role_1' in grains['roles'] -%}
    - onlyif: salt-call publish.publish \* cmd.run 'cd /<other_directory> && ./script_to_run_A.sh'
{% elif 'role_2' in grains['roles'] -%}
    - onlyif: salt-call publish.publish \* cmd.run 'cd /<other_directory> && ./script_to_run_B.sh'
{% endif %}
Remember to enable peer communication in /etc/salt/master under the section 'Peer Publish Settings':
peer:
  .*:
    - .*
This configuration is not secure, since it enables all minions to execute all commands on fellow minions, but I have not yet figured out the correct syntax to select minions based on their role.
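The exposure can at least be narrowed: in the peer config the keys are regexes matched against minion IDs and the values are the allowed functions, so a sketch like the following (the web.* naming scheme is an assumption about your minion IDs) limits both who may publish and what they may run:

```yaml
# Sketch: only minions whose ID matches web.* may publish, and only cmd.run.
peer:
  web.*:
    - cmd.run
```

This still does not select targets by grain/role, only by minion ID pattern.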
Another note: it would probably be better to create a custom command wrapping the cmd.run and then enable only that, since allowing all nodes to execute arbitrary scripts on each other is not secure.
The essence of this answer is the same as Dan Garthwaite's, but what I needed was a solution for a .sls file.
I am using MySQL 5.5 on RHEL 5, and my intention is to use mysqldump to take an encrypted and compressed backup.
I am using mysqldump as below:
mysqldump -u root -p db_name | gzip >file_name.sql.gz
It gives a compressed backup, but not an encrypted one.
How about this:
mysqldump -u root -p db_name | gzip | gpg --encrypt -r 'user_id' >file_name.sql.gz.gpg
(compress before encrypting; well-encrypted data is essentially incompressible, so piping gpg output into gzip gains nothing). Of course you need the public key of the user that you want to encrypt for, e.g.:
gpg --import keyfile
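The shape of the pipeline can be sanity-checked without a keyring by swapping in symmetric encryption (the passphrase, paths, and dummy dump below are placeholders; in production you would keep --encrypt -r as above, and the --pinentry-mode flag assumes GnuPG 2.1+):

```shell
# Round-trip a dummy "dump" through gzip + symmetric gpg.
printf 'CREATE TABLE t (id INT);\n' > /tmp/dump.sql
gzip -c /tmp/dump.sql \
  | gpg --batch --yes --pinentry-mode loopback --passphrase 'demo-pass' \
        --symmetric -o /tmp/dump.sql.gz.gpg
gpg --batch --quiet --pinentry-mode loopback --passphrase 'demo-pass' \
    --decrypt /tmp/dump.sql.gz.gpg | gunzip | cmp - /tmp/dump.sql \
  && echo 'round-trip OK'
```

Restoring a real backup is the same pipeline in reverse: gpg --decrypt, gunzip, then feed the SQL to mysql.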
Instead of using GPG, which is frankly overkill unless you really like GPG, you can use OpenSSL, which is likely built in and has no real dependency structure, to make easily portable, decryptable backups. This way you can decrypt the backup on just about any Linux system (and many other platforms) without any keyring, knowing only the passphrase.
Back up one database (change what is inside [..]; note that openssl enc needs an explicit cipher such as -aes-256-cbc):
mysqldump -u root --single-transaction [DataBaseName] | gzip | openssl enc -aes-256-cbc -pbkdf2 -k [MyPassword] > database.sql.gz.enc
Back up all databases separately:
date=$(date "+%Y%m%d")
for DB in $(mysql -u root -e 'show databases' -s --skip-column-names); do
  mysqldump -u root --single-transaction "$DB" | gzip | openssl enc -aes-256-cbc -pbkdf2 -k [MyPassword] > "db-$DB-$date.sql.gz.enc";
done
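The encrypt/decrypt pair can be verified end to end with a dummy dump (paths and the passphrase are placeholders):

```shell
# Round-trip a dummy "dump" through gzip + openssl and compare.
printf 'CREATE TABLE t (id INT);\n' > /tmp/dump.sql
gzip -c /tmp/dump.sql \
  | openssl enc -aes-256-cbc -pbkdf2 -k 'MyPassword' > /tmp/dump.sql.gz.enc
openssl enc -d -aes-256-cbc -pbkdf2 -k 'MyPassword' < /tmp/dump.sql.gz.enc \
  | gunzip > /tmp/roundtrip.sql
cmp /tmp/dump.sql /tmp/roundtrip.sql && echo 'round-trip OK'
```

The decrypt side only needs openssl and the passphrase, which is the portability argument above.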
Also note that passing -p on the command line is really bad practice, as the password can be read out via ps aux.
I suggest using OpenSSL, as GPG is getting too slow on big files.
The best solution I have found so far, which I am now regularly using at work, is mysqldump-secure.
It offers OpenSSL encryption and compression as well as several other features, and even ships with a Nagios monitoring plugin.
I use the following Bash script, which uses Dropbox to sync the backups directly to our own company server (followed by automatic backups of that data). Replace the script variables with your own, then add it to your crontab to run every 12 hours.
FILENAME=dbname.$(date +%Y-%m-%d-%H-%M)
SQLFILE=/root/Desktop/$FILENAME.sql
ZIPFILE=/root/Desktop/$FILENAME.zip
GPGFILE=/root/Dropbox/SQL-Backups/$FILENAME.gpg
mysqldump --user=dbuser --password=password --port=3306 --default-character-set=utf8 --single-transaction=TRUE --databases "dbname" --result-file="$SQLFILE"
zip -9 "$ZIPFILE" "$SQLFILE"
gpg --output "$GPGFILE" --encrypt --recipient "recipient@company.com" "$ZIPFILE"
unlink "$ZIPFILE"
unlink "$SQLFILE"
This uses GnuPG to encrypt the resulting zipped SQL dump. Remember to never import the private key to the web server. The web server's GPG setup only needs the public key.
You can use the GPG software available for most platforms to create your key and publish the public key to a key server.