I encrypted a bunch of files (certificates) using the following script:
for i in $(find . -type f); do ansible-vault encrypt "$i" --vault-password-file ~/.vault && echo "$i encrypted"; done
When rsyncing, I run a task like this:
- name: Copy letsencrypt files
  synchronize:
    src: "{{ path }}/letsencrypt/"
    dest: /etc/letsencrypt/
    rsync_path: "sudo rsync"
    rsync_opts:
      - "--delete"
      - "--checksum"
      - "-a"
  notify:
    - Reload Nginx
The problem I've faced is that the transferred files remain encrypted on the destination. I thought Ansible was smart enough to detect whether a file is vault-encrypted and decrypt it, the way it does when I do this:
- name: Copy deploy private key
  copy:
    content: "{{ private_key_content }}"
    dest: "/home/deploy/.ssh/id_rsa"
    owner: deploy
    group: deploy
    mode: 0600
  no_log: true
Back to the earlier question: how do I make sure the files in the directory are decrypted before rsyncing?
Edit:
I tried using the copy module, since it is encryption-aware, but the module seems to hang. I noticed some issues with the copy module for directories on the Ansible GitHub tracker, so I am back to synchronize.
I also tried the with_fileglob approach, but that flattens the directory structure.
Edit 2:
I got encryption and decryption to work with the copy module, but it's horribly slow.
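For reference, a sketch of the vault-aware copy variant (same paths as above; the copy module decrypts vaulted source files by default, and decrypt is shown here only to make that explicit):

- name: Copy letsencrypt files (vault-aware, but slow for large trees)
  copy:
    src: "{{ path }}/letsencrypt/"
    dest: /etc/letsencrypt/
    decrypt: yes
  notify:
    - Reload Nginx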
There is already an open issue (https://github.com/ansible/ansible/issues/45161) on the Ansible tracker, and the conclusion is:
Synchronize is a wrapper around rsync, I doubt that you can hook into the
process like that. You might want to implement a custom module doing this
or use something, which supports it.
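Given that conclusion, one workaround is to stage a decrypted copy of the tree on the control node and synchronize from there. A rough sketch, assuming every file in the tree is vault-encrypted and using a hypothetical /tmp/letsencrypt-plain staging path:

- name: Stage a decrypted copy of the tree on the control node
  delegate_to: localhost
  become: false
  shell: |
    rm -rf /tmp/letsencrypt-plain
    cp -a "{{ path }}/letsencrypt" /tmp/letsencrypt-plain
    find /tmp/letsencrypt-plain -type f -exec ansible-vault decrypt --vault-password-file ~/.vault {} \;

- name: Sync the decrypted tree to the host
  synchronize:
    src: /tmp/letsencrypt-plain/
    dest: /etc/letsencrypt/
    rsync_path: "sudo rsync"
    rsync_opts:
      - "--delete"
      - "--checksum"
      - "-a"
  notify:
    - Reload Nginx

Remember to remove the staging directory afterwards, since it holds plaintext copies of the certificates.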
Related
I am trying to mount a repository with server config files (think nginx, mysql, etc.) inside my Salt fileserver in order to be able to distribute these files to my minions (without having to do a checkout of the full repository on all my minions).
If I've understood correctly, all gitfs_remotes will be 'flattened' into one filesystem structure (I can confirm this when I run salt-run fileserver.file_list).
What worries me is that, as far as I know, this 'config file only' repository is now also being searched by Salt for state modules.
Is there some way to either:
Designate a gitfs mount as 'don't search for SLS files'.
Mount the actual salt state repository (which contains my top.sls and state modules) under a subdirectory of the salt fileserver and point salt to the top.sls therein?
I'm open to the possibility that this is entirely the wrong approach, of course; my only requirement is that the server config files (again, nginx, mysql, etc.) live in a separate repository, and that the entire highstate (state modules, top file) lives in git.
master config:

fileserver_backend:
  - gitfs

gitfs_remotes:
  - git@github.com:MyOrg/salt-configs.git
  - git@github.com:MyOrg/server-config-files.git:
    - mountpoint: config-files
Have you considered storing your configuration file in a pillar?
For example:
HostFiles:
  LinuxBasic: |
    192.168.1.1 server1
    192.168.1.2 server2
And then in your state file, when you want to render the hostfile:
LinuxBasicHostFile:
  file.managed:
    - name: /etc/hosts
    - contents_pillar: HostFiles:LinuxBasic
You could also GPG-encrypt that file, if it is sensitive, using the keys on your Salt master's server:
$ cat nginx.hostfile | sudo gpg --armor --batch --trust-model always --encrypt --homedir <salthomedir> -r <keyname>
Paste the output of that into your pillar:
HostFiles:
  LinuxBasic: |
    -----BEGIN PGP MESSAGE-----
    Xks383...a bunch of encrypted text...BjAs0
    -----END PGP MESSAGE-----
Then inform your Salt master that HostFiles contains GPG-encrypted content, either in your master config or, better yet, in a local conf file such as /etc/salt/master.d/decrypt.conf:

decrypt_pillar:
  - 'HostFiles': gpg
I have some files in my repository, and one contains a secret Adafruit key. I want to use Git to store my repository, but I don't want to publish the key.
What's the best way to keep it secret, without having to blank it out every time I commit and push something?
Depending on what you're trying to achieve, you could choose one of these methods:
keep the file in the tree managed by Git but ignore it with an entry in .gitignore
keep the file's content in an environment variable
don't use a file with the key at all; keep the key content elsewhere (external systems like HashiCorp's Vault, a database, the cloud (iffy, I wouldn't recommend that), etc.)
The first approach is easy and doesn't require much work, but you still have the problem of securely passing the secret key to every location where you'd use the same repository. The second approach requires slightly more work and has the same drawback as the first.
The third certainly requires more work than the 1st and 2nd, but it can lead to a setup that's really secure.
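For the first approach, a common pattern (the file names here are only illustrative) is to commit a template and ignore the real file:

# .gitignore
secrets.env

# secrets.env.example (committed): copy to secrets.env and fill in the real value
ADAFRUIT_KEY=changeme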
This highly depends on the requirements of the project
Basically, the best strategy security-wise is not to store keys, passwords, and in general any vulnerable information inside the source control system. If that's the goal, there are many different approaches:
"Supply" this kind of information at runtime and keep it somewhere else:
./runMyApp.sh -db.password=
Use specialized tools (for example, Vault by HashiCorp) to manage secrets
Encode the secret value offline and store the encoded value in Git. Without the secret key used for decoding, this encoded value alone is useless. Decode the value again at runtime using some kind of shared-key infrastructure / asymmetric key pair; in this case, you can use a public key for encoding and a private key for decoding.
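A minimal sketch of that last option using GPG (the recipient and file names are placeholders):

# encode offline with the recipient's public key and commit only the encrypted file
gpg --encrypt --recipient deploy@example.com db.password
git add db.password.gpg

# decode at runtime on a machine that holds the matching private key
gpg --decrypt --output db.password db.password.gpg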
I want to use Git to store my repository, but I don't want to publish the key.
For something as critical as a secret key, I would use a dedicated keyring infrastructure located outside the development environment, optionally coupled to a secret passphrase.
Aside from this case, I personally use submodules for this. Check out:
git submodule
In particular, I declare a global Git repository, in which I declare in turn another Git repository that will contain the actual project that will go public. This enables me to store at the top level everything that is related to the given project but not necessarily relevant to it, and that is not to be published: for instance, all my drafts, automation scripts, worknotes, project specifications, tests, bug reports, etc.
Among all the advantages this facility provides, we can highlight the fact that you can declare as a submodule an already-existing repository, this repository being located inside or outside the parent one.
And what's really interesting is that the main repository and its submodules remain distinct Git repositories that can still be configured independently. This means that you don't even need your parent repository to have its remote servers configured.
Doing that way, you get all the benefits of a versioning system wherever you work, while still ensuring yourself that you'll never accidentally push outside something that is not stored inside the public submodule.
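A minimal sketch of that layout (the repository names are illustrative):

# private parent repository: drafts, notes, and scripts live here and are never published
git init parent-project
cd parent-project
git submodule add git@github.com:me/public-project.git public-project
# only public-project has a public remote; the parent needs none configured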
If you don't have too many secrets to manage and you do want to keep the secrets in version control, you can make the parent repository private. It contains two folders: a secrets folder, and a Git submodule for the public repository (in another folder). I use Ansible Vault to encrypt anything in the secrets folder, and a bash script to decrypt the contents and load those secrets as environment variables, which ensures the secrets file always stays encrypted at rest.
Ansible Vault can encrypt and decrypt a value, and I wrap it in a bash script to perform these functions, like so:
testsecret=$(echo 'this is a test secret' | ./scripts/ansible-encrypt.sh --vault-id $vault_key --encrypt)
result=$(./scripts/ansible-encrypt.sh --vault-id $vault_key --decrypt $testsecret)
echo $result
testsecret here is the encrypted base64 result, and it can be stored safely in a text file. Later, you can source that file to keep the encrypted result in memory, and finally, when you need to use the secret, you can decrypt it with ./scripts/ansible-encrypt.sh --vault-id $vault_key --decrypt $testsecret.
The bash script referenced above (ansible-encrypt.sh) is below. It wraps the Ansible Vault functions in a way that can store encrypted variables in base64, which resolves some problems that can occur with encoding.
#!/bin/bash
# This script encrypts an input hidden from the shell and base64-encodes it so it can be stored as an environment variable.
# Optionally, it can also decrypt an environment variable.

vault_id_func () {
    if [[ "$verbose" == true ]]; then
        echo "Parsing vault_id_func option: '--${opt}', value: '${val}'" >&2
    fi
    vault_key="${val}"
}

secret_name=secret
secret_name_func () {
    if [[ "$verbose" == true ]]; then
        echo "Parsing secret_name option: '--${opt}', value: '${val}'" >&2
    fi
    secret_name="${val}"
}

decrypt=false
decrypt_func () {
    if [[ "$verbose" == true ]]; then
        echo "Parsing decrypt option: '--${opt}', value: '${val}'" >&2
    fi
    decrypt=true
    encrypted_secret="${val}"
}

IFS='
'

optspec=":hv-:t:"
encrypt=false

parse_opts () {
    local OPTIND
    OPTIND=0
    while getopts "$optspec" optchar; do
        case "${optchar}" in
            -)
                case "${OPTARG}" in
                    vault-id)
                        val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
                        opt="${OPTARG}"
                        vault_id_func
                        ;;
                    vault-id=*)
                        val=${OPTARG#*=}
                        opt=${OPTARG%=$val}
                        vault_id_func
                        ;;
                    secret-name)
                        val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
                        opt="${OPTARG}"
                        secret_name_func
                        ;;
                    secret-name=*)
                        val=${OPTARG#*=}
                        opt=${OPTARG%=$val}
                        secret_name_func
                        ;;
                    decrypt)
                        val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
                        opt="${OPTARG}"
                        decrypt_func
                        ;;
                    decrypt=*)
                        val=${OPTARG#*=}
                        opt=${OPTARG%=$val}
                        decrypt_func
                        ;;
                    encrypt)
                        encrypt=true
                        ;;
                    *)
                        if [ "$OPTERR" = 1 ] && [ "${optspec:0:1}" != ":" ]; then
                            echo "Unknown option --${OPTARG}" >&2
                        fi
                        ;;
                esac;;
            h)
                help
                ;;
            *)
                if [ "$OPTERR" != 1 ] || [ "${optspec:0:1}" = ":" ]; then
                    echo "Non-option argument: '-${OPTARG}'" >&2
                fi
                ;;
        esac
    done
}

parse_opts "$@"

if [[ "$encrypt" = true ]]; then
    read -s -p "Enter the string to encrypt: `echo $'\n> '`"
    secret=$(echo -n "$REPLY" | ansible-vault encrypt_string --vault-id $vault_key --stdin-name $secret_name | base64 -w 0)
    unset REPLY
    echo $secret
elif [[ "$decrypt" = true ]]; then
    result=$(echo $encrypted_secret | base64 -d | /snap/bin/yq r - "$secret_name" | ansible-vault decrypt --vault-id $vault_key)
    echo $result
else
    # If no encrypt/decrypt argument is passed, assume we decrypt the firehawksecret env var.
    encrypted_secret="${firehawksecret}"
    result=$(echo $encrypted_secret | base64 -d | /snap/bin/yq r - "$secret_name" | ansible-vault decrypt --vault-id $vault_key)
    echo $result
fi
Storing encrypted values as environment variables is much more secure than decrypting something at rest and leaving the plaintext result in memory, where it is extremely easy for any process to siphon off.
If you only wish to share the code with others and not the secrets, you can use a Git template for the parent private repo structure so that others can inherit that structure but use their own secrets. This also allows CI to pick up everything you need for your tests.
Alternatively, if you don't want your secrets in version control, you can simply add a containing folder that houses your secrets to .gitignore.
Personally, this makes me nervous: it's still possible for user error to result in publicly committed secrets. Since those files are still under the root of a public repo, any number of things could go wrong, which could be embarrassing with that approach.
For Django
add another file called secrets.py (or whatever you want) and another called .gitignore
add secrets.py to .gitignore
paste the secret key into the secrets.py file and import it in the settings file using
from foldername.secrets import *
this worked for me.
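A minimal sketch of that layout (foldername stands for whatever your project package is called):

# foldername/secrets.py (listed in .gitignore, never committed)
SECRET_KEY = "paste-your-secret-key-here"

# foldername/settings.py (committed)
from foldername.secrets import *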
I'm trying to use SaltStack to set up configuration on a MariaDB instance. I want to make a symbolic link equivalent to this command:
ln -s /var/lib/mysql/dbaas/mysql_client.cnf /etc/my.cnf
Is this the right Salt syntax?
link-mysql-client-cnf:
  file.symlink:
    - name: /etc/my.cnf
    - target: /var/lib/mysql/dbaas/mysql_client.cnf
    - force: True
For some reason the symlink fails because my.cnf already exists. I read the documentation and set force to True, but it didn't work. Any suggestions?
I solved the issue.
What was causing the problem is that the engine was created before I added force: True, and that was cached even after I modified the Salt script. My advice is to clear out the engine and start over by creating a new engine and instance any time you change the Salt state.
Is it mandatory to put files in the /srv/salt folder of the master to transfer a file/directory from the master to connected minions?
1) Can we transfer files without using the Salt file server?
2) Also, the documentation says "You can't run interactive scripts". Does this mean there are limitations on executing arbitrary Linux commands with cmd.run, e.g. salt "*" cmd.run 'ls -l /home'? Similarly, can we run commands like scp and ssh with cmd.run?
You don't have to use the Salt file server. You can also set the source to, for example, an HTTP location. In that case, though, you must also declare the hash of the file, for example:
/etc/nginx/sites-enabled/mysite:
  file.managed:
    - source: http://example.com/mysite
    - source_hash: abc123....
The documentation suggests /srv/salt as the default location, as mentioned here.
Is it possible to have an SSH session use all your local configuration files (.bash_profile, .vimrc, etc.) on login? That way you would have the same configuration for, say, editing files in vim in the remote session.
I just came across two alternatives to doing a git clone of your dotfiles. I take no credit for either of these, and I can't say I've used either extensively, so I don't know if there are pitfalls.
sshrc
sshrc is a tool (actually just a big bash function) that copies over local rc files without permanently writing them to the remote user's $HOME, the idea being that it might be a shared admin account that other people use. It appears to be customizable for different remote hosts as well.
.ssh/config and LocalCommand
This blog post suggests a way to automatically run a command when you log in to a remote host. It tars and pipes a set of files to the remote, then un-tars them in the remote's $HOME:
Your local ~/.ssh/config would look like this:
Host *
    PermitLocalCommand yes
    LocalCommand tar c -C${HOME} .bashrc .bash_profile .exports .aliases .inputrc .vimrc .screenrc \
                 | ssh -o PermitLocalCommand=no %n "tar mx -C${HOME}"
You could modify the above to only run the command on certain hosts (instead of the * wildcard) or customize it for different hosts. There might be a fair amount of duplication per host with this method, although you could package the whole tar c ... | ssh .. "tar mx .." pipeline into a script (see the sketch after the note below).
Note the above looks like it clobbers the same files on the remote when you connect, so use with caution.
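For example, a minimal wrapper along those lines (the script name and file list are hypothetical):

#!/bin/bash
# sshx: push a fixed set of rc files to the remote $HOME, then open the session
host="$1"; shift
tar c -C "${HOME}" .bashrc .vimrc .screenrc | ssh "$host" 'tar mx -C "${HOME}"'
exec ssh "$host" "$@"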
Use a dotfiles.git repo
What I do is keep all my config files in a dotfiles.git on a central server.
You can set it up so that when you ssh into a remote machine, you automatically pull the latest version of the dotfiles. I do something like this:
ssh myhost
cd ~/dotfiles
git pull --rebase
cd ~
ln -sf dotfiles/$username/linux/.* .
Note:
To put that in a shell script, you can automate the process of executing commands on a remote machine by piping them to ssh (see the sketch at the end of this answer).
The "$username" is there so that you can share your config files with other people you're working with.
The "ln -sf" creates symbolic links to all your dotfiles, overwriting any local ones, such that ~/.emacs is linked to the version-controlled file ~/dotfiles/$username/.emacs.
The use of a "linux" subdirectory is just to allow for configuration differences across platforms. I also have a mac directory under dotfiles/$username/mac. Most of the files in the mac directory are symlinked from the linux directory, as it's very similar, but there are some exceptions.
Finally, note that you can make this even more sophisticated with hostnames and the like rather than just a generic 'linux'. With a dotfiles.git, you can also raid dotfiles from your friends, which is awesome -- everyone has their own set of little tricks and hacks.
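A minimal sketch of the piping approach mentioned in the notes (same host and layout as above, using the remote $USER in place of the $username placeholder):

#!/bin/bash
# run the dotfiles update on the remote host in one shot
ssh myhost 'bash -s' <<'EOF'
cd ~/dotfiles
git pull --rebase
cd ~
ln -sf dotfiles/$USER/linux/.* .
EOF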
No, because it's not SSH using your config files, but the remote shell.
I suggest keeping your config files in Subversion or some other VCS. Here's how I do it.
Well, no, because as Andy Lester says, the remote machine is the one doing the work, and it has no access back to your local machine to get .vimrc ...
On the other hand, you could use sshfs to mount the remote file system locally and edit the files locally. This doesn't require you to install anything on the remote machine. Not sure how efficient it is, maybe not great for editing big files over slow links.
Alternatively, Komodo IDE has a neat "Open >> Remote File" option which lets you edit files on remote machines by scp'ing them back and forth automatically.
I do this kind of thing every day. I have about 15 bash rc files, a .vimrc, a few Vim plugin scripts, a .screenrc, and some other rc files. I have a sync script (written in bash) which uses rsync to sync all these files to remote servers. Every time I update some files on my main server, I call the script to sync them to the remote servers.
Setting up an svn/git/hg repository on the main server also works, but my remote servers need to be repeatedly reinstalled for testing, so I find it more convenient to use rsync.
A few years ago I also used the rdist tool, which met the requirement most of the time, but now I prefer rsync, as it supports incremental sync, which is very efficient.
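A minimal sketch of such a sync script (the server list and file set are illustrative):

#!/bin/bash
# push my rc files to each test server after I change them
for host in server1 server2 server3; do
    rsync -az ~/.bashrc ~/.vimrc ~/.screenrc "$host":~/
done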
ssh can be configured to pass certain environment variables through to the remote side. And since most shells check some environment variables for additional settings to apply, you can hack that into applying some local settings remotely. But it's a bit complicated, and most administrators turn off SSH environment-variable pass-through in the sshd config anyway.
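A minimal sketch of that mechanism (the variable name is hypothetical, and the server's sshd_config must whitelist it):

# client: ~/.ssh/config
Host myhost
    SendEnv MY_EXTRA_SETTINGS

# server: /etc/ssh/sshd_config (requires admin access and a reload)
AcceptEnv MY_EXTRA_SETTINGS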
You could always just copy the files to the machine before connecting with ssh:
#!/bin/bash
scp ~/.bash_profile ~/.vimrc user@host:
ssh user@host
This works best if you are using keys to log in and no one else logs in as that user.
Here's a simple bash script I've used for this purpose. It syncs over some folders I like to have copied using rsync, and then adds the ~/bin folder to the remote machine's .bashrc if it's not there already. It works best if you have copied your SSH keys to each server. I use this approach instead of a "dotfiles repo", as lots of the servers I connect to don't have git on them.
So to use it, you'd do something like this:
./bin_sync_to_machine.sh server1
bin_sync_to_machine.sh
function show_help()
{
    echo ""
    echo "usage: SERVER {SERVER2 SERVER3 etc...}"
    echo ""
    exit
}

if [ "$1" == "help" ]; then
    show_help
fi

if [ -z "$1" ]; then
    show_help
fi

# Sync ~/bin and some dot files to each remote server using rsync
for SERVER in $*; do
    rsync -avrz --progress ~/bin/ -e ssh $SERVER:~/bin
    rsync -avrz --progress ~/.vim/ -e ssh $SERVER:~/.vim
    rsync -avrz --progress ~/.vimrc -e ssh $SERVER:~/.vimrc
    rsync -avrz --progress ~/.aliases $SERVER:~/.aliases
    rsync -avrz --progress ~/.aliases $SERVER:~/.bash_aliases

    # Ensure the remote server has ~/bin in the PATH
    ssh $SERVER '~/bin/path_add_to_path.sh'
done
path_add_to_path.sh
#!/bin/bash
pathadd() {
    if [ -d "$1" ] && [[ ":$PATH:" != *":$1:"* ]]; then
        PATH="${PATH:+"$PATH:"}$1"
    fi
}

# Add to current path if running in a shell
pathadd ~/bin

# Add to ~/.bashrc
if ! grep -q 'PATH:~/bin' ~/.bashrc; then
    echo "PATH=\$PATH:~/bin" >> ~/.bashrc
fi

if ! grep -q 'source ~/.aliases' ~/.bashrc; then
    echo "source ~/.aliases" >> ~/.bashrc
fi
I wrote an extremely simple tool for this that will allow you to natively transport your .vimrc file whenever you ssh, by using sshd's built-in config options in a non-standard way.
No additional svn, scp, or copy/paste is required.
It is simple, lightweight, and works by default on all server configurations I have tested so far.
https://github.com/gWOLF3/viSSHous
I think that https://github.com/fsquillace/kyrat does what you need.
I wrote it a long time ago, before sshrc was born, and it has more benefits compared to sshrc:
It does not depend on xxd on both hosts (which can be unavailable on the remote host)
Kyrat uses a more efficient encoding algorithm
It is just ~20 lines of code (really easy to understand!)
No need for root access or any installation on the remote host
For instance:
$> echo "alias q=exit" > ~/.config/kyrat/bashrc
$> kyrat myuser#myserver.com
myserver.com $> q
exit