Ansible - supply multiple ansible_become_pass=MYROOTPASSWORD - root

I have 4 VMs which all have different ssh users.
In order to use Ansible to manipulate the Vms I set my file /etc/ansible/hosts to this:
someserver1 ansible_ssh_host=123.123.123.121 ansible_ssh_port=222 ansible_ssh_user=someuser1 ansible_ssh_pass=somepass1
someserver2 ansible_ssh_host=123.123.123.122 ansible_ssh_port=22 ansible_ssh_user=someuser2 ansible_ssh_pass=somepass2
someserver3 ansible_ssh_host=123.123.123.123 ansible_ssh_port=222 ansible_ssh_user=someuser3 ansible_ssh_pass=somepass3
someserver4 ansible_ssh_host=123.123.123.124 ansible_ssh_port=222 ansible_ssh_user=someuser4 ansible_ssh_pass=somepass4
Let's say I have this playbook which only does an ls inside the /root folder:
- name: root access test
  hosts: all
  tasks:
    - name: ls the root folder on my Vms
      become: yes
      become_user: root
      become_method: su
      command: chdir=/root ls -all
Using this call ansible-playbook -v my-playbook.yml --extra-vars='ansible_become_pass=xxx-my-secret-root-password-for-someserver1' I can become root on one of my machines but not on all.
How is it possible to supply somepass2, somepass3 and somepass4?

Why not just define ansible_become_pass as an in-line host variable in the inventory like you already have done with the SSH password? So your inventory would now look like this:
someserver1 ansible_ssh_host=123.123.123.121 ansible_ssh_port=222 ansible_ssh_user=someuser1 ansible_ssh_pass=somepass1 ansible_become_pass=somesudopass1
someserver2 ansible_ssh_host=123.123.123.122 ansible_ssh_port=22 ansible_ssh_user=someuser2 ansible_ssh_pass=somepass2 ansible_become_pass=somesudopass2
someserver3 ansible_ssh_host=123.123.123.123 ansible_ssh_port=222 ansible_ssh_user=someuser3 ansible_ssh_pass=somepass3 ansible_become_pass=somesudopass3
someserver4 ansible_ssh_host=123.123.123.124 ansible_ssh_port=222 ansible_ssh_user=someuser4 ansible_ssh_pass=somepass4 ansible_become_pass=somesudopass4
Or, if your login password and sudo password are the same then simply add:
ansible_become_pass='{{ ansible_ssh_pass }}'
Either to an all group_vars file or in an in-line group vars block in the inventory file like this:
[all:vars]
ansible_become_pass='{{ ansible_ssh_pass }}'
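Both variants above keep login and become passwords as literal text in the inventory file. The following check is an added example, not part of the original answer: it lists every host or group-vars line in an INI inventory that sets a password variable to a literal value instead of a reference. The sample inventory is constructed, and `vault_become_pass` is a hypothetical variable assumed to be defined in an ansible-vault encrypted vars file.

```python
import re
import shlex

# Inventory variables that carry a login or privilege-escalation secret.
SECRET_VARS = {"ansible_ssh_pass", "ansible_password",
               "ansible_become_pass", "ansible_become_password"}

def is_literal(value):
    """True unless the whole value is a Jinja2 reference such as '{{ some_var }}'."""
    return re.fullmatch(r"\{\{.*\}\}", value.strip()) is None

def find_plaintext_secrets(inventory_text):
    """Return (line_no, owner, variable) for every literal secret in an INI inventory."""
    findings, section = [], None
    for line_no, raw in enumerate(inventory_text.splitlines(), start=1):
        line = raw.strip()
        if not line or line[0] in "#;":
            continue
        header = re.fullmatch(r"\[([^\]]+)\]", line)
        if header:
            section = header.group(1)
            continue
        if section and section.endswith(":vars"):
            # Group vars block: one "key=value" per line.
            key, _, value = line.partition("=")
            owner, pairs = section, [(key.strip(), value.strip().strip("'\""))]
        else:
            # Host line: host name followed by key=value tokens.
            host, *tokens = shlex.split(line)
            owner = host
            pairs = [tuple(t.split("=", 1)) for t in tokens if "=" in t]
        for key, value in pairs:
            if key in SECRET_VARS and is_literal(value):
                findings.append((line_no, owner, key))
    return findings

# Constructed sample: values echo the inventory above; vault_become_pass is a
# hypothetical variable assumed to live in an ansible-vault encrypted vars file.
SAMPLE = """\
someserver1 ansible_ssh_host=123.123.123.121 ansible_ssh_pass=somepass1 ansible_become_pass=somesudopass1
someserver2 ansible_ssh_host=123.123.123.122 ansible_become_pass='{{ vault_become_pass }}'

[all:vars]
ansible_become_pass='{{ ansible_ssh_pass }}'
"""

if __name__ == "__main__":
    for line_no, owner, var in find_plaintext_secrets(SAMPLE):
        print(f"line {line_no}: {owner} sets {var} as a literal value")
```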

Related

How do I properly encrypt a file from inside an Ansible Playbook?

I'm currently using an Ansible playbook to extract and then transfer a configuration backup from some network devices (a basic text file) to an external storage.
I'd like to encrypt the configuration backups before sending them to their final storage. What would be the most adequate way to encrypt a file from inside an Ansible playbook task? To me, the obvious way would be to use the shell module to either call an external encryption tool (openssl) or an ansible-vault command to encrypt the backup in a format that ansible itself can read later in some other context; i.e. one of the two tasks below (simplified):
- name: Encrypt stuff with OpenSSL using a password read from vault file
  shell:
    cmd: openssl {{ openssl_parameters }} -k {{ vaulted_encryption_password }} -in {{ file_to_encrypt }} -out {{ encrypted_file }}
- name: Encrypt stuff with Ansible-Vault
  shell:
    cmd: ansible-vault encrypt {{ file_to_encrypt }} --vault-password-file {{ vault_password_file }}
However, none of these solutions seem completely secure, given they require passing the encryption password to an external tool via a shell (which can expose the password to anyone monitoring the processes on the host this runs in, for example) or require writing the plain-text password on a file for ansible-vault to use.
Is there a better way of doing file encryption inside an Ansible task that I'm missing here? (a dedicated module, or some other solution?).
Updated answer valid since ansible 2.12
The original answer below was one solution until the availability of ansible-core v2.12. Since then, there is a new ansible.builtin.vault filter which makes this much easier.
Here is a complete test (which needs to be hardened for complete security of course...)
First, we create a secret.txt file we later want to encrypt:
echo "I'm a file that needs to be encrypted" > secret.txt
Then the playbook encrypt.yml:
---
- hosts: localhost
  gather_facts: false
  vars:
    vault_file: secret.txt
    vault_secret: v3rys3cr3t
  tasks:
    - name: In-place (re)encrypt file {{ vault_file }}
      copy:
        content: "{{ lookup('ansible.builtin.file', vault_file) | ansible.builtin.vault(vault_secret) }}"
        dest: "{{ vault_file }}"
        decrypt: false
Gives:
$ ansible-playbook encrypt.yml
PLAY [localhost] ***********************************************************************************************************************************************************************************************************************
TASK [In-place (re)encrypt file secret.txt] ********************************************************************************************************************************************************************************************
changed: [localhost]
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And we can now check the file was effectively encrypted and still contains the original data
$ ansible-vault view --ask-vault-pass secret.txt
Vault password:
I'm a file that needs to be encrypted
(END)
Note that the above playbook is not idempotent. If you replay the tasks again:
- you will have to provide the current vault password to ansible so that the file lookup can read the content.
- the file will be changed even if the decrypt and encrypt password are identical.
Previous answer kept for history and still valid for ansible < 2.12
There are no modules I know to use ansible-vault from playbooks directly (besides the obvious intended use which is to decrypt variables and file contents on the fly).
One possible way to improve security (as far as listing processes is concerned) with your ansible-vault example through a command would be to use the interactive prompt mode and fill the password with the expect module. Another security layer can be added by adding the no_log: true parameter to the task so it does not print content of the variables.
Here is a simple example (you will need to pip install pexpect on the target host):
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Vault encrypt a given file
      vars:
        vault_pass: v3rys3cur3
      expect:
        command: ansible-vault encrypt --ask-vault-pass toto.txt
        responses:
          New Vault password: "{{ vault_pass }}"
          Confirm New Vault password: "{{ vault_pass }}"
      # Keeps the password out of task output, as shown in the run below
      no_log: true
Which gives (using the verbose mode to illustrate the no_log feature and provided the given file exists and is not yet encrypted...):
$ ansible-playbook -v test.yml
No config file found; using defaults
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [Vault encrypt a given file] *********************************************************************************************************************************************************************************
changed: [localhost] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
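The process-listing concern raised in the question, as an added example that is not part of the original posts: a secret placed on a command line (the shape of `openssl ... -k <password>`) is readable from the process table, while one written to the child's standard input (what the expect-driven prompt above does, and what openssl's `-pass stdin` allows) is not. It assumes a Linux host with `/proc`; the secret and the stand-in child process are constructed.

```python
import subprocess
import sys

SECRET = "constructed-demo-passphrase"   # constructed sample value
# Stand-in for an encryption tool: blocks until its stdin is closed.
CHILD = "import sys; sys.stdin.read()"

def visible_argv(pid):
    """Argument vector of a running process as exposed by Linux /proc."""
    with open(f"/proc/{pid}/cmdline", "rb") as fh:
        return [arg.decode() for arg in fh.read().split(b"\0") if arg]

def secret_in_process_list(pass_on_argv):
    """Run the child with the secret on argv or on stdin; report if argv shows it."""
    argv = [sys.executable, "-c", CHILD]
    if pass_on_argv:
        argv += ["-k", SECRET]           # same shape as `openssl ... -k <password>`
    proc = subprocess.Popen(argv, stdin=subprocess.PIPE)
    try:
        exposed = SECRET in visible_argv(proc.pid)
    finally:
        # Closing stdin (after writing the secret in the stdin case) ends the child.
        proc.communicate(input=b"" if pass_on_argv else SECRET.encode())
    return exposed

if __name__ == "__main__":
    print("secret on argv  ->", secret_in_process_list(True))
    print("secret on stdin ->", secret_in_process_list(False))
```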

How to modify default options in Salt Minion config file from Master

I want to set "grains_cache" variable to "True" from Salt Master on all Minions. This variable is from default options that exist in minion config file and cannot be overridden by pillar data. So how can I set variables (for example "grains_cache", "grains_cache_expiration" or "log_file") from Master?
This should be an easy one. Manage the minion configuration file using the file.managed function.
A simple sls should help here:
minion_configuration:
  file.managed:
    - name: /etc/salt/minion
    - contents: |
        grains_cache: true
        backup_mode: minion
salt-minion-restart:
  cmd.wait:
    - name: salt-call --local service.restart salt-minion
    - bg: True
    - order: last
    - watch:
      # must reference the ID of the file.managed state above
      - file: minion_configuration
In this example, saltstack ensures that the two lines beneath - contents: | are present within the minion's configuration file.
The second state: salt-minion-restart will restart the salt-minion if the minion configuration file is being touched (managed by the first state).
So in short terms, this state adds your variables to the minion's configuration and restarts the minion afterwards.
This formula is os-independent.
The last thing left to do is to target all of your minions with this.
If you want to know more about the cmd.wait and the shown example, please refer to this documentation.
I hope I could help.

How to run a cmd in Windows with Admin privileges in Salt

I have a simple system that is managed by Saltstack. The minions are Windows Server 2016 machines. I would like to run some command with Admin privileges. How would I achieve that in the .sls for the minion?
run-my-cmd:
  cmd.run:
    - cwd: C:\temp\
    - bg: True
    - name: 'some command'
Execute the command using Runas
C:\Users\abcxyz>Runas /profile /user:Administrator "your command"

salt-ssh permission denied when attempting to log into remote system

I am new to salt-ssh and I have gotten it to work successfully for setting up a remote system. However, I have a login issue that I don't know how to address. What is happening is that when I try to run the salt-ssh commands I have to fight with the initial login process before eventually it just works. I am looking to see if I can narrow down what is causing me to have to fight with the login process.
I am using OS X to run my salt-ssh commands against an ubuntu vagrant vm.
I have added my root user's ssh key to the root user authorized_keys on the vagrant vm. I have verified that I can log into the system using ssh without any issues:
sudo ssh root@192.168.33.10
Here are what my config files look like:
roster
managed:
  host: 192.168.33.10
  user: root
  sudo: true
Saltfile
salt-ssh:
  config_dir: /users/vmcilwain/projects/salt-ssh-rails
  roster_file: /users/vmcilwain/projects/salt-ssh-rails/roster
  log_file: /users/vmcilwain/projects/salt-ssh-rails/saltlog.txt
master
file_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/states
pillar_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/pillars
I run this command:
sudo salt-ssh -i '*' test.ping
I enter my local user's password and I get this output
Permission denied for host 192.168.33.10, do you want to deploy the salt-ssh key? (password required):
[Y/n]
This is where my fight is. If the vagrant vm has the ssh key for the user I am executing salt-ssh as, why am I being told that permission is denied? Especially when I verified I could ssh into the system without using salt-ssh.
Clicking yes prompts me for the remote root user's password, which I didn't set and don't necessarily want to since an ssh key should have worked.
I'm hoping someone can tell me the best way to setup connections between both systems so that I don't have to have this fight every time.
I needed to set the priv in my roster to the rsa key that I am using to connect to the remote host:
priv: /Users/vmcilwain/.ssh/id_rsa

Ansible: Add Unix group to user only if the group exists

I'm using Ansible to add a user to a variety of servers. Some of the servers have different UNIX groups defined. I'd like to find a way for Ansible to check for the existence of a group that I specify, and if that group exists, add it to a User's secondary groups list (but ignore the statement if the group does not exist).
Any thoughts on how I might do this with Ansible?
Here is my starting point.
Command
ansible-playbook -i 'localhost,' -c local ansible_user.yml
ansible_user.yml
---
- hosts: all
  user: root
  become: yes
  vars:
    password: "!"
    user: testa
  tasks:
    - name: add user
      user: name="{{user}}"
            state=present
            password="{{password}}"
            shell=/bin/bash
            append=yes
            comment="test User"
Updated: based on the solution suggested by @udondan, I was able to get this working with the following additional tasks.
- name: Check if user exists
  shell: /usr/bin/getent group | awk -F":" '{print $1}'
  register: etc_groups
- name: Add secondary Groups to user
  user: name="{{user}}" groups="{{item}}" append=yes
  when: '"{{item}}" in etc_groups.stdout_lines'
  with_items:
    - sudo
    - wheel
The getent module can be used to read /etc/group
- name: Determine available groups
  getent:
    database: group
- name: Add additional groups to user
  user: name="{{user}}" groups="{{item}}" append=yes
  when: item in ansible_facts.getent_group
  with_items:
    - sudo
    - wheel
Do you have anything to identify those different host types?
If not, you first need to check which groups exist on that host. You can do this with the command getent group | cut -d: -f1 which will output one group per line.
You can use this as separate task like so:
- shell: getent group | cut -d: -f1
  register: unix_groups
The registered result then can be used later when you want to add the user group
- user: ...
  when: "'some_group' in unix_groups.stdout_lines"
This is how I'm dealing with this in my playbooks. The idea is simple - to take a list of existing groups and find an intersection between groups a user wants to be member of and groups that exist:
- name: Get Existing Groups
  getent:
    database: group
- name: Configure Users
  user:
    name: username
    groups: "{{ ['wheel', 'docker', 'video'] | intersect(ansible_facts['getent_group'] | list) }}"
The getent module outputs the 'getent_group' Ansible fact containing all existing groups with their details. By piping it to the | list filter I get a plain list of group names. The intersect filter finds what's common between two lists.
One of the advantages of this solution is that I don't have to use the append parameter of the user module. This way the user may be correctly removed from groups that I don't want it to be a member of anymore.
I had a similar requirement, so I did the following:
ansible [core 2.12.2]
---
- hosts: all
  become: yes
  become_user: root
  tasks:
    - ansible.builtin.user:
        name: test1
        password: "&&**%%^^"
        uid: 1234
        shell: /bin/bash
    - ansible.builtin.shell:
        "cat /etc/group| grep -o sysadmin"
      register: output
      # grep exits 1 when the group is absent; only treat real errors as failure
      failed_when: output.rc > 1
    # you can omit the debug part
    - debug:
        var: output
    - name: assign user the group
      ansible.builtin.shell:
        "usermod -G sysadmin test1"
      when: "'sysadmin' in output.stdout_lines"
Update: Thanks for the suggestion @Jeter-work
- hosts: localhost
  tasks:
    - name: Get all groups
      ansible.builtin.getent:
        database: group
        split: ':'
    - debug:
        var: ansible_facts.getent_group
