Ansible copy module does not preserve permissions of directories, only files

The actual folder had 777 permissions as seen below:
drwxrwxrwx 3 destuser destuser 29 Jan 14 08:40 /tmp/mohtas/folder
I took the backup using the playbook below and wanted to preserve the permissions, i.e. 777, for the backup folder:
---
- name: "Play 3"
  hosts: all
  user: destuser
  gather_facts: false
  tasks:
    - set_fact:
        tdate: "bkp.{{ '%d%b%Y_%H%M%S' | strftime }}"
    - name: Take Backup when dest_path and source path are the same.
      ignore_errors: yes
      copy:
        src: "/tmp/mohtas/folder"
        dest: "/tmp/mohtas/folder.{{ tdate }}"
        mode: preserve
However, the backup folder was created with different permissions, as shown below:
drwxr-xr-x 3 destuser destuser 17 Jan 15 09:07 /tmp/mohtas/folder.bkp.15Jan2021_090700
The strange thing is that the permissions are preserved if I specify a file, e.g. src: /tmp/mohtas/file.txt, rather than a directory.
I understand that I can use the stat module, but I was looking for a better/quicker solution since I'm dealing with a loop of files/folders.
My ansible version is
[destuser@desthost /]$ ansible --version
ansible 2.4.2.0
config file = /home/destuser/.ansible.cfg
configured module search path = [u'/home/destuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5
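One possible workaround, building on the stat module the question already mentions, is an untested sketch like the following: copy with mode: preserve as before, then apply the source directory's own mode to the backup folder. Task names are illustrative, and only the top-level directory's mode is set; nested directories would need recurse: yes or a per-path loop.

```yaml
- name: Read the source directory's mode
  stat:
    path: /tmp/mohtas/folder
  register: src_stat

- name: Take the backup
  copy:
    src: /tmp/mohtas/folder
    dest: "/tmp/mohtas/folder.{{ tdate }}"
    mode: preserve

- name: Apply the source directory's mode to the backup folder
  file:
    path: "/tmp/mohtas/folder.{{ tdate }}"
    mode: "{{ src_stat.stat.mode }}"
```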

Related

Unable to set up laravel homestead

I am trying to set up Laravel Homestead on Windows 10 and I am unable to navigate into my projects folder. This is my Homestead.yaml:
---
ip: "192.168.10.10"
memory: 2048
cpus: 2
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    - map: C:/LaravelHome
      to: /home/vagrant/LaravelHome
sites:
    - map: homestead1.test
      to: /home/vagrant/LaravelHome/homestead1/public
databases:
    - homestead
features:
    - mysql: true
    - mariadb: false
    - postgresql: false
    - ohmyzsh: false
    - webdriver: false
Every time I try to cd into LaravelHome to set up a project, I get this feedback, even though I have the directory on my C drive:
vagrant@vagrant:~$ pwd
/home/vagrant
vagrant@vagrant:~$ cd LaravelHome
-bash: cd: LaravelHome: No such file or directory
vagrant@vagrant:~$
What could I be doing wrong?
It should be:
folders:
    - map: C:\LaravelHome
      to: /home/vagrant/code
The slash is in the wrong direction.
You may refer to my step-by-step guide
https://medium.com/@dogcomp/ec996f9a2cb6

SaltStack : How I can copy file from minion to SaltStack File Server

I need to copy a file from a minion to the SaltStack file server (salt://).
How can I achieve this?
I tried a state like this, but it is not working:
copy:
  file.managed:
    - name: salt://a.txt
    - source: /tmp/a.txt
You can use cp.push:
copy:
  module.run:
    - name: cp.push
    - path: /tmp/a.txt
    - upload_path: /tmp
Note that, as documented, for security purposes you have to set file_recv to True in the master configuration file and restart the master in order to enable this feature. Even then, the minion is only allowed to upload the file to the minion's cache directory on the master (/var/cache/salt/master/minions/minion-id/files); specifying upload_path uploads the file to a sub-directory inside that cache directory.
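For reference, the master-side setting mentioned above might look like this (a sketch; the path is the default master config location, and the size cap is optional):

```yaml
# /etc/salt/master
file_recv: True          # allow minions to push files to the master's cache
file_recv_max_size: 100  # optional cap on upload size, in MB
```

Restart the salt-master service after changing this file.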

Per module hiera data with eyaml?

I have been using hiera to store information in
./modulename/data
using a hiera.yaml file under ./modulename/hiera.yaml
one looks like this:
#
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "source file"
    path: "source.yaml"
I would like to use eyaml to encrypt the file, but doing something like this gives me errors in the hiera.yaml...
#
---
version: 5
defaults:
  datadir: data
  data_hash: eyaml_data
hierarchy:
  - name: "authorized_keys"
    path: "auth_keys.eyaml"
    eyaml:
      pkcs7_private_key: data/keys/private_key.pkcs7.pem
      pkcs7_public_key: data/keys/public_key.pkcs7.pem
I figure there is some setup in the module-specific hiera.yaml that I can use to decrypt the file or specific lines in the file, but I'm unable to find much on eyaml beyond how to set it up for use in /etc/puppet/puppet/keys.
I've created the pkcs7 keys in ./modulename/data/keys/
the pkcs7_public and private keys do not have to be the ones under data/keys in the module directory, they could be the global ones in /etc/puppet/puppet/keys
I believe I found my answer, it was in some of the docs for hiera-eyaml:
https://github.com/voxpupuli/hiera-eyaml
Hopefully if anyone else has this question my findings can help :)
you can use the hiera.yaml config described in the doc under ./ModuleName/hiera.yaml
Here is my test example; I modified an existing test module to verify this works. I think it requires:
PE 2017.1
latest hiera and puppet that comes with v 2017.1
gem install hiera-eyaml and puppetserver gem install hiera-eyaml (I had to run this a few times for the modules to show up correctly, as well as some puppet agent -t runs)
log out and log back in for env paths
Here is my module:
$ tree master_cron/
master_cron/
├── data
│   └── secrets.eyaml
├── hiera.yaml
└── manifests
    └── init.pp
$ ll /etc/puppetlabs/puppet/keys/
total 8.0K
drwxr-xr-x. 2 pe-puppet pe-puppet 63 Mar 18 16:51 .
drwxr-xr-x. 4 root root 207 Mar 18 17:03 ..
-rw-------. 1 pe-puppet pe-puppet 1.7K Mar 18 16:51 private_key.pkcs7.pem
-rw-r--r--. 1 pe-puppet pe-puppet 1.1K Mar 18 16:51 public_key.pkcs7.pem
$ cat hiera.yaml
---
version: 5
defaults:
  datadir: data
hierarchy:
  - name: "secret data"
    lookup_key: eyaml_lookup_key
    path: "secrets.eyaml"
    options:
      pkcs7_private_key: /etc/puppetlabs/puppet/keys/private_key.pkcs7.pem
      pkcs7_public_key: /etc/puppetlabs/puppet/keys/public_key.pkcs7.pem
...
You could specify a key for the module itself and put it in data/keys...
$ cat data/secrets.eyaml
---
master_cron::jobs:
  "chown_pe-puppet":
    environment: "PATH=/sbin:/bin:/usr/bin:/usr/sbin:/usr/local/bin"
    minute: '*/5'
    user: root
    command: ENC[PKCS7,MIIBygYJKoZIhvcNAQcDoIIBuzCCAbcCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEACTtCuqFaS+5YS0DN/lLL6oV78W0lB55eQtZYIGug2SNFhLSA2h1FK/NmcsRE/YmVRJXqCeWTgFIxHch2mcWEYSLbdsWmq9WO45giUNt/MQp9hEMHmO27L63Vxl5GsvECP8yfWW6uinroOG6O95swag+W68nTrrLpV26KqP1mq+aoNw8ognNsm6IqG/FBMCgpWtGMmipBeSMXaXoUxS6wFANjMm0Ak0ykaGmwIYK1dHTosnNw8VX7d8u1oAzpgeWEkET0g8U+Q4z0W4ZNeWUIatJY1Lq30r3LOUswg+xIGmZAEro+KfQlOI1ENDx+/4ZG3IokMB9GJ1hzWlGWgbCh7zCBjAYJKoZIhvcNAQcBMB0GCWCGSAFlAwQBKgQQcMG9nWTZaCaqLuO5+m6fBIBg7G+pRWPy77yvbpvKXUb2sjkxXlDkLauSXE7KX5YOhFrBtb8pZ7MN9Rz0/qHmefToZbkhWPRMtWJ+QyVET+v2YaIh+7orEdqgo585Z+fjefIGFChkDstMj3d6Hl4s/DCW]
  "chmod_pe-puppet":
    environment: "PATH=/sbin:/bin:/usr/bin:/usr/sbin:/usr/local/bin"
    minute: '*/5'
    user: root
    command: ENC[PKCS7,MIIBuQYJKoZIhvcNAQcDoIIBqjCCAaYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAKK4R/j3pD+s4cyzcH8H4PSK7j/uCaaSvOBqG8b1MsLQClKU49DxJQ+rZLYkUJEaEq30gyCRy9uZMwFri5EJGL455flXABiM/7A9NTUNJ0DoXdiWvxgWR0py/WmiLVUnql3wVZUfojqak1MOZeRYLeCnHyVLgdz+ouyPwg0nsAuXJewk5aJa5CSj7xkS4TQKvruRaqfFGsCMBEZM7lPDae9+YZZBgfPM9rqZNO5hoUu9Q3vizzpdRcD5+5U5mqCryEzmG51fvzVO0nK45aW6SiJm58nlumxhXJoWmv12OWT+3t67QJvOV3eciLM4F722UnMrJ7SIA3ttdW2UFHuP+eTB8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBAfCd5FJnsOoJqU71XSwos2gFDSY3b+apqgrcOZ3lTT8zRKd3Z5JdgIptbYSluzw42scZslHHMR3kcaYIH/D9EJQmG54VKwwFVQODUfFV8N7kyky9LvFA+xpJUWqP6Lijx3bw==]
This is just a test module I made that creates some cron jobs; I encrypted the commands as a test, not really a practical use for eyaml though ;)
Here's what this looks like decrypted:
---
master_cron::jobs:
  "chown_pe-puppet":
    environment: "PATH=/sbin:/bin:/usr/bin:/usr/sbin:/usr/local/bin"
    minute: '*/5'
    user: root
    command: chown -R pe-puppet:pe-puppet /etc/puppetlabs/code/environments/production/modules
  "chmod_pe-puppet":
    environment: "PATH=/sbin:/bin:/usr/bin:/usr/sbin:/usr/local/bin"
    minute: '*/5'
    user: root
    command: chmod -R 755 /etc/puppetlabs/code/environments/production/modules
And I use the hiera data in the module just as I would if it weren't encrypted:
class master_cron ($jobs) {
  create_resources(cron, $jobs)
}
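For completeness, ENC[PKCS7,...] values like the ones above are typically produced with the eyaml CLI that ships with hiera-eyaml. A sketch of the invocation, assuming the key paths from the listing above:

```shell
# Encrypt a string with the public key; paste the ENC[...] output
# into secrets.eyaml under the desired key.
eyaml encrypt -s 'chmod -R 755 /etc/puppetlabs/code/environments/production/modules' \
  --pkcs7-public-key=/etc/puppetlabs/puppet/keys/public_key.pkcs7.pem
```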

Ansible - supply multiple ansible_become_pass=MYROOTPASSWORD

I have 4 VMs which all have different SSH users.
In order to use Ansible to manage the VMs I set my file /etc/ansible/hosts to this:
someserver1 ansible_ssh_host=123.123.123.121 ansible_ssh_port=222 ansible_ssh_user=someuser1 ansible_ssh_pass=somepass1
someserver2 ansible_ssh_host=123.123.123.122 ansible_ssh_port=22 ansible_ssh_user=someuser2 ansible_ssh_pass=somepass2
someserver3 ansible_ssh_host=123.123.123.123 ansible_ssh_port=222 ansible_ssh_user=someuser3 ansible_ssh_pass=somepass3
someserver4 ansible_ssh_host=123.123.123.124 ansible_ssh_port=222 ansible_ssh_user=someuser4 ansible_ssh_pass=somepass4
Let's say I have this playbook, which only does an ls inside the /root folder:
- name: root access test
  hosts: all
  tasks:
    - name: ls the root folder on my VMs
      become: yes
      become_user: root
      become_method: su
      command: chdir=/root ls -all
Using the call ansible-playbook -v my-playbook.yml --extra-vars='ansible_become_pass=xxx-my-secret-root-password-for-someserver1' I can become root on one of my machines, but not on all of them.
How is it possible to supply somepass2, somepass3 and somepass4?
Why not just define ansible_become_pass as an in-line host variable in the inventory like you already have done with the SSH password? So your inventory would now look like this:
someserver1 ansible_ssh_host=123.123.123.121 ansible_ssh_port=222 ansible_ssh_user=someuser1 ansible_ssh_pass=somepass1 ansible_become_pass=somesudopass1
someserver2 ansible_ssh_host=123.123.123.122 ansible_ssh_port=22 ansible_ssh_user=someuser2 ansible_ssh_pass=somepass2 ansible_become_pass=somesudopass2
someserver3 ansible_ssh_host=123.123.123.123 ansible_ssh_port=222 ansible_ssh_user=someuser3 ansible_ssh_pass=somepass3 ansible_become_pass=somesudopass3
someserver4 ansible_ssh_host=123.123.123.124 ansible_ssh_port=222 ansible_ssh_user=someuser4 ansible_ssh_pass=somepass4 ansible_become_pass=somesudopass4
Or, if your login password and sudo password are the same then simply add:
ansible_become_pass='{{ ansible_ssh_pass }}'
Either to an all group_vars file or in an in-line group vars block in the inventory file like this:
[all:vars]
ansible_become_pass='{{ ansible_ssh_pass }}'

`mode` option in ansible synchronize does not work

I recently set up an ansible role with the task:
- name: "synchronize source"
  sudo: yes
  synchronize:
    src: "../../../../" # get source dir
    dest: "{{ app.user.home_folder }}/{{ app.name }}"
    mode: 700
Unfortunately, upon inspection, the transferred files have -rw-r--r--. Not a big deal, as I have set up another task to chmod the files, but I am wondering why this is.
You are using the mode parameter of synchronize wrong. From Ansible's documentation:
Mode specifies the direction of the synchronization. In push mode the localhost or delegate is the source; in pull mode the remote host in context is the source.
What you are thinking of is the mode parameter of the copy module; there it sets permissions.
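A sketch of the split this implies, with synchronize handling the transfer and a separate task setting permissions afterwards. The recursive file task reuses the question's variables and is an assumption, not the questioner's actual fix:

```yaml
- name: "synchronize source"
  synchronize:
    src: "../../../../"
    dest: "{{ app.user.home_folder }}/{{ app.name }}"

- name: "set permissions after transfer"
  file:
    path: "{{ app.user.home_folder }}/{{ app.name }}"
    mode: '0700'
    recurse: yes
```

Alternatively, rsync itself can set permissions during the transfer via the synchronize module's rsync_opts, e.g. rsync_opts: ['--chmod=D700,F700'].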
