`mode` option in ansible synchronize does not work - unix

I recently set up an ansible role with the task:
- name: "synchronize source"
sudo: yes
synchronize:
src: "../../../../" # get source dir
dest: "{{ app.user.home_folder }}/{{ app.name }}"
mode: 700
Unfortunately, upon inspection, the transferred files have -rw-r--r--. Not a big deal, as I have set up another task to chmod the files, but I am wondering why this is.

You are using the `mode` parameter of `synchronize` incorrectly. From Ansible's documentation:
Mode specifies the direction of the synchronization. In push mode the
localhost or delegate is the source; in pull mode the remote host in
context is the source.
What you are thinking of is the `mode` parameter of the `copy` module, where it sets file permissions.
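If the goal is to control the permissions of the transferred files, here is a minimal sketch of two alternatives (reusing the src/dest from the question; treat it as illustrative rather than a tested drop-in): either hand the permissions to rsync itself via rsync_opts, or fix them up afterwards with the file module.
- name: "synchronize source"
  sudo: yes
  synchronize:
    src: "../../../../" # get source dir
    dest: "{{ app.user.home_folder }}/{{ app.name }}"
    rsync_opts:
      - "--chmod=D0700,F0700" # let rsync set directory and file modes

# alternatively, chmod the tree after the transfer
- name: "set permissions on the synced tree"
  file:
    path: "{{ app.user.home_folder }}/{{ app.name }}"
    state: directory
    mode: "0700"
    recurse: yes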

Related

How do I properly encrypt a file from inside an Ansible Playbook?

I'm currently using an Ansible playbook to extract and then transfer a configuration backup from some network devices (a basic text file) to an external storage.
I'd like to encrypt the configuration backups before sending them to their final storage. What would be the most adequate way to encrypt a file from inside an Ansible playbook task? To me, the obvious way would be to use the shell module to either call an external encryption tool (openssl) or an ansible-vault command to encrypt the backup in a format that ansible itself can read later in some other context; i.e. one of the two tasks below (simplified):
- name: Encrypt stuff with OpenSSL using a password read from vault file
  shell:
    cmd: openssl {{ openssl_parameters }} -k {{ vaulted_encryption_password }} -in {{ file_to_encrypt }} -out {{ encrypted_file }}

- name: Encrypt stuff with Ansible-Vault
  shell:
    cmd: ansible-vault encrypt {{ file_to_encrypt }} --vault-password-file {{ vault_password_file }}
However, neither of these solutions seems completely secure: they require passing the encryption password to an external tool via a shell (which can expose the password to anyone monitoring the processes on the host this runs on, for example), or writing the plain-text password to a file for ansible-vault to use.
Is there a better way of doing file encryption inside an Ansible task that I'm missing here? (a dedicated module, or some other solution?).
Updated answer valid since ansible 2.12
The original answer below was one solution until the availability of ansible-core v2.12. Since then, there is a new ansible.builtin.vault filter which makes this much easier.
Here is a complete test (which of course would need to be hardened for real security):
First, we create a secret.txt file we later want to encrypt:
echo "I'm a file that needs to be encrypted" > secret.txt
Then the playbook encrypt.yml:
---
- hosts: localhost
  gather_facts: false

  vars:
    vault_file: secret.txt
    vault_secret: v3rys3cr3t

  tasks:
    - name: In-place (re)encrypt file {{ vault_file }}
      copy:
        content: "{{ lookup('ansible.builtin.file', vault_file) | ansible.builtin.vault(vault_secret) }}"
        dest: "{{ vault_file }}"
        decrypt: false
Gives:
$ ansible-playbook encrypt.yml
PLAY [localhost] ***********************************************************************************************************************************************************************************************************************
TASK [In-place (re)encrypt file secret.txt] ********************************************************************************************************************************************************************************************
changed: [localhost]
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And we can now check that the file was effectively encrypted and still contains the original data:
$ ansible-vault view --ask-vault-pass secret.txt
Vault password:
I'm a file that needs to be encrypted
(END)
Note that the above playbook is not idempotent. If you replay the tasks again:
- you will have to provide the current vault password to ansible so that the file lookup can read the content.
- the file will be changed even if the decrypt and encrypt passwords are identical.
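If you wanted to get closer to idempotence, one possible approach (my own sketch, not part of the original answer) is to read the file with slurp, which does not attempt any vault decryption, and only re-encrypt when the $ANSIBLE_VAULT header is absent:
    - name: Read current content without decrypting
      ansible.builtin.slurp:
        src: "{{ vault_file }}"
      register: raw_file

    - name: In-place encrypt file {{ vault_file }} only if not vaulted yet
      copy:
        content: "{{ (raw_file.content | b64decode) | ansible.builtin.vault(vault_secret) }}"
        dest: "{{ vault_file }}"
        decrypt: false
      when: not (raw_file.content | b64decode).startswith('$ANSIBLE_VAULT')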
Previous answer kept for history and still valid for ansible < 2.12
There are no modules I know of to use ansible-vault from playbooks directly (besides the obvious intended use, which is to decrypt variables and file contents on the fly).
One possible way to improve security (as far as listing processes is concerned) with your ansible-vault example through a command would be to use the interactive prompt mode and fill in the password with the expect module. Another security layer can be added with the no_log: true parameter on the task so it does not print the content of the variables.
Here is a simple example (you will need to pip install pexpect on the target host):
---
- hosts: localhost
  gather_facts: false

  tasks:
    - name: Vault encrypt a given file
      vars:
        vault_pass: v3rys3cur3
      expect:
        command: ansible-vault encrypt --ask-vault-pass toto.txt
        responses:
          New Vault password: "{{ vault_pass }}"
          Confirm New Vault password: "{{ vault_pass }}"
      no_log: true
Which gives (using verbose mode to illustrate the no_log feature, and provided the given file exists and is not yet encrypted):
$ ansible-playbook -v test.yml
No config file found; using defaults
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [Vault encrypt a given file] *********************************************************************************************************************************************************************************
changed: [localhost] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
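If you would rather not keep the password in the play at all, a small variation (an assumption on my part, not in the original answer) is to prompt for it at runtime with vars_prompt instead of a vars entry:
---
- hosts: localhost
  gather_facts: false

  vars_prompt:
    - name: vault_pass
      prompt: "Vault password to encrypt with"
      private: true

  tasks:
    - name: Vault encrypt a given file
      expect:
        command: ansible-vault encrypt --ask-vault-pass toto.txt
        responses:
          New Vault password: "{{ vault_pass }}"
          Confirm New Vault password: "{{ vault_pass }}"
      no_log: true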

Missing encryption key to decrypt file with. Ask your team for your master ... it in the ENV['RAILS_MASTER_KEY']. Platform.sh deployment aborting

ERROR MESSAGE:
W: Missing encryption key to decrypt file with. Ask your team for your master key and write it to /app/config/master.key or put it in the ENV['RAILS_MASTER_KEY'].
When deploying my project on Platform.sh, the operation failed because the decryption key is missing. From my Google search, I found that the decryption key can be provided through the RAILS_MASTER_KEY environment variable.
My Ubuntu .bashrc
export RAILS_MASTER_KEY='ad5e30979672cdcc2dd4f4381704292a'
Rails project configuration for Platform.sh
.platform.app.yaml
# The name of this app. Must be unique within a project.
name: app
type: 'ruby:2.7'

# The size of the persistent disk of the application (in MB).
disk: 5120

mounts:
    'web/uploads':
        source: local
        source_path: uploads

relationships:
    postgresdatabase: 'dbpostgres:postgresql'

hooks:
    build: |
        gem install bundler:2.2.5
        bundle install
        RAILS_ENV=production bundle exec rake assets:precompile
    deploy: |
        RACK_ENV=production bundle exec rake db:migrate

web:
    upstream:
        socket_family: "unix"
    commands:
        start: "\"unicorn -l $SOCKET -E production config.ru\""
    locations:
        '/':
            root: "\"public\""
            passthru: true
            expires: "24h"
            allow: true
routes.yaml
# Each route describes how an incoming URL is going to be processed by Platform.sh.
"https://www.{default}/":
    type: upstream
    upstream: "app:http"

"https://{default}/":
    type: redirect
    to: "https://www.{default}/"
services.yaml
# The name given to the PostgreSQL service (lowercase alphanumeric only).
dbpostgres:
    type: postgresql:13
    # The disk attribute is the size of the persistent disk (in MB) allocated to the service.
    disk: 5120

db:
    type: postgresql:13
    disk: 5120
    configuration:
        extensions:
            - pgcrypto
            - plpgsql
            - uuid-ossp
environments/production.rb
config.require_master_key = true
I suspect that the master.key is not accessible during deployment, and I don't understand how to solve the problem.
From what I understand, your export is in your .bashrc on your local machine, so it won't be accessible when deploying on Platform.sh. (The logs you see in your terminal when building and deploying are streamed; none of this happens on your machine.)
You need to make the RAILS_MASTER_KEY accessible on Platform.sh. To do so, this variable needs to be declared in your project.
Given the nature of the variable, I would suggest using the Platform.sh CLI to create it.
If this variable should be accessible on all your environments, you can make it a project level variable.
$ platform variable:create --level project --sensitive true env:RAILS_MASTER_KEY <your_key>
If it should only be accessible for a specific environment, then you need an environment level variable:
$ platform variable:create --level environment --environment '<your_environment>' --inheritable false --sensitive true env:RAILS_MASTER_KEY '<your_key>'
The env: prefix in the variable names tells Platform.sh to expose the variable with the rest of the environment variables. More information about this can be found in the variables prefix section of the environment variables documentation page.
You could do the same via the management console if you prefer to avoid the command line.
Environment variables can also be configured directly in your .platform.app.yaml file, as described here. Keep in mind that since this file is versioned, you should not use this method for sensitive information such as encryption keys, API keys, and other kinds of secrets.
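For non-sensitive values only, such a declaration in .platform.app.yaml would look roughly like this (a sketch; RAILS_SERVE_STATIC_FILES is just an arbitrary example variable, the master key itself should stay out of the repository):
variables:
    env:
        RAILS_SERVE_STATIC_FILES: "true"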
The RAILS_MASTER_KEY environment variable should now be accessible during your Platform.sh deployment.

Ansible - check mode with file module and dependent steps

In my ansible playbooks, I often have steps like "create a directory and then do something in it", e.g.:
- name: Create directory
  file:
    path: "{{ tomcat_directory }}"
    state: directory

- name: Extract tomcat
  unarchive:
    src: 'tomcat.tar.gz'
    dest: '{{ tomcat_directory }}'
When I run this playbook, it works perfectly fine. However, when I run this playbook in check mode, the first step succeeds (folder would have been created), but the second one fails, because the folder does not exist.
Is there any way I could write steps like these, where I create a folder and then operate in it, while still being able to run the playbook in check mode (without skipping such steps)?
Check mode can be a bit of a pain. You only really have two options:
1) Add conditionals to tasks to skip them in check mode, which you don't want to do. For reference though:
when: not ansible_check_mode
2) You can change the behaviour of the task in check mode. If you set check_mode: no on a task, then in check mode it will behave as it would in a normal run. That is to say, even though you specified check mode, it will actually perform the task and create the dir if it does not already exist. You have to decide whether you are happy for a given task to run for real in check mode, so this tends to be appropriate only for low-risk tasks, but it does give you a route to continue testing the rest of the playbook that depends on the step in question.
Ansible Check Mode Docs
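As a sketch of option 2 applied to the tasks from the question (assuming you are comfortable with the directory really being created during a --check run):
- name: Create directory
  file:
    path: "{{ tomcat_directory }}"
    state: directory
  check_mode: no # runs for real even under --check

- name: Extract tomcat
  unarchive:
    src: 'tomcat.tar.gz'
    dest: '{{ tomcat_directory }}'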
You could make use of the ignore_errors task option, along with the ansible_check_mode variable, to ignore errors with your Extract tomcat task only when running in check mode, e.g.:
- name: Create directory
  file:
    path: "{{ tomcat_directory }}"
    state: directory

- name: Extract tomcat
  unarchive:
    src: 'tomcat.tar.gz'
    dest: '{{ tomcat_directory }}'
  ignore_errors: "{{ ansible_check_mode }}"
Running this in check mode will show the Extract tomcat task failing because dest does not exist. However, instead of failing the playbook, the task failure will be marked as ignored and playbook execution will continue.
Another option would be to register the result of the first task and test when: result.state is defined:
- name: Create directory
  file:
    path: "{{ tomcat_directory }}"
    state: directory
  register: result

- name: Extract tomcat
  unarchive:
    src: 'tomcat.tar.gz'
    dest: '{{ tomcat_directory }}'
  when: result.state is defined

SaltStack: How can I copy a file from a minion to the SaltStack file server?

I need to copy a file from a minion to the Salt master's file server (salt://).
How can I achieve this?
I tried a state like this, but it is not working:
copy:
  file.managed:
    - name: salt://a.txt
    - source: /tmp/a.txt
Hicham
You can use cp.push:
copy:
  module.run:
    - name: cp.push
    - path: /tmp/a.txt
    - upload_path: /tmp
Note that, as documented, for security purposes you have to set file_recv to True in the master configuration file and restart the master in order to enable this feature. Even then, the minion is only allowed to upload the file to its cache directory on the master (/var/cache/salt/master/minions/minion-id/files). Specifying upload_path uploads the file to a sub-directory inside that cache directory.
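For reference, the master-side part of that setup might look like this (a sketch; adjust paths and limits to your own environment):
# /etc/salt/master
file_recv: True
# optional: cap the size of files minions may push (in MB)
file_recv_max_size: 100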

In saltstack, how do I conditionally and iteratively (jinja) apply an included state

This may seem at first to be pretty simple. But I can tell you I've been wracking my brains for a couple days on this. I've read a lot of docs, sat on IRC with folks, and spoken to colleagues and at this point I don't have an answer I really think holds up.
I've looked into a few possible approaches
reactor
orchestration runner
I don't like these two because of the top down execution necessity... they seem tailored to orchestrating multiple node states, not workflows in a single node.
custom states
This is kind of something I would REALLY like to avoid, as this is a repeated workflow, and I don't want to build customizations like this. There's too much room for illegibility if I go down this path with my teammates.
requires / watches
These don't have a concept (that I am aware of) of applying a state repeatedly, or in a logical order/workflow.
And a few others I won't mention.
Without further discussion, here's my dilemma.
Goals:
Jenkins Master gets Deployed
We can unit.test the deployment as it proceeds
We only restart tomcat when necessary
We can update plugins on a per package basis
A big emphasis on good clean intuitively clear salt configs
Jenkins deployment is pretty straightforward. We drop in the packages and the configs, and we're set.
Unit testing is harder. As an example I've got this state file.
actions/version.sls:
# Hits the jenkins CLI interface to check for version info
# This can be used to verify that jenkins is active and the version we want

# Import some info
{%- from 'jenkins/init.sls' import jenkins_home with context %}

# Install plugins in jenkins_plugins list
jenkins_version:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" version
    - cwd: /var/lib/tomcat/webapps/ROOT/WEB-INF/
    - user: jenkins
actions.version basically verifies that jenkins is running and queryable. We want to be sure of this during the build at several points.
For example, tomcat takes time to spin up, so we had to add a delay to that restart operation. If you check out start.sls below you can see that operation occurring. Note the open bug on init_delay:.
actions/start.sls:
# Starts the tomcat service
tomcat_start:
  service.running:
    - name: tomcat
    - enable: True
    - full_restart: True
    # Not functional atm see --> https://github.com/saltstack/salt/issues/20631
    # - init_delay: 120
    # initiate a 120 second delay after any service start to let tomcat come up.

tomcat_wait:
  module.run:
    - name: test.sleep
    - length: 60

include:
  - jenkins.actions.version
Now we have this restart capability by doing an actions.stop and an actions.start. We have this actions.version state that we can use to verify that the system is ready to proceed with jenkins specific state workflows.
I want to do something kinda like this...
Install Jenkins --> Grab yaml of plugins --> install plugins that need it
Pretty straightforward.
Except, to loop through the yaml of plugins I am using Jinja.
And now I have no way to call and be sure that the start.sls and version.sls states can be repeatedly applied.
I am looking for a good way to do that.
This would be something akin to a jenkins.sls
{% set repo_username = "foo" -%}
{% set repo_password = "bar" -%}
include:
- jenkins.actions.version
- jenkins.actions.stop
- jenkins.actions.start
# Install Jenkins
jenkins:
pkg:
- installed
# Import Jenkins Plugins as List, and Working Path
{%- from 'jenkins/init.sls' import jenkins_home with context %}
{%- import_yaml "jenkins/plugins.sls" as jenkins_plugins %}
{%- import_yaml "jenkins/custom-plugins.sls" as custom_plugins %}
# Grab updated package list
jenkins-contact-update-server:
cmd.run:
- name: curl -L http://updates.jenkins-ci.org/update-center.json | sed '1d;$d' > {{ jenkins_home }}/updates/default.json
- unless: test -d {{ jenkins_home }}/updates/default.json
- require:
- pkg: jenkins
- service: tomcat
# Install plugins in jenkins_plugins list
{% for plugin in jenkins_plugins %}
jenkins-plugin-{{ plugin }}:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin "{{ plugin }}"
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ plugin }}"
    - cwd: /var/lib/tomcat/webapps/ROOT/WEB-INF/
    - user: jenkins
    - require:
      - pkg: jenkins
      - service: tomcat
      # Here is where I am stuck. require won't do this, and lists of actions
      # don't seem to schedule linearly in salt. I need to be able to just
      # verify that jenkins is up and ready. I need to be able to restart
      # tomcat after a single plugin in the iteration is added. I need to be
      # able to do this to satisfy dependencies in the plugin order.
      - sls: jenkins.actions.version
      - sls: jenkins.actions.stop
      - sls: jenkins.actions.start
    # This can't work for several reasons
    # - watch_in:
    #   - sls: jenkins-safe-restart
{% endfor %}
# Install custom plugins in the custom_plugins list
{% for cust_plugin, cust_plugin_url in custom_plugins.iteritems() %}
# manually downloading the plugin, because jenkins-cli.jar doesn't seem to work direct to artifactory URLs.
download-plugin-{{ cust_plugin }}:
  cmd.run:
    - name: curl -o {{ cust_plugin }}.jpi -O "https://{{ repo_username }}:{{ repo_password }}@{{ cust_plugin_url }}"
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ cust_plugin }}"
    - cwd: /tmp
    - user: jenkins
    - require:
      - pkg: jenkins
      - service: tomcat

# installing the plugin ( REQUIRES TOMCAT RESTART AFTER )
custom-plugin-{{ cust_plugin }}:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin /tmp/{{ cust_plugin }}.jpi
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ cust_plugin }}"
    - cwd: /var/lib/tomcat/webapps/ROOT/WEB-INF/
    - user: jenkins
    - require:
      - pkg: jenkins
      - service: tomcat
{% endfor %}
You won't be able to achieve this without using reactors and beacons, and especially not without writing your own Python execution modules.
Jenkins Master gets Deployed
Write a jenkins execution module in python with a function install(...):. In that function you would manage any dependencies by either calling existing execution modules or by writing them yourself.
We can unit.test the deployment as it proceeds
Inside the install function of the jenkins module you would fire specific events depending on the results of the install.
if not _run_deployment_phase(...):
    __salt__['event.send']('jenkins/install/error', {
        'finished': False,
        'message': "Something failed during the deployment!",
    })
You would map that event to reactor sls files and handle it.
We only restart tomcat when necessary
Write a tomcat module. Add an _is_up(...) function where you would check if tomcat is up by parsing the tomcat logs for the result. Call the function inside a state module and add a mod_watch function.
def mod_watch(name, **kwargs):
    # mod_watch receives the watching state's name and arguments
    # required dict to return
    return_dict = {
        "name": "Tomcat install",
        "changes": {},
        "result": False,
        "comment": "",
    }

    if __salt__["tomcat._is_up"]():
        return_dict["result"] = True
        return_dict["comment"] = "Tomcat is up."

    if __opts__["test"]:
        return_dict["result"] = None
        return_dict["comment"] = "comment here about what will change"
        return return_dict

    # execute changes now
    return return_dict
Use your state module inside a state file.
install tomcat:
  tomcat.install:
    - name: ...
    - user: ...
    ...

wait until tomcat is up:
  cmd.run:
    - name: ...
    - watch:
      - tomcat: install tomcat
We can update plugins on a per package basis
Add a function to your jenkins execution module named install_plugin. View pkg.install code to replicate interface.
A big emphasis on good clean intuitively clear salt configs
Write python execution modules for easy and maintainable configuration logic. Use that execution module inside your own state modules. Inside state files call your own state modules and supply individual configuration with any state renderer you like.
States only execute once, by design. If you need the same action to occur multiple times, you need multiple states. Also, includes are only included a single time.
Rather than all of this include/require stuff you're doing, you should just put all of the code into a single sls file, and generate states through jinja iteration.
If what you're trying to do is add a bunch of plugins, add config files, then at the end do restarts, then you should really just execute everything in order, don't use require, and use listen or listen_in, rather than watch or watch_in.
listen/listen_in cause triggered actions to happen at the end of a state run. They are similar to the concept of handlers in Ansible.
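A rough sketch of what that could look like, reusing the plugin loop from the question (state ids are illustrative, not a drop-in config): every plugin install that reports changes queues a single tomcat restart at the end of the run via listen_in.
tomcat:
  service.running:
    - name: tomcat
    - enable: True

{% for plugin in jenkins_plugins %}
jenkins-plugin-{{ plugin }}:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin "{{ plugin }}"
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ plugin }}"
    - listen_in:
      - service: tomcat
{% endfor %}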
This is a pretty old question, but if you change your Jenkins/tomcat start/stop procedure to be a standard init/systemd/windows service (as all well-behaved services should be), you could have a service.running state for the Jenkins service and add this to each of your custom-plugin-{{ cust_plugin }} states:
- require_in:
  - service: jenkins
- watch_in:
  - service: jenkins
You could continue to use the cmd.run module with onchanges. You'd have to add onchanges_in: to each of the custom-plugin-{{ cust_plugin }} states, but you need to have at least one item in the onchanges list or the command will fire every time the state runs.
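A sketch of that variant (the restart-tomcat state id and command are hypothetical): onchanges_in on each plugin state injects an onchanges requisite into the restart command, so it only fires when at least one plugin install actually reports changes.
custom-plugin-{{ cust_plugin }}:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin /tmp/{{ cust_plugin }}.jpi
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ cust_plugin }}"
    - onchanges_in:
      - cmd: restart-tomcat

restart-tomcat:
  cmd.run:
    - name: systemctl restart tomcat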
If you use require you cause salt to re-order your states. If you want your states to run in order, just write them in the order you want them to run in.
Watch/watch_in will also re-order your states. If you use listen/listen_in instead, it'll queue the triggered actions to run in the order they were triggered at the end of the state run.
See:
http://ryandlane.com/blog/2014/07/14/truly-ordered-execution-using-saltstack/
http://ryandlane.com/blog/2015/01/06/truly-ordered-execution-using-saltstack-part-2/
