How to get current role name in an ansible task - global-variables

How can I get the current role name in an ansible task yaml file?
I would like to do something like this:
---
# role/some-role-name/tasks/main.yml
- name: Create a directory which is called like the current role name
  file:
    path: "/tmp/{{ role_name }}"
    mode: "0755"
    state: directory
The result of this task should be a directory /tmp/some-role-name on the server

The simplest way is to just use the following
{{role_path|basename}}

As of Ansible 2.2:
{{role_name}}
As of Ansible 2.1:
{{role_path|basename}}
Older versions:
There is no way to do this in these older versions of Ansible; here are a couple of options that might work for you instead:
1) Use set_fact to set a role_name var to the name of the role as the first task in your tasks/main.yml file:
- set_fact: role_name=some-role-name
2) Pass a parameter to your role that has the name (a usage sketch follows the example below):
roles:
  - role: some-role-name
    role_name: some-role-name
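If you go with option 2, inside the role's tasks the parameter can then be used like any other variable; a minimal sketch (the target path is just for illustration):
# roles/some-role-name/tasks/main.yml
- name: Create a directory named after the role
  file:
    path: "/tmp/{{ role_name }}"
    state: directory
    mode: "0755"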

See this post:
To get the role directory:
role_dir: "{{ lookup('pipe', 'pwd') | dirname }}"
To get the role name:
role_name: "{{ lookup('pipe', 'pwd') | dirname | basename }}"

As of Ansible 2.8 there is ansible_play_name which contains the name of the currently executed play.
https://github.com/ansible/ansible/pull/48562
https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html
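For a quick look at what these special variables contain on a given host, a debug task like this can help (a sketch; assumes Ansible 2.8+ for ansible_play_name, and role_name is only set inside a role):
- name: Show play and role context
  debug:
    msg: "play={{ ansible_play_name }} role={{ role_name | default('n/a') }}"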

Related

Ansible - check mode with file module and dependent steps

In my ansible playbooks, I often have steps like "create a directory and then do something in it", e.g.:
- name: Create directory
  file:
    path: "{{ tomcat_directory }}"
    state: directory

- name: Extract tomcat
  unarchive:
    src: 'tomcat.tar.gz'
    dest: '{{ tomcat_directory }}'
When I run this playbook, it works perfectly fine. However, when I run this playbook in check mode, the first step succeeds (folder would have been created), but the second one fails, because the folder does not exist.
Is there any way to write steps like these, where I create a folder and then operate in it, while still being able to run the playbook in check mode (without skipping such steps)?
Check mode can be a bit of a pain. You only really have two options:
1) Add conditionals to tasks to skip them in check mode, which you don't want to do. For reference though:
when: not ansible_check_mode
2) You can change the behaviour of the task in check mode. If you set check_mode: no on a task, it will behave in check mode as it would in a normal run; that is, despite you specifying check mode, it will actually perform the task and create the dir if it does not already exist. You have to decide whether you are happy for a given task to run for real in check mode, so it tends to only be appropriate for low-risk tasks, but it does give you a route to keep testing the rest of a playbook that depends on the step in question.
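A minimal sketch of option 2, reusing the tomcat_directory layout from the question; only the directory task is forced to run for real, the rest of the play stays in check mode:
- name: Create directory
  file:
    path: "{{ tomcat_directory }}"
    state: directory
  check_mode: no   # runs for real even under --check

- name: Extract tomcat
  unarchive:
    src: 'tomcat.tar.gz'
    dest: '{{ tomcat_directory }}'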
Ansible Check Mode Docs
You could make use of the ignore_errors task option, along with the ansible_check_mode variable, to ignore errors with your Extract tomcat task only when running in check mode, e.g.:
- name: Create directory
  file:
    path: "{{ tomcat_directory }}"
    state: directory

- name: Extract tomcat
  unarchive:
    src: 'tomcat.tar.gz'
    dest: '{{ tomcat_directory }}'
  ignore_errors: "{{ ansible_check_mode }}"
Running this in check mode will show the Extract tomcat task failed due to dest not existing. However, instead of failing the playbook, the task failure will be marked as ignored and playbook execution will continue.
Another option would be to register the result of the first task and test when: result.state is defined:
- name: Create directory
  file:
    path: "{{ tomcat_directory }}"
    state: directory
  register: result

- name: Extract tomcat
  unarchive:
    src: 'tomcat.tar.gz'
    dest: '{{ tomcat_directory }}'
  when: result.state is defined

Ansible: Add Unix group to user only if the group exists

I'm using Ansible to add a user to a variety of servers. Some of the servers have different UNIX groups defined. I'd like to find a way for Ansible to check for the existence of a group that I specify, and if that group exists, add it to a user's secondary groups list (but ignore the statement if the group does not exist).
Any thoughts on how I might do this with Ansible?
Here is my starting point.
Command
ansible-playbook -i 'localhost,' -c local ansible_user.yml
ansible_user.yml
---
- hosts: all
  user: root
  become: yes
  vars:
    password: "!"
    user: testa
  tasks:
    - name: add user
      user: name="{{ user }}" state=present password="{{ password }}" shell=/bin/bash append=yes comment="test User"
Update: based on the solution suggested by @udondan, I was able to get this working with the following additional tasks.
- name: Check which groups exist
  shell: /usr/bin/getent group | awk -F":" '{print $1}'
  register: etc_groups

- name: Add secondary groups to user
  user: name="{{ user }}" groups="{{ item }}" append=yes
  when: item in etc_groups.stdout_lines
  with_items:
    - sudo
    - wheel
The getent module can be used to read /etc/group:
- name: Determine available groups
  getent:
    database: group

- name: Add additional groups to user
  user: name="{{ user }}" groups="{{ item }}" append=yes
  when: item in ansible_facts.getent_group
  with_items:
    - sudo
    - wheel
Do you have anything to identify those different host types?
If not, you first need to check which groups exist on that host. You can do this with the command getent group | cut -d: -f1 which will output one group per line.
You can use this as separate task like so:
- shell: getent group | cut -d: -f1
  register: unix_groups
The registered result can then be used later when you want to add the user to the group:
- user: ...
  when: "'some_group' in unix_groups.stdout_lines"
This is how I'm dealing with this in my playbooks. The idea is simple - to take a list of existing groups and find an intersection between groups a user wants to be member of and groups that exist:
- name: Get Existing Groups
  getent:
    database: group

- name: Configure Users
  user:
    name: username
    groups: "{{ ['wheel', 'docker', 'video'] | intersect(ansible_facts['getent_group'] | list) }}"
The getent module exposes the getent_group Ansible fact, containing all existing groups with their details. Piping it through the | list filter gives a plain list of group names. The intersect filter then finds what is common between the two lists.
One of the advantages of this solution is that I don't have to use the append parameter of the user module. This way the user is also correctly removed from groups I no longer want it to be a member of.
I had a similar requirement and did the following (ansible [core 2.12.2]):
---
- hosts: all
  become: yes
  become_user: root
  tasks:
    - ansible.builtin.user:
        name: test1
        password: "&&**%%^^"
        uid: 1234
        shell: /bin/bash

    - ansible.builtin.shell: "cat /etc/group | grep -o sysadmin"
      register: output

    # you can omit the debug part
    - debug:
        var: output

    - name: assign user the group
      ansible.builtin.shell: "usermod -G sysadmin test1"
      when: "'sysadmin' in output.stdout_lines"
Update: thanks for the suggestion @Jeter-work.
- hosts: localhost
  tasks:
    - name: Get all groups
      ansible.builtin.getent:
        database: group
        split: ':'

    - debug:
        var: ansible_facts.getent_group

Ansible - supply multiple ansible_become_pass=MYROOTPASSWORD

I have 4 VMs which all have different SSH users.
In order to use Ansible to manipulate the VMs, I set my file /etc/ansible/hosts to this:
someserver1 ansible_ssh_host=123.123.123.121 ansible_ssh_port=222 ansible_ssh_user=someuser1 ansible_ssh_pass=somepass1
someserver2 ansible_ssh_host=123.123.123.122 ansible_ssh_port=22 ansible_ssh_user=someuser2 ansible_ssh_pass=somepass2
someserver3 ansible_ssh_host=123.123.123.123 ansible_ssh_port=222 ansible_ssh_user=someuser3 ansible_ssh_pass=somepass3
someserver4 ansible_ssh_host=123.123.123.124 ansible_ssh_port=222 ansible_ssh_user=someuser4 ansible_ssh_pass=somepass4
Let's say I have this playbook, which only does an ls inside the /root folder:
- name: root access test
  hosts: all
  tasks:
    - name: ls the root folder on my VMs
      become: yes
      become_user: root
      become_method: su
      command: chdir=/root ls -all
Using the call ansible-playbook -v my-playbook.yml --extra-vars='ansible_become_pass=xxx-my-secret-root-password-for-someserver1' I can become root on one of my machines, but not on all of them.
How is it possible to supply somepass2, somepass3 and somepass4?
Why not just define ansible_become_pass as an in-line host variable in the inventory like you already have done with the SSH password? So your inventory would now look like this:
someserver1 ansible_ssh_host=123.123.123.121 ansible_ssh_port=222 ansible_ssh_user=someuser1 ansible_ssh_pass=somepass1 ansible_become_pass=somesudopass1
someserver2 ansible_ssh_host=123.123.123.122 ansible_ssh_port=22 ansible_ssh_user=someuser2 ansible_ssh_pass=somepass2 ansible_become_pass=somesudopass2
someserver3 ansible_ssh_host=123.123.123.123 ansible_ssh_port=222 ansible_ssh_user=someuser3 ansible_ssh_pass=somepass3 ansible_become_pass=somesudopass3
someserver4 ansible_ssh_host=123.123.123.124 ansible_ssh_port=222 ansible_ssh_user=someuser4 ansible_ssh_pass=somepass4 ansible_become_pass=somesudopass4
Or, if your login password and sudo password are the same then simply add:
ansible_become_pass='{{ ansible_ssh_pass }}'
Either to an all group_vars file or in an in-line group vars block in the inventory file like this:
[all:vars]
ansible_become_pass='{{ ansible_ssh_pass }}'

In SaltStack, how do I conditionally and iteratively (Jinja) apply an included state

This may seem at first to be pretty simple. But I can tell you I've been wracking my brains for a couple days on this. I've read a lot of docs, sat on IRC with folks, and spoken to colleagues and at this point I don't have an answer I really think holds up.
I've looked into a few possible approaches
reactor
orchestration runner
I don't like these two because of the top down execution necessity... they seem tailored to orchestrating multiple node states, not workflows in a single node.
custom states
This is kind of something I would REALLY like to avoid, as this is a repeated workflow, and I don't want to build customizations like this. There's too much room for illegibility if I go down this path with my teammates.
requires / watches
These don't have a concept (that I am aware of) of applying a state repeatedly, or in a logical order / workflow.
And a few others I won't mention.
Without further discussion, here's my dilemma.
Goals:
Jenkins Master gets Deployed
We can unit.test the deployment as it proceeds
We only restart tomcat when necessary
We can update plugins on a per package basis
A big emphasis on good clean intuitively clear salt configs
Jenkins deployment is pretty straight forward. We drop in the packages, and the configs, and we're set.
Unit testing is harder. As an example I've got this state file.
actions/version.sls:
# Hits the jenkins CLI interface to check for version info.
# This can be used to verify that jenkins is active and is the version we want.

# Import some info
{%- from 'jenkins/init.sls' import jenkins_home with context %}

jenkins_version:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" version
    - cwd: /var/lib/tomcat/webapps/ROOT/WEB-INF/
    - user: jenkins
actions.version basically verifies that jenkins is running and queryable. We want to be sure of this during the build at several points.
For example, tomcat takes time to spin up, so we had to add a delay to that restart operation. If you check out start.sls below you can see that operation occurring. Note the open bug on init_delay.
actions/start.sls:
# Starts the tomcat service
tomcat_start:
  service.running:
    - name: tomcat
    - enable: True
    - full_restart: True
    # Not functional atm, see --> https://github.com/saltstack/salt/issues/20631
    # - init_delay: 120
    # initiate a 120 second delay after any service start to let tomcat come up.

tomcat_wait:
  module.run:
    - name: test.sleep
    - length: 60

include:
  - jenkins.actions.version
Now we have this restart capability by doing an actions.stop and an actions.start. We have this actions.version state that we can use to verify that the system is ready to proceed with jenkins specific state workflows.
I want to do something kinda like this...
Install Jenkins --> Grab yaml of plugins --> install plugins that need it
Pretty straight forward.
Except, to loop through the yaml of plugins I am using Jinja.
And now I have no way to call and be sure that the start.sls and version.sls states can be repeatedly applied.
I am looking for, a good way to do that.
This would be something akin to a jenkins.sls
{% set repo_username = "foo" -%}
{% set repo_password = "bar" -%}

include:
  - jenkins.actions.version
  - jenkins.actions.stop
  - jenkins.actions.start

# Install Jenkins
jenkins:
  pkg:
    - installed

# Import Jenkins Plugins as List, and Working Path
{%- from 'jenkins/init.sls' import jenkins_home with context %}
{%- import_yaml "jenkins/plugins.sls" as jenkins_plugins %}
{%- import_yaml "jenkins/custom-plugins.sls" as custom_plugins %}

# Grab updated package list
jenkins-contact-update-server:
  cmd.run:
    - name: curl -L http://updates.jenkins-ci.org/update-center.json | sed '1d;$d' > {{ jenkins_home }}/updates/default.json
    - unless: test -d {{ jenkins_home }}/updates/default.json
    - require:
      - pkg: jenkins
      - service: tomcat

# Install plugins in jenkins_plugins list
{% for plugin in jenkins_plugins %}
jenkins-plugin-{{ plugin }}:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin "{{ plugin }}"
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ plugin }}"
    - cwd: /var/lib/tomcat/webapps/ROOT/WEB-INF/
    - user: jenkins
    - require:
      - pkg: jenkins
      - service: tomcat
Here is where I am stuck. require won't do this, and lists of actions don't seem to schedule linearly in salt. I need to be able to just verify that jenkins is up and ready. I need to be able to restart tomcat after a single plugin in the iteration is added. I need to be able to do this to satisfy dependencies in the plugin order.
- sls: jenkins.actions.version
- sls: jenkins.actions.stop
- sls: jenkins.actions.start
# This can't work for several reasons
# - watch_in:
# - sls: jenkins-safe-restart
{% endfor %}
# Install custom plugins in the custom_plugins list
{% for cust_plugin, cust_plugin_url in custom_plugins.iteritems() %}
# Manually downloading the plugin, because jenkins-cli.jar doesn't seem to work direct to artifactory URLs.
download-plugin-{{ cust_plugin }}:
  cmd.run:
    - name: curl -o {{ cust_plugin }}.jpi -O "https://{{ repo_username }}:{{ repo_password }}@{{ cust_plugin_url }}"
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ cust_plugin }}"
    - cwd: /tmp
    - user: jenkins
    - require:
      - pkg: jenkins
      - service: tomcat

# Installing the plugin ( REQUIRES TOMCAT RESTART AFTER )
custom-plugin-{{ cust_plugin }}:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin /tmp/{{ cust_plugin }}.jpi
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ cust_plugin }}"
    - cwd: /var/lib/tomcat/webapps/ROOT/WEB-INF/
    - user: jenkins
    - require:
      - pkg: jenkins
      - service: tomcat
{% endfor %}
You won't be able to achieve this without using reactors and beacons, and especially not without writing your own python execution modules.
Jenkins Master gets Deployed
Write a jenkins execution module in python with a function install(...):. In that function you would manage any dependencies by either calling existing execution modules or by writing them yourself.
We can unit.test the deployment as it proceeds
Inside the install function of the jenkins module you would fire specific events depending on the results of the install.
if not _run_deployment_phase(...):
    __salt__['event.send']('jenkins/install/error', {
        'finished': False,
        'message': "Something failed during the deployment!",
    })
You would map that event to reactor sls files and handle it.
We only restart tomcat when necessary
Write a tomcat module. Add an _is_up(...) function where you would check if tomcat is up by parsing the tomcat logs for the result. Call the function inside a state module and add a mod_watch function.
def mod_watch(name, **kwargs):
    # required dict to return
    return_dict = {
        "name": "Tomcat install",
        "changes": {},
        "result": False,
        "comment": "",
    }
    if __salt__["tomcat._is_up"]():
        return_dict["result"] = True
        return_dict["comment"] = "Tomcat is up."

    if __opts__["test"]:
        return_dict["result"] = None
        return_dict["comment"] = "comment here about what will change"
        return return_dict

    # execute changes now
    return return_dict
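The _is_up helper itself is left open by the answer; a minimal sketch might look like the following (the log path and marker string are assumptions). Note that Salt's loader treats underscore-prefixed functions as private, so a public name is needed if the check is to be reached as __salt__['tomcat.is_up']():
# _modules/tomcat.py (sketch)
import os

CATALINA_LOG = '/var/log/tomcat/catalina.out'  # adjust to the real install


def _is_up(log_file=CATALINA_LOG):
    '''Return True if the tomcat log contains a server-startup marker.'''
    if not os.path.exists(log_file):
        return False
    with open(log_file) as handle:
        return 'Server startup in' in handle.read()


def is_up(log_file=CATALINA_LOG):
    '''Public wrapper so the check can be called through __salt__.'''
    return _is_up(log_file)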
Use your state module inside a state file.
install tomcat:
  tomcat.install:
    - name: ...
    - user: ...
    ...

wait until tomcat is up:
  cmd.run:
    - name: ...
    - watch:
      - tomcat: install tomcat
We can update plugins on a per package basis
Add a function to your jenkins execution module named install_plugin. View pkg.install code to replicate interface.
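The answer leaves the implementation open; a rough sketch of such a function, mirroring pkg.install's changes-dict return shape (the CLI jar path, URL and user are assumptions):
# in your custom jenkins execution module (sketch)
def install_plugin(name,
                   url='http://127.0.0.1:8080',
                   cli_jar='/var/lib/tomcat/webapps/ROOT/WEB-INF/jenkins-cli.jar',
                   runas='jenkins'):
    '''Install one Jenkins plugin via the CLI and return a pkg.install-style changes dict.'''
    listed = __salt__['cmd.run'](
        'java -jar {0} -s {1} list-plugins'.format(cli_jar, url), runas=runas)
    if name in listed:
        return {}  # already present, nothing changed

    __salt__['cmd.run'](
        'java -jar {0} -s {1} install-plugin {2}'.format(cli_jar, url, name),
        runas=runas)
    return {name: {'old': '', 'new': 'installed'}}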
A big emphasis on good clean intuitively clear salt configs
Write python execution modules for easy and maintainable configuration logic. Use that execution module inside your own state modules. Inside state files call your own state modules and supply individual configuration with any state renderer you like.
States only execute once, by design. If you need the same action to occur multiple times, you need multiple states. Also, includes are only included a single time.
Rather than all of this include/require stuff you're doing, you should just put all of the code into a single sls file, and generate states through jinja iteration.
If what you're trying to do is add a bunch of plugins, add config files, then at the end do restarts, then you should really just execute everything in order, don't use require, and use listen or listen_in, rather than watch or watch_in.
listen/listen_in cause triggered actions to happen at the end of a state run. They are similar to the concept of handlers in Ansible.
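Applied to the plugin loop from the question, that would look roughly like this (a sketch, not the exact playbook): every plugin state queues a single restart that runs once at the end of the state run, and only if something actually changed:
tomcat_restart:
  service.running:
    - name: tomcat

{% for plugin in jenkins_plugins %}
jenkins-plugin-{{ plugin }}:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin "{{ plugin }}"
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ plugin }}"
    - listen_in:
      - service: tomcat_restart
{% endfor %}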
This is a pretty old question, but if you change your Jenkins/tomcat start/stop procedure to be a standard init/systemd/windows service (as all well-behaved services should be), you could have a service.running state for the Jenkins service and add this to each of your custom-plugin-{{ cust_plugin }} states:
    - require_in:
      - service: jenkins
    - watch_in:
      - service: jenkins
You could continue to use the cmd.run module with onchanges. You'd have to add onchanges_in: to each of the custom-plugin-{{ cust_plugin }} states, but you need at least one item in the onchanges list or the command will fire every time the state runs.
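For example (a sketch), each plugin state could feed an onchanges_in into a single restart command, so the restart only fires when at least one plugin was actually installed:
tomcat_restart_cmd:
  cmd.run:
    - name: systemctl restart tomcat

custom-plugin-{{ cust_plugin }}:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin /tmp/{{ cust_plugin }}.jpi
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ cust_plugin }}"
    - onchanges_in:
      - cmd: tomcat_restart_cmd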
If you use require you cause salt to re-order your states. If you want your states to run in order, just write them in the order you want them to run in.
Watch/watch_in will also re-order your states. If you use listen/listen_in instead, it'll queue the triggered actions to run in the order they were triggered at the end of the state run.
See:
http://ryandlane.com/blog/2014/07/14/truly-ordered-execution-using-saltstack/
http://ryandlane.com/blog/2015/01/06/truly-ordered-execution-using-saltstack-part-2/

Ansible 1.6 include with_items deprecated

So it looks like this feature has been deprecated. I really don't understand why; Ansible's CTO says that we should use with_nested instead, but honestly I have no idea how to do it.
Here's my playbook:
- hosts: all
  user: root
  vars:
    sites:
      - site: site1.com
        repo: ssh://hg@bitbucket.org/orgname/reponame
        nginx_ssl: true
        copy_init:
          - path1/file1.txt
          - path2/file2.php
          - path2/file3.php
      - site: site2.net
        repo: ssh://hg@bitbucket.org/orgname/reposite2
      - site: site4.com
        repo: ssh://hg@bitbucket.org/orgname/reposite3
        copy_init:
          - path2/file2.php
  tasks:
    - name: Bootstrap Sites
      include: bootstrap_site.yml site={{ item }}
      with_items: sites
And the error message when trying to execute this in Ansible 1.6.6:
ERROR: [DEPRECATED]: include + with_items is a removed deprecated feature. Please update your playbooks.
How can I convert this playbook to something that works with this ansible version?
There's no drop-in replacement, unfortunately. Some things you can do:
Pass the list to your included file and iterate there. In your playbook:
vars:
  sites:
    - site1
    - site2
tasks:
  - include: bootstrap_site.yml sites={{ sites }}
And in bootstrap_site.yml:
- some_task: ...
  with_items: sites

- another_task: ...
  with_items: sites
...
Rewrite bootstrap_site as a module (in python, bash, whatever), put it in a library dir next to your playbook. Then you could do:
- bootstrap_site: site={{ item }}
  with_items: sites
Update: Ansible V2 is out and brings back the include + with_items combo loop!
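With current Ansible the same loop can be written with include_tasks (a sketch, assuming the sites variable from the question; loop_var makes each item available as site inside the included file):
- name: Bootstrap Sites
  include_tasks: bootstrap_site.yml
  loop: "{{ sites }}"
  loop_control:
    loop_var: site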
I found a way to circumvent the deprecation message, as asked in the original post.
I added a file vars/filenames.yml:
filenames:
  - file1
  - file2
  - file3
Next I read these names at the beginning of the playbook:
- name: read filenames
  include_vars: vars/filenames.yml
Then, I can use them later:
- name: Copy files 1
  copy: src=/filesrc1/{{ item }} dest=/filedest1/{{ item }} owner=me group=we
  with_items: filenames
and so on....
Regards,
Tom
