keys method for ansible - dictionary

I'm looking for a simple way to create a list of the keys of a dictionary, as one does in Python with the keys() method, but this doesn't seem to exist in Ansible. For example, below is the output of the setup module on a host.
"ansible_lvm": {
    ...
    "pvs": {
        "/dev/sda2": {
            "free_g": "0",
            "size_g": "8.81",
            "vg": "rhel"
        },
        "/dev/sdb1": {
            "free_g": "1.50",
            "size_g": "3.00",
            "vg": "bob"
        }
    },
The result I would like to achieve is to have the devices (/dev/sda2, /dev/sdb1, ...) in a list, so that I can easily test whether a device is in the list of pvs, or perform other operations on the list of keys. I know there are workarounds for about every case I have come up with, but I feel like I'm missing something, since this seems rather fundamental.

Simply transform your dict to a list and you will get its keys as in the following example:
---
- name: List of keys demo
  hosts: localhost
  gather_facts: false
  vars:
    pvs: {
      "/dev/sda2": {
        "free_g": "0",
        "size_g": "8.81",
        "vg": "rhel"
      },
      "/dev/sdb1": {
        "free_g": "1.50",
        "size_g": "3.00",
        "vg": "bob"
      }
    }
  tasks:
    - name: "List keys"
      debug:
        msg: "{{ pvs | list }}"
which gives
PLAY [List of keys demo] ************************************************

TASK [List keys] ********************************************************
ok: [localhost] => {
    "msg": [
        "/dev/sda2",
        "/dev/sdb1"
    ]
}

PLAY RECAP **************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
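Under the hood this works because a Jinja2 dict is a Python dict, and applying list() to a dict yields its keys, which is exactly what the `| list` filter does. A plain-Python sketch of the same semantics, using the pvs data above:

```python
# A Jinja2 dict is a Python dict: list() over it yields the keys,
# mirroring what the `| list` filter computes.
pvs = {
    "/dev/sda2": {"free_g": "0", "size_g": "8.81", "vg": "rhel"},
    "/dev/sdb1": {"free_g": "1.50", "size_g": "3.00", "vg": "bob"},
}

keys = list(pvs)
print(keys)                # ['/dev/sda2', '/dev/sdb1']
print("/dev/sda2" in pvs)  # membership tests work directly on the dict: True
```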


The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'data'

I have a simple playbook that fetches some data from a Vault server using curl.
tasks:
  - name: role_id
    shell: 'curl \
      --header "X-Vault-Token: s.ddDblh8DpHkOu3IMGbwrM6Je" \
      --cacert vault-ssl-cert.chained \
      https://active.vault.service.consul:8200/v1/auth/approle/role/cpanel/role-id'
    register: 'vault_role_id'
  - name: test1
    debug:
      msg: "{{ vault_role_id.stdout }}"
The output is like this:
TASK [test1] ************************************************************
ok: [localhost] => {
    "msg": {
        "auth": null,
        "data": {
            "role_id": "65d02c93-689c-eab1-31ca-9efb1c3e090e"
        },
        "lease_duration": 0,
        "lease_id": "",
        "renewable": false,
        "request_id": "8bc03205-dcc2-e388-57ff-cdcaef84ef69",
        "warnings": null,
        "wrap_info": null
    }
}
Everything is OK if I access a first-level attribute, like .stdout in the previous example. But I need to reach a deeper-level attribute, like vault_role_id.stdout.data.role_id. When I try this, it fails with the following error:
"The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'data'"
Do you have a suggestion for how I can properly get attribute values from deeper levels of this object hierarchy?
"The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'data'"
Yes. What's happening is that rendering it into msg: with {{ coerces the JSON text into a Python dict. If you do want it to be a dict, then either use msg: "{{ (vault_role_id.stdout | from_json).data.role_id }}", or use set_fact: { vault_role_data: "{{ vault_role_id.stdout }}" }; vault_role_data will then be a dict, for the same reason it was coerced in your msg.
You can see the opposite process by prefixing the msg with any characters:
- name: this one is text
  debug:
    msg: vault_role_id is {{ vault_role_id.stdout }}
- name: this one is coerced
  debug:
    msg: '{{ vault_role_id.stdout }}'
While this isn't what you asked, you should also add --fail to your curl so it exits with a non-zero return code if the request returns a non-200 response; or use the more Ansible-native uri module with the return_content: yes parameter.
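The coercion described above is essentially what json.loads does in Python: until the JSON text is parsed, there is no .data attribute to access. A minimal sketch (the response string here is a made-up example):

```python
import json

# Hypothetical Vault response body, as curl would print it to stdout
stdout = '{"auth": null, "data": {"role_id": "65d02c93-689c-eab1-31ca-9efb1c3e090e"}}'

# As plain text there is no "data" to reach into -- the source of the
# "has no attribute 'data'" error above.
parsed = json.loads(stdout)         # what the from_json filter does
role_id = parsed["data"]["role_id"]
print(role_id)
```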

Combine nested dictionaries in ansible

I have 2 different dictionaries that contain application information I need to join together.
landscape_dictionary:
{
    "app_1": {
        "Category": "application",
        "SolutionID": "194833",
        "Availability": null,
        "Environment": "stage",
        "Vendor/Manufacturer": null
    },
    "app_2": false
}
app_info_dictionary:
{
    "app_1": {
        "app_id": "6886817",
        "owner": "owner1@nomail.com",
        "prod": [
            "server1"
        ],
        "stage": []
    },
    "app_2": {
        "app_id": "3415012",
        "owner": "owner2@nomail.com",
        "prod": [
            "server2"
        ],
        "stage": [
            "server3"
        ]
    }
}
This is the code I'm using to join both dictionaries:
- set_fact:
    uber_dict: "{{ app_info_dictionary }}"
- set_fact:
    uber_dict: "{{ uber_dict | default({}) | combine(new_item, recursive=true) }}"
  vars:
    new_item: "{ '{{ item.key }}' : { 'landscape': '{{ landscape_dictionary[item.key] | default(false) }}' } }"
  with_dict: "{{ uber_dict }}"
- debug:
    msg: "{{ item.key }}: {{ item.value }}"
  with_dict: "{{ uber_dict }}"
If the value in the landscape_dictionary is false it will add it to the uber_dict without problems. But if the value contains information, it fails.
This is the error:
fatal: [127.0.0.1]: FAILED! => {"msg": "|combine expects dictionaries, got u\"{ 'app_1' : { 'landscape': '{u'Category': u'application', u'SolutionID': u'194820', u'Availability': None, u'Environment': 'stage', u'Vendor/Manufacturer': None}' } }\""}
What could be the problem?
Do I need to do an extra combine when I set the var in the set_fact?
Thanks
As @DustWolf notes in the comments:
For anyone from the Internet looking for the answer to "How to combine nested dictionaries in ansible", the answer is | combine(new_item, recursive=true)
This solves a closely related issue that had baffled my team and me for months.
I will demonstrate:
Code:
---
- hosts: localhost
  gather_facts: false
  vars:
    my_default_values:
      key1: value1
      key2:
        a: 10
        b: 20
    my_custom_values:
      key3: value3
      key2:
        a: 30
    my_values: "{{ my_default_values | combine(my_custom_values) }}"
  tasks:
    - debug: var=my_default_values
    - debug: var=my_values
Output:
ok: [localhost] =>
  my_values:
    key1: value1
    key2:
      a: 30
    key3: value3
Note how key2 was completely replaced, thus losing key2.b
We changed this to:
my_values: "{{ my_default_values | combine(my_custom_values, recursive=true) }}"
Output:
my_values:
  key1: value1
  key2:
    a: 30
    b: 20
  key3: value3
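The difference between the two runs is a shallow merge versus a recursive one. A plain-Python sketch of both behaviors (merge_recursive is an illustrative helper, not Ansible code):

```python
def merge_recursive(base, override):
    """Merge override into base, descending into nested dicts
    (the behavior of combine(..., recursive=true))."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_recursive(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"key1": "value1", "key2": {"a": 10, "b": 20}}
custom = {"key3": "value3", "key2": {"a": 30}}

shallow = {**defaults, **custom}               # key2 replaced wholesale: b is lost
recursive = merge_recursive(defaults, custom)  # key2 merged key-by-key: b survives
print(shallow["key2"])    # {'a': 30}
print(recursive["key2"])  # {'a': 30, 'b': 20}
```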
This syntax is not legal, or at the very least doesn't do what you think:
new_item: "{ '{{item.key}}' : { 'landscape': '{{landscape_dictionary[item.key]|default(false)}}' } }"
Foremost, Ansible will only auto-coerce JSON strings into a dict, but you have used Python syntax.
Secondly, the way to dynamically construct a dict is not to use Jinja2 to build up text, but rather to use the fact that Jinja2 is almost a programming language:
new_item: "{{
    {
      item.key: {
        'landscape': landscape_dictionary[item.key] | default(false)
      }
    }
  }}"
Any time you find yourself with nested Jinja2 interpolation blocks, that is a code smell that you are thinking about the problem too much as text (by nested, I mean {{ something {{nested}} else }}).
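The JSON-versus-Python-syntax point is easy to verify: a JSON parser rejects the single-quoted, Python-repr-style text that the failing new_item template produced, while real JSON parses into an actual dict (a sketch with simplified values):

```python
import json

# Python-repr-style text: single quotes, None -- not valid JSON,
# so Ansible leaves it as a plain string and combine() rejects it
python_style = "{'app_1': {'landscape': None}}"
try:
    json.loads(python_style)
    coerced = True
except json.JSONDecodeError:
    coerced = False
print(coerced)  # False

# Proper JSON parses into a real dict, which is what combine() needs
json_style = '{"app_1": {"landscape": null}}'
print(json.loads(json_style))  # {'app_1': {'landscape': None}}
```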

ansible meta: refresh_inventory does not include previously absent hosts in task execution

Some time ago, somebody suggested using dynamic inventories to generate a different hosts file from a template, depending on a location and other variables, but I faced a pretty big issue:
After I create the inventory from a template, I need to refresh it (I do this using meta: refresh_inventory) for Ansible to execute tasks on newly added hosts. However, if a host was not initially in the hosts file, Ansible does not execute tasks on it. On the other hand, if after changing the hosts file a host is absent from the newly formed file, Ansible omits the host as it should, so refresh_inventory does only half of the work. Is there any way to get around this issue?
E.g. I have one task to generate the hosts file from a template, then refresh the inventory, then run a simple task on all hosts, like showing a message:
tasks:
  - name: Creating inventory template
    local_action:
      module: template
      src: hosts.j2
      dest: "/opt/ansible/inventories/{{ location }}/hosts"
      mode: 0777
      force: yes
      backup: yes
    ignore_errors: yes
    run_once: true
  - name: "Refreshing hosts file for {{ location }} location"
    meta: refresh_inventory
  - name: Force refresh of host errors
    meta: clear_host_errors
  - name: Show message
    debug: msg="This works for this host"
If initial hosts file has hosts A, B, C, D, and the newly created inventory has B, C, D, then all is good:
ok: [B] => {
    "msg": "This works for this host"
}
ok: [C] => {
    "msg": "This works for this host"
}
ok: [D] => {
    "msg": "This works for this host"
}
However, if the newly formed hosts file has hosts B, C, D, E (E not being present in the initial hosts file), then the result is again:
ok: [B] => {
    "msg": "This works for this host"
}
ok: [C] => {
    "msg": "This works for this host"
}
ok: [D] => {
    "msg": "This works for this host"
}
with the task for E missing. Now if I replay the playbook, only to add another host, say F, then the result looks like:
ok: [B] => {
    "msg": "This works for this host"
}
ok: [C] => {
    "msg": "This works for this host"
}
ok: [D] => {
    "msg": "This works for this host"
}
ok: [E] => {
    "msg": "This works for this host"
}
But no F, even though it was added to the inventory file before the refresh.
So, any ideas?
Quoting from Basics
For each play in a playbook, you get to choose which machines in your infrastructure to target ... The hosts line is a list of one or more groups or host patterns ...
For example, it is possible to create the inventory in the 1st play and use it in the 2nd play. The playbook below
- hosts: localhost
  tasks:
    - template:
        src: hosts.j2
        dest: "{{ playbook_dir }}/hosts"
    - meta: refresh_inventory

- hosts: test
  tasks:
    - debug:
        var: inventory_hostname
with the template (fit it to your needs)
$ cat hosts.j2
[test]
test_01
test_02
test_03
[test:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/usr/local/bin/python3.6
ansible_perl_interpreter=/usr/local/bin/perl
give
PLAY [localhost] ****************************************************************************

TASK [Gathering Facts] **********************************************************************
ok: [localhost]

TASK [template] *****************************************************************************
changed: [localhost]

PLAY [test] *********************************************************************************

TASK [Gathering Facts] **********************************************************************
ok: [test_02]
ok: [test_01]
ok: [test_03]

TASK [debug] ********************************************************************************
ok: [test_01] => {
    "inventory_hostname": "test_01"
}
ok: [test_02] => {
    "inventory_hostname": "test_02"
}
ok: [test_03] => {
    "inventory_hostname": "test_03"
}

PLAY RECAP **********************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0
test_01                    : ok=2    changed=0    unreachable=0    failed=0
test_02                    : ok=2    changed=0    unreachable=0    failed=0
test_03                    : ok=2    changed=0    unreachable=0    failed=0
Even though the first answer provided here is correct, I think this deserves an explanation of how refresh_inventory and also add_host behave, as I've seen a few other questions regarding this topic.
It does not matter whether you use a static or a dynamic inventory; the behavior is the same. The only thing specific to dynamic inventories that can change the behavior is caching. The following applies with caching disabled, or with the cache refreshed after adding the new host.
Both refresh_inventory and add_host allow you to execute tasks only in subsequent plays. However, they do allow you to access hostvars of the added hosts in the current play. This behavior is partially and very briefly mentioned in the add_host documentation and is easy to miss:
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook.
Consider the following inventory, called hosts_ini-main.ini:
localhost testvar='testmain'
Now you can write a playbook that will observe and test the behavior of refresh_inventory. It overwrites the hosts_ini-main.ini inventory file (used by the playbook) with the following contents from a second file, hosts_ini-second.ini:
localhost testvar='testmain'
127.0.0.2 testvar='test2'
The playbook prints hostvars before the inventory is changed, then changes the inventory, refreshes it, prints hostvars again, and finally tries to execute a task only on the newly added host.
The second play also tries to execute a task only on the added host.
---
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars
      debug:
        var: hostvars
    - name: Print var for first host
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "testmain"
    - name: Copy alternate hosts file to main hosts file
      copy:
        src: "hosts_ini-second.ini"
        dest: "hosts_ini-main.ini"
    - name: Refresh inventory using meta module
      meta: refresh_inventory
    - name: Print hostvars for the second time in the first play
      debug:
        var: hostvars
    - name: Print var for added host
      debug:
        var: testvar  # This will not execute
      when: hostvars[inventory_hostname]['testvar'] == "test2"

# New play
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars in a different play
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "test2"
Here is the execution (I've truncated parts of the output to make it more readable).
PLAY [all] *******************************************************************************

TASK [Print hostvars] ********************************************************************
ok: [localhost] => {
    "hostvars": {
        "localhost": {
            "ansible_check_mode": false,
            "ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
            "ansible_diff_mode": false,
            "ansible_facts": {},
            ...
            "testvar": "testmain"
        }
    }
}

TASK [Print var for first host] **********************************************************
ok: [localhost] => {
    "testvar": "testmain"
}

TASK [Copy alternate hosts file to main hosts file] **************************************
changed: [localhost]

TASK [Refresh inventory using meta module] ***********************************************

TASK [Print hostvars for the second time in the first play] ******************************
ok: [localhost] => {
    "hostvars": {
        "127.0.0.2": {
            "ansible_check_mode": false,
            "ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
            "ansible_diff_mode": false,
            "ansible_facts": {},
            ...
            "testvar": "test2"
        },
        "localhost": {
            "ansible_check_mode": false,
            "ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
            "ansible_diff_mode": false,
            "ansible_facts": {
                "discovered_interpreter_python": "/usr/bin/python3"
            },
            ...
            "testvar": "testmain"
        }
    }
}

TASK [Print var for added host] **********************************************************
skipping: [localhost]

PLAY [all] *******************************************************************************

TASK [Print hostvars in a different play] ************************************************
skipping: [localhost]
ok: [127.0.0.2] => {
    "testvar": "test2"
}

PLAY RECAP *******************************************************************************
127.0.0.2                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
localhost                  : ok=4    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
As can be seen, hostvars contains information about the newly added host even in the first play, but Ansible is not able to execute a task on that host. When a new play starts, the task is executed on the new host without problems.

ansible changing nested dict variable

After deployment of a VM with a DHCP IP, I would like to get the IP and append it to the guests dictionary.
For the first VM (testvm2) the code performs as expected and updates the tempip variable for the testvm2 VM.
But with the second VM (testvm1), it updates the tempip variable of the first VM (testvm2) with the IP of the second VM (testvm1), and updates the tempip variable of the second VM (testvm1) with the literal string '{{ tempip_reg.stdout_lines }}'.
Can anyone explain to me why this happens?
I would appreciate the help.
I copied all the relevant code and output below:
guests dictionary:
---
guests:
  testvm1:
    mem: 512
    cpus: 1
    clone: template-centos
    vmid: 102
    tempip:
  testvm2:
    mem: 1536
    cpus: 2
    clone: template-centos
    vmid: 102
    tempip:
Ansible Playbook that starts the task:
---
- name: Provision VMs
  hosts: infra
  become: true
  vars_files:
    - group_vars/vms.yml
    - group_vars/vars.yml
  tasks:
    - include_tasks: roles/tasks/provision-tasks.yml
      with_dict: "{{ guests }}"
Ansible Tasks:
- name: Run proxmox-get-ip-linux.sh script to register DHCP IP of VM
  script: proxmox-get-ip-linux.sh "{{ item.key }}"
  register: tempip_reg
- name: temporary IP of VM "{{ item.key }}"
  debug:
    var: tempip_reg
- name: current host in item.key
  set_fact:
    current_host: "{{ item.key }}"
- name: current_host variable set to
  debug:
    var: current_host
- name: append item.value.tempip with the DHCP IP of the VM registered in last task
  set_fact:
    guests: "{{ guests | combine({ current_host: {'tempip': '{{ tempip_reg.stdout_lines }}' }}, recursive=True) }}"
- name: temporary IP of "{{ item.key }}"
  debug: var=guests
Result for the first VM:
"tempip_reg": {
    "stdout": "192.168.1.21\r\n",
    "stdout_lines": [
        "192.168.1.21"
    ]
}
"current_host": "testvm2"
"guests": {
    "testvm1": {
        "clone": "template-centos",
        "cpus": 1,
        "ip": "192.168.1.60",
        "mem": 512,
        "tempip": null,
        "vmid": 102
    },
    "testvm2": {
        "clone": "template-centos",
        "cpus": 2,
        "ip": "192.168.1.61",
        "mem": 1536,
        "tempip": [
            "192.168.1.21"
        ],
        "vmid": 102
    }
}
Result for the second VM:
"tempip_reg": {
    "stdout": "192.168.1.22\r\n",
    "stdout_lines": [
        "192.168.1.22"
    ]
}
"current_host": "testvm1"
"guests": {
    "testvm1": {
        "clone": "template-centos",
        "cpus": 1,
        "ip": "192.168.1.60",
        "mem": 512,
        "tempip": "{{ tempip_reg.stdout_lines }}",
        "vmid": 102
    },
    "testvm2": {
        "clone": "template-centos",
        "cpus": 2,
        "ip": "192.168.1.61",
        "mem": 1536,
        "tempip": [
            "192.168.1.22"
        ],
        "vmid": 102
    }
}
TL;DR
Using Ansible code, you are trying to implement what Ansible already does for you.
Your attempts interfere with the built-in functionality, and you get results that look nondeterministic.
Explanation:
The main problem with your code is a completely unnecessary loop declared with with_dict: "{{ guests }}", which causes the file to be included 4 times.
It runs 4 times because, inside the included tasks file, you change the guests dictionary that the loop iterates over.
In effect you get something that looks like a nondeterministic result.
The second problem is a trivial one: you always replace the value of tempip with the literal string {{ tempip_reg.stdout_lines }}.
Now, because of the unnecessary with_dict loop over a dictionary that you dynamically change, and because Jinja2 uses lazy variable evaluation, strings from previous iterations are interpreted as templates and get evaluated with incorrect values in subsequent iterations.
The last iteration leaves the string {{ tempip_reg.stdout_lines }} intact.
You also define and print two different facts.
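The lazy-evaluation trap can be mimicked in plain Python with str.format: a template stored as text is rendered with whatever value is current at render time, not at store time (an analogy for illustration, not Ansible internals):

```python
# Store the placeholder text instead of the resolved value
guests = {"testvm1": {"tempip": "{stdout_lines}"}}  # literal template text

stdout_lines = ["192.168.1.21"]  # value when the text was stored: never captured
stdout_lines = ["192.168.1.22"]  # value current at render time

# Rendering later resolves the placeholder with the *current* value
rendered = guests["testvm1"]["tempip"].format(stdout_lines=stdout_lines)
print(rendered)  # ['192.168.1.22'] -- the earlier value is gone
```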
What you should do:
You should not declare arbitrary iterations at all. Ansible implements the loop over hosts itself. That is, if you declare a task:
- include_tasks: roles/tasks/provision-tasks.yml
the file will be included for each of the hosts in the infra group (twice in your example).
You seem to want a single copy of your data structure, with updated values for each VM.
At the same time, you create a fact, which is a separate data object maintained for each host separately.
So you should refer to and modify (combine) a single fact; you can do that, for example, on localhost.
You should structure your code like this:
---
- name: Provision VMs
  hosts: infra
  become: true
  vars_files:
    - group_vars/vms.yml
    - group_vars/vars.yml
  tasks:
    - include_tasks: roles/tasks/provision-tasks.yml
    - debug:
        var: hostvars['localhost'].guests
and provision-tasks.yml:
- set_fact:
    guests: "{{ guests | combine({ current_host: {'tempip': tempip_reg.stdout_lines }}, recursive=True) }}"
  delegate_to: localhost
This will get you the following result:
"hostvars['localhost'].guests": {
    "testvm1": {
        "clone": "template-centos",
        "cpus": 1,
        "ip": "192.168.1.60",
        "mem": 512,
        "tempip": [
            "192.168.1.21"
        ],
        "vmid": 102
    },
    "testvm2": {
        "clone": "template-centos",
        "cpus": 2,
        "ip": "192.168.1.61",
        "mem": 1536,
        "tempip": [
            "192.168.1.22"
        ],
        "vmid": 102
    }
}
Finally, in the above play you used the group_vars and roles/tasks directories in the wrong context. I left the paths intact and they will work for the above code, but basically you should never use them this way, because, again, they have special meaning and treatment in Ansible.

How to filter dictionaries in Jinja?

I have a dictionary of packages with package-name being the key and a dictionary of some details being the value:
{
    "php7.1-readline": {
        "latest": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
        "origins": [
            "ppa.launchpad.net"
        ],
        "version": "7.1.6-2~ubuntu14.04.1+deb.sury.org+1",
        "www": "http://www.php.net/"
    },
    "php7.1-xml": {
        "latest": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
        "origins": [
            "ppa.launchpad.net"
        ],
        "version": "7.1.6-2~ubuntu14.04.1+deb.sury.org+1",
        "www": "http://www.php.net/"
    },
    "plymouth": {
        "version": "0.8.8-0ubuntu17.1"
    },
    ...
}
I'd like to reduce the above to a dictionary containing only the packages that have the latest attribute in their values.
It would seem like json_query is the filter to use, but I can't figure out the syntax. The examples out there all seem to operate on lists of dictionaries, not dictionaries of same...
For example, if I "pipe" the above dictionary into json_query('*.latest'), I get the list of the actual latest versions:
[
    "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
    "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
    "7.1.6-2~ubuntu14.04.1+deb.sury.org+1"
]
How can I get the entire dictionary-elements instead?
Any hope?
With the dict2items filter, added in December 2017, this is possible using native functionality:
- debug:
    msg: "{{ dict(pkg | dict2items | json_query('[?value.latest].[key, value.latest]')) }}"
The result:
"msg": {
    "php7.1-readline": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
    "php7.1-xml": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1"
}
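In plain Python, the same filtering is a dict comprehension keyed on the presence of latest; this is the computation the dict2items / json_query pipeline performs (sample data abbreviated):

```python
pkg = {
    "php7.1-readline": {"latest": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
                        "version": "7.1.6-2~ubuntu14.04.1+deb.sury.org+1"},
    "php7.1-xml": {"latest": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
                   "version": "7.1.6-2~ubuntu14.04.1+deb.sury.org+1"},
    "plymouth": {"version": "0.8.8-0ubuntu17.1"},
}

# Keep only packages that carry a "latest" entry, mapping name -> latest version
latest_only = {name: info["latest"] for name, info in pkg.items() if "latest" in info}
print(sorted(latest_only))  # ['php7.1-readline', 'php7.1-xml']
```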
You can't perform this translation (I think) exclusively with Jinja filters, but you can get there by applying a little Ansible logic as well. The following playbook uses a with_dict loop to loop over the items in your dictionary, and builds a new dictionary from matching ones:
- hosts: localhost
  vars:
    packages: {
      "php7.1-readline": {
        "latest": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
        "origins": [
          "ppa.launchpad.net"
        ],
        "version": "7.1.6-2~ubuntu14.04.1+deb.sury.org+1",
        "www": "http://www.php.net/"
      },
      "php7.1-xml": {
        "latest": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
        "origins": [
          "ppa.launchpad.net"
        ],
        "version": "7.1.6-2~ubuntu14.04.1+deb.sury.org+1",
        "www": "http://www.php.net/"
      },
      "plymouth": {
        "version": "0.8.8-0ubuntu17.1"
      }
    }
  tasks:
    - set_fact:
        new_packages: >
          {{ new_packages | default({}) |
             combine({item.key: item.value}) }}
      with_dict: "{{ packages }}"
      when: item.value.latest is defined
    - debug:
        var: new_packages
You are correct to link this question to https://stackoverflow.com/a/41584889/2795592.
There are no options to manipulate keys and values simultaneously with json_query out of the box (as of Ansible 2.4.0).
Here's a patched json_query.py that supports jq-like to_entries/from_entries functions.
You can put it into ./filter_plugins next to your playbook and make this query:
- debug:
    msg: "{{ pkg | json_query('to_entries(@) | [?value.latest].{key: key, value: value.latest} | from_entries(@)') }}"
to get this result:
"msg": {
    "php7.1-readline": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1",
    "php7.1-xml": "7.1.9-1+ubuntu14.04.1+deb.sury.org+1"
}
I'll make a PR to Ansible as soon as I have some spare time.
