I'm trying to generate a list with information about the hosts that match a certain condition (for instance, that NTP is synced, for an inventory of Cisco devices), so that the matching hosts end up in a list with, say, hostname and IP, for later generating a CSV.
Checking the condition is quite easy, but I'm struggling with how to generate this list.
Appending them to a list in a var doesn't seem wise, as it requires serial execution of the tasks, device by device.
Should I set a boolean fact for each device (e.g. ntp_synched) and then generate a list with the ansible_net_hostname and ansible_host of each device? How would I do this?
- name: CHECK NTP STATUS
  ios_command:
    commands:
      - show ntp status
  register: ntp_status

- name: NTP NOT SYNCH
  debug:
    msg: "{{ [ansible_net_hostname] }}"
  when: '"Clock is synchronized" not in ntp_status.stdout[0]'
For example, given the inventory for testing
host01 status="Clock is synchronized"
host02 status="Clock is synchronized"
host03 status="Clock is not synchronized"
Create the dictionary of the hosts and statuses
- hosts: all
  tasks:
    - command: "echo {{ status }}"
      register: ntp_status

    - set_fact:
        host_status: "{{ dict(_hosts|zip(_stats)) }}"
      vars:
        _hosts: "{{ ansible_play_hosts }}"
        _stats: "{{ ansible_play_hosts|
                    map('extract', hostvars, ['ntp_status','stdout'])|list }}"
      run_once: true
gives
host_status:
  host01: Clock is synchronized
  host02: Clock is synchronized
  host03: Clock is not synchronized
List the synchronized hosts
- debug:
    msg: "{{ host_status|dict2items|
             selectattr('value', 'search', 'Clock is synchronized')|
             map(attribute='key')|list }}"
  run_once: true
gives
msg:
  - host01
  - host02
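To close the loop on the original goal (a CSV with hostname and IP of the matching hosts), here is a minimal, untested sketch that builds on host_status. The destination path and the helper var synced_hosts are hypothetical, and it assumes the ansible_net_hostname and ansible_host facts mentioned in the question are available per host:
- name: Write hostname/IP of synchronized hosts to a CSV (sketch)
  vars:
    synced_hosts: "{{ host_status | dict2items
                      | selectattr('value', 'search', 'Clock is synchronized')
                      | map(attribute='key') | list }}"
  copy:
    dest: /tmp/ntp_synced.csv   # hypothetical path
    content: |
      hostname,ip
      {% for host in synced_hosts %}
      {{ hostvars[host].ansible_net_hostname | default(host) }},{{ hostvars[host].ansible_host | default(host) }}
      {% endfor %}
  run_once: true
  delegate_to: localhost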
Related
I'm creating a playbook for an ACL update, where the existing ACL needs to be updated, but before adding the new set of IP addresses to that ACL, I need to make sure that the ACL is present and that the IP hasn't already been configured.
Process:
Need to add the below IP addresses
ACL NAME: 11, 13, DATA_TEST, dummy
Check if the list of ACLs is present
commands: "show access-lists {{item}}"
Check if ACL exists
Q: I can't figure out how to access each item in the result of the first task to see if the ACL has been configured. For example, we can see from the output that dummy returns nothing; how can I exclude it and only process the ACLs that exist? (Refer to the code below.)
Check if IP addresses already added
Q: What is the best approach here? I'm thinking of using when and comparing the ACL output from stdout against the given variables' content (e.g. parents/lines)?
Add the set of IP addresses on target ACL
Q: What is the best approach here? Need to match the ACL name and configure using the variable.
If somebody is knowledgeable about Ansible, perhaps you could assist me in creating this project? I'm still doing some research, so any assistance you can give would be greatly appreciated. Thanks
My Code:
---
- name: Switch SVU
  hosts: Switches
  gather_facts: False
  vars:
    my_acl_list:
      - 11
      - 13
      - DATA_TEST
      - dummy
    fail: "No such access-list {{item}}"
    UP_ACL11:
      parents:
        - access-list 11 permit 192.168.1.4
        - access-list 11 permit 192.168.1.5
    UP_ACL13:
      parents: access-list 13 permit 10.22.1.64 0.0.0.63
    UP_ACLDATA:
      lines:
        - permit 172.11.1.64 0.0.0.63
        - permit 172.12.2.64 0.0.0.63
      parents: ip access-list standard DATA_TEST
  tasks:
    - name: Check if the ACL Name already exists.
      ios_command:
        commands: "show access-lists {{item}}"
      register: acl_result
      loop: "{{my_acl_list}}"

    - debug: msg="{{acl_result}}"

    - name: Check if ACL Exist
      debug:
        msg: "{{item.stdout}}"
      when: item.stdout.exists
      with_items: "{{acl_result.results}}"
      loop_control:
        label: "{{item.item}}"
      # Pending - Need to know how to check whether the ACL name exists in stdout.

    - name: Check if IP addresses already added
      set_fact:
      when:
        # pending - ansible lookup?
        # when the IPs from UP_ACL11, UP_ACL13, UP_ACLDATA are not in the ACL, then TRUE

    - name: Add the set of IP addresses on target ACL
      ios_config:
        # pending - if an entry doesn't exist on the particular ACL, configure it using the vars UP_ACL11, UP_ACL13, UP_ACLDATA
Given the simplified data for testing
acl_result:
  results:
    - item: DATA_TEST
      stdout:
        - "Standard ... 10 permit ... 20 permit ..."
      stdout_lines:
        - - "Standard ..."
          - "10 permit ..."
          - "20 permit ..."
    - item: dummy
      stdout:
        - ""
      stdout_lines:
        - - ""
Q: "Check if ACL Exists"
A: If the ACL doesn't exist, the attribute stdout is a list of empty strings. Test it:
- name: Check if ACL Exists
  debug:
    msg: "{{ item.item }} exists: {{ item.stdout|map('length')|select()|length > 0 }}"
  loop: "{{ acl_result.results }}"
  loop_control:
    label: "{{ item.item }}"
gives
TASK [Check if ACL Exists] ********************************************
ok: [localhost] => (item=DATA_TEST) =>
  msg: 'DATA_TEST exists: True'
ok: [localhost] => (item=dummy) =>
  msg: 'dummy exists: False'
Notes:
In the filter select, "If no test is specified, each object will be evaluated as a boolean". The number 0 evaluates to false.
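To make that concrete, here is a small illustration (untested, with example values) of what the filter chain produces for an empty and a non-empty stdout entry:
- debug:
    msg:
      - "{{ [''] | map('length') | list }}"                          # -> [0]  empty output
      - "{{ [''] | map('length') | select() | list }}"               # -> []   0 is falsy, so it is dropped
      - "{{ ['Standard ...'] | map('length') | select() | list }}"   # -> [12] non-zero length is kept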
Example of a complete playbook for testing
- hosts: localhost
  vars:
    acl_result:
      results:
        - item: DATA_TEST
          stdout:
            - "Standard ... 10 permit ... 20 permit ..."
          stdout_lines:
            - - "Standard ..."
              - "10 permit ..."
              - "20 permit ..."
        - item: dummy
          stdout:
            - ""
          stdout_lines:
            - - ""
  tasks:
    - name: Check if ACL Exists
      debug:
        msg: "{{ item.item }} exists: {{ item.stdout|map('length')|select()|length > 0 }}"
      loop: "{{ acl_result.results }}"
      loop_control:
        label: "{{ item.item }}"
The test can be simplified if you're sure stdout is a list with a single line only
msg: "{{ item.item }} exists: {{ item.stdout|first|length > 0 }}"
'data_list' consists of the values in the csv file. I want to use the values in 'data_list' to loop through the parameters in the 'Create user' section of the playbook, but I am getting this error after running my playbook:
TASK [Create Multiple Users : Create multiple users] ***************************
fatal: [10.16.220.30]: FAILED! => {"reason": "Vars in a Task must be specified as a dictionary, or a list of dictionaries\n\nThe error appears to be in '/runner/project/Windows AD/roles/Create Multiple Users/tasks/Create_multiple_users.yml': line 14, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - \"{{ item.groups }}\"\n vars: data_list\n ^ here\n"}
This is my playbook:
---
- name: Read Users
  hosts: localhost
  vars:
    data_list: []
  tasks:
    - read_csv:
        path: user.csv
        key: name
        fieldnames: name,firstname,surname,displayName,groups
        delimiter: ','
      register: userdata

    - name: Extract the list
      set_fact:
        data_list: "{{ data_list + [{ 'name': item.value.name, 'firstname': item.value.firstname, 'surname': item.value.surname, 'displayName': item.value.displayName, 'groups': item.value.groups }] }}"
      loop: "{{ userdata.dict|dict2items }}"

- name: Create user accounts
  hosts: "{{ hostname }}"
  gather_facts: false
  any_errors_fatal: false
  become: yes
  become_method: runas
  become_user: admin
  roles:
    - { role: Create Multiple Users }
And this is the role task file (roles/Create Multiple Users/tasks/Create_multiple_users.yml):
- name: Create users
  community.windows.win_domain_user:
    name: "{{ item.name }}"
    firstname: "{{ item.firstname }}"
    surname: "{{ item.surname }}"
    attributes:
      displayName: "{{ item.firstname + ' ' + item.surname }}"
    groups:
      - "{{ item.groups }}"
  vars: data_list
  with_items:
    - "{{ data_list }}"
What is the correct vars that I should write?
This is the line causing the error in your task
vars: data_list
As mentioned in your error message, the vars section should look like:
vars:
  var1: value1
  var2: value2
But this is not the only problem in your script above. You are gathering your CSV data in a separate play on localhost and setting that info as a fact in the variable data_list. When your first play is over, that var will only be known for the localhost target. If you want to reuse it in a second play targeting other hosts, you'll have to read it from the hostvars magic variable:
{{ hostvars.localhost.data_list }}
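For example (a sketch only; the single-play rewrite below is the better option), the second play's task could loop over the fact set on localhost:
- name: Create users
  community.windows.win_domain_user:
    name: "{{ item.name }}"
    firstname: "{{ item.firstname }}"
    surname: "{{ item.surname }}"
  with_items: "{{ hostvars['localhost']['data_list'] }}"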
This is not the best approach here, as you can easily shorten your playbook to a single play. The trick is to delegate your CSV-gathering task to localhost and set run_once: true so that the registered var is calculated only once and shared by all hosts with the same value. You can also drop the set_fact, which basically copies the same keys and values into a new var.
Here is an (untested) example playbook to show you the way:
---
- name: Create multiple Windows AD user accounts from CSV
  hosts: "{{ hostname }}"
  gather_facts: false

  tasks:
    - name: Read csv from localhost (single run, same registered var for all hosts)
      read_csv:
        path: user.csv
        key: name
        fieldnames: name,firstname,surname,displayName,groups
        delimiter: ','
      register: userdata
      run_once: true
      delegate_to: localhost

    - name: Create users
      community.windows.win_domain_user:
        name: "{{ item.name }}"
        firstname: "{{ item.firstname }}"
        surname: "{{ item.surname }}"
        attributes:
          displayName: "{{ item.firstname + ' ' + item.surname }}"
        groups:
          - "{{ item.groups }}"
      # This will work on any number of hosts, as `userdata`
      # now has the same value for every host inside this play.
      # We just have to extract the value of each key from
      # `userdata` and loop over that list.
      loop: "{{ userdata.dict | dict2items | map(attribute='value') }}"
In Ansible I've used register to save the results of a task in the variable services.
It has this structure:
"stdout_lines": [
"arp-ethers.service \u001b[1;31mdisabled\u001b[0m",
"auditd.service \u001b[1;32menabled \u001b[0m",
"autovt#.service \u001b[1;31mdisabled\u001b[0m",
"blk-availability.service \u001b[1;31mdisabled\u001b[0m"]
and I would like to receive this:
{
  "arp-ethers.service": "disabled",
  "auditd.service": "enabled",
  "autovt#.service": "disabled",
  "blk-availability.service": "disabled"
}
I'd like to use a subsequent set_fact task to generate a new variable with a dictionary, but I'm going round in circles with no luck so far.
- name: Collect all services for SYSTEMD
  raw: systemctl list-unit-files --type=service --no-pager -l --no-legend
  register: services
  changed_when: false

- debug:
    var: services

- debug:
    msg: "{{ item.split()[0]|to_json }} : {{ item.split()[1]|to_json }}"
  with_items:
    - "{{ services.stdout_lines }}"

- name: Populate fact list_services for SYSTEMD
  set_fact:
    cacheable: yes
    list_services: "{{ list_services|default({}) | combine( {item.split()[0]|to_json: item.split()[1]|to_json} ) }}"
  with_items: "{{ services.stdout_lines }}"
This returns:
FAILED! => {"msg": "|combine expects dictionaries, got u'arp-ethers.service \\x1b[1;31mdisabled\\x1b[0m\\r\\nauditd.service \\x1b[1;32menabled \\x1b[0m\\r\\nautovt#.service \\x1b[1;31mdisabled\\x1b[0m\\r\\nblk-availability.service \\x1b[1;31mdisabled\\x1b[0m\\r\\n'"}
What you want is to switch list-unit-files to JSON output using --output=json (yes, that's a link to the journalctl man page, because the systemctl one links there),
roughly like this, although I didn't test it:
- name: Collect all services for SYSTEMD
  raw: systemctl --output=json list-unit-files --type=service
  register: services_json
  changed_when: false

- set_fact:
    services: '{{ services_json.stdout | from_json }}'
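If you then want the exact service-name to state mapping from the question, one more step can build it from the parsed list. This is a hedged sketch: it assumes each JSON entry carries unit_file and state keys, which may differ between systemd versions, so check the actual output first:
- set_fact:
    dict_services: "{{ dict(services | map(attribute='unit_file')
                            | zip(services | map(attribute='state'))) }}"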
Use service_facts. For example
- service_facts:

- set_fact:
    dict_services: "{{ dict(ansible_facts.services|
                            dict2items|
                            json_query('[].[key, value.status]')) }}"
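With data like the question's, dict_services would then come out roughly like this (illustrative values; service_facts reports the unit's enablement status here):
dict_services:
  arp-ethers.service: disabled
  auditd.service: enabled
  autovt#.service: disabled
  blk-availability.service: disabled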
In my Ansible playbook, I read a list of directories into a list. I then want to read a "config.yml" file from each of these directories and put their contents into a dictionary, so that I can reference the config data via the directory name from that dictionary.
The first part is no problem, but I cannot get the second part to work:
Step 1, load directories:
- name: Include directories
  include_vars:
    file: /main-config.yml
    name: config
Step 2, load configs from directories:
- name: load deploymentset configurations
  include_vars:
    file: /path/{{ item }}/config.yml
    name: "allconfs.{{ item }}" ## << This is the problematic part
  with_items:
    - "{{ config.dirs }}"
I tried different things like allconfs['{{ item }}'], but none seemed to work. The playbook completed successfully, but the data was not in the dictionary.
I also tried defining the outer dictionary beforehand, but that did not work either.
The config files themselves are very simple:
/main-config.yml:
dirs:
- dir1
- dir2
- dir3
/path/dir1/config.yml:
some_var: "some_val"
another_var: "another val"
I want to be able to then access the values of the config.yml files like this:
{{ allconfs.dir1.some_var }}
UPDATE to try Konstantin's approach:
- name: load deploymentset configurations
  include_vars:
    file: /repo/deploymentsets/{{ item }}/config.yml
    name: "default_config"
  with_items:
    - "{{ config.deploymentsets }}"
  register: default_configs

- name: combine configs
  set_fact:
    default_configs: "{{ dict(default_configs.results | json_query('[].[item, ansible_facts.default_config]')) }}"
Error message:
fatal: [127.0.0.1]: FAILED! => {"failed": true, "msg": "Unexpected templating type error occurred on ({{ dict(default_configs.results | json_query('[].[item, ansible_facts.default_config]')) }}): <lambda>() takes exactly 0 arguments (1 given)"}
Here is a piece of code from one of my projects with similar functionality:
- name: Load config defaults
  include_vars:
    file: "{{ item }}.yml"
    name: "default_config"
  with_items: "{{ config_files }}"
  register: default_configs
  tags:
    - configuration

- name: Combine configs into one dict
  # Here we build a dictionary of the form:
  # default_configs:
  #   config_name_1: { default_config_object }
  #   config_name_2: { default_config_object }
  #   config_name_3: { default_config_object }
  #   ...
  set_fact:
    default_configs: "{{ dict(default_configs.results | json_query('[].[item, ansible_facts.default_config]')) }}"
  tags:
    - configuration
default_config is a dummy var used to temporarily load the var data.
The trick is to use register: default_configs with include_vars and then parse it with the following task, stripping out the unnecessary fields.
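For reference, before the set_fact collapses it, the registered variable has roughly this shape (illustrative names), which is what the json_query expression walks over:
default_configs:
  results:
    - item: config_name_1
      ansible_facts:
        default_config:
          some_var: some_val
    - item: config_name_2
      ansible_facts:
        default_config:
          some_var: another_val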
AFAIK it isn't possible to create a single dictionary that encompasses multiple include_vars. From my testing it would create separate dictionaries for each included directory. Here's what you can do instead.
Remove allconfs. from your variable name.
- name: load deploymentset configurations
  include_vars:
    file: /path/{{ item }}/config.yml
    name: "{{ item }}"
  with_items:
    - "{{ config.dirs }}"
You can then either access variables directly with
debug:
  msg: "{{ dir1.some_var }}"
with_items: "{{ config.dirs }}"
Or if you need to loop through all variables in your included directories, use this (hoisted from Ansible: how to construct a variable from another variable and then fetch its value).
debug:
  msg: "{{ hostvars[inventory_hostname][item].some_var }}"
with_items: "{{ config.dirs }}"
I have this dictionary:
MyClouds:
  Devwatt:
    ExternalNetwork: PublicRSC
    Flavors:
      - Flavor_1cpu_1gb: Devwatt_1cpu_1gb
      - Flavor_1cpu_2gb: Devwatt_1cpu_2gb
      - Flavor_1cpu_4gb: Devwatt_1cpu_4gb
  Fuga:
    ExternalNetwork: Internet
    Flavors:
      - Flavor_1cpu_1gb: Fuga_1cpu_1gb
      - Flavor_1cpu_2gb: Fuga_1cpu_2gb
      - Flavor_1cpu_4gb: Fuga_1cpu_4gb
      - Flavor_1cpu_8gb: Fuga_1cpu_8gb
I have to migrate from one OpenStack cloud to another, and one of my problems is to find correspondences between flavors.
I want to find which flavor (key) has the value "Devwatt_1cpu_2gb" in "Devwatt", and then get the value of the same key in "Fuga".
I tried a lot of solutions (with_dict, when, Jinja filters, json_query) but I can't find a way to do that.
Could you please help me?
Inspired by Eric's answer and this useful resource, I finally used this solution.
I changed my data structure a little bit and put it in a file matrice.yml:
MyClouds:
  Devwatt:
    ExternalNetwork: PublicRSC
    Flavors:
      - name: Flavor_1cpu_1gb
        FlavorName: Devwatt_1cpu_1gb
      - name: Flavor_2cpu_1gb
        FlavorName: Devwatt_2cpu_1gb
      - name: Flavor_1cpu_2gb
        FlavorName: Devwatt_1cpu_2gb
  Fuga:
    ExternalNetwork: Internet
    Flavors:
      - name: Flavor_1cpu_1gb
        FlavorName: Fuga_1cpu_1gb
      - name: Flavor_2cpu_1gb
        FlavorName: Fuga_2cpu_1gb
      - name: Flavor_1cpu_2gb
        FlavorName: Fuga_1cpu_2gb
then I used these filters in my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    SourceFlavorName: "Devwatt_2cpu_1gb"
  tasks:
    - name: get flavors matrice
      include_vars:
        file: matrice.yml

    - name: Get generic name from flavor name of source cloud
      debug:
        msg: "{{ MyClouds.Devwatt.Flavors | selectattr('FlavorName','search','^'+ SourceFlavorName +'$') | map(attribute='name') | list }}"
      register: result

    - name: Get flavor name for target cloud from generic name
      debug:
        msg: "{{ MyClouds.Fuga.Flavors | selectattr('name','search','^'+ result.msg[0] +'$') | map(attribute='FlavorName') | list }}"
With this solution I can have any number of clouds and easily find correspondences between flavors from a source cloud to a target cloud.
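The two lookup tasks can also be collapsed into one if you prefer; here is an untested sketch of the equivalent logic over the same matrice.yml structure, where _generic_name is just a hypothetical helper variable:
- name: Map source flavor to target flavor in one task (sketch)
  debug:
    msg: "{{ MyClouds.Fuga.Flavors
             | selectattr('name', 'equalto', _generic_name)
             | map(attribute='FlavorName') | first }}"
  vars:
    _generic_name: "{{ MyClouds.Devwatt.Flavors
                       | selectattr('FlavorName', 'equalto', SourceFlavorName)
                       | map(attribute='name') | first }}"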
Why not use a simple mapping with a dict where keys are "Devwatt" flavors and values are "Fuga" flavors, like this:
---
- hosts: localhost
  vars:
    FlavorsMapping:
      Devwatt_1cpu_1gb: Fuga_1cpu_1gb
      Devwatt_1cpu_2gb: Fuga_1cpu_2gb
      Devwatt_1cpu_4gb: Fuga_1cpu_4gb
  tasks:
    - debug:
        var: FlavorsMapping['Devwatt_1cpu_2gb']
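Combined with a variable such as SourceFlavorName from the accepted solution, the lookup then becomes simply:
- debug:
    var: FlavorsMapping[SourceFlavorName]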