SaltStack NO onchanges functionality

I'm trying to find a way to execute a specific state only if the previous one completed successfully but ONLY when it made no changes; basically, I need something like a "no onchanges" requisite.
start-event-{{ minion }}:
  salt.function:
    - name: event.send
    - tgt: {{ minion }}
    - arg:
      - 'PATCHING-STARTED'

start-patching-{{ minion }}:
  salt.state:
    - tgt: {{ minion }}
    - require:
      - bits-{{ minion }}
    - sls:
      - patching.uptodate

finish-event-{{ minion }}:
  salt.function:
    - name: event.send
    - tgt: {{ minion }}
    - arg:
      - 'PATCHING-FINISHED'
Or, in other words, I want to send the "finish-event-{{ minion }}" event only when "start-patching-{{ minion }}" returns something like:
----------
          ID: start-patching-LKA3
    Function: salt.state
      Result: True
     Comment: States ran successfully. No changes made to LKA3.
     Started: 11:29:15.906124
    Duration: 20879.248 ms
     Changes:
----------
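Salt has no built-in inverse of the onchanges requisite, so one workaround sketch (my assumption, not a confirmed Salt feature) is to emit a separate, onchanges-gated event whenever patching did change something, and let the consumer treat a finish event that arrives without a preceding changes event as the no-changes case. Both requisite names below (require, onchanges) are standard Salt; whether onchanges fires on the aggregated changes of a salt.state orchestration step should be verified on your Salt version:

changes-event-{{ minion }}:
  salt.function:
    - name: event.send
    - tgt: {{ minion }}
    - arg:
      - 'PATCHING-CHANGED'
    - onchanges:
      - salt: start-patching-{{ minion }}

finish-event-{{ minion }}:
  salt.function:
    - name: event.send
    - tgt: {{ minion }}
    - arg:
      - 'PATCHING-FINISHED'
    - require:
      - salt: start-patching-{{ minion }}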

Related

Salt stack master reactors no action

I want to have a Linux patching schedule and restart the minions if the updates succeed. I have created a state for the OS update process that sends an event to the event bus, and then reactors that listen for the event tag and reboot the server on the success tag, but the reactor does not react to anything and takes no action.
# /srv/salt/os/updates/linuxupdate.sls
uptodate:
  pkg.uptodate:
    - refresh: True

event-completed:
  event.send:
    - name: 'salt/minion-update/success'
    - require:
      - pkg: uptodate

event-failed:
  event.send:
    - name: 'salt/minion-update/failed'
    - onfail:
      - pkg: uptodate
# /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/minion-update/success':
    - /srv/reactor/reboot.sls

# /srv/reactor/reboot.sls
reboot_server:
  local.state.sls:
    - tgt: {{ data['id'] }}
    - arg:
      - os.updates.reboot-server
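Two things worth checking here (assumptions about this setup, not a confirmed diagnosis): the master only reads /etc/salt/master.d/reactor.conf at startup, so it must be restarted after the reactor mapping is added, and you can watch the master event bus with salt-run state.event pretty=True to confirm that the salt/minion-update/success tag actually arrives. Also, since Salt 2017.7.0 the reactor expects keyword-style arguments rather than a positional arg list, so the reactor SLS would be written as:

# /srv/reactor/reboot.sls -- keyword-args form for Salt 2017.7.0+
reboot_server:
  local.state.sls:
    - tgt: {{ data['id'] }}
    - args:
      - mods: os.updates.reboot-server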

Ensure a certain amount of time has elapsed between two tasks in ansible playbook, in real time

I will be notifying users that an event will happen in 15 minutes; I then perform tasks that take a variable amount of time which is less than 15 minutes, and I then need to wait the rest of the time and perform the said event at exactly 15 minutes from when I notified the users.
Can someone propose such a real-time timer in ansible? pause won't work, because it's static. Also, async doesn't work on a pause task, so we can't start a pause asynchronously with poll: 0, move on to other tasks, and then come back and ensure it has succeeded with async_status right before our waited-for task.
This is my best attempt, but the until conditional doesn't seem to be getting updated with the actual current time, because it never terminates:
- name: Ensure a certain amount of time has elapsed between two tasks
  hosts: localhost
  gather_facts: no
  vars:
    wait_time: 10
    timer_delay_interval: 1
  tasks:
    - name: Debug start time
      debug:
        var: ansible_date_time
    - name: Set current time
      set_fact:
        start_time: "{{ ansible_date_time.date }} {{ ansible_date_time.time }}"
    - name: Other task
      pause:
        seconds: 2
    - name: Timer
      set_fact:
        current_time: "{{ ansible_date_time.date }} {{ ansible_date_time.time }}"
      until: ((current_time | to_datetime) - (start_time | to_datetime)).total_seconds() >= wait_time
      retries: 1000000
      delay: "{{ timer_delay_interval }}"
      register: timer_task
    - name: Waited for task
      debug:
        msg: |
          The timer has completed with {{ timer_task.attempts }} attempts,
          for a total of {{ timer_task.attempts*timer_delay_interval | int }} seconds.
          The original wait time was {{ wait_time }}, which means that intervening
          tasks took {{ wait_time - timer_task.attempts*timer_delay_interval | int }} seconds.
NOTE: The to_datetime filter requires datetimes to be formatted like %Y-%m-%d %H:%M:%S, which is why I'm formatting them that way.
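As a quick illustration (with assumed literal timestamps) of the to_datetime arithmetic the until condition relies on:

- debug:
    msg: "{{ (('2022-05-12 09:43:21' | to_datetime) - ('2022-05-12 09:43:11' | to_datetime)).total_seconds() }}"  # prints 10.0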
The likely reason your timer never terminates is that ansible_date_time is only set when facts are gathered and is not refreshed on each retry, so current_time never advances (and with gather_facts: no it is not defined at all).
There are several options.
Run the tasks concurrently: run the module wait_for asynchronously, then use async_status to wait for the remaining wait_time to elapse. With a 1-second delay, the number of retries is the difference between wait_time and pause, which ensures the loop covers the remaining time; in practice, the number of attempts will be smaller, of course. See the note about offset at the end.
- name: Ensure a certain amount of time has elapsed between two tasks
  hosts: localhost
  gather_facts: false
  vars:
    wait_time: 10
    pause: 5
    offset: 2
  tasks:
    - debug:
        msg: "start_time: {{ '%Y-%m-%d %H:%M:%S'|strftime }}"
    - wait_for:
        timeout: "{{ wait_time|int - offset|int }}"
      async: 20
      poll: 0
      register: async_result
    - pause:
        seconds: "{{ pause }}"
    - async_status:
        jid: "{{ async_result.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: "{{ wait_time|int - pause|int }}"
      delay: 1
    - debug:
        msg: "Something happened at {{ '%Y-%m-%d %H:%M:%S'|strftime }}"
gives (abridged)
TASK [debug] *********************************************************************************
ok: [localhost] =>
  msg: 'start_time: 2022-05-12 09:43:11'
TASK [wait_for] ******************************************************************************
changed: [localhost]
TASK [pause] *********************************************************************************
Pausing for 5 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [localhost]
TASK [async_status] **************************************************************************
FAILED - RETRYING: [localhost]: async_status (5 retries left).
FAILED - RETRYING: [localhost]: async_status (4 retries left).
ok: [localhost]
TASK [debug] *********************************************************************************
ok: [localhost] =>
  msg: Something happened at 2022-05-12 09:43:21
The next option is to calculate the remaining time:
- name: Ensure a certain amount of time has elapsed between two tasks
  hosts: localhost
  gather_facts: false
  vars:
    wait_time: 10
    pause: 5
    offset: 2
  tasks:
    - set_fact:
        start_time: "{{ '%Y-%m-%d %H:%M:%S'|strftime }}"
        start_time_sec: "{{ '%s'|strftime }}"
    - set_fact:
        stop_time: "{{ '%Y-%m-%d %H:%M:%S'|strftime(start_time_sec|int + wait_time|int) }}"
        stop_time_sec: "{{ start_time_sec|int + wait_time|int }}"
    - debug:
        msg: "start_time: {{ start_time }}"
    - pause:
        seconds: "{{ pause }}"
    - set_fact:
        wait_time: "{{ stop_time_sec|int - '%s'|strftime|int - offset|int }}"
    - debug:
        msg: |-
          wait_time: {{ wait_time }}
      when: debug|d(false)|bool
    - wait_for:
        timeout: "{{ wait_time|int }}"
    - debug:
        msg: "Something happened at {{ '%Y-%m-%d %H:%M:%S'|strftime }}"
gives (abridged)
TASK [debug] *********************************************************************************
ok: [localhost] =>
  msg: 'start_time: 2022-05-12 09:55:08'
TASK [pause] *********************************************************************************
Pausing for 5 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [localhost]
TASK [set_fact] ******************************************************************************
ok: [localhost]
TASK [debug] *********************************************************************************
skipping: [localhost]
TASK [wait_for] ******************************************************************************
ok: [localhost]
TASK [debug] *********************************************************************************
ok: [localhost] =>
  msg: Something happened at 2022-05-12 09:55:18
Tune the offset to fit your system.
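A rough sketch (my assumption about how to estimate it, with whole-second resolution only) for measuring the offset on a given system: time an effectively empty wait_for and use the measured overhead as the offset.

- set_fact:
    t0: "{{ '%s'|strftime }}"
- wait_for:
    timeout: 0
- debug:
    msg: "offset is roughly {{ '%s'|strftime|int - t0|int }} seconds"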

Network Interface not enabling when provisioning VM from Ansible

I'm provisioning a VM from an ansible-playbook using a VMware template. The VM is created successfully, but the network interface is not enabled automatically; I have to go to the VMware console and edit the VM's settings to enable it. Kindly check the playbook tasks below and suggest what I need to correct so that the network interface is enabled when the playbook runs:
tasks:
  - name: Create VM from template
    vmware_guest:
      validate_certs: False
      hostname: "{{ vcenter_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      esxi_hostname: "{{ esxhost }}"
      datacenter: "{{ datacenter_name }}"
      name: "{{ name }}"
      folder: TEST
      template: "{{ vmtemplate }}"
      disk:
        - size_gb: "{{ disk_size | default(32) }}"
          type: thin
          datastore: "{{ datastore }}"
      networks:
        - name: VM Network
          ip: 172.17.254.223
          netmask: 255.255.255.0
          gateway: 172.17.254.1
          device_type: vmxnet3
          state: present
      wait_for_ip_address: True
      hardware:
        memory_mb: "{{ vm_memory | default(2000) }}"
      state: present
    register: newvm
  - name: Changing network adapter
    vmware_guest_network:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      datacenter: "{{ datacenter_name }}"
      name: "{{ name }}"
      validate_certs: no
      networks:
        - name: "VM Network"
          device_type: vmxnet3
          state: present
According to the documentation you can "connect" the network interface via connected: true. There is also a parameter start_connected. So add both to your networks dictionary:
networks:
  - name: VM Network
    ip: 172.17.254.223
    netmask: 255.255.255.0
    gateway: 172.17.254.1
    device_type: vmxnet3
    connected: true
    start_connected: true
I can't see a default value in the documentation, but I assume they default to false.
Also, there is no state parameter in the networks dict list.
I have had this issue before. It happens when you're using a VDS and the VM template has open-vm-tools installed instead of vmware-tools.
I was able to fix it by applying this workaround:
First, install vmware-tools in the template instead of open-vm-tools.
Then make sure the VM in the playbook has both parameters:
connected: true
start_connected: true
In case there is a need for open-vm-tools afterwards, you can simply run a small playbook which uninstalls vmware-tools and reinstalls open-vm-tools instead.
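A minimal sketch of that tools swap (the host group and the package names vmware-tools / open-vm-tools are assumptions; adjust for your distribution):

- hosts: new_vms
  become: true
  tasks:
    - name: Remove VMware Tools
      package:
        name: vmware-tools
        state: absent
    - name: Install open-vm-tools
      package:
        name: open-vm-tools
        state: present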

ansible with_dict fails when provided with set_fact variable

I am trying to dynamically provide the dictionary name for interface variables.
My ansible tasks look like this:
- name: Setting interface list
  set_fact:
    one_fact: "{{ host_name }}_interfaces"
- name: deb
  debug: var={{ one_fact }}
- name: Managing Interfaces
  ios_interface:
    enabled: "{{ item['value']['enabled'] }}"
    name: "{{ item['key'] }}"
    state: "{{ item['value']['state'] }}"
  with_dict: "{{ one_fact }}"
The dictionary looks something like this:
---
h1_interfaces:
  Ethernet1/1:
    description: Firewall
    enabled: true
    speed: auto
    state: present
  Ethernet1/2:
    description: asd
    enabled: true
    speed: auto
    state: present
h2_interfaces:
  Ethernet1/1:
    description: Firewall
    enabled: true
    speed: auto
    state: present
  Ethernet1/2:
    description: asd
    enabled: true
    speed: auto
    state: present
When I set with_dict: "{{ one_fact }}" I get the error FAILED! => {"msg": "with_dict expects a dict"}, but when I provide with_dict: "{{ h1_interfaces }}" it works like a charm. What am I doing wrong?
Apparently you have a variable host_name too, which is set to h1 or h2, and you want to access the dictionaries h1_interfaces/h2_interfaces. The underlying problem is that your set_fact stores the literal string "h1_interfaces" in one_fact, not the dictionary that string names, which is why with_dict complains that it expects a dict.
To construct the variable name dynamically and access its value, use the vars lookup plugin; please see the task below:
- name: Setting interface list
  set_fact:
    one_fact: "{{ lookup('vars', myvar + '_interfaces') }}"
  vars:
    myvar: "{{ host_name }}"
and a slightly altered playbook to demonstrate the result:
playbook:
---
- hosts: localhost
  gather_facts: false
  vars:
    host_name: h1
    h1_interfaces:
      Ethernet1/1:
        description: Firewall
        enabled: true
        speed: auto
        state: present
      Ethernet1/2:
        description: asd
        enabled: true
        speed: auto
        state: present
    h2_interfaces:
      Ethernet1/1:
        description: Firewall
        enabled: true
        speed: auto
        state: present
      Ethernet1/2:
        description: asd
        enabled: true
        speed: auto
        state: present
  tasks:
    - name: Setting interface list
      set_fact:
        one_fact: "{{ lookup('vars', myvar + '_interfaces') }}"
      vars:
        myvar: "{{ host_name }}"
    - name: deb
      debug: var=one_fact
    - name: Managing Interfaces
      debug:
        msg: "enabled: {{ item['value']['enabled'] }}, name: {{ item['key'] }}, state: {{ item['value']['state'] }}"
      with_dict: "{{ one_fact }}"
result:
TASK [Managing Interfaces] *********************************************************************************************************************************************************************************************
ok: [localhost] => (item={'key': 'Ethernet1/1', 'value': {'description': 'Firewall', 'enabled': True, 'speed': 'auto', 'state': 'present'}}) => {
    "msg": "enabled: True, name: Ethernet1/1, state: present"
}
ok: [localhost] => (item={'key': 'Ethernet1/2', 'value': {'description': 'asd', 'enabled': True, 'speed': 'auto', 'state': 'present'}}) => {
    "msg": "enabled: True, name: Ethernet1/2, state: present"
}
cheers
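As an aside (an equivalent idiom, not part of the answer above), the same dynamic access can be written by indexing the vars special variable directly:

- name: Setting interface list (alternative)
  set_fact:
    one_fact: "{{ vars[host_name + '_interfaces'] }}"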

Use a Ansible Array variable with loop in Jinja2 template [duplicate]

This question already has answers here:
How to use Ansible's with_item with a variable?
This is my configuration array.
tomcatsconfs:
  - {instance: tc-p1_i01_3001, port: 30011, connector: ajp-nio-, connector_port: 30012}
  - {instance: tc-p1_i02_3002, port: 30021, connector: ajp-nio-, connector_port: 30022}
  - {instance: tc-p1_i03_3003, port: 30031, connector: ajp-nio-, connector_port: 30032}
Now I would like to create an nrpe.cfg from a Jinja2 template with this task:
- name: copy nrpe.conf from template
  template: src=nrpe.cfg.j2 dest=/etc/nagios/nrpe.cfg mode=0644 owner=root group=root
  with_items:
    - tomcatsconfs
Ansible passes this array as a list of dictionaries:
[{u'connector': u'ajp-nio-', u'instance': u'tc-p1_i01_3001', u'connector_port': 30012, u'port': 30011}, {u'connector': u'ajp-nio-', u'instance': u'tc-p1_i02_3002', u'connector_port': 30022, u'port': 30021}, {u'connector': u'ajp-nio-', u'instance': u'tc-p1_i03_3003', u'connector_port': 30032, u'port': 30031}]
And I try to iterate over this dictionary with this loop:
{% for key value in tomcatconfs.iteritems() %}
key value
{% endfor %}
But I get the error message:
failed: [host] (item=tomcatconfs) => {"failed": true, "item": "tomcatconfs", "msg": "AnsibleUndefinedVariable: 'list object' has no attribute 'iteritems'"}
How can I iterate over this dictionary in the template?
Greetings, niesel
I used this.
---
- name: Run Ansible
  hosts: 127.0.0.1
  connection: local
  gather_facts: true
  vars:
    tomcatsconfs:
      - {instance: tc-p1_i01_3001, port: 30011, connector: ajp-nio-, connector_port: 30012}
      - {instance: tc-p1_i02_3002, port: 30021, connector: ajp-nio-, connector_port: 30022}
      - {instance: tc-p1_i03_3003, port: 30031, connector: ajp-nio-, connector_port: 30032}
  tasks:
    - name: Testing Iteration
      copy:
        dest: /tmp/testtemp
        content: |
          {% for var in tomcatsconfs %}
          instance: {{ var.instance }}
          port: {{ var.port }}
          connector: {{ var.connector }}
          connector_port: {{ var.connector_port }}
          {% endfor %}
OUTPUT:
instance: tc-p1_i01_3001
port: 30011
connector: ajp-nio-
connector_port: 30012
instance: tc-p1_i02_3002
port: 30021
connector: ajp-nio-
connector_port: 30022
instance: tc-p1_i03_3003
port: 30031
connector: ajp-nio-
connector_port: 30032
I think all you need to change is how you are passing the list to with_items. Try changing
- name: copy nrpe.conf from template
  template: src=nrpe.cfg.j2 dest=/etc/nagios/nrpe.cfg mode=0644 owner=root group=root
  with_items:
    - tomcatsconfs
to
- name: copy nrpe.conf from template
  template: src=nrpe.cfg.j2 dest=/etc/nagios/nrpe.cfg mode=0644 owner=root group=root
  with_items: "{{ tomcatsconfs }}"
I think what is going on is that you are giving with_items a one-element list containing the name tomcatsconfs rather than the list itself. If you change it to what I have in my example, you are just giving it the list.
This fixed it with my simplified sample playbook:
---
- hosts: localhost
  connection: local
  vars:
    tomcatsconfs:
      - {instance: tc-p1_i01_3001, port: 30011, connector: ajp-nio-, connector_port: 30012}
      - {instance: tc-p1_i02_3002, port: 30021, connector: ajp-nio-, connector_port: 30022}
      - {instance: tc-p1_i03_3003, port: 30031, connector: ajp-nio-, connector_port: 30032}
  tasks:
    - debug: var="{{item}}"
      with_items:
        - tomcatsconfs
    - debug: var="{{item['port']}}"
      with_items: "{{ tomcatsconfs }}"
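For the original nrpe.cfg goal, one more point worth noting (an observation beyond the answers above): the loop belongs inside the template rather than on the task, because with_items on a template task re-renders, and overwrites, the whole file once per item. A hypothetical nrpe.cfg.j2 sketch (the check command and plugin path are illustrative assumptions, not from the question):

{% for tc in tomcatsconfs %}
command[check_{{ tc.instance }}]=/usr/lib64/nagios/plugins/check_tcp -H localhost -p {{ tc.port }}
{% endfor %}

with the task reduced to a single render, no loop needed:

- name: copy nrpe.conf from template
  template: src=nrpe.cfg.j2 dest=/etc/nagios/nrpe.cfg mode=0644 owner=root group=root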
