SaltStack master reactor takes no action

I want to schedule Linux patching and restart the minions if the updates succeed. I have created a state for the OS update process that sends an event to the event bus, and a reactor that listens for the event tag and reboots the server on success, but the reactor does not react to anything and no action is taken.
# /srv/salt/os/updates/linuxupdate.sls
uptodate:
  pkg.uptodate:
    - refresh: True
event-completed:
  event.send:
    - name: 'salt/minion-update/success'
    - require:
      - pkg: uptodate
event-failed:
  event.send:
    - name: 'salt/minion-update/failed'
    - onfail:
      - pkg: uptodate
# /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/minion-update/success':
    - /srv/reactor/reboot.sls
# /srv/reactor/reboot.sls
reboot_server:
  local.state.sls:
    - tgt: {{ data['id'] }}
    - arg:
      - os.updates.reboot-server
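A few hedged checks, not part of the original post, that are worth making with a setup like this: the salt-master reads /etc/salt/master.d/reactor.conf only at startup, so it must be restarted after the reactor mapping is added; salt-run reactor.list then shows whether the mapping is registered, and salt-run state.event pretty=True shows whether the salt/minion-update/success tag actually reaches the master's event bus. If the tag arrives but nothing happens, a stripped-down reactor SLS helps isolate the problem, for example:
# /srv/reactor/reboot.sls -- minimal debugging sketch: call the system.reboot
# execution module directly instead of running an orchestrated SLS
reboot_server:
  local.system.reboot:
    - tgt: {{ data['id'] }}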

Get a YAML file with HTTP and use it as a variable in an Ansible playbook

Background
I have a YAML file like this on a web server, and I am trying to read it and create the user accounts it lists with an Ansible playbook.
users:
  - number: 20210001
    name: Aoki Alice
    id: alice
  - number: 20210002
    name: Bob Bryant
    id: bob
  - number: 20210003
    name: Charlie Cox
    id: charlie
What I tried
To confirm how to read a downloaded YAML file dynamically with include_vars, I wrote a playbook like this:
- name: Add users from list
  hosts: workstation
  tasks:
    - name: Download yaml
      get_url:
        url: http://fqdn.of.webserver/path/to/yaml.yml
        dest: "/tmp/tmp.yml"
      notify:
        - Read yaml
        - List usernames
  handlers:
    - name: Read yaml
      include_vars:
        file: /tmp/tmp.yml
        name: userlist
    - name: List usernames
      debug:
        var: "{{ item }}"
      loop: "{{ userlist.users }}"
Problem
In the Read yaml handler, I got the following error message. On the target machine (workstation.example.com), /tmp/tmp.yml is downloaded correctly.
RUNNING HANDLER [Read yaml] *****
fatal: [workstation.example.com]: FAILED! => {"ansible_facts": {"userlist": []},
"ansible_included_var_files": [], "changed": false, "message": "Could not find
or access '/tmp/tmp.yml' on the Ansible Controller.\nIf you are using a module
and expect the file to exist on the remote, see the remote_src option"}
Question
How can I get a YAML file with HTTP and use it as a variable with include_vars?
Another option would be to use the uri module to retrieve the value into an Ansible variable, then the from_yaml filter to parse it.
Something like:
- name: Add users from list
  hosts: workstation
  tasks:
    - name: Download YAML userlist
      uri:
        url: http://fqdn.of.webserver/path/to/yaml.yml
        return_content: yes
      register: downloaded_yaml
    - name: Decode YAML userlist
      set_fact:
        userlist: "{{ downloaded_yaml.content | from_yaml }}"
Note that where the download happens matters: in this play the uri task runs on the target host, while the delegate_to approach below runs get_url on the Ansible controller; depending on your network configuration, you may need different proxy settings or firewall rules to permit the download.
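For completeness, the parsed fact can then be consumed the same way as an include_vars result; a sketch of an extra task (an illustration, not part of the original answer) continuing the playbook above:
    - name: List usernames
      debug:
        msg: "{{ item.id }}"
      loop: "{{ userlist.users }}"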
The include_vars task looks for files on the local (control) host, but you've downloaded the file to /tmp/tmp.yml on the remote host. There are a number of ways of getting this to work.
Perhaps the easiest is just running the download task on the control machine instead (note the use of delegate_to):
tasks:
  - name: Download yaml
    delegate_to: localhost
    get_url:
      url: http://fqdn.of.webserver/path/to/yaml.yml
      dest: "/tmp/tmp.yml"
    notify:
      - Read yaml
      - List usernames
This will download the file to /tmp/tmp.yml on the local system, where it will be available to include_vars. For example, if I run this playbook (which grabs YAML content from an example gist I just created)...
- hosts: target
  gather_facts: false
  tasks:
    - name: Download yaml
      delegate_to: localhost
      get_url:
        url: https://gist.githubusercontent.com/larsks/70d8ac27399cb51fde150902482acf2e/raw/676a1d17bcfc01b1a947f7f87e807125df5910c1/example.yaml
        dest: "/tmp/tmp.yml"
      notify:
        - Read yaml
        - List usernames
  handlers:
    - name: Read yaml
      include_vars:
        file: /tmp/tmp.yml
        name: userlist
    - name: List usernames
      debug:
        var: item
      loop: "{{ userlist.users }}"
...it produces the following output:
RUNNING HANDLER [Read yaml] ******************************************************************
ok: [target]
RUNNING HANDLER [List usernames] *************************************************************
ok: [target] => (item=bob) => {
    "ansible_loop_var": "item",
    "item": "bob"
}
ok: [target] => (item=alice) => {
    "ansible_loop_var": "item",
    "item": "alice"
}
ok: [target] => (item=mallory) => {
    "ansible_loop_var": "item",
    "item": "mallory"
}
Side note: based on what I see in your playbook, I'm not sure you want to be using notify and handlers here. If you run your playbook a second time, nothing will happen, because the file /tmp/tmp.yml already exists, so the handlers won't get called.
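If the notify/handlers pattern isn't really needed here, a minimal sketch of the same flow using ordinary tasks (an illustration built from the playbook above, not part of the original answer) would be:
- hosts: target
  gather_facts: false
  tasks:
    - name: Download yaml
      delegate_to: localhost
      get_url:
        url: https://gist.githubusercontent.com/larsks/70d8ac27399cb51fde150902482acf2e/raw/676a1d17bcfc01b1a947f7f87e807125df5910c1/example.yaml
        dest: "/tmp/tmp.yml"
    - name: Read yaml
      include_vars:
        file: /tmp/tmp.yml
        name: userlist
    - name: List usernames
      debug:
        var: item
      loop: "{{ userlist.users }}"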
Based on @larsks's answer, I made this playbook, which works correctly in my environment:
- name: Download users list
  hosts: 127.0.0.1
  connection: local
  become: no
  tasks:
    - name: Download yaml
      get_url:
        url: http://fqdn.of.webserver/path/to/yaml/users.yml
        dest: ./users.yml
- name: Add users from list
  hosts: workstation
  tasks:
    - name: Read yaml
      include_vars:
        file: users.yml
    - name: List usernames
      debug:
        msg: "{{ item.id }}"
      loop: "{{ users }}"
Points
Run get_url on the control host
As @larsks said, you have to run the get_url module on the control host rather than on the target host.
Add become: no to the play run on the control host
Without become: no, you will get the following error message:
TASK [Gathering Facts] ******************************************************
fatal: [127.0.0.1]: FAILED! => {"ansible_facts": {}, "changed": false, "msg":
"The following modules failed to execute: setup\n setup: MODULE FAILURE\nSee
stdout/stderr for the exact error\n"}
Use connection: local rather than local_action
If you use local_action rather than connection: local, like this:
- name: test get_url
  hosts: workstation
  tasks:
    - name: Download yaml
      local_action:
        module: get_url
        url: http://fqdn.of.webserver/path/to/yaml/users.yml
        dest: ./users.yml
    - name: Read yaml
      include_vars:
        file: users.yml
    - name: output remote yaml
      debug:
        msg: "{{ item.id }}"
      loop: "{{ users }}"
You will get the following error message:
TASK [Download yaml] ********************************************************
fatal: [workstation.example.com]: FAILED! => {"changed": false,
"module_stderr": "sudo: a password is required\n", "module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
get_url stores a file on the control host
In this situation, the get_url module stores users.yml on the control host (in the current directory), so you have to delete users.yml if you don't want to leave it behind.
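For completeness, a hedged alternative (an addition, not from the original post): the sudo error above comes from privilege escalation being applied to the local task, so the local_action variant can also work if become is disabled for just that task, roughly like this:
    - name: Download yaml
      become: no
      local_action:
        module: get_url
        url: http://fqdn.of.webserver/path/to/yaml/users.yml
        dest: ./users.yml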

SaltStack NO onchange functionality

I am trying to find a way to execute a specific state only if the previous one completed successfully, but ONLY when it finished without changes; basically, I need something like the opposite of onchanges.
start-event-{{ minion }}:
  salt.function:
    - name: event.send
    - tgt: {{ minion }}
    - arg:
      - 'PATCHING-STARTED'
start-patching-{{ minion }}:
  salt.state:
    - tgt: {{ minion }}
    - require:
      - bits-{{ minion }}
    - sls:
      - patching.uptodate
finish-event-{{ minion }}:
  salt.function:
    - name: event.send
    - tgt: {{ minion }}
    - arg:
      - 'PATCHING-FINISHED'
In other words, I want to send the "finish-event-{{ minion }}" event only when "start-patching-{{ minion }}" reports something like this:
----------
          ID: start-patching-LKA3
    Function: salt.state
      Result: True
     Comment: States ran successfully. No changes made to LKA3.
     Started: 11:29:15.906124
    Duration: 20879.248 ms
     Changes:
----------

Argo artifacts gives error "http: server gave HTTP response to HTTPS client"

I was setting up Argo in my k8s cluster, in the argo namespace.
I also installed MinIO as an artifact repository (https://github.com/argoproj/argo-workflows/blob/master/docs/configure-artifact-repository.md).
I am configuring a workflow that tries to access that non-default artifact repository:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
  - name: artifact-example
    steps:
    - - name: generate-artifact
        template: whalesay
    - - name: consume-artifact
        template: print-message
        arguments:
          artifacts:
          # bind message to the hello-art artifact
          # generated by the generate-artifact step
          - name: message
            from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["cowsay hello world | tee /tmp/hello_world.txt"]
    outputs:
      artifacts:
      # generate hello-art artifact from /tmp/hello_world.txt
      # artifacts can be directories as well as files
      - name: hello-art
        path: /tmp/hello_world.txt
        s3:
          endpoint: argo-artifacts-minio.argo:9000
          bucket: my-bucket
          key: /my-output-artifact.tgz
          accessKeySecret:
            name: argo-artifacts-minio
            key: accesskey
          secretKeySecret:
            name: argo-artifacts-minio
            key: secretkey
  - name: print-message
    inputs:
      artifacts:
      # unpack the message input artifact
      # and put it at /tmp/message
      - name: message
        path: /tmp/message
        s3:
          endpoint: argo-artifacts-minio.argo:9000
          bucket: my-bucket
          accessKeySecret:
            name: argo-artifacts-minio
            key: accesskey
          secretKeySecret:
            name: argo-artifacts-minio
            key: secretkey
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["cat /tmp/message"]
I created the workflow in the argo namespace with:
argo submit --watch artifact-passing-nondefault-new.yaml -n argo
But the workflow fails with an error:
STEP PODNAME DURATION MESSAGE
✖ artifact-passing-z9g64 child 'artifact-passing-z9g64-150231068' failed
└---⚠ generate-artifact artifact-passing-z9g64-150231068 12s failed to save outputs: Get https://argo-artifacts-minio.argo:9000/my-bucket/?location=: http: server gave HTTP response to HTTPS client
Can someone help me to solve this error?
Since the MinIO setup runs without TLS, the workflow needs to specify that it connects to an insecure artifact repository.
Adding the field insecure: true to the s3 definition section of the workflow solves the issue.
s3:
  endpoint: argo-artifacts-minio.argo:9000
  insecure: true
  bucket: my-bucket
  key: /my-output-artifact.tgz
  accessKeySecret:
    name: argo-artifacts-minio
    key: accesskey
  secretKeySecret:
    name: argo-artifacts-minio
    key: secretkey
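The same flag presumably belongs in every s3 block that points at this MinIO service, including the input artifact of the print-message template; a sketch of that section with the added line, mirroring the question's manifest:
s3:
  endpoint: argo-artifacts-minio.argo:9000
  insecure: true
  bucket: my-bucket
  accessKeySecret:
    name: argo-artifacts-minio
    key: accesskey
  secretKeySecret:
    name: argo-artifacts-minio
    key: secretkey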

Multiple file.line in single state in Salt

I would like to have a Salt state for managing my SSH configuration file. This requires multiple file.line operations. How can I do this?
UPDATE: See bottom of the question for my current workaround
What I have is this:
Secure SSH:
  file:
    - name: /etc/ssh/sshd_config
    - line:
      - match: "^PasswordAuthentication "
      - content: "PasswordAuthentication no"
      - mode: ensure
    - line:
      - match: "^PubkeyAuthentication "
      - content: "PubkeyAuthentication yes"
      - mode: ensure
    - line:
      - match: "^Port "
      - content: "Port 8888"
      - mode: ensure
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config
but this fails with
Data failed to compile:
----------
No function declared in state 'file' in SLS u'xyz'
Actually my first attempt was this:
Secure SSH:
  file.line:
    - name: /etc/ssh/sshd_config
    - match: "^PasswordAuthentication "
    - content: "PasswordAuthentication no"
    - mode: ensure
  file.line:
    - name: /etc/ssh/sshd_config
    - match: "^PubkeyAuthentication "
    - content: "PubkeyAuthentication yes"
    - mode: ensure
  file.line:
    - name: /etc/ssh/sshd_config
    - match: "^Port "
    - content: "Port 8888"
    - mode: ensure
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config
but this fails with
Data failed to compile:
----------
Rendering SLS 'base:xyz' failed: Conflicting ID 'file.line'
I understand this error, since every state function is a dictionary key, but that syntax did look very clean.
The Salt documentation is unhelpful here: it does not say anything about what to do when you want to make multiple changes to one file, and it only gives very trivial examples.
UPDATE:
I got it to work by using a separate state for each line (I also changed file.line to file.replace, but that was another issue). I think this is rather unwieldy; plus, isn't the service reloaded after every step?
Disallow SSH password authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PasswordAuthentication .*
    - repl: PasswordAuthentication no
    - append_if_not_found: True
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config
Allow SSH public key authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PubkeyAuthentication .*
    - repl: PubkeyAuthentication yes
    - append_if_not_found: True
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config
Set SSH port:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^Port .*
    - repl: Port 8888
    - append_if_not_found: True
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config
Separating the file.replace calls into multiple states is the way to go.
To avoid redundancy you should move the service.running into its own state as well. Plus: when using watch (or watch_in) you'll need to specify the ID of the state you are watching after the file: part.
The result will look like this:
Disallow SSH password authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PasswordAuthentication .*
    - repl: PasswordAuthentication no
    - append_if_not_found: True
    - watch_in:
      - service: ssh_service
Allow SSH public key authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PubkeyAuthentication .*
    - repl: PubkeyAuthentication yes
    - append_if_not_found: True
    - watch_in:
      - service: ssh_service
Set SSH port:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^Port .*
    - repl: Port 8888
    - append_if_not_found: True
    - watch_in:
      - service: ssh_service
ssh_service:
  service.running:
    - name: sshd
I would recommend checking out listen instead of watch. watch will restart sshd three times, once for each change to the file. If you use listen, it will only restart it once at the very end, but you have to put the service.running at the very end with its own state ID and have it listen to all of the changes.
Disallow SSH password authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PasswordAuthentication .*
    - repl: PasswordAuthentication no
    - append_if_not_found: True
Allow SSH public key authentication:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^PubkeyAuthentication .*
    - repl: PubkeyAuthentication yes
    - append_if_not_found: True
Set SSH port:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^Port .*
    - repl: Port 8888
    - append_if_not_found: True
Start SSHD:
  service.running:
    - name: sshd
    - listen:
      - file: /etc/ssh/sshd_config
You might also find it worthwhile to check out the augeas state. It makes changes like this a lot easier and cleaner-looking in the state files.
sshd_config:
  augeas.change:
    - context: /files/etc/ssh/sshd_config
    - changes:
      - set Port 8888
      - set PasswordAuthentication no
      - set PubkeyAuthentication yes
  service.running:
    - name: sshd
    - listen:
      - augeas: sshd_config
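One caveat worth adding (an assumption on my part, not from the original answer): the augeas state needs the Augeas Python bindings on the minion, so a state along these lines may have to run first; the package name python-augeas is a guess and varies by distribution:
augeas-bindings:
  pkg.installed:
    - name: python-augeas  # assumed package name; adjust for your distro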

Deploy multiple files through SaltStack only if all files are valid

We manage web sites with SaltStack. These sites run on PHP-FPM, and we have several FPM pools, each configured with a dedicated file in the php-fpm.d/ directory.
Currently, we have a file.managed state with check_cmd: php-fpm -ty to check whether the configuration is valid.
fpm-conf:
  file.managed:
    - name: /etc/php-fpm.conf
    - source: salt://php/template/fpm.jinja
    - user: someuser
    - group: somegroup
    - mode: 644
    - template: jinja
    - check_cmd: /usr/sbin/php-fpm -ty
    - require:
      - pkg: php-package
fpm-pool-a:
  file.managed:
    - name: /etc/php-fpm.d/a.conf
    - source: salt://php/template/fpm-a.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
fpm-pool-b:
  file.managed:
    - name: /etc/php-fpm.d/b.conf
    - source: salt://php/template/fpm-b.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
It works fine until a mistake is made in a pool file (say, fpm-pool-a). Although the fpm-conf state then blocks the update to the main FPM config file, a.conf has already been contaminated with the erroneous configuration.
Is there a way to prevent this from happening? It seems that check_cmd can't be used in this case.
How can I guarantee that a series of files are all valid before updating any of them?
One workaround is to recover the original pool files if any mistake was made.
Here is an example; I'd suggest switching to Jinja if this state gets any larger.
fpm-conf:
  file.managed:
    - name: /etc/php-fpm.conf
    - source: salt://php/template/fpm.jinja
    - user: someuser
    - group: somegroup
    - mode: 644
    - template: jinja
    - check_cmd: /usr/sbin/php-fpm -ty
    - require:
      - pkg: php-package
fpm-pool-a:
  file.managed:
    - name: /etc/php-fpm.d/a.conf
    - source: salt://php/template/fpm-a.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
    - backup: minion
fpm-pool-b:
  file.managed:
    - name: /etc/php-fpm.d/b.conf
    - source: salt://php/template/fpm-b.jinja
    - user: someuser
    - group: somegroup
    - file_mode: 644
    - template: jinja
    - require:
      - pkg: php-package
    - require_in:
      - file: fpm-conf
    - backup: minion
fpm-pool-a-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/a.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf
fpm-pool-b-recover:
  module.run:
    - name: file.restore_backup
    - path: /etc/php-fpm.d/b.conf
    - backup_id: 0
    - onfail:
      - file: fpm-conf
Notice the - backup: minion addition; this backs the file up locally to /var/cache/salt/minion/file_backup/...
So if the main config check fails, fpm-pool-a-recover and fpm-pool-b-recover will fire and restore the most recent backups of the original files.
