Reactor - vRA - SaltStack Config integration

Assumptions:
vRA to SaltStack Config integration is working fine.
SaltStack Config is accepting the keys from the minions.
I am triggering an event from vRA when I create a new VM. I would like to know how the user can tell whether the states triggered by the event have completed.
For instance:
reactor:
  - 'my/custom/event':
    - salt://reactor/custom.sls
/srv/salt/reactor/custom.sls
test_df:
  local.cmd.run:
    - tgt: "role:MyServer"
    - tgt_type: grain
    - arg:
      - df -h > /tmp/test_df.txt
Cloud-init runs the following:
salt-call event.send 'my/custom/event'
====================================================================
How will the user find out whether the states triggered by the event completed successfully or with errors?
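The thread records no answer; one way to check, assuming the setup above: commands fired by the reactor are ordinary Salt jobs, so their returns land in the master's job cache and on the event bus, and the standard runners below can inspect them (the JID shown is hypothetical).

# Watch the event bus: each minion's return arrives as a
# salt/job/<jid>/ret/<minion_id> event carrying a "success" field.
salt-run state.event pretty=True

# Or list recent jobs and look up the one the reactor kicked off
# (the cmd.run targeted at role:MyServer above).
salt-run jobs.list_jobs
salt-run jobs.lookup_jid 20170524133601757005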

Related

SaltStack - Unable to update mine_function

I'm unable to change the mine_function on the minion hosts. How can I make changes to the function and push them to all minions?
cat /etc/salt/cloud:
minion:
  mine_functions:
    internal_ip:
      - mine_function: grains.get
      - ip_interfaces:eth0:0
    external_ip:
      - mine_function: grains.get
      - ip_interfaces:eth1:0
I want to change the external_ip function as below, but I'm not sure how to push these changes to all minions. mine_interval is set to 1 minute, but the changes aren't picked up by the minions.
external_ip:
  - mine_function: network.ip_addrs
  - cidr: 172.0.0.0/8
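No answer is recorded for this question, but the Mine answer further down this page suggests one route: the minion: block in /etc/salt/cloud is only written into a minion's config at provisioning time, so existing minions never see edits to it. A minimal sketch, moving the mine_functions into pillar (the pillar file name is hypothetical):

# /srv/pillar/my_mines.sls -- hypothetical pillar file delivering the mine config
mine_functions:
  external_ip:
    - mine_function: network.ip_addrs
    - cidr: 172.0.0.0/8

After targeting it in the pillar top file, run salt '*' saltutil.refresh_pillar followed by salt '*' mine.update, and the new function takes effect without reprovisioning.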

How to trigger an action upon change of state?

I am testing Salt as a management system, having used Ansible so far.
How can I trigger an action (specifically, a service reload) when a state has changed?
In Ansible this is done via notify, but browsing the Salt documentation I cannot find anything similar.
I found watch, which works the other way round: "check something, and if it changed, do this and that".
There is also listen, which seems closer to my needs (the documentation mentions a service reload), but I cannot put the pieces together.
As an example, how would the following scenario work in Salt: check a git repo (i.e. create it if it does not exist, or pull from it otherwise) and, if it has changed, reload a service? The Ansible equivalent is:
- name: clone my service
  git:
    clone: yes
    dest: /opt/myservice
    repo: http://git.example.com/myservice.git
    version: master
    force: yes
  notify:
    - restart my service if needed

- name: restart my service if needed
  systemd:
    name: myservice
    state: restarted
    enabled: True
    daemon_reload: yes
Your example:
ensure my service:
  git.latest:
    - name: http://git.example.com/myservice.git
    - target: /opt/myservice
  service.running:
    - name: myservice
    - watch:
      - git: http://git.example.com/myservice.git
When there is a change in the repo (the initial clone, an update, etc.), the git.latest state is marked as "having changes", so the dependent states - service.running in this case - react to those changes; for a service that means a restart.
What you are asking is covered in the Salt quickstart.
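One note beyond the original answer: the question asked for a reload rather than a restart. service.running accepts a reload argument that changes how it reacts to a triggered watch; a sketch, assuming the systemd unit is named myservice:

ensure my service:
  git.latest:
    - name: http://git.example.com/myservice.git
    - target: /opt/myservice
  service.running:
    - name: myservice
    - reload: True    # a watch trigger now issues a reload instead of a full restart
    - enable: True
    - watch:
      - git: http://git.example.com/myservice.git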

Accessing Mine data immediately after install

I am deploying a cluster via SaltStack (on Azure). I've installed the client, which fires a reactor that runs an orchestration to push a Mine config, run a mine update, and restart salt-minion (I upgraded that to restarting the box).
After all of that, I can't access the mine data until I restart the minion.
/srv/reactor/startup_orchestration.sls
startup_orchestrate:
  runner.state.orchestrate:
    - mods: orchestration.startup
orchestration.startup
orchestration.mine:
  salt.state:
    - tgt: '*'
    - sls:
      - orchestration.mine

saltutil.sync_all:
  salt.function:
    - tgt: '*'
    - reload_modules: True

mine.update:
  salt.function:
    - tgt: '*'

highstate_run:
  salt.state:
    - tgt: '*'
    - highstate: True
orchestration.mine
{% if salt['grains.get']('MineDeploy') != 'complete' %}
/etc/salt/minion.d/globalmine.conf:
  file.managed:
    - source: salt://orchestration/files/globalmine.conf

MineDeploy:
  grains.present:
    - value: complete
    - require:
      - service: rabbit_running

sleep 5 && /sbin/reboot:
  cmd.run
{%- endif %}
How can I push a mine update via a reactor and then get the data shortly afterwards?
I deploy my mine_functions from pillar so that I can update the functions on the fly.
Then you just have to run salt <target> saltutil.refresh_pillar and salt <target> mine.update to get your mine info on a new host.
Example:
/srv/pillar/my_mines.sls
mine_functions:
  aws_cidr:
    mine_function: grains.get
    delimiter: '|'
    key: ec2|network|interfaces|macs|{{ mac_addr }}|subnet_ipv4_cidr_block
  zk_pub_ips:
    - mine_function: grains.get
    - ec2:public_ip
You would then make sure your pillar's top.sls targets the appropriate minions, then run saltutil.refresh_pillar and mine.update to get your mine functions updated and the mines supplied with data. After taking in the above pillar, I now have mine functions called aws_cidr and zk_pub_ips that I can pull data from.
One caveat to this method is that mine_interval has to be defined in the minion config, so that parameter wouldn't be doable via pillar. Though if you're ok with the default 60-minute interval, this is a non-issue.
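A sketch of the targeting step the answer describes, with a hypothetical pillar top file:

# /srv/pillar/top.sls
base:
  '*':           # hypothetical target; narrow this to the minions that need mines
    - my_mines

After that, salt '*' saltutil.refresh_pillar and salt '*' mine.update populate the mines, and the data can be read back with, for example, salt '*' mine.get '*' zk_pub_ips.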

No matching sls found for 'swapfile' in env 'base'

I have successfully been using SaltStack to manage virtual and bare-metal Ubuntu 14.04 servers for about a year.
On master, I have the following /srv/salt/top.sls:
base:
  '*':
    - common
    - users
    - openvpn          # openvpn-formula
    - openvpn.config   # openvpn-formula
    - fail2ban         # fail2ban-formula
    - fail2ban.config  # fail2ban-formula
    - swapfile         # swapfile-formula
    - ntp              # ntp-formula to set up and configure the ntp client or server
In /etc/salt/master I have included the following:
gitfs_remotes:
  - https://github.com/srbolle/openvpn-formula.git
  - https://github.com/srbolle/postgres-formula.git
  - https://github.com/srbolle/fail2ban-formula.git
  - https://github.com/srbolle/ntp-formula.git
  - https://github.com/srbolle/swapfile-formula.git
I have had no problems with saltstack-formulas until now, but since I recently included the swapfile-formula I get the following when running salt '*' state.highstate:
servername:
    Data failed to compile:
----------
    No matching sls found for 'swapfile' in env 'base'
Also, I get the same error message when running:
salt 'servername' state.show_sls swapfile
servername:
    - No matching sls found for 'swapfile' in env 'base'
When I run:
salt servername state.show_top
'swapfile' is listed. I have tried clearing the cache, restarting servers, recreating servers, and using 'kwalify -m top.sls' to validate my top.sls file. I have spent days on this error and don't know how to debug further (the logs don't show anything suspicious).
Thankful for any clues on how to proceed.
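No answer is recorded here; a hedged debugging sketch using standard runners: gitfs remotes are cached on the master, so the first things to check are that the master has actually fetched the repo and that it serves a swapfile state in env 'base'.

# Force the master to re-fetch all gitfs remotes
salt-run fileserver.update

# List what the fileserver actually serves in env 'base';
# swapfile/init.sls (or swapfile.sls) must appear for the top.sls entry to match
salt-run fileserver.file_list saltenv=base

If it is missing, common causes are the formula's states not sitting at the repo root, or the master lacking a working gitfs provider (GitPython or pygit2); the master log at debug level shows the gitfs fetch errors.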

Saltstack apply state on minions through salt reactors and runners

I have multiple Salt deployment environments.
I have a requirement in which I raise an event from the minions; the master, upon receiving the event, generates a few files which I then want to copy to the minions.
How do I do this?
I was trying to get it to work using orchestrate. This is what I have right now:
reactor sls->
copy_cert:
  runner.state.orchestrate:
    - mods: _orch.copy_certs
    - saltenv: 'central'
copy_certs sls->
copy_kube_certs:
  salt.state:
    - tgt: 'kubeminion'
    - tgt_type: nodegroup
    - sls:
      - kubemaster.copy_certs
The problem is that I want this to happen for all the environments, not just one. How do I do that?
Or is there a way to loop over the environments using Jinja in some way?
Also, is it possible using anything other than orchestrate?
You don't need to use orchestrate for this; all you need is the Salt reactor.
Let's say you fire an event from the minion with salt-call event.send tag='event/test' (you can watch the Salt event bus using salt-run state.event pretty=True):
event/test {
    "_stamp": "2017-05-24T10:36:05.907438",
    "cmd": "_minion_event",
    "data": {
        "__pub_fun": "event.send",
        "__pub_jid": "20170524133601757005",
        "__pub_pid": 4590,
        "__pub_tgt": "salt-call"
    },
    "id": "minion_A",
    "tag": "event/test"
}
Now you need to decide what happens when salt receives the event, edit/create /etc/salt/master.d/reactor.conf (remember to restart the salt-master after editing this file):
reactor:
  - event/test:                    # event tag to match
    - /srv/reactor/some_state.sls  # sls file to run
some_state.sls:
some_state:
  local.state.apply:
    - tgt: kubeminion
    - tgt_type: nodegroup
    - arg:
      - kubemaster.copy_certs
    - kwarg:
        saltenv: central
This will in turn apply the state kubemaster.copy_certs to all minions in the "kubeminion" nodegroup.
Hope this helps.
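To touch the unanswered part about looping: reactor SLS files are rendered through Jinja like any other SLS, so one sketch is to emit one state.apply per environment (the saltenv names below are hypothetical):

{# hypothetical list of saltenvs; adjust to your deployment #}
{% for env in ['central', 'east', 'west'] %}
copy_certs_{{ env }}:
  local.state.apply:
    - tgt: kubeminion
    - tgt_type: nodegroup
    - arg:
      - kubemaster.copy_certs
    - kwarg:
        saltenv: {{ env }}
{% endfor %}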
