Pillar data in orchestration sls files is not resolved

I'm using Salt 2018.3.2 with the following pillar data in /srv/pillar/mypillar.sls:
#!yaml
mypillar:
  some_key: some_value
and I try to use it in the following state file, /srv/salt/orch/mypillar.sls:
write-pillar-file:
  file.managed:
    - name: /tmp/mypillar.txt
    - contents_pillar: mypillar
It works fine if called as a state:
$ salt 'localhost' state.apply orch.mypillar
Yet it does not work if called via the orchestrate runner:
$ salt-run state.orchestrate orch.mypillar
[INFO ] Loading fresh modules for state activity
[INFO ] Fetching file from saltenv 'base', ** done ** 'orch/mypillar.sls'
[INFO ] Running state [/tmp/mypillar.txt] at time 18:32:03.120348
[INFO ] Executing state file.managed for [/tmp/mypillar.txt]
[ERROR ] Pillar mypillar does not exist
[INFO ] Completed state [/tmp/mypillar.txt] at time 18:32:03.122809 (duration_in_ms=2.461)
It works if I pass the pillar via the command line, but I want to access the pillar from the filesystem. Shouldn't this be possible?
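For reference, the command-line invocation that does work looks roughly like this (the inline pillar values here are only an illustration of what I pass):
$ salt-run state.orchestrate orch.mypillar pillar='{"mypillar": {"some_key": "some_value"}}'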
Any advice appreciated!

I posted the same question on GitHub and got the advice that pillar information is tied to minions, which means that salt-run has no access to pillar info as it runs without minion context.
A workaround is to explicitly query the pillar information like this (in my example I've used 'localhost' as the minion ID of my salt master):
{% set pillardata = salt.saltutil.runner('pillar.show_pillar', kwarg={'minion': 'localhost'}) %}

write-pillar-file:
  file.managed:
    - name: /tmp/mypillar.txt
    - contents:
      - {{ pillardata['mypillar']['some_key'] }}
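With the runner call in place, the same orchestration picks up the value (a quick sanity check; the output assumes the pillar shown above):
$ salt-run state.orchestrate orch.mypillar
$ cat /tmp/mypillar.txt
some_value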

Related

How to modify default options in Salt Minion config file from Master

I want to set the "grains_cache" variable to "True" from the Salt Master on all Minions. This variable is one of the default options that exist in the minion config file and cannot be overridden by pillar data. So how can I set such variables (for example "grains_cache", "grains_cache_expiration" or "log_file") from the Master?
This should be an easy one: manage the minion configuration file using the file.managed function.
A simple sls should help here:
minion_configuration:
  file.managed:
    - name: /etc/salt/minion
    - contents: |
        grains_cache: true
        backup_mode: minion

salt-minion-restart:
  cmd.wait:
    - name: salt-call --local service.restart salt-minion
    - bg: True
    - order: last
    - watch:
      - file: minion_configuration
In this example, SaltStack ensures that the minion's configuration file contains the two lines beneath - contents: |.
The second state, salt-minion-restart, restarts the salt-minion whenever the configuration file managed by the first state changes (note that the watch requisite references the ID of that first state).
In short, this adds your variables to the minion's configuration and restarts the minion afterwards.
This formula is OS-independent.
The last thing left to do is to target all of your minions with this, as shown below.
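A rough example of rolling it out to every minion (the sls path here is only an assumption):
# assuming the sls above was saved as /srv/salt/minion-config.sls
salt '*' state.apply minion-config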
If you want to know more about cmd.wait and the example shown, please refer to the Salt documentation.
I hope this helps.

Why does using a pillar value in this salt environment fail with "... has no attribute ..."?

I have a new Debian (9.3) install with new salt-master (2017.7.4) and salt-minion installed. In /etc/salt/minion.d I have a conf file containing:
master: 127.0.0.1
grains:
  roles:
    - 'introducer'
In /srv/salt/top.sls I have:
base:
  # https://docs.saltstack.com/en/latest/ref/states/top.html
  'G@roles:introducer':
    - 'introducer'
In /srv/pillar/data.sls I have:
introducer:
  location: 'tcp:x.x.x.x:y'
  port: 'tcp:y'
When I run salt '*' state.apply, I encounter this failure:
668629:
Data failed to compile:
----------
Rendering SLS 'base:introducer' failed: Jinja variable 'salt.pillar object' has no attribute 'introducer'
ERROR: Minions returned with non-zero exit code
Why isn't the pillar data available?
Pillar data requires a top definition as well. The configuration described in the question has no Pillar top.sls so no Pillar data is selected for any of the minions.
To correct this, add a top.sls to the Pillar directory which selects the desired minions and makes the data available to them. For example, this /srv/pillar/top.sls:
base:
  '*':
    - 'data'
This makes the contents of /srv/pillar/data.sls available to all minions (selected by *) in the base environment.
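After adding the Pillar top file, have the minions refresh their pillar data before re-running the state (the glob here targets all minions, matching the top file above):
salt '*' saltutil.refresh_pillar
salt '*' state.apply
The introducer.sls state can then reference the values, e.g. via pillar['introducer']['location'].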

Accessing Mine data immediately after install

I am deploying a cluster via SaltStack (on Azure). I've installed the client, which initiates a reactor that runs an orchestration to push a Mine config, do an update, and restart salt-minion. (I upgraded that to restarting the box.)
After all of that, I can't access the mine data until I restart the minion.
/srv/reactor/startup_orchestration.sls
startup_orchestrate:
  runner.state.orchestrate:
    - mods: orchestration.startup
orchestration.startup
orchestration.mine:
  salt.state:
    - tgt: '*'
    - sls:
      - orchestration.mine

saltutil.sync_all:
  salt.function:
    - tgt: '*'
    - reload_modules: True

mine.update:
  salt.function:
    - tgt: '*'

highstate_run:
  salt.state:
    - tgt: '*'
    - highstate: True
orchestration.mine
{% if salt['grains.get']('MineDeploy') != 'complete' %}

/etc/salt/minion.d/globalmine.conf:
  file.managed:
    - source: salt:///orchestration/files/globalmine.conf

MineDeploy:
  grains.present:
    - value: complete
    - require:
      - service: rabbit_running

sleep 5 && /sbin/reboot:
  cmd.run

{%- endif %}
How can I push a mine update, via a reactor and then get the data shortly afterwards?
I deploy my mine_functions from pillar so that I can update the functions on the fly.
Then you just have to do salt <target> saltutil.refresh_pillar and salt <target> mine.update to get your mine info on a new host.
Example:
/srv/pillar/my_mines.sls
mine_functions:
  aws_cidr:
    mine_function: grains.get
    delimiter: '|'
    key: ec2|network|interfaces|macs|{{ mac_addr }}|subnet_ipv4_cidr_block
  zk_pub_ips:
    - mine_function: grains.get
    - ec2:public_ip
You would then make sure your pillar's top.sls targets the appropriate minions, then do the saltutil.refresh_pillar/mine.update to get your mine functions updated and the mines supplied with data. After the minions take in the above pillar, I have mine functions called aws_cidr and zk_pub_ips that I can pull data from.
One caveat to this method is that mine_interval has to be defined in the minion config, so that parameter wouldn't be doable via pillar. Though if you're ok with the default 60-minute interval, this is a non-issue.
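Once the mine has been updated, a quick way to check the data from the command line (using the mine function names from the pillar above) would be something like:
salt '*' mine.get '*' zk_pub_ips
Inside a state or template you would reach the same data via salt['mine.get']('*', 'zk_pub_ips').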

use saltstack state.sls to install mysql but it does not return

I have been searching the net for a long time, but with no luck. Please help or give some ideas on how to achieve this.
My saltstack file code is on GitHub.
The salt code to install mysql:
[root@salt_master srv]# cat salt/base/lnmp_yum/mysql/mysql_install.sls
repo_init:
  file.managed:
    - name: /etc/yum.repos.d/mysql-{{pillar['mysql_version']}}.repo
    - source: salt://lnmp_yum/mysql/files/mysql-{{pillar['mysql_version']}}.repo
    - user: root
    - group: root
    - mode: 644

mysql_install:
  pkg.installed:
    - names:
      - mysql
      - mysql-server
      - mysql-devel
    - require:
      - file: repo_init
  service.running:
    - name: mysqld
    - enable: True
After running the command:
salt 'lnmp_base' state.sls lnmp_yum.mysql.mysql_install -l debug
it always prints this log:
[DEBUG ] Checking whether jid 20170526144936867490 is still running
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/master', 'salt_master_master', 'tcp://127.0.0.1:4506', 'clear')
[DEBUG ] Passing on saltutil error. This may be an error in saltclient. 'retcode'
[DEBUG ] Checking whether jid 20170526144936867490 is still running
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/master', 'salt_master_master', 'tcp://127.0.0.1:4506', 'clear')
[DEBUG ] Passing on saltutil error. This may be an error in saltclient. 'retcode'
[DEBUG ] Checking whether jid 20170526144936867490 is still running
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/master', 'salt_master_master', 'tcp://127.0.0.1:4506', 'clear')
[DEBUG ] Passing on saltutil error. This may be an error in saltclient. 'retcode'
When I look at the salt node server, mysql is already installed and started, but the salt master keeps printing this log and never exits.
I searched for days, but I could not solve it.
The same thing happens when I install jboss.
Thanks in advance.
Two thoughts occur to me:
I think mysql has a basic configuration ncurses GUI that requires user input to configure (set default password etc.). If I remember correctly, your salt state is still running and waiting for a human to type at the screen. You can fix this by feeding it an answer/config file.
Stolen shamelessly from another post:
sudo debconf-set-selections <<< 'mysql-server-5.6 mysql-server/root_password password your_password'
sudo debconf-set-selections <<< 'mysql-server-5.6 mysql-server/root_password_again password your_password'
sudo apt-get -y install mysql-server-5.6
The other thought is that the task may simply take longer than salt's default timeout. That can be configured at the salt command line with -t, or in the config file (I forget which setting); see the example below.
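A minimal sketch of raising the timeout on the command line (the 600 seconds here is only an illustrative value):
# wait up to 10 minutes for the minion to return
salt 'lnmp_base' state.sls lnmp_yum.mysql.mysql_install -t 600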

salt-master is not receiving scheduled job event fired on salt-minion

I would like to ask you for help. I use saltstack as a job scheduler for slaves (minions) and I would like to be able to see, on the master, the job events fired on a minion.
My setup
The job is scheduled on the salt-master using a pillar for the given minion. The pillar is:
schedule_returner: mongo

schedule:
  cmd:
    function: cmd.run
    args:
      - date +%s >> /tmp/job_runs
    minutes: 1
    maxrunning: 1
The scheduled job is executed without any problem on the minion. I can see the returned data in mongodb and a new timestamp in my dummy file /tmp/job_runs. The generated configuration file on the minion, /etc/salt/minion.d/_schedule.conf, is:
schedule:
  __mine_interval: {enabled: true, function: mine.update, jid_include: true, maxrunning: 2, minutes: 60, return_job: false}
  cmd:
    args: [date +%s >> /tmp/job_runs]
    function: cmd.run
    maxrunning: 1
    minutes: 1
This file was generated and I didn't modify it.
In the minion log I can see:
[DEBUG ] SaltEvent PUB socket URI: /var/run/salt/minion/minion_event_1fa42d8010_pub.ipc
[DEBUG ] SaltEvent PULL socket URI: /var/run/salt/minion/minion_event_1fa42d8010_pull.ipc
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/minion/minion_event_1fa42d8010_pull.ipc
[DEBUG ] Sending event: tag = __schedule_return; data = {'fun_args': ['date +%s >> /tmp/job_runs'], 'jid': 'req', 'return': '', 'retcode': 0, 'success': True, 'schedule': 'cmd', 'cmd': '_return', 'pid': 10264, '_stamp': '2017-02-22T10:03:05.750874', 'fun': 'cmd.run', 'id': 'vagrant.vm'}
[DEBUG ] Minion of "salt" is handling event tag '__schedule_return'
[DEBUG ] schedule.handle_func: Removing /var/cache/salt/minion/proc/20170222100305532940
[DEBUG ] LazyLoaded mongo.returner
Now I'm interested in listening to those events with the tag __schedule_return.
On the minion, I can run the following commands:
wget https://raw.github.com/saltstack/salt/develop/tests/eventlisten.py
sudo python eventlisten.py -n minion
The output of eventlisten.py is correct and I can see this event.
Now my question is: is there any way to listen to these events on the salt-master?
When I run almost the same commands on the master:
wget https://raw.github.com/saltstack/salt/develop/tests/eventlisten.py
sudo python eventlisten.py
I'm not able to see the events fired on the minion by my scheduled job.
My motivation to do this is that I'm running saltpad on my master and I would like to see my scheduled jobs in the recent jobs (websockets...).
Thank you for any help.
Listening for Events
The quickest way to watch the event bus is by calling the state.event runner on your salt-master:
salt-run state.event pretty=True
Firing Events
It's possible to fire an event to be sent up to the master from the minion using the event.send execution function:
salt-call event.send '__schedule_return' '{success: True, message: "It works!"}'
Reactor System
Salt's Reactor System gives the ability to trigger actions in response to an event. Reactor SLS files and event tags are associated in the master config file (by default /etc/salt/master or /etc/salt/master.d/reactor.conf).
In the master config section 'reactor:' you can specify a list of event tags to be matched. Each event tag can have a list of reactor SLS files to be run.
# Master config section "reactor"
reactor:
  # Match tag "__schedule_return"
  - '__schedule_return':
    # Run these reactor SLS files when the tag matches
    - /srv/reactor/do_stuff.sls
See the Salt documentation on the Reactor System for more information. A placeholder reactor SLS is sketched below.
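What /srv/reactor/do_stuff.sls contains depends on what you want to happen when the event arrives; a minimal sketch (the target and command here are only placeholders, using the minion ID from the log above) could look like this:
# /srv/reactor/do_stuff.sls -- placeholder reaction
log_schedule_return:
  local.cmd.run:
    - tgt: 'vagrant.vm'
    - arg:
      - 'echo "schedule return received" >> /tmp/schedule_events.log'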
