How to deal with a minion "already deleted from tracker"?

When running
salt '*' test.ping -s --out json
I get the following output (reduced to a few typical cases; the ellipses stand for more of the same):
Minion sent222 did not respond. No job will be sent.
(...)
minion sent005 was already deleted from tracker, probably a duplicate key
(...)
{
(...)
"sent005": true,
(...)
}
The case of sent222 is clear: it is not available.
What should I do with sent005? It does answer, but Salt also warns that it was deleted from the tracker and probably has a duplicate key. What does this mean?
I noticed this by chance, because the regular salt '*' test.ping simply reports sent005 as True, without any further comment on its state.

Related

How to use the current inventory name in an if condition in a salt state?

I have a simple salt state. Using Jinja or anything else, how can I make the state (or some of its steps) run only if the inventory name contains some string?
Where can I find documentation on all the SaltStack variables, by the way?
I think you want to target minions by grains. Salt has many grains set by default, but you can also add your own. After adding your own "inventory" grain, you can target the minions in the top file.
Check all grains of a minion: salt "minion" grains.items
Set your own grain: salt "minion" grains.set inventory inventory_num
Target minion by new grain: salt -G "inventory:inventory_num" test.ping
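To answer the original question about running only parts of a state: you can also branch inside the SLS with Jinja on that grain. A minimal sketch (the substring 'prod', the state ID, and the echo command are made-up examples, not from the original answer):
{% if 'prod' in grains.get('inventory', '') %}
only-on-prod-inventory:
  cmd.run:
    - name: echo "inventory matched"
{% endif %}
Because Jinja is rendered before the state runs, the cmd.run state only exists at all on minions whose inventory grain contains the string.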
Further information:
https://docs.saltproject.io/en/latest/ref/modules/all/salt.modules.grains.html#salt.modules.grains.set
https://docs.saltproject.io/en/latest/topics/targeting/index.html
https://docs.saltproject.io/en/latest/topics/grains/index.html

Minion cannot find file on master

On Minion:
ID: run_snmpv3_config
Function: file.managed
Name: /tmp/run_snmpv3_config_cmd.sh
Result: False
Comment: Source file salt://files/run_snmpv3_config_cmd.sh not found in saltenv 'base'
Started: 15:11:56.175325
Duration: 27.084 ms
Changes:
On the master we confirm that the minion does in fact see the file:
master # salt minion cp.list_master | grep snmp
- files/run_snmpv3_config_cmd.sh
So why isn't the minion able to get it?
(In fact I wanted to use cmd.script, but that errors out with "Unable to cache script", so I tried to just copy the file instead, which doesn't work either, as we see above.)
I called the state for debugging purposes on a client system using
salt-call --local state.apply teststate -l debug
Of course, in this case it will look for the file salt://x inside /srv/salt (or whatever the minion's file_roots is configured to) on the minion itself, not on the master....
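So the fix, assuming the state tree only lives on the master, is to drop --local so that the minion requests files from the master again:
salt-call state.apply teststate -l debug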

Salt SLS using dnsutil.hosts_append not working

I need to read the host entries from a pillar file and update /etc/hosts accordingly.
This is my simple sls file to update the /etc/hosts file.
# /srv/salt/splunk_dep/hosts.sls
dnsutil:
  dnsutil.hosts-append:
    - hostsfile: '/etc/hosts'
    - ip_addr: '10.10.10.10'
    - entries: 'hostname'
When I apply the SLS file with
salt Minion-name state.apply splunk_dep/hosts
I get the following error:
ID: dnsutil
Function: dnsutil.hosts-append
Result: False
Comment: State 'dnsutil.hosts-append' was not found in SLS 'splunk_dep/hosts'
Reason: 'dnsutil.hosts-append' is not available.
Started:
Duration:
Changes:
If I execute it through the command line, it works fine:
salt 'DS-110' dnsutil.hosts_append /etc/hosts 10.10.10.10 hostname
I need to update the /etc/hosts file through an SLS file. Can someone please help me with this?
I am using Salt version 2015.8.3 (Beryllium).
dnsutil is a Salt execution module, not a Salt state module. Therefore it can be used from the command line, but not directly from an SLS state file.
To run execution modules from a state file you need module.run. Note that in this case you must use an underscore in hosts_append, not a hyphen:
dnsutil:
  module.run:
    - name: dnsutil.hosts_append
    - hostsfile: '/etc/hosts'
    - ip_addr: '10.10.10.10'
    - entries: 'hostname'
Some caveats with modules: even if they don't change your system, they will be reported as "changed" in the summary of your salt call. To avoid this, consider using file.blockreplace to manage the hosts file instead.
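A minimal file.blockreplace sketch (the state ID and marker strings below are arbitrary placeholders, not from the original answer):
hosts-entry:
  file.blockreplace:
    - name: /etc/hosts
    - marker_start: '# BEGIN salt managed hosts'
    - marker_end: '# END salt managed hosts'
    - content: '10.10.10.10 hostname'
    - append_if_not_found: True
This is idempotent: Salt only reports a change when the text between the markers actually differs.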

SaltStack: $HOME of a user that does not exist yet

I create a user foo on a minion. The minion evaluates /etc/default/useradd. This means the salt master does not know whether the new $HOME will be /home/foo or, in our case, /localhome/foo.
How can I get the $HOME of user foo as a Jinja variable?
I need it in a systemd service file.
I would like to avoid custom pillar data, since this is redundant. Is there a way to get it via grains?
Does it work during bootstrapping? First the user foo needs to be created, then the systemd file can be created by looking up the $HOME of foo...
This would work if the user does already exist:
{{ salt['user.info'](user).get('home') }}/foo:
  file.recurse:
    - source: salt://conf/common/foo
Related issue: https://github.com/saltstack/salt/issues/7883
To answer this part of the question:
Is there a way to get it via grains?
1) Add a file _grains/homeprefix.py under the file_roots specified in the master config file, with the following content:
#!/usr/bin/env python
from os.path import dirname, expanduser

def gethomeprefix():
    # initialize a grains dictionary
    grains = {}
    # the home prefix is the parent of the current
    # user's home directory (e.g. /home or /localhome)
    grains['homeprefix'] = dirname(expanduser("~"))
    return grains
2) Run the sync command on the master to push the custom grain to the minions:
salt '*' saltutil.sync_grains
3) Run grains.get on the master to test:
salt '*' grains.get homeprefix
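The grain can then be used from Jinja when building the systemd unit, for example (a sketch; the unit path, source template, and context variable name are assumptions):
/etc/systemd/system/foo.service:
  file.managed:
    - source: salt://conf/common/foo.service.jinja
    - template: jinja
    - context:
        foo_home: {{ grains['homeprefix'] }}/foo
Inside foo.service.jinja you would then reference {{ foo_home }} wherever the home directory is needed.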

How can I dump the pillar data that will be sent to a minion?

When I run a highstate on a minion, there is a strange error. I suspect that the pillar data on the minion may not be right. Can I somehow dump the pillar data from the minion?
As you said in your answer to your own question,
salt '*' pillar.data
will show all the data. However, you have some additional useful commands:
salt '*' pillar.raw
will show the raw data as it's loaded into the __pillar__ dict.
salt '*' pillar.get <key>
will show you the value of some key in pillar, with the ability to fall back to a default value if the key doesn't exist. (The default is super useful when you're using pillar data while templating states.)
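For example, in a state template (the key name and default are made up for illustration):
{{ salt['pillar.get']('app:port', 8080) }}
The colon notation traverses nested pillar dictionaries, so this looks up pillar['app']['port'] and falls back to 8080 if the key is missing.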
To see the pillar data:
salt '*' pillar.data
It's a good idea to refresh the pillar data first using
salt '*' saltutil.refresh_pillar
Also,
salt '*' pillar.items
will show you all pillar items, minion by minion.
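Since the question is about the data as the minion sees it, you can also run the same lookups locally on the minion itself (standard salt-call usage):
salt-call pillar.items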
