How to modify default options in the Salt Minion config file from the Master

I want to set the "grains_cache" variable to "True" from the Salt Master on all Minions. This variable is one of the default options that live in the minion config file and cannot be overridden by pillar data. So how can I set such variables (for example "grains_cache", "grains_cache_expiration" or "log_file") from the Master?

This should be an easy one: manage the minion configuration file using the file.managed state.
A simple SLS should help here:
minion_configuration:
  file.managed:
    - name: /etc/salt/minion
    - contents: |
        grains_cache: true
        backup_mode: minion

salt-minion-restart:
  cmd.wait:
    - name: salt-call --local service.restart salt-minion
    - bg: True
    - order: last
    - watch:
      - file: minion_configuration
In this example, Salt manages the minion's configuration file so that it contains exactly the two lines beneath - contents: | (note that file.managed with contents manages the whole file, so anything else in /etc/salt/minion would be replaced; file.append is an alternative if you only want to add lines).
The second state, salt-minion-restart, restarts the salt-minion whenever the configuration file is touched by the first state (that's what the watch requisite does).
So in short, this state adds your variables to the minion's configuration and restarts the minion afterwards.
This formula is OS-independent.
The last thing left to do is to target all of your minions with it, for example via your top.sls as sketched below.
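A minimal top.sls sketch for that (assuming the state above is saved as /srv/salt/minion_configuration.sls; the path and file name are placeholders):

# /srv/salt/top.sls
base:
  '*':
    - minion_configuration

Running salt '*' state.apply will then pick it up on every minion.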
If you want to know more about cmd.wait and the example shown, please refer to the cmd state documentation.
I hope this helps.

Related

Salt External Pillar Issues

I am trying to configure an external pillar in GitHub, but no matter what I do, I cannot get the minions to successfully read top.sls. Below is my ext_pillar and pillar_roots config:
pillar_roots:
  base:
    - /srv/pillar

fileserver_backend:
  - gitfs
  - roots

gitfs_update_interval: 60
gitfs_base: main
gitfs_remotes:
  - https://gituser:gittoken@github.com/gitaccount/saltstack.git:
    - mountpoint: salt://

ext_pillar:
  - git:
    - main https://gituser:gittoken@github.com/gitaccount/saltpillar.git
I have the following in the root of my saltpillar repo:
top.sls:
base:
  '*':
    - data

data.sls:
info: some test data from remote pillar
Repos are accessible with the URIs provided. When I run salt '*' saltutil.refresh_pillar and then salt '*' pillar.items I get no results. However, I can put top.sls and data.sls directly into /srv/pillar and it works. I put the master in debug mode and don't see any errors running the commands. Any help is appreciated.
Does the following ext_pillar configuration fix your issue? I'm assuming the top.sls you posted is still in the main branch of your git repo.
ext_pillar:
  - git:
    - main https://gituser:gittoken@github.com/gitaccount/saltpillar.git:
      - env: base
Your top.sls must reference your actual branch name, or you can add the env option to specify a different name.
https://docs.saltproject.io/en/latest/ref/pillar/all/salt.pillar.git_pillar.html
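If pillar data still doesn't show up after changing the master config, remember that ext_pillar changes require a master restart; a quick check sequence (assuming a systemd-based master, service names may differ):

systemctl restart salt-master        # ext_pillar/gitfs config changes need a restart
salt '*' saltutil.refresh_pillar     # make minions re-request pillar data
salt '*' pillar.items                # 'info' from data.sls should now appear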

Minion cannot find file on master

On Minion:
          ID: run_snmpv3_config
    Function: file.managed
        Name: /tmp/run_snmpv3_config_cmd.sh
      Result: False
     Comment: Source file salt://files/run_snmpv3_config_cmd.sh not found in saltenv 'base'
     Started: 15:11:56.175325
    Duration: 27.084 ms
     Changes:
On the master we confirm that the minion does in fact see the file:
master # salt minion cp.list_master | grep snmp
- files/run_snmpv3_config_cmd.sh
So why isn't it able to get it?
(In fact I wanted to use cmd.script, but that errors out with "Unable to cache script", so I tried to just copy the file, which doesn't work either, as we see above.)
I called the state for debugging purposes on a client system using
salt-call --local state.apply teststate -l debug
Of course, with --local it looks for salt://... files under the minion's own file_roots (/srv/salt, or whatever the minion's config says) on the minion, and not on the master.
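So the fix is simply to drop --local (or run the state from the master) so that salt:// URLs are resolved against the master's fileserver:

# on the minion, going through the master this time:
salt-call state.apply teststate -l debug

# or from the master:
salt 'minion' state.apply teststate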

How can I "sprinkle" my minions with custom grains when deploying salt-minion using Saltify (salt-cloud)?

I've gotten Saltify to work on a fresh minion, and I am able to specify a profile for the minion as well. However, I don't know how to assign custom grains to my minion during this process.
Here's my setup.
In /etc/salt/cloud.profiles.d/saltify.conf I have:
salt-this-webserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: web-saltify-config

salt-this-fileserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: file-saltify-config
In /etc/salt/cloud/cloud.providers I have:
web-saltify-config:
  minion:
    master: 10.66.77.44
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - web-server

file-saltify-config:
  minion:
    master: 10.66.77.55
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - file-server
When I run my command from my Salt master:
salt-cloud -p salt-this-fileserver slave-salttesting-01.eng.example.com
My /etc/salt/minion file on my minion looks like this:
grains:
salt-cloud:
driver: saltify
profile: salt-this-fileserver
provider: file-saltify-config:saltify
hash_type: sha256
id: slave-salttesting-01.eng.example.com
log_level: info
master: 10.66.77.99
I would really like it to also have:
grains:
  layers:
    - infrastructure
  roles:
    - file-server
I'd like for this to happen during the Saltify stage rather than in a subsequent step, because it fits really nicely into what I'm trying to accomplish at this point.
Is there a way to sprinkle some grains on my minion during "saltification"?
EDIT: The sync_after_install configuration parameter may have something to do with it but I'm not sure where to put my custom modules, grains, states, etc.
I found the grains from my cloud.providers file in /etc/salt/grains. This appears to just work if you build your cloud.providers file in a similar fashion to the way I built mine (above).
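For reference, with the file-saltify-config provider above I'd expect the new minion to end up with a grains file like this (a sketch, not verbatim output):

# /etc/salt/grains
layers:
  - infrastructure
roles:
  - file-server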
I enabled debugging (in /etc/salt/cloud), and in the debugging output on the screen I can see a snippet of code which suggests that at some point a file named "grains" in the conf directory of the git root may also be transferred over:
# Copy the grains file if found
if [ -f "$_TEMP_CONFIG_DIR/grains" ]; then
    echodebug "Moving provided grains file from $_TEMP_CONFIG_DIR/grains to $_SALT_ETC_DIR/grains"
But I am not sure, because I didn't dig into it further since my grains are being sprinkled as I had hoped they would be.

salt sls to use dnsutil.hosts_append not working

I need to read the host entries from a pillar file and update the /etc/hosts file accordingly.
This is my simple SLS file to update the /etc/hosts file:
# /srv/salt/splunk_dep/hosts.sls
dnsutil:
  dnsutil.hosts-append:
    - hostsfile: '/etc/hosts'
    - ip_addr: '10.10.10.10'
    - entries: 'hostname'
When I execute the SLS file with
salt Minion-name state.apply splunk_dep/hosts
I get the following error:
          ID: dnsutil
    Function: dnsutil.hosts-append
      Result: False
     Comment: State 'dnsutil.hosts-append' was not found in SLS 'splunk_dep/hosts'
              Reason: 'dnsutil.hosts-append' is not available.
     Started:
    Duration:
     Changes:
If I execute it through the command line it works fine:
salt 'DS-110' dnsutil.hosts_append /etc/hosts 10.10.10.10 hostname
I need to update the /etc/hosts file through an SLS file. Can someone please help me with this?
I am using Salt version 2015.8.3 (Beryllium).
dnsutil is a Salt execution module, not a state module. Therefore it can be used from the command line, but not directly from an SLS state file.
To run execution modules from a state file you'll need module.run. Please note that in this case you'll need to put an underscore in hosts_append, not a hyphen:
dnsutil:
  module.run:
    - name: dnsutil.hosts_append
    - hostsfile: '/etc/hosts'
    - ip_addr: '10.10.10.10'
    - entries: 'hostname'
One caveat with running modules this way: even when they don't change your system, they are reported as "changed" in the summary of your Salt call. Please consider using file.blockreplace to manage the hosts file instead, to avoid this.
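A minimal sketch of that alternative (the marker strings are arbitrary placeholders, pick your own):

manage-hosts-entry:
  file.blockreplace:
    - name: /etc/hosts
    - marker_start: "# BEGIN managed by salt"
    - marker_end: "# END managed by salt"
    - content: '10.10.10.10 hostname'
    - append_if_not_found: True

Unlike module.run, this is idempotent: it only reports changes when the text between the markers actually differs.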

What are "states" when using SaltStack?

I'm trying SaltStack after using Puppet for a while, but I can't understand their use of the word "state".
My understanding is that, for example, a light switch has 2 possible states - on or off. When I write my SLS configuration I am describing what state a server should be in. When I ask SaltStack to provision a server I issue the command salt '*' state.highstate. I understand that a server can be in a highstate (as described in my config) or not. All good so far.
But this page describes other states. It describes lowstate, highstate and overstate (amongst others) as layers. Does this mean a server passes through several states to get to a highstate? Or all states are maintained simultaneously as layers? Or can I configure multiple possible states in my SLS and have SaltStack switch between them? Or are they just layers to SaltStack that have 'state' in the name and I'm confused?
I'm probably missing something obvious, if anyone can nudge me in the right direction I think a lot of the documentation will become clear to me!
Here is a top.sls, which contains:
# cat top.sls
base:
  '*':
    - httpd_require
and,
# cat httpd_require.sls
install_httpd:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - enable: True
    - require:
      - file: install_httpd
  file.managed:
    - name: /var/www/html/index.html
    - source: salt://index1.html
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_httpd
High state:
We can see all the aspects of the highstate system while working with state files (.sls). There are three specific components:
High data
SLS file
High State
Each individual state represents a piece of high data (the pkg.installed: block). Salt will compile all relevant SLS files referenced from top.sls; when these files are tied together using includes, and further glued together for use inside an environment using a top.sls file, they form a High State.
# salt 'remote_minion' state.show_highstate --out yaml
remote_minion:
  install_httpd:
    __env__: base
    __sls__: httpd_require
    file:
    - name: /var/www/html/index.html
    - source: salt://index1.html
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_httpd
    - managed
    - order: 10002
    pkg:
    - name: httpd
    - installed
    - order: 10000
    service:
    - name: httpd
    - enable: true
    - require:
      - file: install_httpd
    - running
    - order: 10001
First, an order is declared. All states that are set to run first have their order adjusted accordingly; Salt then adds 10000 to the last defined number (which is 0 by default) and assigns the result to any states that are not explicitly ordered.
Salt also adds some variables that it uses internally: which environment (__env__) to execute the state in, and which SLS file (__sls__) the state declaration came from. Remember that order is still no more than a starting point; the actual highstate is executed based first on requisites, and then on order.
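For illustration, a state can opt in to explicit ordering with the order argument (a contrived example; order: 1 forces it to the front, order: last to the back):

run_me_first:
  cmd.run:
    - name: echo "first"
    - order: 1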
"In other words, "High" data refers generally to data as it is seen by the user."
Low states:
"Low" data refers generally to data as it is ingested and used by Salt.
Once the final High State has been generated, it will be sent to the State compiler. This will reformat the State data into a format that Salt uses internally to evaluate each declaration, and feed data into each State module (which will in turn call the execution modules, as necessary). As with high data, low data can be broken into individual components:
Low State
Low chunks
State module
Execution module(s)
# salt 'remote_minion' state.show_lowstate --out yaml
remote_minion:
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: installed
  name: httpd
  order: 10000
  state: pkg
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  enable: true
  fun: running
  name: httpd
  order: 10001
  require:
  - file: install_httpd
  state: service
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: managed
  group: root
  mode: 644
  name: /var/www/html/index.html
  order: 10002
  require:
  - pkg: install_httpd
  source: salt://index1.html
  state: file
  user: root
Together, all this comprises a Low State. Each individual item is a Low Chunk. The first Low Chunk in this list looks like this:
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: installed
  name: httpd
  order: 10000
  state: pkg
Each low chunk maps to a state module (in this case, pkg) and a function inside that state module (in this case, installed). An ID is also provided at this level (__id__). Salt maps relationships (that is, requisites) between states using a combination of state and __id__. If a name has not been declared by the user, Salt automatically uses the __id__ as the name. Once a function inside a state module has been called, it usually maps to one or more execution modules which actually do the work.
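You can see this mapping from the command line: state.single executes exactly one low chunk, so the first chunk above is roughly equivalent to:

salt 'remote_minion' state.single pkg.installed name=httpd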
salt '*' state.highstate
'*' refers to all the minions connected to the master.
state.highstate runs all the states assigned to those minions in the top.sls defined on the master.
To invoke a specific SLS on all minions, use the following salt command, where the state information is defined in apache.sls:
salt '*' state.sls apache
To invoke the above salt call only on a specific minion, use the command below:
salt 'minion-name' state.sls apache
I don't know all the levels of state, but when you run
salt '*' state.highstate
Saltstack applies the states you provide in /srv/salt/top.sls.
If you write nothing in it, you can't apply a highstate.
You can apply other states with this command:
salt '*' state.sls state.example
A highstate is just the collection of states that is applied to your server. There is a process in the background where Salt's "state compiler" goes through several stages preparing the data in order to produce the highstate, but you don't really need to worry about those.
Things like the lowstate can help with debugging, but aren't necessary for day-to-day usage. The highstate is only applied when you run it.
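If you do want to peek at those background stages, the compiler's output can be inspected per minion (the same calls used in the answer above):

salt-call state.show_highstate --out yaml   # the compiled high data for this minion
salt-call state.show_lowstate --out yaml    # the low chunks that would be executed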
