I'm trying SaltStack after using Puppet for a while, but I can't understand their use of the word "state".
My understanding is that, for example, a light switch has 2 possible states - on or off. When I write my SLS configuration I am describing what state a server should be in. When I ask SaltStack to provision a server I issue the command salt '*' state.highstate. I understand that a server can be in a highstate (as described in my config) or not. All good so far.
But this page describes other states. It describes lowstate, highstate and overstate (amongst others) as layers. Does this mean a server passes through several states to get to a highstate? Or all states are maintained simultaneously as layers? Or can I configure multiple possible states in my SLS and have SaltStack switch between them? Or are they just layers to SaltStack that have 'state' in the name and I'm confused?
I'm probably missing something obvious, if anyone can nudge me in the right direction I think a lot of the documentation will become clear to me!
Here is top.sls, which contains:
# cat top.sls
base:
  '*':
    - httpd_require
and:
# cat httpd_require.sls
install_httpd:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - enable: True
    - require:
      - file: install_httpd
  file.managed:
    - name: /var/www/html/index.html
    - source: salt://index1.html
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_httpd
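With these two files in place, the configuration can be applied to a minion (using the remote_minion name that appears in the examples below):
# salt 'remote_minion' state.highstate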
High state:
We can see all the aspects of the high state system while working with state files (.sls). There are three specific components:
- High data
- SLS file
- High state
Each individual State represents a piece of high data (the pkg.installed: block, for example). Salt will compile all relevant SLS files referenced inside top.sls. When these files are tied together using includes, and further glued together for use inside an environment using a top.sls file, they form a High State.
# salt 'remote_minion' state.show_highstate --out yaml
remote_minion:
  install_httpd:
    __env__: base
    __sls__: httpd_require
    file:
    - name: /var/www/html/index.html
    - source: salt://index1.html
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_httpd
    - managed
    - order: 10002
    pkg:
    - name: httpd
    - installed
    - order: 10000
    service:
    - name: httpd
    - enable: true
    - require:
      - file: install_httpd
    - running
    - order: 10001
First, an order is declared. All States that are set to be first will have their order adjusted accordingly. Salt will then add 10000 to the last defined number (which is 0 by default), and add any States that are not explicitly ordered.
Salt will also add some variables that it uses internally, to know which environment (__env__) to execute the State in, and which SLS file (__sls__) the State declaration came from. Remember that the order is still no more than a starting point; the actual High State will be executed based first on requisites, and then on order.
"In other words, "High" data refers generally to data as it is seen by the user."
Low States:
"Low" data refers generally to data as it is ingested and used by Salt.
Once the final High State has been generated, it will be sent to the State compiler. This will reformat the State data into a format that Salt uses internally to evaluate each declaration, and feed data into each State module (which will in turn call the execution modules, as necessary). As with high data, low data can be broken into individual components:
- Low State
- Low chunks
- State module
- Execution module(s)
# salt 'remote_minion' state.show_lowstate --out yaml
remote_minion:
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: installed
  name: httpd
  order: 10000
  state: pkg
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  enable: true
  fun: running
  name: httpd
  order: 10001
  require:
  - file: install_httpd
  state: service
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: managed
  group: root
  mode: 644
  name: /var/www/html/index.html
  order: 10002
  require:
  - pkg: install_httpd
  source: salt://index1.html
  state: file
  user: root
Together, all this comprises a Low State. Each individual item is a Low Chunk. The first Low Chunk on this list looks like this:
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: installed
  name: httpd
  order: 10000
  state: pkg
Each low chunk maps to a State module (in this case, pkg) and a function inside that State module (in this case, installed). An ID is also provided at this level (__id__). Salt will map relationships (that is, requisites) between States using a combination of state and __id__. If a name has not been declared by the user, then Salt will automatically use the __id__ as the name. Once a function inside a State module has been called, it will usually map to one or more execution modules which actually do the work.
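To see the execution-module side of this mapping for yourself, the underlying functions can be called directly from the master; a minimal illustration, assuming the minion's package manager knows the httpd package:
# salt 'remote_minion' pkg.install httpd
# salt 'remote_minion' pkg.version httpd
The State module function (pkg.installed) decides whether anything needs to change; the execution module function (pkg.install) does the actual work.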
salt '*' state.highstate
'*' refers to all the minions connected to the master.
state.highstate is used to run all the states mentioned in the top.sls defined on the master.
To apply a specific state to all minions, use the following command, where the state information is defined in apache.sls in the example given below.
salt '*' state.sls apache
To invoke the above salt call only on a specific minion, use the below command.
salt 'minion-name' state.sls apache
I don't know all the levels of state, but when you run:
salt '*' state.highstate
SaltStack applies the states you declare in /srv/salt/top.sls.
If you write nothing in it, you can't apply a highstate.
You can apply another state with this command:
salt '*' state.sls state.example
A highstate is just the collection of states that is applied to your server. There is a process in the background where Salt's "state compiler" goes through several stages preparing the data in order to produce the highstate, but you don't really need to worry about those.
Things like the lowstate can help with debugging, but aren't necessary for day-to-day usage. The highstate is only applied once.
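For example, to look at those stages on a minion without applying anything, you can render the compiled data and do a dry run (minion-name is a placeholder):
salt 'minion-name' state.show_highstate --out yaml
salt 'minion-name' state.show_lowstate --out yaml
salt 'minion-name' state.highstate test=True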
I hope you can help me with a rather frustrating issue I have been having. I have been trying to remove static config from some config files and move this to Pillar/Mine data using Salt-Stack.
Everything is going well, with the exception of 1 specific task.
This is grabbing data (a custom grain) from 3 specific minions, to make 3 different variables in an .sls (context) or a Jinja file (direct variable) on other minions, but I cannot seem to get it to work.
(My scenario is flexible as I can call this in either a state file or jinja variable in a config file.)
This is on AWS EC2 instances, but can be replicated away from AWS in my lab. The grain I need is "public_ipv4", and the reason I cannot use network.util in the salt runner is that the address is NAT'd and the box doesn't have a 2nd interface with the public IP assigned to it. (This cannot be changed.)
Pillar data works, and I have an init.sls for the mine function:
mine_functions:
  grains.item:
    - location
    - environment
    - roles
    - srvtype
    - instance
    - az
    - public_ipv4
    - fqdn
    - ipv4
    - ipv6
(Also, the custom grain "public_ipv4" works when called by the minion, so I know it is not the grains themselves that are incorrect.)
When targeting via the master, the following brings back the requested information:
my-minion:
    ----------
    minion-with-data-i-want-1:
        ----------
        az:
            c
        environment:
            dev
        fqdn:
            correct_fqdn
        instance:
            3
        ipv4:
            - Correct_local_ip
            - 127.0.0.1
        ipv6:
            - ::1
            - Correct_ip
        location:
            correct_location
        public_ipv4:
            Correct_public_ip
        roles:
            Correct_role
        srvtype:
            None
It is key to note here that the above comes from:
salt '*globbed_target*' mine.get '*minions-with-data-i-need-glob*' grains.item
This is from the master, but I cannot single out a specific grain by using indexing or any args/kwargs etc.
So I put some syntax into a state file and some jinja templates and I cannot get it to work. Here are a few I have tried so far:
Jinja:
{# set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item')[7] #}
Above returns nothing.
State file:
- context:
  - ip1: {{ salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') }}
The above returns a dict error:
Context must be formed as a dict
Running latest salt-minion/master from apt.
Steps I have taken:
Running salt '*' mine.update after every change, and checking with salt '*' mine.valid after every change; the functions show up as expected.
Any help is appreciated.
This looks like you are running into a classic problem: not knowing what you are getting as the return value.
First, your {# set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item')[7] #} returns nothing because it is a Jinja comment. Use {% set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item') %} instead.
The next problem is that you are passing a list to context when it is supposed to take a dict. The error isn't even related to the mine.
Try this instead:
- context:
    ip1: {{ salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') | json }}
Next, learn to use slsutil.renderer to look at how things are rendered, such as: salt 'minion' slsutil.renderer salt://thing/init.sls default_renderer=jinja
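For completeness, a rough sketch of pulling a single grain out of the mine.get return in Jinja: mine.get returns a dict keyed by minion ID, and grains.item returns a dict of grains (the glob and grain name below are taken from the question):
{% set mine_data = salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') %}
{% for minion_id, minion_grains in mine_data.items() %}
{{ minion_id }} has public IP {{ minion_grains['public_ipv4'] }}
{% endfor %}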
I want to set the "grains_cache" variable to "True" from the Salt Master on all Minions. This variable is one of the default options that exist in the minion config file and cannot be overridden by pillar data. So how can I set variables (for example "grains_cache", "grains_cache_expiration" or "log_file") from the Master?
This should be an easy one: manage the minion configuration file using the file.managed function.
A simple sls should help here:
minion_configuration:
  file.managed:
    - name: /etc/salt/minion
    - contents: |
        grains_cache: true
        backup_mode: minion

salt-minion-restart:
  cmd.wait:
    - name: salt-call --local service.restart salt-minion
    - bg: True
    - order: last
    - watch:
      - file: minion_configuration
In this example, SaltStack ensures that the minion's configuration file contains exactly the two lines beneath - contents: |.
The second state, salt-minion-restart, will restart the salt-minion if the minion configuration file is touched (managed by the first state).
So in short terms, this state adds your variables to the minion's configuration and restarts the minion afterwards.
This formula is os-independent.
The last thing left to do is to target all of your minions with this.
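For example, assuming the state above is saved as /srv/salt/minion-config.sls on the master (the file name is only an assumption here):
salt '*' state.sls minion-config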
If you want to know more about cmd.wait and the example shown, please refer to this documentation.
I hope I could help.
I am testing salt as a management system, having used Ansible so far.
How can I trigger an action (specifically, a service reload) when a state has changed?
In Ansible this is done via notify but browsing salt documentation I cannot find anything similar.
I found watch, which works the other way round: "watch something, and if it changed, do this and that".
There is also listen, which seems closer to my needs (the documentation mentions a service reload), but I cannot put the pieces together.
To set an example, how would the following scenario work in salt: check out a git repo (= clone it if not existing, or pull from it otherwise), and if it has changed, reload a service? The Ansible equivalent is:
- name: clone my service
  git:
    clone: yes
    dest: /opt/myservice
    repo: http://git.example.com/myservice.git
    version: master
    force: yes
  notify:
    - restart my service if needed

- name: restart my service if needed
  systemd:
    name: myservice
    state: restarted
    enabled: True
    daemon_reload: yes
Your example:
ensure my service:
  git.latest:
    - name: http://git.example.com/myservice.git
    - target: /opt/myservice
  service.running:
    - name: myservice
    - watch:
      - git: http://git.example.com/myservice.git
When there is a change in the repo (the initial clone, an update, etc.), the state will be marked as "having changes", and thus the dependent states - service.running in this case - will react; for a service that means a restart.
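If you want a reload instead of a full restart (closer to Ansible's notify), here is a sketch of an alternative using the listen requisite together with service.running's reload option; the service name myservice is taken from the question:
ensure my service:
  git.latest:
    - name: http://git.example.com/myservice.git
    - target: /opt/myservice

myservice:
  service.running:
    - reload: True
    - listen:
      - git: ensure my service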
What you are asking is covered in the salt quickstart.
I have a repository with salt states for provisioning my cluster of servers in the cloud. Over time, I kept adding more states - the .sls files - to this repo. Now I'm starting to struggle to keep track of what is what and where.
I am wondering if there is some software utility/package that will generate documentation from my states repository, preferably as HTML pages, so that I can browse the states and see their interdependencies.
UPDATE:
The state sls files look like this:
include:
  - states.core.pip

virtualenv:
  pip.installed:
    - require:
      - sls: states.core.pip

virtualenvwrapper:
  pip.installed:
    - require:
      - sls: states.core.pip
And another sls example:
{% set user_home = '/home/username' %}

my_executable_virtualenv:
  virtualenv.managed:
    - name: {{ user_home }}/.virtualenvs/my_executable_virtualenv
    - user: username
    - system_site_packages: False
    - pip_pkgs:
      - requests
      - numpy
    - pip_upgrade: True
    - require:
      - sls: states.core

my_executable_supervisor_entry:
  file.managed:
    - name: /etc/supervisor/conf.d/my_executable.conf
    - source: salt://files/supervisor_config/my_executable.conf
    - user: username
    - group: username
    - mode: 644
    - makedirs: False
    - require:
      - sls: states.core
I did some research and found that Salt Stack has created one. It works as HTML pages too, according to the documentation. If you have Python installed, installing Sphinx is as easy as doing:
C:\> pip install sphinx
Salt Stack's docs on this can be found here. According to the docs, making the HTML documentation is as easy as doing:
cd /path/to/salt/doc
make html
I hope this answer is what you were looking for!
This needs a custom plugin, which needs to be written.
There are no plugins directly available to render sls files.
There are some plugins available for rendering YAML files; maybe you can modify one of them to suit your requirement.
You can use some of the functions in the state module to list everything in the highstate for a minion:
# salt-call state.show_states --out=yaml
local:
- ufw.package.install
- ufw.config.file
- ufw.service.enable
- ufw.service.reload
- ufw.config.services
- ufw.config.applications
- ufw.service.running
- apt.apt_conf
- apt.unattended
- cacerts
- kerberos
- network
- editor
- mounts
- openssh
- openssh.config_ini
- openssh.known_hosts
...
And then view the compiled data for each one (also works with states not in the highstate):
# salt-call state.show_sls editor --out=yaml
local:
  vim-tiny:
    pkg:
    - installed
    - order: 10000
    __sls__: csrf.editor
    __env__: base
  editor:
    alternatives:
    - path: /usr/bin/vim.tiny
    - set
    - order: 10001
    __sls__: csrf.editor
    __env__: base
Or to get the entire highstate at once with state.show_highstate.
I'm not aware of any tools to build HTML documentation from that. You'd have to do that yourself.
To access all states (not just a particular highstate), you can use salt-run fileserver.file_list | grep '.sls$' to find every state, and salt-run state.orchestrate_show_sls to get the rendered data for each (though you may need to supply pillar data).
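For example, combining the two (states.core.pip is one of the sls files from the question; templated states may also need pillar data passed in):
salt-run fileserver.file_list | grep '\.sls$'
salt-run state.orchestrate_show_sls states.core.pip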
If this explanation exists somewhere, I've spent 3 months trying to find it, and failed. I come from a Puppet background, however for various reasons I really want to try replacing it with Salt.
I've gotten a basic setup and I can code my own states and see them work without any issues. The documentation on this is pretty clear. Where I'm stuck is attempting to implement a community salt formula. I can include the formula with its basic setup and it works fine, however I cannot figure out how to override the defaults from my pillar data. This seems to be where the Salt documentation is weakest.
The documentation states that you should check the pillar.example for how to configure the formula. The pillar.example gives the configuration part clearly, however neither the documentation nor the pillar.example tells you how to include this in your pillar data.
In my case I'm trying to use the snmp-formula. I've got a basic setup for my salt file structure, which you can see from my file roots:
file_roots:
  base:
    - /srv/salt/base
    - /srv/formulas/snmp-formula
Inside base I have two pillars:
base/
  top.sls
  common.sls
top.sls is very simple:
base:
  '*':
    - common
common.sls has all common config:
include:
  - snmp
  - snmp.conf
  - snmp.trap
  - snmp.conftrap

tcpdump:
  pkg.latest:
    - name: tcpdump

telnet:
  pkg.latest:
    - name: telnet

htop:
  pkg.latest:
    - name: htop

snmp:
  conf:
    location: 'Office'
    syscontact: 'Example.com Admin <admin@example.com>'
    logconnects: false
    # vacm com2sec's (map communities into security names)
    com2sec:
      - name: mynetwork
        source: 192.168.0.13/31
        community: public
    # vacm group's (map security names to group names)
    groups:
      - name: MyROGroup
        version: v1
        secname: mynetwork
      - name: MyROGroup
        version: v1c
        secname: mynetwork
    # vacm views (map mib trees to views)
    views:
      - name: all
        type: included
        oid: '.1'
    # vacm access (map groups to views with access restrictions)
    access:
      - name: MyROGroup
        context: '""'
        match: any
        level: noauth
        prefix: exact
        read: all
        write: none
        notify: none
    # v3 users for read-write
    rwusers:
      - username: 'nagios'
        passphrase: 'myv3password'
        view: all
In common.sls I've included the snmp-formula and then followed the pillar.example from the formula to customize the configuration. However when I run a test with this I get the following error:
Data failed to compile:
----------
Detected conflicting IDs, SLS IDs need to be globally unique.
The conflicting ID is 'snmp' and is found in SLS 'base:common' and SLS 'base:snmp'
I'm not sure how to proceed with this. It seems like I would have to modify the community formula directly to achieve what I want, which seems like the wrong idea. I want to be able to keep the community formula up to date with its repository, and coming from the Puppet perspective, I should be overriding a module's defaults as needed, not modifying the module directly.
Can someone please make the missing connection for me? How do I implement the pillar.example?
The Salt formula in question is here:
https://github.com/saltstack-formulas/snmp-formula
I have finally figured this out, and it was a problem with a fundamental misunderstanding of the differences between 'file_roots' and 'pillar_roots', as well as 'pillars' vs 'states'. I don't feel that the Getting Started guide in the documentation is very clear about these, so I'll explain, but first the answer.
ANSWER:
To implement the above pillar.example, simply create a dedicated snmp.sls file in your 'base' environment in your pillar data:
/srv/pillar/snmp.sls:
snmp:
  conf:
    location: 'Office'
    syscontact: 'Example.com Admin <admin@example.com>'
    logconnects: false
    # vacm com2sec's (map communities into security names)
    com2sec:
      - name: mynetwork
        source: 192.168.0.13/31
        community: public
    # vacm group's (map security names to group names)
    groups:
      - name: MyROGroup
        version: v1
        secname: mynetwork
      - name: MyROGroup
        version: v1c
        secname: mynetwork
    # vacm views (map mib trees to views)
    views:
      - name: all
        type: included
        oid: '.1'
        mask: 80
    # vacm access (map groups to views with access restrictions)
    access:
      - name: MyROGroup
        context: '""'
        match: any
        level: noauth
        prefix: exact
        read: all
        write: none
        notify: none
    # v3 users for read-write
    rwusers:
      - username: 'nagios'
        passphrase: 'myv3password'
        view: all
Your pillar_root must also include a top.sls (not to be confused with the top.sls in your file_roots for your states) like this:
/srv/pillar/top.sls
base:
  '*':
    - snmp
IMPORTANT: This directory and this top.sls for pillar data cannot exist in, or be included by, your file_roots! This is where I was going wrong. For a complete picture, this is the config I now have:
/etc/salt/master: (snippet)
file_roots:
  base:
    - /srv/salt/base
    - /srv/formulas/snmp-formula

pillar_roots:
  base:
    - /srv/pillar
Inside /srv/salt/base I have a top.sls which includes a common.sls for the 'base' environment. This is where the snmp-formula and its states are included.
/srv/salt/base/top.sls:
base:
  '*':
    - common
/srv/salt/base/common.sls:
include:
  - snmp
  - snmp.conf
  - snmp.trap
  - snmp.conftrap

tcpdump:
  pkg.latest:
    - name: tcpdump

telnet:
  pkg.latest:
    - name: telnet

htop:
  pkg.latest:
    - name: htop
Now the snmp parameter in the pillar data does not conflict with the ID of the snmp state from the formula included by the state data.
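To confirm the override is in place, the pillar can be refreshed and queried from the master (a quick check using keys from the example above):
salt '*' saltutil.refresh_pillar
salt '*' pillar.get snmp:conf:location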