I'm currently trying to wrap my head around Salt.
In essence I'd like to install a package (RPM) and enable and run its service (if the package installed successfully).
Surprise: the service is named differently from the package.
Let's say
- the package is called x
- but the (systemd/init) service this package installs is called y

This does not work:
my_state_id:
  pkg.installed:
    - pkgs:
      - x
  service.running:
    - name: y
    - enable: true
    - require:
      - pkg: x
Result:
Comment: The following requisites were not found:
           require:
               pkg: x
It looks like I have to write it like this and reference the state and not the package:
my_state_id:
  pkg.installed:
    - pkgs:
      - x
  service.running:
    - name: y
    - enable: true
    - require:
      - pkg: my_state_id
But what does require: - pkg: my_state_id mean? Something like "if the state up to this point didn't fail, then run the current module"?
Quoting from the requisites documentation:
The generalized form of a requisite target is <state name>: <ID or name>.
If we break up your my_state_id ID:
- my_state_id is the ID
- pkg and service are the state names
- the pkg state does not have a name parameter, but the service state has one, and it is y

Since the pkg state does not have a name parameter, we need to use its ID to specify it as a requisite:
- on the left side we have pkg
- on the right side it is the ID my_state_id
- require:
  - pkg: my_state_id
The other way to write the same would be:
# give the package name in the 'name' parameter
my_state_id:
  pkg.installed:
    - name: x
  service.running:
    - name: y
    - enable: true
    - require:
      - pkg: x
So it is a way to tell SaltStack to take actions conditionally: in this case, if the package install failed, it should not try to start the service (which would also fail).
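The same dependency can also be written with separate state IDs, which some people find easier to follow. This is only a sketch along the lines of the question; the IDs install_x and run_y are made up for illustration:

install_x:
  pkg.installed:
    - name: x

run_y:
  service.running:
    - name: y
    - enable: true
    - require:
      # target the pkg state by its ID (its 'name', x, would also work)
      - pkg: install_x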
I have a Concourse pipeline which takes Git source code, builds it, then deploys it to PCF.
Now I have to make two deployments after the build, pcf-dev and pcf-qa, with QA depending on dev: only if the dev deployment succeeds should the QA deployment run.
groups: []
resources:
- name: pcf-dev
  type: cf
- name: pcf-qa
  type: cf
- name: source-code
  type: git
resource_types: []
jobs:
- name: build-deploy
  public: true
  plan:
  - get: source-code
  - task: build
    privileged: true
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: java
          tag: openjdk-8-alpine
      run:
        path: sh
        args:
        - -exc
        - |
          set -e -u -x
          cd source-code/api/
          ./mvnw package
          cp target/*.jar ../../build-output/api.jar
          cd /tmp
          find .
      inputs:
      - name: source-code
      outputs:
      - name: build-output
  - put: pcf-dev
    params:
      path: build-output/api.jar
  - put: pcf-qa
    params:
      path: build-output/api.jar
I don't know how to use the "passed" condition for this case. I know I can use it with "get", but I don't know how to use it with "put" here.
Can anyone please help?
It should work as is: steps in a plan run in order. If the pcf-dev put fails, the job will fail and stop, and pcf-qa won't run. If pcf-dev succeeds, then pcf-qa will run. Steps only run at the same time if they are in an aggregate block.
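If you do want an explicit "passed" constraint (for example so QA runs as its own job and can be triggered or retried separately), the usual pattern is to split the deployments into two jobs and put the constraint on the get step of the downstream job; passed applies to get steps, not to put. A rough sketch, with the job names deploy-dev and deploy-qa invented for illustration and the build task abbreviated:

jobs:
- name: deploy-dev
  plan:
  - get: source-code
    trigger: true
  - task: build
    # ... same build task config as in the question ...
  - put: pcf-dev
    params:
      path: build-output/api.jar
- name: deploy-qa
  plan:
  - get: source-code
    trigger: true
    passed: [deploy-dev]    # only versions that went through deploy-dev
  - task: build
    # ... rebuild here, since task outputs do not carry over between jobs ...
  - put: pcf-qa
    params:
      path: build-output/api.jar

Note that build outputs do not persist across jobs, so the QA job has to rebuild the jar or fetch it from an external store (e.g. an s3 resource).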
I am trying to build an .sls file which will always restart a service:
systemd-resolved:
  service.running:
    - restart: True
When deployed, this gives
      ID: systemd-resolved
Function: service.running
  Result: True
 Comment: The service systemd-resolved is already running
 Started: 23:46:49.999789
Duration: 53.068 ms
 Changes:
This is correct, the service is already running. What I was trying to convey with this state is to restart it. How do I do that?
Note: I would like to avoid, if possible, an explicit command being run (as I feel it is not very Salt-like; this should rather be handled by the appropriate module):
'systemctl restart systemd-resolved':
  cmd.run
If you want your service to reload, you need to set reload: True instead.
Besides, if you only want to restart the service when there is a change in some other state, you need to use watch instead.
For instance:
systemd-resolved:
  service.running:
    - enable: True
    - reload: True
    - watch:
      - pkg: <abc>
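Note that without reload: True, a triggered watch restarts the service instead of reloading it, which is what the question asks for. A minimal sketch of tying the restart to a config file change; the file path and source URL here are assumptions for illustration:

# hypothetical managed config file; name and source are made up
resolved_conf:
  file.managed:
    - name: /etc/systemd/resolved.conf
    - source: salt://resolved/resolved.conf

systemd-resolved:
  service.running:
    - enable: True
    - watch:
      # any change to the managed file triggers a restart of the service
      - file: resolved_conf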
I have a repository with Salt states for provisioning my cluster of servers in the cloud. Over time, I kept adding more states (the .sls files) to this repo. Now I'm starting to struggle to keep track of what is what and what is where.
I am wondering if there is some software utility/package that will generate documentation from my states repository, preferably as HTML pages, so that I can browse the states and see their interdependencies.
UPDATE:
The state sls files look like this:
include:
  - states.core.pip

virtualenv:
  pip.installed:
    - require:
      - sls: states.core.pip

virtualenvwrapper:
  pip.installed:
    - require:
      - sls: states.core.pip
And another sls example:
{% set user_home = '/home/username' %}

my_executable_virtualenv:
  virtualenv.managed:
    - name: {{ user_home }}/.virtualenvs/my_executable_virtualenv
    - user: username
    - system_site_packages: False
    - pip_pkgs:
      - requests
      - numpy
    - pip_upgrade: True
    - require:
      - sls: states.core

my_executable_supervisor_entry:
  file.managed:
    - name: /etc/supervisor/conf.d/my_executable.conf
    - source: salt://files/supervisor_config/my_executable.conf
    - user: username
    - group: username
    - mode: 644
    - makedirs: False
    - require:
      - sls: states.core
I did some research and found that SaltStack has created one, and it produces HTML pages too. According to the documentation, if you have Python installed, installing Sphinx is as easy as:

C:\> pip install sphinx

SaltStack's docs on this can be found here. According to the docs, building the HTML documentation is as easy as:

cd /path/to/salt/doc
make html
I hope this answer is what you were looking for!
This needs a custom plugin, which would have to be written.
There are no plugins directly available to render SLS files.
There are some plugins available for rendering YAML files; maybe you can modify one of those to suit your requirements.
You can use some of the functions in the state module to list everything in the highstate for a minion:
# salt-call state.show_states --out=yaml
local:
- ufw.package.install
- ufw.config.file
- ufw.service.enable
- ufw.service.reload
- ufw.config.services
- ufw.config.applications
- ufw.service.running
- apt.apt_conf
- apt.unattended
- cacerts
- kerberos
- network
- editor
- mounts
- openssh
- openssh.config_ini
- openssh.known_hosts
...
And then view the compiled data for each one (also works with states not in the highstate):
# salt-call state.show_sls editor --out=yaml
local:
  vim-tiny:
    pkg:
    - installed
    - order: 10000
    __sls__: csrf.editor
    __env__: base
  editor:
    alternatives:
    - path: /usr/bin/vim.tiny
    - set
    - order: 10001
    __sls__: csrf.editor
    __env__: base
Or to get the entire highstate at once with state.show_highstate.
I'm not aware of any tools to build HTML documentation from that. You'd have to do that yourself.
To access all states (not just a particular highstate), you can use salt-run fileserver.file_list | grep '.sls$' to find every state, and salt-run state.orchestrate_show_sls to get the rendered data for each (though you may need to supply pillar data).
I have the following states:
copy_over_systemd_service_files:
  file.managed:
    - name: /etc/systemd/system/consul-template.service
    - source: salt://mesos/files/consul-template.service
    - owner: consul

start_up_consul-template_service:
  service.running:
    - name: consul-template
    - enable: True
    - restart: True
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - /etc/systemd/system/consul-template.service
When I run my state file I get the following error:
      ID: start_up_consul-template_service
Function: service.running
    Name: consul-template
  Result: False
 Comment: Service consul-template is already enabled, and is dead
 Started: 17:27:38.346659
Duration: 2835.888 ms
 Changes:
I'm not sure what this means. All I want to do is restart the service once it's been copied over, and I've done this before without issue. Looking back through the stack trace just shows that Salt ran systemctl is-enabled consul-template.
I think I was overcomplicating things. Instead I'm doing this:
consul-template:
  service.running:
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - /etc/systemd/system/consul-template.service
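For reference, the watch can also target the file state by its ID instead of the raw unit path, and since watch implies require the separate require entry can be dropped. A minimal sketch, assuming the copy_over_systemd_service_files state from the question:

consul-template:
  service.running:
    - enable: True
    - watch:
      # watch implies require: the service starts after the unit file is
      # in place and restarts whenever the managed file changes
      - file: copy_over_systemd_service_files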
So I have a basic SaltStack state file to install and configure an app, in this case InfluxDB. However, I would like Salt to manage mounting a block device and have the app require that mount before its service runs.
/opt/influxdb/shared/data/db:
  mount.mounted:
    - device: /dev/vdb1
    - fstype: ext4
    - mkmnt: True
    - opts:
      - defaults

influxdb:
  pkg.installed:
    - sources:
      - influxdb: salt://influxdb/influxdb-0.8.8-1.x86_64.rpm
  service.running:
    - require:
      - pkg: influxdb
    - watch:
      - file: /opt/influxdb/current/config.toml
  module.run:
    - name: influxdb.db_create
    - m_name: test_db

/opt/influxdb/current/config.toml:
  file.managed:
    - name: /opt/influxdb/current/config.toml
    - template: jinja
    - source:
      - salt://ptolemy/influxdb.toml

python-pip:
  pkg.installed

influxdb-python:
  pip.installed:
    - name: influxdb
    - require:
      - pkg: python-pip
I guess I would want something under service.running under influxdb. Can anyone help?
You'll need to add an additional entry under require listing the mount state. It should look like this:
influxdb:
  pkg.installed:
    - sources:
      - influxdb: salt://influxdb/influxdb-0.8.8-1.x86_64.rpm
  service.running:
    - require:
      - pkg: influxdb
      - mount: /opt/influxdb/shared/data/db
    - watch:
      - file: /opt/influxdb/current/config.toml
See here for documentation on require: http://docs.saltstack.com/en/latest/ref/states/requisites.html