SaltStack require for mount point - mount

So I have a basic SaltStack state file to install and configure an app - in this case InfluxDB. However, I would like Salt to manage the mounting of a block device and have the app require it before running.
/opt/influxdb/shared/data/db:
  mount.mounted:
    - device: /dev/vdb1
    - fstype: ext4
    - mkmnt: True
    - opts:
      - defaults

influxdb:
  pkg.installed:
    - sources:
      - influxdb: salt://influxdb/influxdb-0.8.8-1.x86_64.rpm
  service.running:
    - require:
      - pkg: influxdb
    - watch:
      - file: /opt/influxdb/current/config.toml
  module.run:
    - name: influxdb.db_create
    - m_name: test_db

/opt/influxdb/current/config.toml:
  file.managed:
    - name: /opt/influxdb/current/config.toml
    - template: jinja
    - source:
      - salt://ptolemy/influxdb.toml

python-pip:
  pkg.installed

influxdb-python:
  pip.installed:
    - name: influxdb
    - require:
      - pkg: python-pip
I guess I would want something under service.running under influxdb. Can anyone help?

You'll need to add a new item under require listing the additional requirement. It should look like this:
influxdb:
  pkg.installed:
    - sources:
      - influxdb: salt://influxdb/influxdb-0.8.8-1.x86_64.rpm
  service.running:
    - require:
      - pkg: influxdb
      - mount: /opt/influxdb/shared/data/db
    - watch:
      - file: /opt/influxdb/current/config.toml
See here for documentation on require: http://docs.saltstack.com/en/latest/ref/states/requisites.html
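As an aside, the same dependency can be declared from the mount's side with require_in, which leaves the service block untouched; a minimal sketch of the equivalent relationship:

/opt/influxdb/shared/data/db:
  mount.mounted:
    - device: /dev/vdb1
    - fstype: ext4
    - mkmnt: True
    - opts:
      - defaults
    # injects this mount as a require into the influxdb service state
    - require_in:
      - service: influxdb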

Related

Salt: install package and run service (name different than package)

Currently trying to wrap my head around Salt.
In essence I'd like to install a package (rpm) and enable and run the service (if the package installed successfully).
Surprise: the service is named differently than the package.
Let's say:
- the package is called x
- but the (systemd/init) service this package installs is called y
This does not work:
my_state_id:
  pkg.installed:
    - pkgs:
      - x
  service.running:
    - name: y
    - enable: true
    - require:
      - pkg: x
Result:
Comment: The following requisites were not found:
              require:
                  pkg: x
It looks like I have to write it like this and reference the state and not the package:
my_state_id:
  pkg.installed:
    - pkgs:
      - x
  service.running:
    - name: y
    - enable: true
    - require:
      - pkg: my_state_id
But: What does require: pkg: my_state_id mean? =D "if the state up to this point didn't fail, then run the current module"?
Quoting from the requisites documentation:
The generalized form of a requisite target is <state name>: <ID or name>.
If we break up your my_state_id ID:
- my_state_id is the ID
- pkg and service are the state names
- the pkg state does not have a name parameter, but the service state has one, and it is y
Since the pkg state does not have a name parameter, we need to use its ID to refer to it as a requisite:
- on the left side we have the state name, pkg
- on the right side we have the ID, my_state_id
- require:
  - pkg: my_state_id
The other way to write the same would be:
# give the package name in 'name' parameter
my_state_id:
pkg.installed:
- name: x
service.running:
- name: y
- enable: true
- require:
- pkg: x
So it is a way to tell SaltStack to take actions conditionally. In this case, if the package install failed, it should not try to start the service (and fail).
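If you want to see how Salt resolved your IDs and names before applying anything, you can render the compiled state data on a minion (the SLS name mystate here is hypothetical):

# show the rendered state data, including the require entries,
# without actually applying anything
salt-call state.show_sls mystate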

Must specify saltenv=base?

I'm trying to understand what's wrong with my config such that I must specify saltenv=base when running sudo salt '*' state.highstate saltenv=base. If I run the highstate without specifying the saltenv, I get the error message:
No Top file or master_tops data matches found.
Running salt-call cp.get_file_str salt://top.sls on the minion or master pulls back the right top.sls file. Here's a snippet of my top.sls:
base:
  # All computers including clients and servers
  '*':
    - states.schedule_highstate
  # any windows machine, server or client
  'os:Windows':
    - match: grain
    - states.chocolatey
Also, I can run any state that's in the same directory or a subdirectory as the top.sls without specifying saltenv=, with sudo salt '*' state.apply states.(somestate).
While I do have base specified in /etc/salt/master like this:
file_roots:
  base:
    - /srv/saltstack/salt/base
there is nothing in the filesystem on the Salt master. All of the salt and pillar files are coming from GitFS. Specifying saltenv= does grab from the correct corresponding git branch, with the master branch responding to saltenv=base or to no saltenv specified when doing state.apply (that works).
gitfs_remotes:
  - https://git.asminternational.org/SaltStack/salt.git:
    - user: someuser
    - password: somepassword
    - ssl_verify: False
.
.
.
ext_pillar:
  - git:
    - master https://git.asminternational.org/SaltStack/pillar.git:
      - name: base
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: base
    - dev https://git.asminternational.org/SaltStack/pillar.git:
      - name: dev
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: dev
    - test https://git.asminternational.org/SaltStack/pillar.git:
      - name: test
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: test
    - prod https://git.asminternational.org/SaltStack/pillar.git:
      - name: prod
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: prod
    - experimental https://git.asminternational.org/SaltStack/pillar.git:
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: experimental
The behavior is inconsistent: it can't find top.sls unless I specify the saltenv, but running states works fine without saltenv=.
Any ideas?
After more debugging I found the answer. One of the other environments' top.sls files was malformed and causing an error. When specifying saltenv=base, none of the other top files are evaluated, which is why that worked. After I verified ALL of the top.sls files from all the environments, things behaved as expected.
Note to self: verify all the top files, not just the one you are working on.
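If you need to chase this down again, one way to surface a malformed top file without running a highstate is to render the merged top data, since that pulls in the top.sls from every environment:

# a bad top.sls in any environment should show up here as a
# rendering error instead of silently breaking the highstate
salt '*' state.show_top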

Start a system service using cron - SaltStack

I would like to override the default tmp.conf at /usr/lib/tmpfiles.d/ with /etc/tmpfiles.d/tmp.conf and run a cron job at midnight every day. The command that needs to run is systemd-tmpfiles --clean. How can I run it at midnight? Can somebody help me, please?
Sample code:
tmp.conf:
  file.managed:
    - name: /etc/tmpfiles.d/tmp.conf
    - source: salt://tmp/files/tmp.conf
    - user: root
    - mode: 644
    - require:
      - user: root

run_systemd-tmpfiles:
  cron.present:
    - user: root
    - minute: 0
    - hour: 0
    - require:
      - file: tmp.conf

enable_tmp_service:
  service.running:
    - name: systemd-tmpfiles --clean
    - enable: True
    - require:
      - cron: run_systemd-tmpfiles
If you just want the command to run as part of a cron job, you need to set up the cron.present to run that command:
cron_systemd-tmpfiles:
  cron.present:
    - name: systemd-tmpfiles --clean
    - user: root
    - minute: 0
    - hour: 0
    - require:
      - file: tmp.conf
If you instead want to run the command from this state, you can't use the tmpfiles service; you would run the command through cmd.run, or, if you only want it to run when the file.managed changes, through cmd.wait:
run tmpfiles:
  cmd.wait:
    - name: systemd-tmpfiles --clean
    - listen:
      - file: tmp.conf
But systemd-tmpfiles.service is already run at boot if you are using systemd, so there is no reason to enable it again. When it runs at the beginning of the boot process, it runs the same way systemd-tmpfiles --clean does.
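For what it's worth, most systemd distributions also ship a systemd-tmpfiles-clean.timer unit that already runs the cleanup on a daily schedule, which may make the cron redundant; you can check whether it is active with:

systemctl list-timers systemd-tmpfiles-clean.timer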

Service is already enabled, and is dead

I have the following states:
copy_over_systemd_service_files:
  file.managed:
    - name: /etc/systemd/system/consul-template.service
    - source: salt://mesos/files/consul-template.service
    - owner: consul

start_up_consul-template_service:
  service.running:
    - name: consul-template
    - enable: True
    - restart: True
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - file: /etc/systemd/system/consul-template.service
When I run my state file I get the following error:
          ID: start_up_consul-template_service
    Function: service.running
        Name: consul-template
      Result: False
     Comment: Service consul-template is already enabled, and is dead
     Started: 17:27:38.346659
    Duration: 2835.888 ms
     Changes:
I'm not sure what this means. All I want to do is restart the service once it's been copied over, and I've done this before without issue. Looking back through the stack trace just shows that Salt ran systemctl is-enabled consul-template.
I think I was overcomplicating things. Instead I'm doing this:
consul-template:
  service.running:
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - file: /etc/systemd/system/consul-template.service
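One more thing worth ruling out when a freshly copied unit file reports "is dead": systemd may not have re-read its unit files yet. A minimal sketch, assuming the minion uses the systemd service module (the state ID reload_systemd is made up here), that reloads the daemon only when the unit file actually changes:

reload_systemd:
  module.run:
    - name: service.systemctl_reload
    # only reload systemd when the managed unit file reports a change
    - onchanges:
      - file: copy_over_systemd_service_files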

SaltStack - how to watch a whole directory for changes?

nginx:
  pkg.installed:
    - name: nginx
  service:
    - name: nginx
    - running
    - enable: True
    - watch:
      - file: /etc/nginx/*

/etc/nginx:
  file.recurse:
    - source: salt://{{slspath}}/etc/nginx/
    - include_empty: True
How can I make the above work?
I want to make it so that nginx is reloaded every time a new config such as /etc/nginx/conf.d/newsite.conf is added.
Currently I can only achieve that if I manually add every conf to the sls in this manner:
/etc/nginx/conf.d/newsite.conf:
  file.managed:
    - source: salt://{{slspath}}/etc/nginx/conf.d/newsite.conf
Is there a way to automate it?
You can't watch a file change within a directory to trigger a state. But you can watch a state's result to do so. In your case, the following should restart nginx whenever the /etc/nginx file state reports a change:
nginx:
  pkg.installed:
    - name: nginx
  service.running:
    - enable: True
    - watch:
      - file: /etc/nginx

/etc/nginx:
  file.recurse:
    - source: salt://{{slspath}}/etc/nginx/
    - include_empty: True
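As a usage note: if stale configs under conf.d are a concern too, file.recurse supports a clean option that removes files not present in the source, so deleting a .conf from the Salt fileserver also triggers the same watch:

/etc/nginx:
  file.recurse:
    - source: salt://{{slspath}}/etc/nginx/
    - include_empty: True
    # remove files under /etc/nginx that are not in the source
    - clean: True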