Correct way to manage block devices in SaltStack

I would like to be able to provision block devices using SaltStack, in this specific case on Ubuntu 20.04. There doesn't seem to be much documentation on the subject.
The goal is to partition and format a new block device as GPT with a single EXT4 filesystem, then mount it. If the block device already has a filesystem, it will just be mounted. An entry should be added to /etc/fstab so that the device is automatically mounted on boot using its partition label.
I was able to pull together a state file that seems to have gotten the job done, volume.sls:
disk_label_mysql:
  module.run:
    - name: partition.mklabel
    - device: /dev/nvme2n1
    - label_type: gpt
    - unless: "parted /dev/nvme2n1 print | grep -i '^Partition Table: gpt'"

disk_partition_mysql:
  module.run:
    - name: parted_partition.mkpart
    - device: /dev/nvme2n1
    - fs_type: ext4
    - part_type: primary
    - start: 0%
    - end: 100%
    - unless: parted /dev/nvme2n1 print 1
    - require:
      - module: disk_label_mysql

disk_name_mysql:
  module.run:
    - name: partition.name
    - device: /dev/nvme2n1
    - partition: 1
    - m_name: mysql
    - unless: parted /dev/nvme2n1 print | grep mysql
    - require:
      - module: disk_partition_mysql

disk_format_mysql:
  module.run:
    - name: extfs.mkfs
    - device: /dev/nvme2n1p1
    - fs_type: ext4
    - label: mysql
    - unless: blkid --label mysql
    - require:
      - module: disk_name_mysql

disk_mount_mysql:
  file.directory:
    - name: /var/db/mysql
    - user: root
    - group: root
    - file_mode: 644
    - dir_mode: 777
    - makedirs: True

  mount.fstab_present:
    - name: /dev/nvme2n1p1
    - fs_file: /var/db/mysql
    - fs_vfstype: ext4
    - mount: True
    - fs_mntops:
      - defaults
    - mount_by: label
    - require:
      - module: disk_format_mysql
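For reference, a hedged usage example: assuming the file is served from the master's file roots as volume.sls, the state can be dry-run and then applied from the minion with:
# salt-call state.apply volume test=True
# salt-call state.apply volume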
After applying the state I do see the device gets partitioned and a filesystem formatted:
# parted /dev/nvme2n1 print
Model: Amazon Elastic Block Store (nvme)
Disk /dev/nvme2n1: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53.7GB 53.7GB ext4 mysql
It also gets added to /etc/fstab and mounted.
# cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 1
LABEL=mysql /var/db/mysql ext4 defaults 0 0
# df -h /var/db/mysql
Filesystem Size Used Avail Use% Mounted on
/dev/nvme2n1p1 49G 53M 47G 1% /var/db/mysql
I would like to know if there is a "more correct way" to do this. The current approach looks very complicated. In my experience that usually means I'm doing it wrong.

There is a Salt state for managing block devices, and it is called blockdev.
If there is no need to explicitly create and manage partitions on a disk, using this state could eliminate the need to label, partition, name, etc. the disk.
The example below will format the entire disk /dev/sdb with an ext4 filesystem and mount it. If you have a partition, you can specify it instead of the whole disk.
disk_create:
  blockdev.formatted:
    - name: /dev/sdb
    - fs_type: ext4

disk_mount:
  file.directory:
    - name: /mnt/newdisk

  mount.mounted:
    - name: /mnt/newdisk
    - device: /dev/sdb
    - fstype: ext4
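If the disk does need a partition and an fstab entry, as in the question, a hedged sketch combining blockdev.formatted with the mount.fstab_present state from the question could look like the following. The state IDs are invented, the partition /dev/nvme2n1p1 is assumed to already exist, and the mount is keyed by device rather than label because I am not certain blockdev.formatted can set a filesystem label:
mysql_disk_format:
  blockdev.formatted:
    # assumes the partition was already created (e.g. by the parted steps in the question)
    - name: /dev/nvme2n1p1
    - fs_type: ext4

mysql_disk_mount:
  file.directory:
    - name: /var/db/mysql
    - makedirs: True

  mount.fstab_present:
    - name: /dev/nvme2n1p1
    - fs_file: /var/db/mysql
    - fs_vfstype: ext4
    - fs_mntops:
      - defaults
    - mount: True
    - require:
      - blockdev: mysql_disk_format
As far as I know, blockdev.formatted skips devices that already carry the requested filesystem, which lines up with the requirement to only mount when a filesystem already exists.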

Related

SaltStack - Unable to update mine_function

I'm unable to change the mine_function on the minion hosts. How can I make changes to the function and push them to all minions?
cat /etc/salt/cloud:
minion:
  mine_functions:
    internal_ip:
      - mine_function: grains.get
      - ip_interfaces:eth0:0
    external_ip:
      - mine_function: grains.get
      - ip_interfaces:eth1:0
I want to change the external_ip function as shown below, but I'm not sure how to push these changes to all minions. mine_interval is set to 1 minute, but the changes aren't picked up by the minions.
external_ip:
  - mine_function: network.ip_addrs
  - cidr: 172.0.0.0/8
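The settings in /etc/salt/cloud are only written into a minion's config when salt-cloud first provisions it, so editing that file will not reach existing minions by itself. One hedged option, assuming you are willing to manage the mine from pillar instead, is to define mine_functions in a pillar file and then run salt '*' saltutil.refresh_pillar followed by salt '*' mine.update. A sketch of the pillar data (the file path and its entry in the pillar top file are assumptions):
# pillar/mine.sls (hypothetical path, referenced from the pillar top file)
mine_functions:
  internal_ip:
    - mine_function: grains.get
    - ip_interfaces:eth0:0
  external_ip:
    - mine_function: network.ip_addrs
    - cidr: 172.0.0.0/8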

Must specify saltenv=base?

I'm trying to understand what's wrong with my config such that I must specify saltenv=base, i.e. run sudo salt '*' state.highstate saltenv=base. If I run the highstate without specifying the saltenv, I get the error message:
No Top file or master_tops data matches found.
Running salt-call cp.get_file_str salt://top.sls on the minion or master pulls back the right top.sls file. Here's a snippet of my top.sls:
base:
  # All computers including clients and servers
  '*':
    - states.schedule_highstate
  # any windows machine server or client
  'os:Windows':
    - match: grain
    - states.chocolatey
Also, I can run any state that's in the same directory or subdirectory as the top.sls without specifying saltenv=, using sudo salt '*' state.apply states.(somestate).
I do have base specified in /etc/salt/master like this:
file_roots:
  base:
    - /srv/saltstack/salt/base
However, there is nothing in the filesystem on the Salt master; all of the salt and pillar files come from GitFS. Specifying saltenv= does pull from the correct corresponding git branch, with the master branch answering to saltenv=base, and state.apply works with or without a saltenv specified.
gitfs_remotes:
  - https://git.asminternational.org/SaltStack/salt.git:
    - user: someuser
    - password: somepassword
    - ssl_verify: False
.
.
.
ext_pillar:
  - git:
    - master https://git.asminternational.org/SaltStack/pillar.git:
      - name: base
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: base
    - dev https://git.asminternational.org/SaltStack/pillar.git:
      - name: dev
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: dev
    - test https://git.asminternational.org/SaltStack/pillar.git:
      - name: test
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: test
    - prod https://git.asminternational.org/SaltStack/pillar.git:
      - name: prod
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: prod
    - experimental https://git.asminternational.org/SaltStack/pillar.git:
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: experimental
The behavior is inconsistent: the highstate can't find top.sls unless the saltenv is specified, yet running individual states works fine without saltenv=.
Any ideas?
After more debugging I found the answer. One of the other environments' top.sls files was malformed and causing an error. When specifying saltenv=base, none of the other top files are evaluated, which is why that worked. After I verified ALL of the top.sls files from all the environments, things behaved as expected.
Note to self, verify all the top files, not just the one you are working on.
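For what it's worth, a hedged way to catch this kind of problem earlier (my suggestion, not part of the original post) is to render the top file data from a minion, since that evaluates the top.sls of every environment and should surface rendering errors:
# salt-call state.show_top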

Generating documentation for salt stack states

I have a repository with Salt states for provisioning my cluster of servers in the cloud. Over time I kept adding more states (the .sls files) into this repo, and now I'm starting to struggle to keep track of what is what and what is where.
I am wondering if there is some software utility/package that will generate documentation from my states repository, preferably as HTML pages, so that I can browse the states and see their interdependencies.
UPDATE:
The state sls files look like this:
include:
  - states.core.pip

virtualenv:
  pip.installed:
    - require:
      - sls: states.core.pip

virtualenvwrapper:
  pip.installed:
    - require:
      - sls: states.core.pip
And another sls example:
{% set user_home = '/home/username' %}

my_executable_virtualenv:
  virtualenv.managed:
    - name: {{ user_home }}/.virtualenvs/my_executable_virtualenv
    - user: username
    - system_site_packages: False
    - pip_pkgs:
      - requests
      - numpy
    - pip_upgrade: True
    - require:
      - sls: states.core

my_executable_supervisor_entry:
  file.managed:
    - name: /etc/supervisor/conf.d/my_executable.conf
    - source: salt://files/supervisor_config/my_executable.conf
    - user: username
    - group: username
    - mode: 644
    - makedirs: False
    - require:
      - sls: states.core
I did some research and found that SaltStack has created one, and it does produce HTML pages. According to the documentation, if you have Python installed, installing Sphinx is as easy as:
C:\> pip install sphinx
SaltStack's docs on this can be found here. According to the docs, building the HTML documentation is as easy as:
cd /path/to/salt/doc
make html
I hope this answer is what you were looking for!
This needs a custom plugin, which would have to be written.
There are no plugins directly available to render .sls files.
There are some plugins available for rendering YAML files; maybe you can modify one of those to suit your requirement.
You can use some of the functions in the state module to list everything in the highstate for a minion:
# salt-call state.show_states --out=yaml
local:
- ufw.package.install
- ufw.config.file
- ufw.service.enable
- ufw.service.reload
- ufw.config.services
- ufw.config.applications
- ufw.service.running
- apt.apt_conf
- apt.unattended
- cacerts
- kerberos
- network
- editor
- mounts
- openssh
- openssh.config_ini
- openssh.known_hosts
...
And then view the compiled data for each one (also works with states not in the highstate):
# salt-call state.show_sls editor --out=yaml
local:
  vim-tiny:
    pkg:
    - installed
    - order: 10000
    __sls__: csrf.editor
    __env__: base
  editor:
    alternatives:
    - path: /usr/bin/vim.tiny
    - set
    - order: 10001
    __sls__: csrf.editor
    __env__: base
Or to get the entire highstate at once with state.show_highstate.
I'm not aware of any tools to build HTML documentation from that. You'd have to do that yourself.
To access all states (not just a particular highstate), you can use salt-run fileserver.file_list | grep '\.sls$' to find every state, and salt-run state.orchestrate_show_sls to get the rendered data for each (though you may need to supply pillar data).

start a system service using cron - SaltStack

I would like to override the default tmp.conf at /usr/lib/tmpfiles.d/ with /etc/tmpfiles.d/tmp.conf and run the cron job at midnight every day. The service needs to run systemd-tmpfiles --clean. How can I run the service at midnight? Can somebody help me please?
Sample code:
tmp.conf:
  file.managed:
    - name: /etc/tmpfiles.d/tmp.conf
    - source: salt://tmp/files/tmp.conf
    - user: root
    - mode: 644
    - require:
      - user: root

run_systemd-tmpfiles:
  cron.present:
    - user: root
    - minute: 0
    - hour: 0
    - require:
      - file: tmp.conf

enable_tmp_service:
  service.running:
    - name: systemd-tmpfiles --clean
    - enable: True
    - require:
      - cron: run_systemd-tmpfiles
If you just want the command to run as part of a cron job, you need to have that cron.present set up to run the command.
cron_systemd-tmpfiles:
  cron.present:
    - name: systemd-tmpfiles --clean
    - user: root
    - minute: 0
    - hour: 0
    - require:
      - file: tmp.conf
If you then want to run it from this state, you can't use the tmpfiles service; you would just run the command through cmd.run, or, if you only want it run when the file.managed changes, use cmd.wait:
run tmpfiles:
  cmd.wait:
    - name: systemd-tmpfiles --clean
    - listen:
      - file: tmp.conf
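For completeness, a hedged alternative that is not in the original answer is cmd.run with an onchanges requisite, which runs the command in the same state run only when the managed file actually changes:
run tmpfiles:
  cmd.run:
    - name: systemd-tmpfiles --clean
    # only fires when the file.managed state for tmp.conf reports changes
    - onchanges:
      - file: tmp.conf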
But systemd-tmpfiles.service is already run on boot if you are using systemd, so there is no reason to enable it again. And when it runs at the beginning of the boot process, it will run the same way systemd-tmpfiles --clean runs.

Is it possible to regenerate the grains file on a host using salt-cloud?

Given a map file that defines some grains for certain hosts: if that map file changes and grains are added or removed, is it possible to have salt-cloud update the /etc/salt/grains file on the hosts with the new values?
EDIT
Example of the map file:
private-general-ec2:
- beta-web:
grains:
env_prefix: beta
roles:
- app-host
- ruby
- beta-worker:
grains:
env_prefix: beta
roles:
- app-host
- resque
- builder
queues:
-
name: '*'
count: 2
Ideally, I'd like to be able to update any of the grains.
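If salt-cloud itself cannot refresh these after the initial deployment, a hedged workaround (my assumption, not something described in the question) is to manage the same values on the already-deployed minions with the grains states; the example below copies the beta-web values from the map file:
env_prefix:
  grains.present:
    - value: beta

roles:
  grains.list_present:
    - value:
      - app-host
      - ruby
As far as I know, these states persist their changes to /etc/salt/grains on the minion, which is the file the question is concerned with.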
