I'm unable to change the mine_function on the minion hosts. How can I make changes to the function and push them to all minions?
cat /etc/salt/cloud:
minion:
  mine_functions:
    internal_ip:
      - mine_function: grains.get
      - ip_interfaces:eth0:0
    external_ip:
      - mine_function: grains.get
      - ip_interfaces:eth1:0
I want to change the external_ip function as below, but I'm not sure how to push these changes to all minions. mine_interval is set to 1 minute, but the changes aren't picked up by the minions.
external_ip:
  - mine_function: network.ip_addrs
  - cidr: 172.0.0.0/8
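This is not covered in the post itself, but as a hedged sketch (the drop-in path, source file, and state IDs are illustrative, not from the question): /etc/salt/cloud only seeds the minion configuration at provisioning time, so editing it does nothing for minions that already exist. One option is to manage a minion config drop-in from the master and restart the minion when it changes, roughly like this:
# Sketch only: paths and the source file are assumptions.
mine_functions_config:
  file.managed:
    - name: /etc/salt/minion.d/mine_functions.conf
    - source: salt://config/mine_functions.conf   # contains the new mine_functions block

salt-minion:
  service.running:
    - enable: True
    - watch:
      - file: mine_functions_config   # restart the minion when the drop-in changes
After the minions pick up the new config, running salt '*' mine.update refreshes the mine data immediately instead of waiting for the next mine_interval. The pillar-based approach described in a later answer below works as well.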
Assumptions:
- vRA to SaltStack Config integration is working fine
- SaltStack Config is accepting the keys from the minion
I am triggering an event from vRA when I create a new VM. I would like to know how the user will find out whether the states triggered by the event have completed or not.
For instance:
reactor:
  - 'my/custom/event':
    - salt://reactor/custom.sls
/srv/salt/reactor/custom.sls
test_df:
  local.cmd.run:
    - tgt: "role:MyServer"
    - tgt_type: grain
    - arg:
      - df -h > /tmp/test_df.txt
cloud-init on the new VM runs the following:
salt-call event.send 'my/custom/event'
How will the user find out whether the states triggered by the event completed successfully, with or without errors?
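This was not answered in the thread, but as a sketch: a reactor-triggered run shows up on the master event bus as salt/job/<jid>/ret/<minion_id> return events, so it can be watched with salt-run state.event pretty=True or inspected afterwards with salt-run jobs.lookup_jid <jid>. Another option is to have the minion fire its own completion event, for example by having the reactor apply a small state (via local.state.apply) instead of a bare cmd.run; the tag and message below are illustrative:
# Hypothetical state applied by the reactor instead of local.cmd.run;
# the completion tag and message are placeholders.
run_df:
  cmd.run:
    - name: df -h > /tmp/test_df.txt

notify_master:
  event.send:
    - name: my/custom/event/done
    - data:
        message: 'df output written to /tmp/test_df.txt'
    - require:
      - cmd: run_df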
I would like to be able to provision block devices using SaltStack, in this specific case on Ubuntu 20.04. There doesn't seem to be much documentation on the subject.
The goal is to partition and format a new block device as GPT with a single EXT4 filesystem, then mount it. If the block device already has a filesystem, it will just be mounted. An entry should be added to /etc/fstab so that the device is automatically mounted on boot using its partition label.
I was able to pull together a state file that seems to have gotten the job done, volume.sls:
disk_label_mysql:
  module.run:
    - name: partition.mklabel
    - device: /dev/nvme2n1
    - label_type: gpt
    - unless: "parted /dev/nvme2n1 print | grep -i '^Partition Table: gpt'"

disk_partition_mysql:
  module.run:
    - name: parted_partition.mkpart
    - device: /dev/nvme2n1
    - fs_type: ext4
    - part_type: primary
    - start: 0%
    - end: 100%
    - unless: parted /dev/nvme2n1 print 1
    - require:
      - module: disk_label_mysql

disk_name_mysql:
  module.run:
    - name: partition.name
    - device: /dev/nvme2n1
    - partition: 1
    - m_name: mysql
    - unless: parted /dev/nvme2n1 print | grep mysql
    - require:
      - module: disk_partition_mysql

disk_format_mysql:
  module.run:
    - name: extfs.mkfs
    - device: /dev/nvme2n1p1
    - fs_type: ext4
    - label: mysql
    - unless: blkid --label mysql
    - require:
      - module: disk_name_mysql

disk_mount_mysql:
  file.directory:
    - name: /var/db/mysql
    - user: root
    - group: root
    - file_mode: 644
    - dir_mode: 777
    - makedirs: True
  mount.fstab_present:
    - name: /dev/nvme2n1p1
    - fs_file: /var/db/mysql
    - fs_vfstype: ext4
    - mount: True
    - fs_mntops:
      - defaults
    - mount_by: label
    - require:
      - module: disk_format_mysql
After applying the state I do see that the device gets partitioned and a filesystem is formatted.
parted /dev/nvme2n1 print
Model: Amazon Elastic Block Store (nvme)
Disk /dev/nvme2n1: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53.7GB 53.7GB ext4 mysql
It also gets added to /etc/fstab and mounted.
# cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 1
LABEL=mysql /var/db/mysql ext4 defaults 0 0
# df -h /var/db/mysql
Filesystem Size Used Avail Use% Mounted on
/dev/nvme2n1p1 49G 53M 47G 1% /var/db/mysql
I would like to know if there is a "more correct way" to do this. The current approach looks very complicated. In my experience that usually means I'm doing it wrong.
There is a Salt state to manage block devices, and it is called blockdev.
If there is no need to explicitly create and manage partitions on the disk, using this state eliminates the need to label, partition, and name the disk.
The example below formats the entire disk /dev/sdb with an ext4 filesystem and mounts it. If you have a partition, you can specify it instead of the whole disk.
disk_create:
  blockdev.formatted:
    - name: /dev/sdb
    - fs_type: ext4

disk_mount:
  file.directory:
    - name: /mnt/newdisk
  mount.mounted:
    - name: /mnt/newdisk
    - device: /dev/sdb
    - fstype: ext4
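Not part of the original answer, but since the question also wanted an /etc/fstab entry: mount.mounted can handle that itself through its persist option (which defaults to True), and mkmnt can create the mount point, so a slightly more self-contained sketch of the mount piece would be:
# Sketch: the same mount as above, with the fstab handling spelled out.
disk_mount:
  mount.mounted:
    - name: /mnt/newdisk
    - device: /dev/sdb
    - fstype: ext4
    - mkmnt: True     # create /mnt/newdisk if it does not exist
    - persist: True   # keep an /etc/fstab entry (this is the default)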
I have a SaltStack state file (sls) that has a pretty simple state defined.
MyStateRule:
  file.managed:
    - source: salt://scripts/rule.ps1
    - name: 'c:\scripts\rule.ps1'
  cmd.run:
    - name: powershell c:\scripts\rule.ps1
    - require:
      - file: MyStateRule
When I run a state.apply command, the cmd.run appears to execute every time, which I can see makes sense. What I want is for it to run only when the managed file needs to be copied over to the minion. Can I use file.managed in that case? What do I need to change so that the script only runs when the file is copied over?
Got it -- rather than using "require," use onchanges:
cmd.run:
  - name: powershell c:\scripts\rule.ps1
  - onchanges:
    - file: MyStateRule
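Putting the two together, the full state from the question with the onchanges fix applied would look like this (paths unchanged from the question):
MyStateRule:
  file.managed:
    - source: salt://scripts/rule.ps1
    - name: 'c:\scripts\rule.ps1'
  cmd.run:
    - name: powershell c:\scripts\rule.ps1
    - onchanges:
      - file: MyStateRule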
I am deploying a cluster via SaltStack (on Azure). I've installed the client, which initiates a reactor that runs an orchestration to push a Mine config, do an update, and restart salt-minion (I upgraded that to restarting the box).
After all of that, I can't access the mine data until I restart the minion.
/srv/reactor/startup_orchestration.sls
startup_orchestrate:
  runner.state.orchestrate:
    - mods: orchestration.startup
orchestration.startup
orchestration.mine:
  salt.state:
    - tgt: '*'
    - sls:
      - orchestration.mine

saltutil.sync_all:
  salt.function:
    - tgt: '*'
    - reload_modules: True

mine.update:
  salt.function:
    - tgt: '*'

highstate_run:
  salt.state:
    - tgt: '*'
    - highstate: True
orchestration.mine
{% if salt['grains.get']('MineDeploy') != 'complete' %}
/etc/salt/minion.d/globalmine.conf:
  file.managed:
    - source: salt:///orchestration/files/globalmine.conf

MineDeploy:
  grains.present:
    - value: complete
    - require:
      - service: rabbit_running

sleep 5 && /sbin/reboot:
  cmd.run
{%- endif %}
How can I push a mine update via a reactor and then get the data shortly afterwards?
I deploy my mine_functions from pillar so that I can update the functions on the fly.
Then you just have to do salt <target> saltutil.refresh_pillar and salt <target> mine.update to get your mine info on a new host.
Example:
/srv/pillar/my_mines.sls
mine_functions:
  aws_cidr:
    mine_function: grains.get
    delimiter: '|'
    key: ec2|network|interfaces|macs|{{ mac_addr }}|subnet_ipv4_cidr_block
  zk_pub_ips:
    - mine_function: grains.get
    - ec2:public_ip
You would then make sure your pillar's top.sls targets the appropriate minions, then do the saltutil.refresh_pillar/mine.update to get your mine functions updated and the mines supplied with data. After taking in the above pillar, I now have mine functions called aws_cidr and zk_pub_ips that I can pull data from.
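As a sketch of that top file targeting (the match pattern here is a placeholder, not from the original answer):
# /srv/pillar/top.sls
base:
  '*':              # or a narrower target for the minions that should expose these mines
    - my_mines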
One caveat to this method is that mine_interval has to be defined in the minion config, so that parameter wouldn't be doable via pillar. Though if you're ok with the default 60-minute interval, this is a non-issue.
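If a different interval is wanted, a minimal sketch of the minion-side setting (the drop-in file name is illustrative):
# /etc/salt/minion.d/mine.conf   (restart the minion for this to take effect)
mine_interval: 5    # minutes between automatic mine updates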
I have multiple salt deployment environments.
I have a requirement in which I raise an event from the minions; the master, upon receiving the event, generates a few files which I then want to copy to the minions.
How do I do this?
I was trying to get it to work using orchestrate. This is what I have right now:
reactor sls->
copy_cert:
  runner.state.orchestrate:
    - mods: _orch.copy_certs
    - saltenv: 'central'
copy_certs sls->
copy_kube_certs:
  salt.state:
    - tgt: 'kubeminion'
    - tgt_type: nodegroup
    - sls:
      - kubemaster.copy_certs
The problem is that I want this to happen for all the environments and not just one. How do I do that?
Or is there a way to loop over the environments using Jinja in some way?
Also, is it possible to do this using anything other than orchestrate?
You don't need to use orchestrate for this; all you need is the salt reactor.
Let's say you fire an event from the minion with salt-call event.send tag='event/test' (you can watch the salt event bus using salt-run state.event pretty=True):
event/test {
    "_stamp": "2017-05-24T10:36:05.907438",
    "cmd": "_minion_event",
    "data": {
        "__pub_fun": "event.send",
        "__pub_jid": "20170524133601757005",
        "__pub_pid": 4590,
        "__pub_tgt": "salt-call"
    },
    "id": "minion_A",
    "tag": "event/test"
}
Now you need to decide what happens when Salt receives the event. Edit or create /etc/salt/master.d/reactor.conf (remember to restart the salt-master after editing this file):
reactor:
  - event/test:                     # event tag to match
    - /srv/reactor/some_state.sls   # sls file to run
some_state.sls:
some_state:
  local.state.apply:
    - tgt: kubeminion
    - tgt_type: nodegroup
    - arg:
      - kubemaster.copy_certs
    - kwarg:
        saltenv: central
This will in turn apply the state kubemaster.copy_certs to all minions in the "kubeminion" nodegroup.
Hope this helps.
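The question also asked about covering several environments; a hedged sketch of doing that with a Jinja loop in the reactor sls (the environment names are placeholders, not from the thread):
{# Loop form of /srv/reactor/some_state.sls; environment names are placeholders #}
{% for env in ['central', 'dev', 'prod'] %}
copy_certs_{{ env }}:
  local.state.apply:
    - tgt: kubeminion
    - tgt_type: nodegroup
    - arg:
      - kubemaster.copy_certs
    - kwarg:
        saltenv: {{ env }}
{% endfor %}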