How to run state.highstate for a particular environment? - salt-stack

My environment structure in /etc/salt/master looks like this:
file_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev
  stg:
    - /srv/salt/stg
  prod:
    - /srv/salt/prod
And my top.sls file in /srv/salt looks like this:
dev:
  'ip-10-1-1-28':
    - devtest
stg:
  'ip-10-1-1-252':
    - stgtest
prod:
  'ip-10-1-1-200':
    - prodtest
Now I want to run state.highstate for a particular environment, say 'stg'. I am looking for something like this:
salt '*' state.highstate env=stg
How do I achieve this? My requirement is that, every time I run the command, minions in the other environments shouldn't run their SLS files. Any solution?

You can do this, but the correct argument is saltenv, not env:
salt '*' state.highstate saltenv=stg
See the Salt state module documentation.
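If you also want only the minions in that environment to run, combine saltenv with targeting. A minimal sketch, using the minion IDs from the top file above:

# Run the stg top file only on the stg minion
salt 'ip-10-1-1-252' state.highstate saltenv=stg

# Or apply a single SLS from that environment
salt 'ip-10-1-1-252' state.apply stgtest saltenv=stg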

Related

Salt External Pillar Issues

I am trying to configure an external pillar in GitHub, but no matter what I do, I cannot get the minions to successfully read top.sls. Below is my ext_pillar and pillar_roots config:
pillar_roots:
  base:
    - /srv/pillar

fileserver_backend:
  - gitfs
  - roots

gitfs_update_interval: 60
gitfs_base: main
gitfs_remotes:
  - https://gituser:gittoken@github.com/gitaccount/saltstack.git:
    - mountpoint: salt://

ext_pillar:
  - git:
    - main https://gituser:gittoken@github.com/gitaccount/saltpillar.git
I have the following in the root of my saltpillar repo:
top.sls:
base:
  '*':
    - data
data.sls:
info: some test data from remote pillar
Repos are accessible with the URIs provided. When I run salt '*' saltutil.refresh_pillar and then salt '*' pillar.items I get no results. However, I can put top.sls and data.sls directly into /srv/pillar and it works. I put the master in debug mode and don't see any errors running the commands. Any help is appreciated.
Does the following ext_pillar configuration fix your issue? I'm assuming the top.sls you posted is still in the main branch of your git repo.
ext_pillar:
  - git:
    - main https://gituser:gittoken@github.com/gitaccount/saltpillar.git:
      - env: base
Your top.sls must reference your actual branch name, or you can add the env option to map the branch to a different environment name.
https://docs.saltproject.io/en/latest/ref/pillar/all/salt.pillar.git_pillar.html
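Alternatively, a sketch assuming a reasonably recent Salt release: instead of setting env per remote, you can map the main branch to the base environment for all git pillar remotes with the git_pillar_base master option:

git_pillar_base: main

ext_pillar:
  - git:
    - main https://gituser:gittoken@github.com/gitaccount/saltpillar.git

Either way, the branch that holds your top.sls has to end up exposed as an environment that the top file actually references.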

Minion cannot find file on master

On Minion:
          ID: run_snmpv3_config
    Function: file.managed
        Name: /tmp/run_snmpv3_config_cmd.sh
      Result: False
     Comment: Source file salt://files/run_snmpv3_config_cmd.sh not found in saltenv 'base'
     Started: 15:11:56.175325
    Duration: 27.084 ms
     Changes:
On master we confirm that the minion does in fact see the file:
master # salt minion cp.list_master | grep snmp
- files/run_snmpv3_config_cmd.sh
So why isn't it able to get it?
(In fact I wanted to use cmd.script but that errors out with Unable to cache script, so I tried to just copy the file, which doesn't work either as we see above.)
I called the state for debugging purposes on a client system using
salt-call --local state.apply teststate -l debug
Of course, with --local it will look for salt:// files under /srv/salt (or whatever the minion's own file_roots is set to) on the minion, not on the master...
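A minimal check based on the same state: drop --local so the minion pulls salt:// files from the master's fileserver, or trigger the state from the master itself:

# On the minion, without --local, files are fetched from the master
salt-call state.apply teststate -l debug

# Or run it from the master
salt minion state.apply teststate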

How to modify default options in Salt Minion config file from Master

I want to set the "grains_cache" variable to "True" from the Salt Master on all Minions. This variable is one of the default options in the minion config file and cannot be overridden by pillar data. So how can I set such variables (for example "grains_cache", "grains_cache_expiration" or "log_file") from the Master?
This should be an easy one: manage the minion configuration file with the file.managed state.
A simple sls should help here:
minion_configuration:
  file.managed:
    - name: /etc/salt/minion
    - contents: |
        grains_cache: true
        backup_mode: minion

salt-minion-restart:
  cmd.wait:
    - name: salt-call --local service.restart salt-minion
    - bg: True
    - order: last
    - watch:
      - file: minion_configuration
In this example, Salt makes sure the minion's configuration file contains exactly the two lines beneath - contents: | (note that file.managed with contents manages the entire file, so if you only want to add settings without replacing the stock config, point name at a drop-in file under /etc/salt/minion.d/ instead).
The second state, salt-minion-restart, restarts the salt-minion whenever the configuration file managed by the first state changes.
In short, this SLS writes your variables into the minion's configuration and then restarts the minion.
This formula is OS-independent.
The last thing left to do is to target all of your minions with it; see the example below.
If you want to know more about cmd.wait and the example shown, please refer to the documentation.
I hope this helps.
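For example, a sketch assuming the SLS above is saved as minion-config.sls in your file roots (the name is only an example):

salt '*' state.apply minion-config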

How can I "sprinkle" my minions with custom grains when deploying salt-minion using Saltify (salt-cloud)?

I've gotten saltify to work on a fresh minion. I am able to specify a profile for the minion as well. However, I don't know how to assign custom grains to my minion during this process.
Here's my set up.
In /etc/salt/cloud.profiles.d/saltify.conf I have:
salt-this-webserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: web-saltify-config

salt-this-fileserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: file-saltify-config
In /etc/salt/cloud/cloud.providers I have:
web-saltify-config:
  minion:
    master: 10.66.77.44
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - web-server

file-saltify-config:
  minion:
    master: 10.66.77.55
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - file-server
When I run my command from my Salt master:
salt-cloud -p salt-this-fileserver slave-salttesting-01.eng.example.com
My /etc/salt/minion file on my minion looks like this:
grains:
  salt-cloud:
    driver: saltify
    profile: salt-this-fileserver
    provider: file-saltify-config:saltify
hash_type: sha256
id: slave-salttesting-01.eng.example.com
log_level: info
master: 10.66.77.99
I would really like it to also have:
grains:
  layers:
    - infrastructure
  roles:
    - file-server
I'd like this to happen during the Saltify stage rather than in a subsequent step, because it fits nicely into what I'm trying to accomplish at this point.
Is there a way to sprinkle some grains on my minion during "saltification"?
EDIT: The sync_after_install configuration parameter may have something to do with it but I'm not sure where to put my custom modules, grains, states, etc.
I found the grains from my cloud.providers file in /etc/salt/grains. This appears to just work if you build your cloud.providers file in a similar fashion to the way I built mine (above).
I enabled debugging (in /etc/salt/cloud), and in the debug output I can see a snippet of code suggesting that at some point a file named "grains" in the conf directory of the git root may also be transferred over:
# Copy the grains file if found
if [ -f "$_TEMP_CONFIG_DIR/grains" ]; then
    echodebug "Moving provided grains file from $_TEMP_CONFIG_DIR/grains to $_SALT_ETC_DIR/grains"
But I'm not sure, because I didn't dig into it further; my grains are being sprinkled just as I had hoped.
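A quick way to verify that the grains actually landed, using the minion ID from above:

salt 'slave-salttesting-01.eng.example.com' grains.item layers roles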

set config on salt minion from salt master

I need to set the SaltStack configuration of a salt minion from the salt master. The salt.modules.config module only appears to support getting configuration from the minion.
https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.config.html
salt '*' config.get file_roots
returns the file_roots from each minion, but surprisingly you can't execute
salt '*' config.set file_roots <custom configuration>
The only solution I can think of is to edit the /etc/salt/minion file using the salt.states.file module (https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html) and restart the salt-minion service. However, I have a hunch there is a better solution.
Yes, Salt can Salt itself!
We use the salt-formula to salt minions. The master might also be salted using this formula.
You should manage the files in /etc/salt/minion.d/ using Salt states.
An example (there are other ways to manage the restart):
/etc/salt/minion.d/default_env.conf:
  file.serialize:
    - dataset:
        saltenv: base
        pillarenv_from_saltenv: true
    - formatter: yaml

/etc/salt/minion.d/logging.conf:
  file.serialize:
    - dataset:
        log_fmt_console: '[%(levelname)s] %(message)s %(jid)s'
        log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s %(jid)s'
        logstash_udp_handler:
          host: logstash
          port: 10514
          version: 1
          msg_type: saltstack
    - formatter: yaml

salt-minion:
  service.running:
    - enable: true
    - watch:
      - file: /etc/salt/minion.d/*

Stop state.apply to allow minion restart:
  test.fail_without_changes:
    - order: 1
    - failhard: true
    - onchanges:
      - service: salt-minion
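Once the minions have restarted, you can confirm from the master that the new settings are live, for example with config.get (the same module mentioned in the question):

salt '*' config.get saltenv
salt '*' config.get log_fmt_console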
