Must specify saltenv=base? - salt-stack

I'm trying to understand what's wrong with my config such that I must specify saltenv=base when running sudo salt '*' state.highstate saltenv=base. If I run the highstate without specifying the saltenv, I get the error message:
No Top file or master_tops data matches found.
Running salt-call cp.get_file_str salt://top.sls on the minion or master pulls back the right top.sls file. Here's a snippet of my top.sls:
base:
  # All computers including clients and servers
  '*':
    - states.schedule_highstate
  # any windows machine server or client
  'os:Windows':
    - match: grain
    - states.chocolatey
Also, I can run any state that's in the same directory or a subdirectory of the top.sls without specifying saltenv=, with sudo salt '*' state.apply states.(somestate).
I do have base specified in /etc/salt/master like this:
file_roots:
  base:
    - /srv/saltstack/salt/base
However, there is nothing at that path on the Salt master's filesystem; all of the salt and pillar files come from GitFS. Specifying saltenv= does pull from the correct corresponding git branch, with the master branch answering to saltenv=base (or no saltenv at all) when doing state.apply, and that works.
gitfs_remotes:
  - https://git.asminternational.org/SaltStack/salt.git:
    - user: someuser
    - password: somepassword
    - ssl_verify: False

...

ext_pillar:
  - git:
    - master https://git.asminternational.org/SaltStack/pillar.git:
      - name: base
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: base
    - dev https://git.asminternational.org/SaltStack/pillar.git:
      - name: dev
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: dev
    - test https://git.asminternational.org/SaltStack/pillar.git:
      - name: test
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: test
    - prod https://git.asminternational.org/SaltStack/pillar.git:
      - name: prod
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: prod
    - experimental https://git.asminternational.org/SaltStack/pillar.git:
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: experimental
The behavior seems inconsistent: the highstate can't find top.sls unless I specify the saltenv, yet running individual states works fine without saltenv=.
Any ideas?

After more debugging I found the answer. One of the other environments' top.sls files was malformed and causing an error. When specifying saltenv=base, none of the other top files are evaluated, which is why that worked. After I verified ALL of the top.sls files from all the environments, things behaved as expected.
Note to self, verify all the top files, not just the one you are working on.
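For anyone debugging the same error, it can help to evaluate every environment's top file at once and to pull each one explicitly. The commands below are just a sketch using the environments from this setup (base, dev, test, prod, experimental):

# Evaluates the top files from all environments, so a malformed one
# surfaces the same rendering error as the un-scoped highstate:
sudo salt '*' state.show_top

# Pull a specific environment's top.sls to inspect it directly:
sudo salt '*' cp.get_file_str salt://top.sls saltenv=dev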

Related

How to list files from multiple environments?

I have the following configuration in fileserver_backend.conf:
fileserver_backend:
  - gitfs
  - roots

gitfs_provider: pygit2

gitfs_remotes:
  - http://x.git:
    - name: x
    - root: /
    - user: x
    - password: x
    - insecure_auth: True
    - base: master
    - saltenv:
      - master:
        - ref: master
        - mountpoint: salt://gitfs
When listing files from the fileserver, by default I get only the files in the base environment:
salt-run fileserver.file_list
Version is: 3004.2
How can I make all the files from both environments (base and master) visible?
fileserver.file_list takes a saltenv argument to specify the environment:
salt-run fileserver.file_list saltenv=master
If you want files from both sources to be available in the same environment, you have to put them in the same environment. By default, the master branch will already be mapped to the base environment with no further configuration.
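For comparison, here is a minimal sketch of the same remote without the per-remote saltenv override (placeholders kept from the question); with nothing mapped to a separate environment, the master branch lands in base and its files appear in the default fileserver.file_list output:

gitfs_remotes:
  - http://x.git:
    - name: x
    - root: /
    - user: x
    - password: x
    - insecure_auth: True
    - mountpoint: salt://gitfs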
Documentation:
https://docs.saltproject.io/en/latest/ref/runners/all/salt.runners.fileserver.html#salt.runners.fileserver.file_list
https://docs.saltproject.io/en/latest/ref/configuration/master.html#std-conf_master-gitfs_remotes

How can I "sprinkle" my minions with custom grains when deploying salt-minion using Saltify (salt-cloud)?

I've gotten saltify to work on a fresh minion. I am able to specify a profile for the minion as well. However, I don't know how to assign custom grains to my minion during this process.
Here's my set up.
In /etc/salt/cloud.profiles.d/saltify.conf I have:
salt-this-webserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: web-saltify-config

salt-this-fileserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: file-saltify-config
In /etc/salt/cloud/cloud.providers I have:
web-saltify-config:
  minion:
    master: 10.66.77.44
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - web-server

file-saltify-config:
  minion:
    master: 10.66.77.55
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - file-server
When I run my command from my Salt master:
salt-cloud -p salt-this-fileserver slave-salttesting-01.eng.example.com
My /etc/salt/minion file on my minion looks like this:
grains:
  salt-cloud:
    driver: saltify
    profile: salt-this-fileserver
    provider: file-saltify-config:saltify
hash_type: sha256
id: slave-salttesting-01.eng.example.com
log_level: info
master: 10.66.77.99
I would really like it to also have:
grains:
  layers:
    - infrastructure
  roles:
    - file-server
I'd like for this to happen during the saltify stage rather than a subsequent step because it just fits really nicely into what I'm trying to accomplish at this step.
Is there a way to sprinkle some grains on my minion during "saltification"?
EDIT: The sync_after_install configuration parameter may have something to do with it, but I'm not sure where to put my custom modules, grains, states, etc.
I found the grains from my cloud.providers file in /etc/salt/grains. This appears to just work if you build your cloud.providers file in a similar fashion to the way I built mine (above).
I enabled debugging (in /etc/salt/cloud), and in the debugging output on the screen I can see a snippet of code suggesting that at some point a file named "grains" in the conf directory of the git root may also be transferred over:
# Copy the grains file if found
if [ -f "$_TEMP_CONFIG_DIR/grains" ]; then
    echodebug "Moving provided grains file from $_TEMP_CONFIG_DIR/grains to $_SALT_ETC_DIR/grains"
But I'm not sure, because I didn't dig into it further since my grains are being sprinkled as I had hoped.
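As a quick sanity check (not from the original debugging, just an example using the minion id above), the custom grains can be queried from the master once the minion is up:

salt 'slave-salttesting-01.eng.example.com' grains.item layers roles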

set config on salt minion from salt master

I need to set the SaltStack configuration of a salt minion from the salt master. The salt.modules.config module only appears to support getting configuration from the minion.
https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.config.html
salt '*' config.get file_roots
returns the file_roots from each minion, but surprisingly you can't execute
salt '*' config.set file_roots <custom configuration>
The only solution I can think of is to edit the /etc/salt/minion file using the salt.states.file module (https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html) and restart the salt-minion service. However, I have a hunch there is a better solution.
Yes, Salt can Salt itself!
We use the salt-formula to salt minions. The master might also be salted using this formula.
You should manage the files in /etc/salt/minion.d/ using Salt states.
An example (there are other ways to manage the restart):
/etc/salt/minion.d/default_env.conf:
  file.serialize:
    - dataset:
        saltenv: base
        pillarenv_from_saltenv: true
    - formatter: yaml

/etc/salt/minion.d/logging.conf:
  file.serialize:
    - dataset:
        log_fmt_console: '[%(levelname)s] %(message)s %(jid)s'
        log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s %(jid)s'
        logstash_udp_handler:
          host: logstash
          port: 10514
          version: 1
          msg_type: saltstack
    - formatter: yaml

salt-minion:
  service.running:
    - enable: true
    - watch:
      - file: /etc/salt/minion.d/*

Stop state.apply to allow minion restart:
  test.fail_without_changes:
    - order: 1
    - failhard: true
    - onchanges:
      - service: salt-minion
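Once the minion has picked up the new files and restarted, the values can be verified from the master with the same config module mentioned in the question (just an example check, not part of the formula):

salt '*' config.get saltenv
salt '*' config.get pillarenv_from_saltenv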

start a system service using cron - SaltStack

I would like to override the default tmp.conf at /usr/lib/tmpfiles.d/ with /etc/tmpfiles.d/tmp.conf and run a cron job at midnight every day. The job needs to run systemd-tmpfiles --clean. How can I run it at midnight? Can somebody help me, please?
Sample code:
tmp.conf:
  file.managed:
    - name: /etc/tmpfiles.d/tmp.conf
    - source: salt://tmp/files/tmp.conf
    - user: root
    - mode: 644
    - require:
      - user: root

run_systemd-tmpfiles:
  cron.present:
    - user: root
    - minute: 0
    - hour: 0
    - require:
      - file: tmp.conf

enable_tmp_service:
  service.running:
    - name: systemd-tmpfiles --clean
    - enable: True
    - require:
      - cron: run_systemd-tmpfiles
If you just want the command to run as part of a cron, you need to have that cron.present set up to run the command:
cron_systemd-tmpfiles:
  cron.present:
    - name: systemd-tmpfiles --clean
    - user: root
    - minute: 0
    - hour: 0
    - require:
      - file: tmp.conf
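A quick way to confirm the entry landed in root's crontab afterwards (just a check, not part of the state itself):

salt '*' cron.list_tab root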
If you then want to run it in this state, you can't use the tmpfiles service; you would just run the command through cmd.run, or, if you only want it to run when the file.managed changes, use cmd.wait:
run tmpfiles:
  cmd.wait:
    - name: systemd-tmpfiles --clean
    - listen:
      - file: tmp.conf
But systemd-tmpfiles is already run at boot if you are using systemd, so there is no reason to enable it again. When it runs early in the boot process, it does the same thing as running systemd-tmpfiles --clean.

Service is already enabled, and is dead

I have the following states:
copy_over_systemd_service_files:
  file.managed:
    - name: /etc/systemd/system/consul-template.service
    - source: salt://mesos/files/consul-template.service
    - owner: consul

start_up_consul-template_service:
  service.running:
    - name: consul-template
    - enable: True
    - restart: True
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - /etc/systemd/system/consul-template.service
When I run my state file, I get the following error:
          ID: start_up_consul-template_service
    Function: service.running
        Name: consul-template
      Result: False
     Comment: Service consul-template is already enabled, and is dead
     Started: 17:27:38.346659
    Duration: 2835.888 ms
     Changes:
I'm not sure what this means. All I want to do is restart the service once it's been copied over, and I've done this before without issue. Looking back through the stack trace just shows that Salt ran systemctl is-enabled consul-template.
I think I was overcomplicating things. Instead I'm doing this:
consul-template:
  service.running:
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - /etc/systemd/system/consul-template.service
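If the service still comes back as dead after this change, it may be worth checking what systemd itself reports. A couple of diagnostic calls (not part of the answer above, just a sketch):

salt '*' service.status consul-template
salt '*' cmd.run 'systemctl status consul-template.service'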
