I need to set the SaltStack configuration of a Salt minion from the Salt master. The salt.modules.config module only appears to support reading configuration from the minion.
https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.config.html
salt '*' config.get file_roots
returns the file_roots from each minion, but surprisingly you can't execute
salt '*' config.set file_roots <custom configuration>
The only solution I can think of is to edit the /etc/salt/minion file using the salt.states.file module (https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html) and restart the salt-minion service. However, I have a hunch there is a better solution.
Yes, Salt can Salt itself!
We use the salt-formula to salt minions. The master might also be salted using this formula.
You should manage the files in /etc/salt/minion.d/ using Salt states.
An example (there are other ways to manage the restart):
/etc/salt/minion.d/default_env.conf:
  file.serialize:
    - dataset:
        saltenv: base
        pillarenv_from_saltenv: true
    - formatter: yaml

/etc/salt/minion.d/logging.conf:
  file.serialize:
    - dataset:
        log_fmt_console: '[%(levelname)s] %(message)s %(jid)s'
        log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s %(jid)s'
        logstash_udp_handler:
          host: logstash
          port: 10514
          version: 1
          msg_type: saltstack
    - formatter: yaml
salt-minion:
  service.running:
    - enable: true
    - watch:
      - file: /etc/salt/minion.d/*

Stop state.apply to allow minion restart:
  test.fail_without_changes:
    - order: 1
    - failhard: true
    - onchanges:
      - service: salt-minion
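Note the last state: since a restarting minion cannot finish the state run, the failhard test deliberately aborts the run once the service reports changes; the next run completes normally. Assuming the states above are saved under salt/minion/ in your file roots (a salt-formula-style layout; the exact path is an assumption here), applying them to all minions is an ordinary state run:

salt '*' state.apply salt.minion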
I want to set the "grains_cache" variable to "True" from the Salt master on all minions. This variable is one of the default options in the minion config file and cannot be overridden by pillar data. So how can I set such variables (for example "grains_cache", "grains_cache_expiration", or "log_file") from the master?
This should be an easy one. Manage the minion configuration file using the file.managed function.
A simple SLS should help here:
minion_configuration:
  file.managed:
    - name: /etc/salt/minion
    - contents: |
        grains_cache: true
        backup_mode: minion

salt-minion-restart:
  cmd.wait:
    - name: salt-call --local service.restart salt-minion
    - bg: True
    - order: last
    - watch:
      - file: minion_configuration
In this example, Salt writes the two lines beneath - contents: | into the minion's configuration file (note that file.managed with contents manages the entire file content, not just those two lines).
The second state, salt-minion-restart, restarts the salt-minion whenever the configuration file managed by the first state changes.
So in short, this SLS adds your variables to the minion's configuration and restarts the minion afterwards.
This formula is OS-independent.
The last thing left to do is to target all of your minions with it, as shown below.
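For example, assuming the SLS above is saved as /srv/salt/minion-config.sls (the file name is only an illustration):

salt '*' state.apply minion-config

Afterwards you can verify the new value with the same config.get call from the original question:

salt '*' config.get grains_cache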
If you want to know more about cmd.wait and the example shown, please refer to the salt.states.cmd documentation.
I hope I could help.
I am currently trying to deploy LogRhythm to our environment of 100+ servers with the help of SaltStack.
While I am able to copy files to a Windows minion using file.managed, I am having difficulty getting the IP address of each minion server and adding it to both the .ini files and the cmd.run command.
I would like to be able to do this for each minion that is connected to Salt:
When running salt -G 'roles:logging' state.apply, I get the following error:
Rendering SLS 'base:pacakage-logrhythm' failed: Jinja variable 'dict object' has no attribute 'fqdn_ip4':
UPDATE:
I was able to resolve the issue within the .ini files by placing the following:
ClientAddress={{ grains['fqdn_ip4'][0] }}
I'm currently having issues with passing grains into the cmd.run section:
create_dir:
  file.directory:
    - name: C:\logrhythm

/srv/salt/logrhythm/proxy1.ini:
  file.managed:
    - source: salt://logrhythm/proxy1.ini
    - name: c:\logrhythm\proxy1.ini
    - template: jinja

/srv/salt/logrhythm/proxy2.ini:
  file.managed:
    - source: salt://logrhythm/proxy2.ini
    - name: c:\logrhythm\proxy2.ini
    - template: jinja

LRS_File:
  file.managed:
    - name: c:\logrhythm\LRSystemMonitor_64_7.4.2.8003.exe
    - source: salt://logrhythm/LRSystemMonitor_64_7.4.2.8003.exe

LRS_Install:
  cmd.run:
    - name: 'LRSystemMonitor_64_7.4.2.8003.exe /s /v" /qn ADDLOCAL=System_Monitor,RT_FIM_Driver HOST=<> SERVERPORT=443 CLIENTADDRESS={{ grains[''fqdn_ip4''][0] }} CLIENTPORT=0"'
    - cwd: C:\logrhythm
I think it should work. You may have a problem with the multiple quotes you use: single, then double, then single. Try removing the single quotes wrapping the whole command, and the doubled single quotes around the grains key:
- name: LRSystemMonitor_64_7.4.2.8003.exe /s /v" /qn ADDLOCAL=System_Monitor,RT_FIM_Driver HOST=<> SERVERPORT=443 CLIENTADDRESS={{ grains['fqdn_ip4'][0] }} CLIENTPORT=0"
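Alternatively, a YAML folded block scalar sidesteps the nested quoting altogether. A sketch of the same state (untested, values taken from the question):

LRS_Install:
  cmd.run:
    - name: >-
        LRSystemMonitor_64_7.4.2.8003.exe /s /v" /qn
        ADDLOCAL=System_Monitor,RT_FIM_Driver HOST=<> SERVERPORT=443
        CLIENTADDRESS={{ grains['fqdn_ip4'][0] }} CLIENTPORT=0"
    - cwd: C:\logrhythm

The folded scalar joins the lines with single spaces, so no outer quotes are needed and the embedded double quotes survive as-is.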
I've gotten saltify to work on a fresh minion. I am able to specify a profile for the minion as well. However, I don't know how to assign custom grains to my minion during this process.
Here's my setup.
In /etc/salt/cloud.profiles.d/saltify.conf I have:
salt-this-webserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: web-saltify-config

salt-this-fileserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: file-saltify-config
In /etc/salt/cloud/cloud.providers I have:
web-saltify-config:
  minion:
    master: 10.66.77.44
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - web-server

file-saltify-config:
  minion:
    master: 10.66.77.55
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - file-server
When I run my command from my Salt master:
salt-cloud -p salt-this-fileserver slave-salttesting-01.eng.example.com
My /etc/salt/minion file on my minion looks like this:
grains:
  salt-cloud:
    driver: saltify
    profile: salt-this-fileserver
    provider: file-saltify-config:saltify
hash_type: sha256
id: slave-salttesting-01.eng.example.com
log_level: info
master: 10.66.77.99
I would really like it to also have:
grains:
  layers:
    - infrastructure
  roles:
    - file-server
I'd like for this to happen during the saltify stage rather than a subsequent step because it just fits really nicely into what I'm trying to accomplish at this step.
Is there a way to sprinkle some grains on my minion during "saltification"?
EDIT: The sync_after_install configuration parameter may have something to do with it but I'm not sure where to put my custom modules, grains, states, etc.
I found the grains from my cloud.providers file in /etc/salt/grains. This appears to just work if you build your cloud.providers file in a similar fashion to the way I built mine (above).
I enabled debugging (in /etc/salt/cloud), and in the debugging output on the screen I can see a snippet of code suggesting that, at some point, a file named "grains" in the conf directory of the git root may also be transferred over:
# Copy the grains file if found
if [ -f "$_TEMP_CONFIG_DIR/grains" ]; then
    echodebug "Moving provided grains file from $_TEMP_CONFIG_DIR/grains to $_SALT_ETC_DIR/grains"
But I am not sure, because I didn't dig into it further since my grains are being sprinkled as I had hoped.
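To double-check after deployment that the grains actually landed, you can query them from the master (using the minion ID from the question):

salt 'slave-salttesting-01.eng.example.com' grains.item layers roles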
I'm trying to understand what's wrong with my config that I must specify saltenv=base when running sudo salt '*' state.highstate saltenv=base. If I run the high state without specifying the saltenv, I get the error message:
No Top file or master_tops data matches found.
Running salt-call cp.get_file_str salt://top.sls on the minion or master pulls back the right top.sls file. Here's a snippet of my top.sls:
base:
  # All computers including clients and servers
  '*':
    - states.schedule_highstate
  # any windows machine, server or client
  'os:Windows':
    - match: grain
    - states.chocolatey
Also, I can run any state that's in the same directory or a subdirectory as the top.sls without specifying saltenv=, using sudo salt '*' state.apply states.(somestate).
While I do have base specified in /etc/salt/master like this:
file_roots:
  base:
    - /srv/saltstack/salt/base
there is nothing in the filesystem on the Salt master; all of the salt and pillar files come from GitFS. Specifying saltenv= does pull from the correct corresponding git branch, and for state.apply the master branch responds whether saltenv=base is given or omitted (that works).
gitfs_remotes:
  - https://git.asminternational.org/SaltStack/salt.git:
    - user: someuser
    - password: somepassword
    - ssl_verify: False

...

ext_pillar:
  - git:
    - master https://git.asminternational.org/SaltStack/pillar.git:
      - name: base
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: base
    - dev https://git.asminternational.org/SaltStack/pillar.git:
      - name: dev
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: dev
    - test https://git.asminternational.org/SaltStack/pillar.git:
      - name: test
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: test
    - prod https://git.asminternational.org/SaltStack/pillar.git:
      - name: prod
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: prod
    - experimental https://git.asminternational.org/SaltStack/pillar.git:
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: experimental
The behavior is inconsistent: highstate can't find the top file unless saltenv is specified, yet running individual states works fine without saltenv=.
Any ideas?
After more debugging I found the answer: one of the other environments' top.sls files was malformed and causing an error. When specifying saltenv=base, none of the other top files are evaluated, which is why that worked. After I fixed and verified ALL of the top.sls files from all the environments, things behaved as expected.
Note to self: verify all the top files, not just the one you are working on.
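One way to surface this class of error early (my suggestion, beyond the original troubleshooting): ask the minions to render their top file data, which reports rendering errors from every evaluated environment:

salt '*' state.show_top

A malformed top.sls should show up here as a rendering error rather than the generic "No Top file or master_tops data matches found" message.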
I need to access custom grains in my config files using Jinja templating. Here are my files.
[root@localhost salt]# cat my_config.conf
{{ grains['ip'] }}
[root@localhost salt]# cat test_jinja.sls
/root/my_config.conf:
  file.managed:
    - source: salt://my_config.conf
    - user: root
    - group: root
    - mode: '0644'
    - makedirs: True
    - force: True
    - template: jinja
[root@localhost salt]# salt-ssh 'my-ip' state.sls test_jinja
10.225.253.134:
----------
          ID: /root/test
    Function: file.managed
      Result: False
     Comment: Unable to manage file: Jinja variable 'dict object' has no attribute 'ip'
     Started: 12:57:49.301697
    Duration: 33.039 ms
     Changes:
[root@localhost salt]# cat /etc/salt/roster
my-ip:                  # The id to reference the target system with
  host: xx.xx.xx.133    # The IP address or DNS name of the remote host
  user: root            # The user to log in as
  passwd: teledna       # The password to log in with
  grains:
    ip: 'xx.xx.xx.133'
How can I access the grains in config files using salt-ssh?
This looks like a bug in Salt, where the grains from the roster aren't shipped over to the minion. Can you try this PR?
https://github.com/saltstack/salt/pull/40775
The reason is that there is no 'ip' grain.
To list all grains, use salt '*' grains.items.
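Since the question uses salt-ssh, the equivalent check against the roster target is:

salt-ssh 'my-ip' grains.items

If 'ip' does not appear in that output, the roster grains are not reaching the target, which matches the bug referenced in the other answer.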