I need to access custom grains in my config files using Jinja templating. Here are my files.
[root@localhost salt]# cat my_config.conf
{{ grains['ip'] }}
[root@localhost salt]# cat test_jinja.sls
/root/my_config.conf:
  file.managed:
    - source: salt://my_config.conf
    - user: root
    - group: root
    - mode: '0644'
    - makedirs: True
    - force: True
    - template: jinja
[root@localhost salt]# salt-ssh 'my-ip' state.sls test_jinja
10.225.253.134:
----------
          ID: /root/test
    Function: file.managed
      Result: False
     Comment: Unable to manage file: Jinja variable 'dict object' has no attribute 'ip'
     Started: 12:57:49.301697
    Duration: 33.039 ms
     Changes:
[root@localhost salt]# cat /etc/salt/roster
my-ip:                # The id to reference the target system with
  host: xx.xx.xx.133  # The IP address or DNS name of the remote host
  user: root          # The user to log in as
  passwd: teledna     # The password to log in with
  grains:
    ip: 'xx.xx.xx.133'
How can I access the grains in config files when using salt-ssh?
This looks like a bug in Salt, where the grains from the roster aren't shipped over to the minion. Can you try this PR?
https://github.com/saltstack/salt/pull/40775
The reason is that there is no 'ip' grain.
To list all grains, use:
salt '*' grains.items
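To check whether the roster grain actually reached the salt-ssh target, you can also query it directly (a quick diagnostic using the roster id from this question, not a fix):
salt-ssh 'my-ip' grains.item ip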
I want to set the "grains_cache" variable to "True" from the Salt master on all minions. This variable is one of the default options in the minion config file and cannot be overridden by pillar data. So how can I set such variables (for example "grains_cache", "grains_cache_expiration" or "log_file") from the master?
This should be an easy one: manage the minion configuration file using the file.managed state.
A simple SLS should help here:
minion_configuration:
  file.managed:
    - name: /etc/salt/minion
    - contents: |
        grains_cache: true
        backup_mode: minion

salt-minion-restart:
  cmd.wait:
    - name: salt-call --local service.restart salt-minion
    - bg: True
    - order: last
    - watch:
      - file: minion_configuration
In this example, Salt ensures that the two lines beneath - contents: | are present in the minion's configuration file.
The second state, salt-minion-restart, restarts the salt-minion whenever the minion configuration file is changed by the first state.
In short, this adds your variables to the minion's configuration and restarts the minion afterwards.
This approach is OS-independent.
The last thing left to do is to target all of your minions with this.
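Assuming the SLS above is saved as minion-config.sls (the file name is just an example), that could be done from the master with:
salt '*' state.apply minion-config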
If you want to know more about cmd.wait and the example shown, please refer to the cmd.wait documentation.
I hope I could help.
I am currently trying to deploy LogRhythm into our environment, which consists of 100+ servers, with the help of SaltStack.
While I am able to copy files over to a Windows minion with file.managed, I am having difficulty getting the IP address of the minion and inserting it into both the .ini files and the cmd.run command.
I would like to be able to do this for each minion that is connected to Salt.
When running salt -G 'roles:logging' state.apply, I get the following error:
Rendering SLS 'base:pacakage-logrhythm' failed: Jinja variable 'dict object' has no attribute 'fqdn_ip4':
UPDATE:
I was able to resolve the issue within the .ini files by placing the following:
ClientAddress={{ grains['fqdn_ip4'][0] }}
I am currently having issues with passing grains into the cmd.run section:
create_dir:
  file.directory:
    - name: C:\logrhythm

/srv/salt/logrhythm/proxy1.ini:
  file.managed:
    - source: salt://logrhythm/proxy1.ini
    - name: c:\logrhythm\proxy1.ini
    - template: jinja

/srv/salt/logrhythm/proxy2.ini:
  file.managed:
    - source: salt://logrhythm/proxy2.ini
    - name: c:\logrhythm\proxy2.ini
    - template: jinja

LRS_File:
  file.managed:
    - name: c:\logrhythm\LRSystemMonitor_64_7.4.2.8003.exe
    - source: salt://logrhythm/LRSystemMonitor_64_7.4.2.8003.exe

LRS_Install:
  cmd.run:
    - name: 'LRSystemMonitor_64_7.4.2.8003.exe /s /v" /qn ADDLOCAL=System_Monitor,RT_FIM_Driver HOST=<> SERVERPORT=443 CLIENTADDRESS={{ grains[''fqdn_ip4''][0] }} CLIENTPORT=0"'
    - cwd: C:\logrhythm
I think it should work. You may have a problem with the multiple levels of quoting you use: single, then double, then single. Try removing the single quotes wrapping the whole command and the doubled single quotes around the grains key:
    - name: LRSystemMonitor_64_7.4.2.8003.exe /s /v" /qn ADDLOCAL=System_Monitor,RT_FIM_Driver HOST=<> SERVERPORT=443 CLIENTADDRESS={{ grains['fqdn_ip4'][0] }} CLIENTPORT=0"
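Put together (keeping the HOST=<> placeholder from the question), the corrected state would then look like this:

LRS_Install:
  cmd.run:
    - name: LRSystemMonitor_64_7.4.2.8003.exe /s /v" /qn ADDLOCAL=System_Monitor,RT_FIM_Driver HOST=<> SERVERPORT=443 CLIENTADDRESS={{ grains['fqdn_ip4'][0] }} CLIENTPORT=0"
    - cwd: C:\logrhythm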
I've gotten saltify to work on a fresh minion. I am able to specify a profile for the minion as well. However, I don't know how to assign custom grains to my minion during this process.
Here's my setup.
In /etc/salt/cloud.profiles.d/saltify.conf I have:
salt-this-webserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: web-saltify-config

salt-this-fileserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: file-saltify-config
In /etc/salt/cloud/cloud.providers I have:
web-saltify-config:
  minion:
    master: 10.66.77.44
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - web-server

file-saltify-config:
  minion:
    master: 10.66.77.55
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - file-server
When I run my command from my Salt master:
salt-cloud -p salt-this-fileserver slave-salttesting-01.eng.example.com
My /etc/salt/minion file on my minion looks like this:
grains:
  salt-cloud:
    driver: saltify
    profile: salt-this-fileserver
    provider: file-saltify-config:saltify
hash_type: sha256
id: slave-salttesting-01.eng.example.com
log_level: info
master: 10.66.77.99
I would really like it to also have:
grains:
  layers:
    - infrastructure
  roles:
    - file-server
I'd like for this to happen during the saltify stage rather than a subsequent step because it just fits really nicely into what I'm trying to accomplish at this step.
Is there a way to sprinkle some grains on my minion during "saltification"?
EDIT: The sync_after_install configuration parameter may have something to do with it but I'm not sure where to put my custom modules, grains, states, etc.
I found the grains from my cloud.providers file in /etc/salt/grains. This appears to just work if you build your cloud.providers file in a similar fashion to the way I built mine (above).
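Based on the file-saltify-config provider block above, the file should contain something like this (my reconstruction for illustration, not verbatim output):

layers:
  - infrastructure
roles:
  - file-server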
I enabled debugging (in /etc/salt/cloud), and in the debug output on screen I can see a snippet of code suggesting that, at some point, a file named "grains" in the conf directory of the git root may also be transferred over:
# Copy the grains file if found
if [ -f "$_TEMP_CONFIG_DIR/grains" ]; then
    echodebug "Moving provided grains file from $_TEMP_CONFIG_DIR/grains to $_SALT_ETC_DIR/grains"
But I am not sure, because I didn't dig into it further; my grains are being sprinkled on as I had hoped they would be.
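To double-check from the master, something like this should show whether the custom grains arrived:
salt 'slave-salttesting-01*' grains.item layers roles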
In Salt (2018.3.0) I created the following state file, which I started writing to collect the existing SSH host key files from minions.
sshHostKeys:
  cp.push:
    - path: '/etc/ssh/ssh_host_dsa_key.pub'
    - upload_path: '/'
Calling
salt-call state.apply sshHostKeys
I get:
local:
----------
          ID: sshHostKeys
    Function: cp.push
      Result: False
     Comment: State 'cp.push' was not found in SLS 'sshHostKeys'
              Reason: 'cp.push' is not available.
Manually calling:
salt-call cp.push /etc/ssh/ssh_host_dsa_key.pub
works just fine; the file is copied to the Salt master.
Does anyone have an idea what I am doing wrong in the state file?
Thanks, Rainer
Had the same problem. cp.push is an execution module function, not a state, so it has to be called through module.run. This should work:
custom function name:
  module.run:
    - name: cp.push
    - path: <<your path>>
See this issue on GitHub for reference:
https://github.com/saltstack/salt/issues/42330
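Applied to the state from the question, that would be:

sshHostKeys:
  module.run:
    - name: cp.push
    - path: /etc/ssh/ssh_host_dsa_key.pub
    - upload_path: '/'

The extra upload_path argument is passed through module.run to the cp.push execution function.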
I need to set the SaltStack configuration of a Salt minion from the Salt master. The salt.modules.config module only appears to support getting configuration from the minion.
https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.config.html
salt '*' config.get file_roots
returns the file_roots from each minion, but surprisingly you can't execute
salt '*' config.set file_roots <custom configuration>
The only solution I can think of is to edit the /etc/salt/minion file using the salt.states.file module (https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html) and restart the salt-minion service. However, I have a hunch there is a better solution.
Yes, Salt can Salt itself!
We use the salt-formula to salt minions. The master might also be salted using this formula.
You should manage the files in /etc/salt/minion.d/ using Salt states.
An example (there are other ways to manage the restart):
/etc/salt/minion.d/default_env.conf:
  file.serialize:
    - dataset:
        saltenv: base
        pillarenv_from_saltenv: true
    - formatter: yaml

/etc/salt/minion.d/logging.conf:
  file.serialize:
    - dataset:
        log_fmt_console: '[%(levelname)s] %(message)s %(jid)s'
        log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s %(jid)s'
        logstash_udp_handler:
          host: logstash
          port: 10514
          version: 1
          msg_type: saltstack
    - formatter: yaml

salt-minion:
  service.running:
    - enable: true
    - watch:
      - file: /etc/salt/minion.d/*
Stop state.apply to allow minion restart:
  test.fail_without_changes:
    - order: 1
    - failhard: true
    - onchanges:
      - service: salt-minion
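file.serialize writes each dataset to the target file in the chosen format, so the first state above should render /etc/salt/minion.d/default_env.conf simply as:

saltenv: base
pillarenv_from_saltenv: true

With these states in place, the minion configuration can then be rolled out from the master, e.g. with salt '*' state.apply salt.minion when using the salt-formula mentioned above (the exact SLS name depends on your setup).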