I am trying to configure an external pillar in GitHub, but no matter what I do, I cannot get the minions to successfully read top.sls. Below are my ext_pillar and pillar_roots configs:
pillar_roots:
  base:
    - /srv/pillar

fileserver_backend:
  - gitfs
  - roots

gitfs_update_interval: 60
gitfs_base: main

gitfs_remotes:
  - https://gituser:gittoken@github.com/gitaccount/saltstack.git:
    - mountpoint: salt://

ext_pillar:
  - git:
    - main https://gituser:gittoken@github.com/gitaccount/saltpillar.git
I have the following in the root of my saltpillar repo:
top.sls:

base:
  '*':
    - data

data.sls:

info: some test data from remote pillar
Both repos are accessible with the URIs provided. When I run salt '*' saltutil.refresh_pillar and then salt '*' pillar.items, I get no results. However, if I put top.sls and data.sls directly into /srv/pillar, it works. I put the master in debug mode and don't see any errors when running the commands. Any help is appreciated.
Does the following ext_pillar configuration fix your issue? I'm assuming the top.sls you posted is still in the main branch of your git repo.
ext_pillar:
  - git:
    - main https://gituser:gittoken@github.com/gitaccount/saltpillar.git:
      - env: base
Your top.sls must reference your actual branch name, or you can add the env option to map the branch to a different environment name.
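For comparison, a minimal sketch of the other option: git_pillar maps each branch to a saltenv of the same name (master maps to base by default), so without the env mapping, the top.sls inside the main branch would have to target the main environment instead:

main:
  '*':
    - data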
https://docs.saltproject.io/en/latest/ref/pillar/all/salt.pillar.git_pillar.html
I have the following configuration in fileserver_backend.conf:
fileserver_backend:
  - gitfs
  - roots

gitfs_provider: pygit2

gitfs_remotes:
  - http://x.git:
    - name: x
    - root: /
    - user: x
    - password: x
    - insecure_auth: True
    - base: master
    - saltenv:
      - master:
        - ref: master
    - mountpoint: salt://gitfs
When I list the files from the fileserver, by default I only get the files in the base environment:
salt-run fileserver.file_list
The Salt version is 3004.2.
How can I make the files from both environments (base and master) visible?
fileserver.file_list takes a saltenv argument to specify the environment:
salt-run fileserver.file_list saltenv=master
If you want files from both sources to be available in the same environment, you have to put them in the same environment. By default, the master branch will already be mapped to the base environment with no further configuration.
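As a quick sanity check, the fileserver.envs runner lists the environments the fileserver currently serves:

salt-run fileserver.envs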
Documentation:
https://docs.saltproject.io/en/latest/ref/runners/all/salt.runners.fileserver.html#salt.runners.fileserver.file_list
https://docs.saltproject.io/en/latest/ref/configuration/master.html#std-conf_master-gitfs_remotes
I want to set the "grains_cache" variable to "True" from the Salt master on all minions. This variable is one of the default options that exist in the minion config file and cannot be overridden by pillar data. So how can I set variables (for example "grains_cache", "grains_cache_expiration" or "log_file") from the master?
This should be an easy one: manage the minion configuration file using the file.managed state.
A simple sls should help here:
minion_configuration:
  file.managed:
    - name: /etc/salt/minion
    - contents: |
        grains_cache: true
        backup_mode: minion

salt-minion-restart:
  cmd.wait:
    - name: salt-call --local service.restart salt-minion
    - bg: True
    - order: last
    - watch:
      - file: minion_configuration
In this example, Salt ensures that the minion configuration file consists of the two lines given beneath - contents: | (note that file.managed with contents manages the entire file, replacing whatever was there before).
The second state, salt-minion-restart, restarts the salt-minion whenever the minion configuration file is changed by the first state; its watch requisite references the first state's ID.
So in short terms, this adds your variables to the minion's configuration and restarts the minion afterwards.
This formula is OS-independent.
The last thing left to do is to target all of your minions with this, as shown below.
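For instance, assuming the sls above is saved as /srv/salt/minion-config.sls (the file name is just an assumption), you could apply it to every minion with:

salt '*' state.apply minion-config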
If you want to know more about cmd.wait and the example shown, please refer to this documentation.
I hope this helps.
I've gotten saltify to work on a fresh minion. I am able to specify a profile for the minion as well. However, I don't know how to assign custom grains to my minion during this process.
Here's my set up.
In /etc/salt/cloud.profiles.d/saltify.conf I have:
salt-this-webserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: web-saltify-config

salt-this-fileserver:
  ssh_host: 10.66.77.99
  ssh_username: opsuser
  password: **********
  provider: file-saltify-config
In /etc/salt/cloud/cloud.providers I have:
web-saltify-config:
  minion:
    master: 10.66.77.44
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - web-server

file-saltify-config:
  minion:
    master: 10.66.77.55
  driver: saltify
  grains:
    layers:
      - infrastructure
    roles:
      - file-server
When I run my command from my Salt master:
salt-cloud -p salt-this-fileserver slave-salttesting-01.eng.example.com
My /etc/salt/minion file on my minion looks like this:
grains:
  salt-cloud:
    driver: saltify
    profile: salt-this-fileserver
    provider: file-saltify-config:saltify
hash_type: sha256
id: slave-salttesting-01.eng.example.com
log_level: info
master: 10.66.77.99
I would really like it to also have:
grains:
  layers:
    - infrastructure
  roles:
    - file-server
I'd like for this to happen during the saltify stage rather than a subsequent step because it just fits really nicely into what I'm trying to accomplish at this step.
Is there a way to sprinkle some grains on my minion during "saltification"?
EDIT: The sync_after_install configuration parameter may have something to do with it but I'm not sure where to put my custom modules, grains, states, etc.
I found the grains from my cloud.providers file in /etc/salt/grains on the minion. This appears to just work if you build your cloud.providers file in a similar fashion to the way I built mine (above).
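As a quick check (using the minion ID from the question), you can query the new grains from the master once the minion is up:

salt 'slave-salttesting-01.eng.example.com' grains.item layers roles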
I enabled debugging (in /etc/salt/cloud), and in the debug output on the screen I can see a snippet of code suggesting that at some point a file named "grains" in the conf directory of the git root may also be transferred over:
# Copy the grains file if found
if [ -f "$_TEMP_CONFIG_DIR/grains" ]; then
    echodebug "Moving provided grains file from $_TEMP_CONFIG_DIR/grains to $_SALT_ETC_DIR/grains"
But I am not sure, because I didn't dig into it further since my grains are being sprinkled as I had hoped they would be.
I have a new Debian (9.3) install with new salt-master (2017.7.4) and salt-minion installed. In /etc/salt/minion.d I have a conf file containing:
master: 127.0.0.1

grains:
  roles:
    - 'introducer'
In /srv/salt/top.sls I have:
base:
  # https://docs.saltstack.com/en/latest/ref/states/top.html
  'G@roles:introducer':
    - 'introducer'
In /srv/pillar/data.sls I have:
introducer:
  location: 'tcp:x.x.x.x:y'
  port: 'tcp:y'
When I run salt '*' state.apply, I encounter this failure:
668629:
    Data failed to compile:
----------
    Rendering SLS 'base:introducer' failed: Jinja variable 'salt.pillar object' has no attribute 'introducer'
ERROR: Minions returned with non-zero exit code
Why isn't the pillar data available?
Pillar data requires a top definition as well. The configuration described in the question has no Pillar top.sls, so no Pillar data is selected for any of the minions.
To correct this, add a top.sls to the Pillar directory which selects the desired minions and makes the data available to them. For example, this /srv/pillar/top.sls:
base:
  '*':
    - 'data'
This makes the contents of /srv/pillar/data.sls available to all minions (selected by *) in the base environment.
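After adding the top file, you can refresh and spot-check the pillar from the master, for example:

salt '*' saltutil.refresh_pillar
salt '*' pillar.item introducer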
I have a repository with salt states for provisioning my cluster of servers in the cloud. Over time, I kept adding more states (the .sls files) to this repo, and now I'm starting to lose track of what is what and where.
I am wondering if there is some software utility/package that will generate documentation from my states repository, preferably as HTML pages, so that I can browse the states and see their interdependencies.
UPDATE:
The state sls files look like this:
include:
  - states.core.pip

virtualenv:
  pip.installed:
    - require:
      - sls: states.core.pip

virtualenvwrapper:
  pip.installed:
    - require:
      - sls: states.core.pip
And another sls example:
{% set user_home = '/home/username' %}

my_executable_virtualenv:
  virtualenv.managed:
    - name: {{ user_home }}/.virtualenvs/my_executable_virtualenv
    - user: username
    - system_site_packages: False
    - pip_pkgs:
      - requests
      - numpy
    - pip_upgrade: True
    - require:
      - sls: states.core

my_executable_supervisor_entry:
  file.managed:
    - name: /etc/supervisor/conf.d/my_executable.conf
    - source: salt://files/supervisor_config/my_executable.conf
    - user: username
    - group: username
    - mode: 644
    - makedirs: False
    - require:
      - sls: states.core
I did some research and found that SaltStack has created one, and it can produce HTML pages too. According to the documentation, if you have Python installed, installing Sphinx is as easy as:

C:\> pip install sphinx

SaltStack's docs on this can be found here. According to the docs, building the HTML documentation is as easy as:

cd /path/to/salt/doc
make html

I hope this answer is what you were looking for!
This would require a custom plugin to be written.
There are no plugins directly available to render sls files.
There are some plugins available for rendering YAML files; maybe you can modify one of them to suit your requirement.
You can use some of the functions in the state module to list everything in the highstate for a minion:
# salt-call state.show_states --out=yaml
local:
- ufw.package.install
- ufw.config.file
- ufw.service.enable
- ufw.service.reload
- ufw.config.services
- ufw.config.applications
- ufw.service.running
- apt.apt_conf
- apt.unattended
- cacerts
- kerberos
- network
- editor
- mounts
- openssh
- openssh.config_ini
- openssh.known_hosts
...
And then view the compiled data for each one (also works with states not in the highstate):
# salt-call state.show_sls editor --out=yaml
local:
  vim-tiny:
    pkg:
      - installed
      - order: 10000
    __sls__: csrf.editor
    __env__: base
  editor:
    alternatives:
      - path: /usr/bin/vim.tiny
      - set
      - order: 10001
    __sls__: csrf.editor
    __env__: base
Or, to get the entire highstate at once, use state.show_highstate.
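For example (same output format as above):

salt-call state.show_highstate --out=yaml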
I'm not aware of any tools to build HTML documentation from that. You'd have to do that yourself.
To access all states (not just a particular highstate), you can use salt-run fileserver.file_list | grep '\.sls$' to find every state, and salt-run state.orchestrate_show_sls to get the rendered data for each (though you may need to supply pillar data); a rough loop is sketched below.
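A shell sketch of that loop, assuming the states live in the base saltenv and that a doc/ output directory is wanted (both assumptions):

mkdir -p doc
# list every .sls file, convert path/to/state.sls to the dotted name path.to.state,
# and dump the rendered data for each one
for f in $(salt-run --out=newline_values_only fileserver.file_list saltenv=base | grep '\.sls$'); do
  s=$(echo "$f" | sed -e 's/\.sls$//' -e 's|/|.|g')
  salt-run state.orchestrate_show_sls "$s" > "doc/$s.yaml"
done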