I am relatively new to Salt, YAML, Jinja etc., and learning as I go. I have a YAML file being used with SaltStack to distribute several files to servers. A few of the files have a version indicator in the name. I would like to enter this once in the YAML file and then use it in the name variables throughout the file. I'm not clear from searching whether this is possible.
Example. I want to distribute a set of files to install DB2 version 11 fixpack 2. I have an install script, a configure script, a response file and a zip file with “fp2” in the name. I’d like to define a “ver” variable at the top of the yaml and then use it when defining the other filenames. Is this possible?
You can create a variable at the top of your state file using Jinja syntax:
{% set variable_name = 'variable_value' %}
and then use {{ }} to render it, like {{ variable_name }}.
Or, if you want to use the variable in logic or within a loop:
{% if variable_name %}
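For example, applied to the DB2 case above, the top of the state file might look something like this (the file names and state IDs here are made up for illustration):

{% set ver = 'fp2' %}

db2_fixpack_archive:
  file.managed:
    - name: /tmp/db2_v11_{{ ver }}.zip
    - source: salt://db2/db2_v11_{{ ver }}.zip

db2_install_script:
  file.managed:
    - name: /tmp/install_db2_{{ ver }}.sh
    - source: salt://db2/install_db2_{{ ver }}.sh
    - mode: '0755'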
I would like to use include: twice in an sls file:
include:
- foo.bar
{% for system in salt['foo.get_systems'](pillar) %}
...
{% endfor %}
include:
- this.is.the.end
But this fails with this message:
- Rendering SLS 'base:example.test' failed: while constructing a mapping
in "<unicode string>", line 4, column 1
found conflicting ID 'include'
in "<unicode string>", line 106, column 1
I guess conflicting ID 'include' means that I can't use include: twice.
What can I do to execute something after the for-loop?
State files are not guaranteed to be executed in a script-like order. They describe a data structure that dictates which state functions need to be run, but you should not think about them as though they are scripts, because they aren't.
You need to "include" all the SLS files that you will be including in a single include statement, and then, if you need to make sure that certain states run before or after other states, you should use the state requisite parameters like require.
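For example, a single include combined with a require requisite might look like this (the SLS and state names below are only placeholders):

include:
  - foo.bar
  - this.is.the.end

run_after_foo_bar:
  cmd.run:
    - name: echo "runs only after everything in foo.bar"
    - require:
      - sls: foo.bar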
You can work around this by using two sls files.
Move your current sls file ("example/test.sls" which contains the for-loop) to "example/step_one.sls". Then create a new file with this content:
include:
- example.step_one
- this.is.the.end
I was wondering if it is possible with Symfony 3.5 to use multiple translation files for a single language when using yml files.
Currently I have something like this:
AppBundle/Resources/translations/messages.en.yml
AppBundle/Resources/translations/messages.de.yml
which contain all my translations in the respective language. However, I was wondering if it was possible to change this to the following structure:
AppBundle/Resources/translations/en/products.yml
AppBundle/Resources/translations/en/invoices.yml
AppBundle/Resources/translations/de/products.yml
AppBundle/Resources/translations/de/invoices.yml
I have been looking, but I have been unable to find a solution for this. I did get it working for splitting up my routes:
AppBundle/Resources/config/routing.yml
appbundle_routes:
    resource: '@AppBundle/Resources/config/routing'
    type: directory
Inside that folder I got all my routes split like:
AppBundle/Resources/config/routing/products.yml
AppBundle/Resources/config/routing/users.yml
AppBundle/Resources/config/routing/invoices.yml
I was wondering if it was possible to achieve the same thing with translations?
Symfony's Translator requires files to be named in the format domain.locale.loader. In the case of messages.en.yml:
messages is the default domain name; you can also specify e.g. invoices
en is the locale
yml specifies that the YAML loader will be used
So your proposed structure is not possible to achieve with the standard set of configuration and functionality. However, you can split your translations into different domain files, so the paths would be:
AppBundle/Resources/translations/products.en.yml
AppBundle/Resources/translations/invoices.en.yml
And when you are using translator you specify the domain in which the translation should be looked for:
$translator->trans('translated.key', [], 'invoices');
Or in Twig:
{{ 'translated.key'|trans({},'invoices') }}
I created a complex state for an API service; it involves git checkouts, a Python venv, uwsgi, nginx, etc. It works fine.
Now I would like to turn it into a template and execute it several times per minion, with variables supplied from pillar, i.e. something like:
{% for apiserver in pillar.apiservers %}
include apiserver_template.sls, locals: apiserver.config
{% endfor %}
where apiserver_template will work with the context supplied to it, with apiserver.config holding all the config data for each API instance. I know the syntax is wrong, but hopefully I am communicating the idea: ideally, something like rendering Ruby partials while supplying local variables.
How is it done properly in saltland?
It sounds to me like a Jinja macro is what you want to use for this. You can find more information about usage here: https://docs.saltstack.com/en/2015.8/topics/development/conventions/formulas.html#jinja-macros
In short, what you will have in your case may look like this:
{% macro api_server(git_repo, python_venv_path, python_venv_requirements) %}
{{ python_venv_path }}:
  virtualenv.managed:
    - system_site_packages: False
    - requirements: salt://{{ python_venv_requirements }}

{{ git_repo }}:
  git.latest:
    - name: {{ git_repo }}
{% endmacro %}
Assuming you have a pillar apiservers where each api server has git_repo, python_venv_path and python_venv_requirements values, you can use the macro like this:
{% for server in salt.pillar.get('apiservers', []) %}
{{ api_server(server['git_repo'], server['python_venv_path'], server['python_venv_requirements']) }}
{% endfor %}
If you want, you can also put the macro in a separate state file and then import it as a regular Salt resource.
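If you go that route, the import at the top of the consuming state file might look something like this (the macros.sls path is just an example):

{% from 'apiservers/macros.sls' import api_server with context %}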
Please also note that instead of pillar.apiservers I used salt.pillar.get('apiservers', []). This is a safer way to get data from pillar: if for some reason the pillar is unavailable, the latter will fall back to an empty list instead of failing as the former would.
I'm having trouble using the salt mine in a pillar to dynamically create a list of hosts based on a grain value match. I don't get any error, I get no output for all hosts. Actually, I can't get any output for mine from the pillar at all even when using the example from the salt docs. I know it isn't an issue with my top file, because I can access all of the other pillar values. My test minion's mine.interval is set to 5. I've refreshed pillar data, and ran mine.update.
Here's an example of my pillar:
mine_functions:
  network.ip_addrs: []
  grains.item:
    - host
    - role
My template file that accesses the mine functions:
#I know this is writing the same list for each match, I'm just doing this for testing and I'll concat the results into a string when I know it works:
{% for host in salt['mine.get']('roles:web', 'grains.items:host', expr_form='grain') | dictsort() %}
serverList= {{ host }}
{% endfor %}
Output from CLI:
salt "server.domain.com" mine.get "*" "*"
server.domain.com:
----------
How do I get this to work? I get no errors, no output, it just runs smoothly, but nothing is written in the file and I get nothing from the command line. My goal here is to be able to dynamically build a list of servers that match a specific grain to set a configuration value in a config template. Am I down the wrong path here, is there a better way?
I'd recommend using mine.get directly in your sls file to get that list of hosts. I don't think there's any need to pass that through pillar data.
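A rough sketch of doing that directly in an sls (the grain match, mine function, and file path below are only illustrative):

{% set web_hosts = salt['mine.get']('role:web', 'network.ip_addrs', expr_form='grain') %}

web_host_list:
  file.managed:
    - name: /etc/myapp/hosts.conf
    - contents: "servers={{ web_hosts.keys() | list | join('|') }}"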
@Utah_Dave, thanks so much for the help both here and in IRC.
Posting this as an answer so anyone else searching for this gets a good example...
pillar:
mine_functions:
  grains.items: []
template file:
{% set ft_hosts = [] %}
{% for grain_vals in salt['mine.get']('role:ps:ft:True', 'grains.items', expr_form='grain').items() %}
{% do ft_hosts.append(grain_vals[1]['host']) %}
{% endfor %}
ft.ps.server.hosts={{ ft_hosts|join('|') }}
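For example, if two minions matching that grain have host grains of web01 and web02 (host names invented here), the last line would render as ft.ps.server.hosts=web01|web02.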
Chef has a very elaborate (maybe too much so) scheme for cookbooks to provide default values of attributes. I think Puppet does something similar with class parameters where defaults usually go into params.pp. With Salt, I've seen:
1. specifying a default value in dictionary/pillar lookups.
2. the grains.filter_by merging of default attribute values with user-provided pillar data (e.g., map.jinja in apache-formula).
3. in a call to the file.managed state, specifying default attribute values as the defaults parameter and user-specified pillar data as context.
Option 1 seems to be the most common, but has the drawback that the template file becomes very hard to read. It also requires repeating the default value whenever the lookup is done, making it very easy to make a mistake.
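A typical Option 1 lookup inside a managed template looks something like this (the pillar keys and defaults are invented for illustration):

# somewhere in a managed config template
ServerName {{ salt['pillar.get']('apache:server_name', 'localhost') }}
Listen {{ salt['pillar.get']('apache:port', 80) }}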
Option 2 feels closest in spirit to Chef's approach, but seems to expect the defaults broken down into a dictionary of cases based on some filtering attribute (e.g., the OS type recorded in grains).
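The filter_by pattern from map.jinja roughly follows this shape (the lookup values are shortened and invented here):

{% set apache = salt['grains.filter_by']({
    'Debian': {'server': 'apache2', 'service': 'apache2'},
    'RedHat': {'server': 'httpd', 'service': 'httpd'},
}, grain='os_family', merge=salt['pillar.get']('apache:lookup')) %}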
Option 3 is not bad, but puts attribute defaults into the state file, instead of separating them into their own file as they are with option 2.
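A sketch of Option 3 in a state file (keys and paths invented; keys present in context override the same keys in defaults):

{% set overrides = salt['pillar.get']('apache', {}) %}

apache-conf:
  file.managed:
    - name: /etc/apache2/apache2.conf
    - source: salt://apache/files/apache2.conf.jinja
    - template: jinja
    - defaults:
        port: 80
        server_name: localhost
{% if overrides %}
    - context:
{% for key, value in overrides.items() %}
        {{ key }}: {{ value }}
{% endfor %}
{% endif %}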
Saltstack's best practices doc endorses Option 2, except that it doesn't address how to merge defaults with user-specified values without having to use grains.filter_by. Is there any way around it?
Note: The behavior of defaults.get changed in version 2015.8, and so the method described here no longer works. I am leaving this answer for users of older versions and will post a similar method for current versions.
defaults.get coupled with a defaults.yaml file should do what you want. Assume your formula tree looks like this:
my-formula/
files/
template.jinja
init.sls
defaults.yaml
# my-formula/init.sls
my-formula-conf-file:
file.managed:
- name: {{ salt['defaults.get']('conf_location') }}
- source: {{ salt['defaults.get']('conf_source') }}
... and so on.
# defaults.yaml
conf_location: /etc/my-formula.conf
conf_source: salt://my-formula/files/template.jinja
# pillar/my-formula.sls
my-formula:
conf_location: /etc/my-formula/something.conf
This will end with the configuration file placed at /etc/my-formula/something.conf (the pillar value) using salt://my-formula/files/template.jinja as the source (the default, for which no pillar override was supplied).
Note the unintuitive structure of the pillar and defaults files; defaults.get expects defaults.yaml to have its values at the root of the file, but expects the pillar overrides to be in a dictionary named after the formula, because consistency is for the weak.
The documentation for defaults.get gives its example using defaults.json instead of defaults.yaml. That works but I find yaml much more readable. And writable.
There is a bug using defaults.get from inside a managed template rather than within the state file, and as far as I know it's still open. It can still be made to work; the workaround is behind the link.
The behavior of defaults.get changed in 2015.8, possibly due to a bug. This answer describes a compatible method of getting the same results in (at least) 2015.8 and later.
Suppose your formula tree looks like this:
something/
files/
template.jinja
init.sls
defaults.yaml
# defaults.yaml
conf_location: /etc/something.conf
conf_source: salt://something/files/template.jinja
# pillar/something.sls
something:
conf_location: /etc/something/something.conf
The idea is that formula defaults are in defaults.yaml, but can be overridden in pillar. Anything not provided in pillar should use the value in defaults. You can accomplish this with a few lines at the top of any given .sls:
# something/init.sls
{%- set pget = salt['pillar.get'] %} # Convenience alias
{%- import_yaml slspath + "/defaults.yaml" as defaults %}
{%- set something = pget('something', defaults, merge=True) %}
something-conf-file:
file.managed:
- name: {{ something.conf_location }}
- source: {{ something.conf_source }}
- template: jinja
- context:
slspath: {{ slspath }}
... and so on.
What this does: The contents of defaults.yaml are loaded in as a nested dictionary. That nested dictionary is then merged with the contents of the something pillar key, with the pillar winning conflicts. The result is a nested dictionary containing both the defaults and any pillar overrides, which can then be used directly without concern to where a particular value came from.
slspath is not strictly required for this to work; it's a magic variable that contains the directory path to the currently-running sls. I like to use it because it decouples the formula from any particular location in the directory tree. It is not normally available from managed templates, which is why I pass it on as explicit context above. It may not work as expected in older versions, in which case you'll have to provide a path relative to the root of the salt tree.
The downside to this method is that, so far as I know, you can't access the final dictionary with salt's colon-based nested-keys syntax; you need to descend through it one level at a time. I have not had problems with that (dot syntax is easier to type anyway), but it is a downside. Another downside is the need for a few lines of boilerplate at the top of any .sls or template using the technique.
There are a few upsides. One is that you can loop over the final dictionary or its sub-dicts with .items() and the Right Thing will happen, which was not the case with defaults.get and which drove me insane. Another is that, if and when the salt team restores defaults.get's old functionality, the defaults/pillar structure suggested here is already compatible and they'll work fine side by side.
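As a quick illustration of the looping point (assuming the merged something dict from the sls above is also passed into the template as context):

{# e.g. in something/files/template.jinja #}
{% for key, value in something.items() %}
{{ key }} = {{ value }}
{% endfor %}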