In my salt state files I have several occurrences of a pattern which consists of defining a remote repository and importing a gpg key file definition, e.g.
import_packman_gpg_key:
  cmd.run:
    - name: rpm --import http://packman.inode.at/gpg-pubkey-1abd1afb.asc
    - unless: rpm -q gpg-pubkey-1abd1afb-54176598

packman-essentials:
  pkgrepo.managed:
    - baseurl: http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/Essentials/
    - humanname: Packman (Essentials)
    - refresh: 1
    - require:
      - cmd: import_packman_gpg_key
I would like to abstract these away as a different state, e.g.
packman-essentials:
  repo_with_key.managed:
    - gpg_key_id: 1abd1afb-54176598
    - gpg_key_src: http://packman.inode.at/gpg-pubkey-1abd1afb.asc
    - repo_url: http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/Essentials/
    - repo_name: Packman (Essentials)
which will in turn expand to the initial declarations above. I've looked into custom Salt states (see https://docs.saltstack.com/en/latest/ref/states/writing.html#example-state-module) but I only found references on how to create one using Python. I'm looking for one that is based only on state definitions, as writing code for my specific problem looks like overkill.
How can I create a custom state which reuses the template I've been using to manage package repositories?
This is what macros are for.
Here is an example of simple macros for some constructs I use heavily.
However, in your example, why do you use cmd.run to import the key? pkgrepo.managed seems to support a gpgkey option to download the key.
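For instance, applying a macro to the repo-plus-key pattern from the question could look roughly like the sketch below (untested; the file name repo_with_key.jinja and the macro name are mine, not an existing Salt interface):

{# repo_with_key.jinja - a minimal, untested sketch; names are placeholders #}
{% macro repo_with_key(repo_id, gpg_key_id, gpg_key_src, repo_url, repo_name) %}
import_{{ repo_id }}_gpg_key:
  cmd.run:
    - name: rpm --import {{ gpg_key_src }}
    - unless: rpm -q gpg-pubkey-{{ gpg_key_id }}

{{ repo_id }}:
  pkgrepo.managed:
    - baseurl: {{ repo_url }}
    - humanname: {{ repo_name }}
    - refresh: 1
    - require:
      - cmd: import_{{ repo_id }}_gpg_key
{% endmacro %}

A state file could then pull it in like this:

{% from 'repo_with_key.jinja' import repo_with_key %}
{{ repo_with_key('packman-essentials',
                 '1abd1afb-54176598',
                 'http://packman.inode.at/gpg-pubkey-1abd1afb.asc',
                 'http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/Essentials/',
                 'Packman (Essentials)') }}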
We want to change the way we do the frontends in Symfony and we'd like some level of "reflection":
We want the system to be able to detect "Template displayProduct.html.twig needs xxxx other files, defines yyyy blocks and uses the zzzz variables".
I would like a command similar to:
php bin/console debug:template displayProduct.html.twig
that responds with something like this:
Template: displayProduct.html.twig
Requires:                 # And tells us what other files are needed
    - widgetPrice.html.twig
    - widgetAvailability.html.twig
Defines:                  # And tells us what {% block xxxx %} are defined
    - body
    - title
    - javascripts_own
    - javascripts_general
Uses these variables:     # <= This is the most important for us now
    - productTitle
    - price
    - stock
    - language
We are now visually scanning complex templates for the variables they need, and it's a killer. We need a way to automatically tell "this template needs this and that to work".
P.S.: Functional tests are not the solution, as we want to apply all this to dynamically generated templates stored in databases, so that users are able to modify their own pages; we can't write a test for every potential future unknown template that users will write.
Does this already exist somehow?
Imagine that we have a single YAML file that describes the roles of users in a variety of systems. It may look like this:
username:
  systemA:
    - roleA
    - roleB
  systemB:
    - roleC
I would like to use this file as the source for all the minions to populate the list of users and roles for their respective systems. So the minion of systemA would have only this in its pillar:
username:
  - roleA
  - roleB
I'm not sure that I want to make it a kind of default pillar and rip parts out of it depending on the minion using Jinja. But the other options, like regenerating pillars from this file with Python on every change, or storing this data in a database and using ext_pillar, look even worse to me. But maybe I'm just not seeing something obvious.
Thanks!
User roles aren't usually secret, so this doesn't need any transformation, and it doesn't need to be in pillars in the first place.
Simply look it up using the current minion ID (or whatever "systemA" is) in your states and/or map.jinja:
{% set roles = data["username"][grains["id"]] %}
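For completeness, the data variable above has to come from somewhere. Assuming the YAML is dropped into your file roots as, say, roles.yml (the file name and the username key are placeholders from the question), a minimal, untested map.jinja sketch could be:

{# map.jinja - minimal sketch; 'roles.yml' and 'username' are placeholders #}
{% import_yaml 'roles.yml' as data %}
{% set roles = data['username'].get(grains['id'], []) %}

Any state file can then do {% from 'map.jinja' import roles with context %} and use roles directly, without the data ever going through a pillar.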
Let's say, in the Salt states below, I want Salt to stop executing any further if a certain file exists.
In the example below, it should execute run-aerospike-mem_disk-check, but once check_bad_conf detects that the file exists, it should not execute run-aerospike-config, and the Salt run should show RED (failed). How can I achieve this?
run-aerospike-mem_disk-check:
  cmd.wait:
    - name: /var/local/aero_dmcheck.sh
    - watch:
      - file: /var/local/aero_config

check_bad_conf:
  file.exist: /tmp/badconf
    - failhard: yes

run-aerospike-config:
  cmd.wait:
    - name: /var/local/aero_config.pl
    - watch:
      - file: /var/local/aero_config.yml
Please make it clearer.
Why are you so set on getting "RED"?
Do you want disk-check to be run only if the bad file exists?
If so, must it not run aerospike-config?
Should it run once, no matter whether the file still exists?
cmd.wait accepts an onlyif argument in which you can run a command to assert file absence/presence/anything (it is exit-code driven).
However, you need to know that if onlyif is not satisfied, the state turns GREEN, not RED.
There is also the creates argument, which is designed exactly for skipping the command once the given file exists.
Refer to the cmd manual.
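To make that concrete, a minimal, untested sketch reusing the paths from the question might look like this (keeping in mind that a skipped onlyif shows GREEN, not RED):

run-aerospike-config:
  cmd.wait:
    - name: /var/local/aero_config.pl
    - onlyif: test ! -e /tmp/badconf    # run only while the bad-conf marker is absent
    # alternatively: - creates: /tmp/badconf  (the command is skipped once that file exists)
    - watch:
      - file: /var/local/aero_config.yml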
As you probably want aerospike-config to be run only if the disk-check was not run, maybe it is sufficient to use file.missing, like so:
check:
  file.missing:
    - name: /tmp/badconf

run-aerospike-config:
  cmd.wait:
    - name: /var/local/aero_config.pl
    - watch:
      - file: /var/local/aero_config.yml
    - require:
      - file: check
Read more about Salt requisites here
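And since you specifically want the run to go RED when /tmp/badconf exists, one variation (a sketch, not tested) is to add failhard to that file.missing check, so a present file fails the state and stops everything that follows:

check:
  file.missing:
    - name: /tmp/badconf
    - failhard: True    # if /tmp/badconf exists, this state fails and the run stops here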
If you want your run-aerospike-mem_disk-check script to run exactly once, why don't you use cmd.script and add the stateful argument to prevent further executions?
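For reference, that would look roughly like the sketch below (untested; the salt:// path is a placeholder, and with stateful the script itself is expected to print its changed/comment result):

run-aerospike-mem_disk-check:
  cmd.script:
    - name: salt://aerospike/aero_dmcheck.sh    # hypothetical salt:// location of the script
    - stateful: True    # the script reports its own result data on its last line of output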
I have actually never really used it, but you can use some kind of "if-statement". Check this out:
Check file exists and create a symlink
I think in your case file.exists or file.absent would fit.
Hope that helps.
The task is: we have a blueprint with all the needed node templates described in it,
and we want to create a deployment that includes all these nodes, but we don't want all of them to be created during the "install" workflow.
I mean, e.g., we need to install all nodes in the created deployment except some of them, for example an OpenStack instance's volume.
But we know the volume may need to be created and added later, and we should keep the ability to do so.
Since the volume template expects some input (its name, for example), I want to pass 'null' as that input and NOT have the volume created during the "install" workflow.
Solutions like creating many different blueprints, or deleting some nodes after creation, are not acceptable.
Is that possible, and how can it be done?
I appreciate all your insights.
Thanks in advance!
We've got a similar sort of requirement. Our plan is to use Cloudify 3.4's scaling capability, which is intended for multiple instances but works just as well for 0 or 1 instances.
Supply 0 as the value for the number_of_nodes input into the blueprint below - only tested with a local cfy install (but should be fine) - and the create & start operations will not be called. To instantiate the node post-install you'd use the built-in scale workflow. Alternatively, supply 1 at install and the node will be created.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/3.4.1/types.yaml

inputs:
  number_of_nodes:
    default: 0

node_templates:
  some_vm:
    type: cloudify.nodes.Root
    capabilities:
      scalable:
        properties:
          default_instances: { get_input: number_of_nodes }
          max_instances: 1
Is it possible to run a SaltStack command that, say, looks to see if a process is running on a machine, and aggregate the results of running that command on multiple minions?
Essentially, I'd like to see all the results that are returned from the minions displayed in something like an ASCII table. Is it possible to have an uber-result formatter that waits for all the results to come back, then applies the format? Perhaps there's another approach?
If you want to do this entirely within Salt, I would recommend creating an "outputter" that displays the data how you want.
A "highstate" outputter was recently created that might give you a good starting point. The highstate outputter creates a small summary table of the returned data. It can be found here:
https://github.com/saltstack/salt/blob/develop/salt/output/highstate.py
I'd recommend perusing the code of the other outputters as well.
If you want to use another tool to create this report, I would recommend adding "--out json" to your command at the cli. This will cause Salt to return the data in json format which you can then pipe to another application for processing.
This was asked a long time ago, but I stumbled across it more than once, so I thought another approach might be useful: the survey Salt runner.
$ salt-run survey.hash '*' cmd.run 'dpkg -l python-django'
|_
  ----------
  pool:
      - machine2
      - machine4
      - machine5
  result:
      dpkg-query: no packages found matching python-django
|_
  ----------
  pool:
      - machine1
      - machine3
  result:
      Desired=Unknown/Install/Remove/Purge/Hold
      | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
      |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
      ||/ Name           Version      Architecture Description
      +++-==============-============-============-=================================
      ii  python-django  1.4.22-1+deb all          High-level Python web development