How does one pass positional arguments to mine function aliases? - salt-stack

Using salt 2014.7.0, I can add the following to a minion configuration:
mine_functions:
  cmd.run: [echo hello]
And then, on the salt master, I can see my 'test' minion retrieving "hello" from the mine:
salt 'test' mine.update
salt 'test' mine.get 'test' cmd.run
test:
    ----------
    test:
        hello
This all works perfectly well. However, I would like to use a mine function alias instead of the cmd.run handle. The documentation is not clear on how to do this, and nothing I've tried so far will work. The following attempts have failed:
Passing the arguments through in the mine_function:
mine_functions:
  say_hello:
    mine_function:
      cmd.run: [echo hello]
Passing the arguments through as the "name" field:
mine_functions:
  say_hello:
    mine_function: cmd.run
    name: echo hello
Passing the arguments through as an "args" list:
mine_functions:
  say_hello:
    mine_function: cmd.run
    args: [echo hello]
But none of these results in the desired outcome of accessing "hello" through the say_hello alias, i.e.:
salt 'test' mine.update
salt 'test' mine.get 'test' say_hello
test:
    ----------
    test:
        hello
What is the correct way to pass arguments to mine functions when using mine function aliases?

Found a workable solution!
The trick is to use a list as the value for the mine function alias, with the mine_function key as the first value, like so:
mine_functions:
  say_hello:
    - mine_function: cmd.run
    - echo hello
This results in the desired output:
salt 'test' mine.update
salt 'test' mine.get 'test' say_hello
test:
    ----------
    test:
        hello

I believe the actual solution to this is:
mine_functions:
  say_hello:
    - mine_function: cmd.run
    - cmd: echo test
The key is specifying cmd: echo test rather than name: echo test.
Here's why:
mine_functions execute salt execution modules, not salt states (per this documentation). This is the same as what happens when you run salt <target> <module>.<method> [args] on the command line. Notably, execution modules don't necessarily follow the state convention of having name as their first parameter.
If you take a look at the documentation for the salt.modules.cmdmod module (which is referenced as cmd.run rather than cmdmod.run because the loader exposes it under the virtual name cmd), you'll note that its first parameter is named cmd rather than name. Using that as the key instead does what you need it to do.
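Further keyword arguments can be appended to the same list in the alias form, one dict per entry. A minimal sketch, assuming you also wanted to set cmd.run's python_shell keyword (the alias and command are the ones from this question):
mine_functions:
  say_hello:
    - mine_function: cmd.run
    - cmd: echo hello
    - python_shell: True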

Related

How to set state_output=changes in a salt-stack schedule?

I have a salt schedule calling state.apply and using the highstate returner to write out a file. The schedule is being kicked off as expected, and the output file is being created, but all the unchanged states are included in the output.
On the command line, I'd force only diffs and errors to be shown with the --state-output=changes option of salt.
Is there a way to set state_output=changes in the schedule somehow?
I'm defining the schedule in pillar data, and it looks something like this:
schedule:
  mysched:
    function: state.apply
    seconds: 3600
    kwargs:
      test: True
      returner: highstate
      returner_kwargs:
        report_format: yaml
        report_delivery: file
        file_output: /path/to/mysched.yaml
I fixed this by switching the schedule as per below. Instead of calling state.apply directly, the schedule uses cmd.run to kick off a salt-call command that does the state.apply, and that command can include the --state-output flag.
schedule:
  mysched:
    function: cmd.run
    args:
      - "salt-call state.apply --state-output=changes --log-level=warning test=True > /path/to/mysched.out 2>&1"
    seconds: 3600
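If you go this route, it can help to confirm the minion actually picked up the reworked schedule from pillar. A quick check from the master, using the stock saltutil and schedule modules:
salt '*' saltutil.refresh_pillar
salt '*' schedule.list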

How to apply a top file when using salt-ssh and roster file

I'm new to salt, and I'm trying to use salt-ssh to manage hosts. I have the following roster file
~/salt/roster
pi:
  host: raspberypi1.local
  tty: True
  sudo: True
I have salt states
~/salt/states/docker.sls
I am able to apply the salt states by calling the state explicitly
sudo salt-ssh '*' -c . state.apply docker
How can I make it so that I don't have to call the state directly? I want the raspberypi1.local node to always run the docker state.
Things I've tried
Make ~/salt/top.sls
base:
  'pi*':
    - docker
However the top.sls appears to be ignored by salt-ssh
I've tried editing ~/salt/Saltfile to point at a specific file_roots
salt-ssh:
  roster_file: /Users/foobar/salt/roster
  config_dir: /Users/foobar/salt
  log_file: /Users/foobar/salt/log.txt
  ssh_log_file: /Users/foobar/salt/ssh-log.txt
  file_roots:
    base:
      - /Users/foobar/salt/top.sls
Here file_roots also appears to be ignored.
What's the proper way to tie states to nodes when using salt-ssh?
I moved ~/salt/top.sls to ~/salt/states/top.sls and removed file_roots: entirely from the Saltfile (it belongs in the master config file, not the Saltfile). Now I am able to apply states like so:
sudo salt-ssh '*' -c . state.apply
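If you do want to set file_roots explicitly, it goes in the master file that salt-ssh reads from the config_dir (here ~/salt/master, since the Saltfile sets config_dir to /Users/foobar/salt). Note that file_roots entries must be directories, not individual files, which is why pointing at top.sls was ignored. A minimal sketch, assuming the states live in ~/salt/states:
file_roots:
  base:
    - /Users/foobar/salt/states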

How to deploy a Vue.js app with env vars on Google Cloud Build?

I want to deploy a Vue.js app with Google Cloud Build (to Firebase Hosting). Even though this is a fairly trivial use of the two products, their implementations of environment variables seem contradictory.
Vue.js (via the Vue CLI build) requires environment variables to start with the "VUE_APP_" prefix, otherwise it completely ignores them and their content is undefined. On the other hand, Google Cloud Build requires substitution variables to have the "_" prefix, otherwise it complains that the substitution variable is not "a valid built-in substitution". So I don't see any way to pass the variables from Google Cloud Build to the Vue.js app.
I also tried the following:
- name: 'gcr.io/cloud-builders/npm'
  args: [ 'ci', '--prefix', 'web/vue_js_landing/' ]
  env: [ 'VUE_APP_FIREBASE_WEB_API_KEY=${_FIREBASE_WEB_API_KEY}' ]
But it throws 'key in the template "VUE_APP_FIREBASE_WEB_API_KEY" is not a valid built-in substitution'
Anyone aware of a workaround for this situation?
Correction:
This question was misleading; the code above actually answers it. There's also a typo in it: it should have been "args: [ 'build', '--prefix', 'web/vue_js_landing/' ]". The error I mentioned, in case someone else stumbles upon it, was triggered because I did "echo $VUE_APP_FIREBASE_WEB_API_KEY" at some other point in my config, possibly because the ALLOW_LOOSE flag was not set.
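Putting that correction together, a minimal sketch of the relevant steps; the _FIREBASE_WEB_API_KEY substitution is assumed to be supplied at submit time (e.g. via --substitutions):
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: [ 'ci', '--prefix', 'web/vue_js_landing/' ]
- name: 'gcr.io/cloud-builders/npm'
  args: [ 'build', '--prefix', 'web/vue_js_landing/' ]
  env: [ 'VUE_APP_FIREBASE_WEB_API_KEY=${_FIREBASE_WEB_API_KEY}' ]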
Is this your entire build config? Please post if you have more
I'm not sure how the VUE_APP prefix is required in your context, but you can regularly set substitutions and env vars without this prefix.
Substitutions need the _ prefix; they are replaced at build time but are not present in the environment during a build step. Env vars don't need an extra prefix, but need to be referenced with $$.
How are you passing the web API Key? Here is an example passing it through cli
gcloud builds submit --no-source --substitutions _SECRET_KEY='123'
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'bash'
  args: ['-c', 'echo $$FIREBASE_WEB_API_KEY']
  env: ['FIREBASE_WEB_API_KEY=${_SECRET_KEY}']
Here is another example showing both substitutions and env variables. You can run with gcloud builds submit --no-source if you want to play with it.
cloudbuild.yaml
steps:
- id: 'breakout syntax'
  name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      echo 'print all env vars'
      env
      echo 'print one env var with $$'
      echo '1: '$$BUILD_ENV_VAR
      echo '2: '$$STEP_ENV_VAR
      echo '3: '$$SUB_IN_ENV_VAR
      echo 'print one substitution with $ or ${}'
      echo '1: '${_SUB_VAR}
      echo '2: '$$_SUB_VAR  ## doesn't exist in env, fails
      echo '3: '$_SUB_VAR
  env: ['STEP_ENV_VAR=step-var']
substitutions:
  _SUB_VAR: sub-var
options:
  env:
    - BUILD_ENV_VAR=env-var
    - SUB_IN_ENV_VAR=env-var-with-${_SUB_VAR}
Sources: the Cloud Build docs on substitutions, the Cloud Build docs on build steps, and "mastering cloud build syntax" (bash things).

pass arguments to tmuxinator project file

I have a project file like
windows:
  - server:
      layout: even-vertical
      panes:
        - ssh -t {pass value in here} tail -f -n 100 /var/log/app.log
        -
I would like to pass the SSH host in as I start the session, something like:
mux project foo.bar
Can this be done?
Check out this section of tmuxinator's readme:
You can also pass arguments to your projects, and access them with ERB. Simple arguments are available in an array named @args.
E.g.:
$ tmuxinator start project foo
~/.tmuxinator/project.yml
name: project
root: ~/<%= @args[0] %>
...
You can also pass key-value pairs using the format key=value. These will be available in a hash named @settings.
E.g.:
$ tmuxinator start project workspace=~/workspace/todo
~/.tmuxinator/project.yml
name: project
root: ~/<%= @settings["workspace"] %>
...
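Applied to the project file from the question, a minimal sketch, assuming the host is passed as the first plain argument (myhost.example.com below is just an illustration):
~/.tmuxinator/project.yml
name: project
windows:
  - server:
      layout: even-vertical
      panes:
        - ssh -t <%= @args[0] %> tail -f -n 100 /var/log/app.log
Started with:
$ tmuxinator start project myhost.example.com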

How do you use the 'publish' module in your own module

I need to execute a script on another minion. The best solution seems to be Peer Publishing, but the only documentation I have been able to find shows how to do it via the CLI.
How can I define the following in a module?
salt-call system.example.com publish.publish '*' cmd.run './script_to_run'
You want the salt.client.Caller() API.
#!/usr/bin/env python
import salt.client

salt_call = salt.client.Caller()
salt_call.function('publish.publish', 'web001',
                   'cmd.run', 'logger "publish.publish success"')
You have to run the above as the salt user (usually root).
Then scoot over to web001 and confirm the message is in /var/log/syslog. Worked for me.
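For reference, the same call can also be made with Caller's cmd method; a sketch using the same target and command as above:
#!/usr/bin/env python
import salt.client

caller = salt.client.Caller()
caller.cmd('publish.publish', 'web001',
           'cmd.run', 'logger "publish.publish success"')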
The syntax for the .sls file (the state ID doubles as the cmd.run name, since no name is given):
salt-call publish.publish \* cmd.run 'cd /*directory* && ./script_to_run.sh':
  cmd.run
Alternative syntax:
execute script on other minion:
  cmd.run:
    - name: salt-call publish.publish \* cmd.run 'cd /*directory* && ./script_to_run.sh'
What I specifically did (I needed to execute a command, but only if a published command executed successfully; which command to publish depends on the role of the minion):
execute script:
  cmd.run:
    - name: *some shell command here*
    - cwd: /*directory*
    - require:
      - file: *some file here*
{% if 'role_1' in grains['roles'] -%}
    - onlyif: salt-call publish.publish \* cmd.run 'cd /*other_directory* && ./script_to_run_A.sh'
{% elif 'role_2' in grains['roles'] -%}
    - onlyif: salt-call publish.publish \* cmd.run 'cd /*other_directory* && ./script_to_run_B.sh'
{% endif %}
Remember to enable peer communication in /etc/salt/master under the section 'Peer Publish Settings':
peer:
  .*:
    - .*
This configuration is not secure, since it enables all minions to execute all commands on fellow minions; I have not figured out the correct syntax to select minions based on their roles yet. It would probably also be better to create a custom command wrapping the cmd.run and enable only that, since letting all nodes execute arbitrary scripts on each other is not secure. A tighter sketch is shown below.
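A minimal sketch of a tighter peer configuration, assuming the publishing minions' IDs match web.* and that only cmd.run should be callable (in the peer section, keys are regexes on minion IDs and values are regexes on function names; swap in your own ID pattern or custom function):
peer:
  web.*:
    - cmd.run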
The essence of this answer is the same as Dan Garthwaite's, but what I needed was a solution for a .sls file.
