How can I use salt-mine when using salt-ssh

I have a saltstack state which requires accessing the salt mine for it to execute correctly. This had been working fine, but we recently switched to using salt-ssh, and it now produces the following error:
TypeError encountered executing example_token: 'FunctionWrapper' object is not callable
This mine function is set up in my pillar as follows
mine_functions:
  example_token:
    - mine_function: cp.get_file_str
    - file:///tmp/example.txt
This is called in the state using
salt['mine.get'](minion_host_name, 'example_token')[minion_host_name]
As I mentioned, this has always worked when calling salt '*' state.apply,
but it fails after switching to salt-ssh -i '*' state.apply.
Switching to salt-ssh was out of my hands and going back is not an option. I have also tried declaring the mine functions in the roster rather than the pillar, but that produces the same result.
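For completeness, salt-ssh reads mine_functions from either the pillar or the target's roster entry, so the roster form of the same definition would look roughly like the sketch below (minion ID, host and user are placeholders; the question reports this gives the same FunctionWrapper error, so it is shown only for reference):

# /etc/salt/roster (sketch; values are placeholders)
minion_host_name:
  host: 203.0.113.10
  user: root
  mine_functions:
    example_token:
      - mine_function: cp.get_file_str
      - file:///tmp/example.txt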

Related

AttributeError: module 'salt.modules' has no attribute 'cp'

The cp.push command works as expected from the master terminal via
salt "*" cp.push ....
However, using the command in an execution module fails with:
AttributeError: module 'salt.modules' has no attribute 'cp'
Salt is imported in the execution module via:
import salt
The function is being called as:
salt.modules.cp.push(path=str(latest_report))
Not an answer but a workaround. I used:
salt.modules.cmdmod.run("salt-call cp.push *desired path*", shell="powershell")
Using PowerShell solved a plethora of other errors in the default Command Prompt.
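For comparison, a minimal sketch of the loader-based way to cross-call cp.push from inside a custom execution module, rather than importing salt.modules directly (module and function names here are hypothetical, and cp.push also needs file_recv: True in the master config):

# _modules/reportutil.py -- hypothetical custom execution module
def push_latest(latest_report):
    '''
    Push a report file from the minion up to the master's minion cache.
    __salt__ is injected by the Salt loader at runtime, so there is no
    need to import salt.modules directly.
    '''
    return __salt__['cp.push'](path=str(latest_report))

After a saltutil.sync_modules this could be called as salt '*' reportutil.push_latest /path/to/report.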

Getting jsonpath error when executing any commands in zsh

error: error executing jsonpath "{.contexts[?(@.name==\"\")].context.namespace}": Error executing template: <nil> is not array or slice and cannot be filtered. Printing more information for debugging the template:
template was:
{.contexts[?(@.name=="")].context.namespace}
object given to jsonpath engine was:
map[string]interface {}{"apiVersion":"v1", "clusters":interface {}(nil), "contexts":interface {}(nil), "current-context":"", "kind":"Config", "preferences":map[string]interface {}{}, "users":interface {}(nil)}
I also had this issue. It started after I upgraded my kubectl to a newer version.
In my case, my zsh has a custom prompt using the POWERLEVEL9K plugin, which displays the current k8s cluster/namespace.
The prompt config in my ~/.zshrc looks like:
POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS=(status root_indicator command_execution_time kubecontext background_jobs history time)
Remove kubecontext from there and you're done.
If you'd like to keep the k8s info in your prompt, the template in the theme needs to be fixed. I was using POWERLEVEL9K, and switching to POWERLEVEL10K did the trick for me.
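Separately, the error output above shows "contexts":interface {}(nil), meaning the kubeconfig the prompt queries has no contexts at all. A quick sketch (not from the original answer) for checking that; the context name is a placeholder:

# List the contexts kubectl can see; an empty list reproduces the prompt error
kubectl config get-contexts
# If contexts do exist, select one so the jsonpath filter has something to match
kubectl config use-context <your-context-name>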

Provisioning prometheus with saltstack

Using this formula:
https://github.com/bechtoldt/saltstack-prometheus-formula.git
for provisioning prometheus, I can't achieve convergence.
It fails pretty early on:
# salt prometheus-master state.apply test=True
prometheus-master:
Data failed to compile:
----------
No matching sls found for 'prometheus' in env 'base'
ERROR: Minions returned with non-zero exit code
I have 'prometheus' defined in both the states and pillars top.sls.
bechtoldt's formula requires additional code from his GitHub repository to work: formhelper (https://github.com/bechtoldt/salt-modules/blob/master/_modules/formhelper.py in https://github.com/bechtoldt/salt-modules), which is his own way of managing pillars that gives him more flexibility to manage versions.
It is certainly not as straightforward as plain pillars, and you will need to understand it to operate the prometheus formula, so unfortunately it is not going to work out of the box.
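Independently of formhelper, 'No matching sls found' also appears whenever the formula directory itself is not visible to the master's fileserver. A sketch only, assuming the formula was cloned under /srv/formulas (the thread does not say where it lives):

# /etc/salt/master (sketch; the clone path is an assumption)
file_roots:
  base:
    - /srv/salt
    - /srv/formulas/saltstack-prometheus-formula

After editing file_roots (and dropping formhelper.py into a _modules directory on the file roots), restart the master, sync with salt 'prometheus-master' saltutil.sync_modules, and re-run state.apply.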

saltstack 'Pillar failed to render with the following messages'

I'm getting the following error message when I do a state.apply:
[ERROR ] Data passed to highstate outputter is not a
valid highstate return: {'sonia9': ['Pillar failed to
render with the following messages:', "Rendering SLS 'users'
failed. Please see master log for details."]}
Is it possible to see the actual rendering and where it failed?
I've already tried:
log_level: garbage in /etc/salt/master, restarted daemon
salt-call -l debug state.apply on the minion
I get the same unhelpful error message, and no more detail about the actual rendering.
Sometimes it can happen that the minion has a stale cache. I have experienced the frustration of Salt reporting that something failed to render when that "something" is no longer listed in the top.sls files, and the salt-master log doesn't say anything at all.
What can help in this case is refreshing grains on the affected minion (which also refreshes pillars by default):
salt <target_host_pattern> saltutil.refresh_grains
I have found that if your pillar has duplicates, it will fail to compile but give no reason. In my case the same package was listed twice in the YAML (a long list). So, to shorten the answer: you may have to just clean up your pillar and debug the file 1980s-style.
It looks like users.sls under your pillar location (usually /srv/pillar) is not correctly formed.
Run salt sonia9 pillar.items or salt <minion> state.sls <filename> to check it.
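Another way to get at the actual rendering failure (a sketch, run on the master) is the pillar runner, which compiles the pillar for that minion in your terminal and tends to surface the YAML/Jinja error that the highstate output swallows:

# Compile the pillar for 'sonia9' directly on the master
salt-run pillar.show_pillar sonia9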

Phabricator Making Assumptions About Environment

I am attempting to get Phabricator running on Solaris over Apache. The website is working, but all of the CLI scripts are not. For example, phd.
The first problem is that it is not passing arguments to the underlying manage_daemons.php script that it invokes. Looking at the phd file, this does not surprise me:
$> cat phd
../scripts/daemon/manage_daemons.php
Now, given my default shell is bash, this isn't going to pass through my arguments. To fix this, I have modified the script:
#! /bin/bash
../scripts/daemon/manage_daemons.php $*
This will now pass through the arguments, but it now fails to find the scripts it requires transitively via relative paths:
./phd start
Preparing to launch daemons.
NOTE: Logs will appear in '/var/tmp/phd/log/daemons.log'.
Launching daemon "PhabricatorRepositoryPullLocalDaemon".
[2014-05-09 19:29:59] EXCEPTION: (CommandException) Command failed with error #127!
COMMAND
exec ./phd-daemon 'PhabricatorRepositoryPullLocalDaemon' --daemonize --log='/var/tmp/phd/log/daemons.log' --phd='/var/tmp/phd/pid'
STDOUT
(empty)
STDERR
./phd-daemon: line 1: launch_daemon.php: not found
at [/XXX/XXX/libphutil/src/future/exec/ExecFuture.php:398]
#0 ExecFuture::resolvex() called at [/XXX/XXX/phabricator/src/applications/daemon/management/PhabricatorDaemonManagementWorkflow.php:167]
#1 PhabricatorDaemonManagementWorkflow::launchDaemon(PhabricatorRepositoryPullLocalDaemon, Array , false) called at [/XXX/XXX/phabricator/src/applications/daemon/management/PhabricatorDaemonManagementWorkflow.php:246]
#2 PhabricatorDaemonManagementWorkflow::executeStartCommand() called at [/XXX/XXX/phabricator/src/applications/daemon/management/PhabricatorDaemonManagementStartWorkflow.php:18]
#3 PhabricatorDaemonManagementStartWorkflow::execute(Object PhutilArgumentParser) called at [/XXX/XXX/libphutil/src/parser/argument/PhutilArgumentParser.php:396]
#4 PhutilArgumentParser::parseWorkflowsFull(Array of size 9 starting with: { 0 => Object PhabricatorDaemonManagementListWorkflow }) called at [/XXX/XXX/libphutil/src/parser/argument/PhutilArgumentParser.php:292]
#5 PhutilArgumentParser::parseWorkflows(Array of size 9 starting with: { 0 => Object PhabricatorDaemonManagementListWorkflow }) called at [/XXX/XXX/phabricator/scripts/daemon/manage_daemons.php:30]
Note I have obscured my paths with XXX as they give away sensitive information.
Now, obviously I shouldn't be modifying these scripts. This is an indication that some prerequisite is not set up properly.
It's clear to me that Phabricator is making some (bold) assumption about my setup. But I'm not quite sure what...?
These are supposed to be symlinks. For example, if you look at "phd" in the repository on GitHub, you can see that the file type is "symbolic link":
https://github.com/facebook/phabricator/blob/master/bin/phd
Something in your environment is incorrectly turning the symlinks into normal files. I'm not aware of any Git configuration which can cause this, although it's possible there is something. One situation where I've seen this happen is when a working copy was cloned, then copied using something like rsync without appropriate flags to preserve symlinks.
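A sketch of how you might confirm and repair that in the working copy (paths follow the layout in the trace above):

# A healthy checkout shows: phd -> ../scripts/daemon/manage_daemons.php
ls -l bin/phd
# If the copy is still a git working copy, restore links git sees as modified files
git checkout -- bin/ scripts/
# When copying the tree elsewhere, preserve links (rsync -a implies --links)
rsync -a /path/to/phabricator/ /new/location/phabricator/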
