saltstack 'Pillar failed to render with the following messages'

I'm getting the following error message when I do a state.apply:
[ERROR ] Data passed to highstate outputter is not a valid highstate return: {'sonia9': ['Pillar failed to render with the following messages:', "Rendering SLS 'users' failed. Please see master log for details."]}
Is it possible to see the actual rendering and where it failed?
I've already tried:
setting log_level: garbage in /etc/salt/master and restarting the daemon
running salt-call -l debug state.apply on the minion
I get the same unhelpful error message, and no more detail about the actual rendering.

Sometimes a minion has a stale cache. I have experienced the frustration of Salt reporting that something failed to render when that "something" is no longer listed in any top.sls file and the Salt master log says nothing at all.
What can help in this case is refreshing grains on the affected minion (which also refreshes pillars by default):
salt <target_host_pattern> saltutil.refresh_grains
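For example, targeting the minion from the error above (sonia9); refreshing the pillar alone is also enough if you only need pillar data re-rendered:
salt 'sonia9' saltutil.refresh_grains
salt 'sonia9' saltutil.refresh_pillar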

I have found that it can fail if your pillar has duplicates. In my case the same package was listed twice in a long YAML list; the pillar would fail to compile but give no reason. So, to shorten the answer: you may have to clean up your pillar and debug the file 1980s-style, commenting out sections until the error disappears.
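As an illustration, a hypothetical pillar snippet with the kinds of duplicates to hunt for; depending on the Salt version, a duplicate mapping key either aborts rendering with no useful message or silently keeps only the last value:
packages:
  - vim
  - curl
  - vim            # duplicate list entry, as in the answer above
admin_user:
  shell: /bin/bash
admin_user:        # duplicate mapping key
  shell: /bin/zsh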

Looks like users.sls under your pillar location (usually /srv/pillar) is not correctly formed.
Run salt sonia9 pillar.items or salt <minion> state.sls <filename> to check it.
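If it still will not render, the YAML can be checked outside of Salt; a quick sketch, assuming PyYAML is available on the master (this checks plain YAML syntax only and does not render any Jinja in the file):
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' /srv/pillar/users.sls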

Related

How can I use salt-mine when using salt-ssh

I have a SaltStack state which requires accessing the Salt mine for it to execute correctly. This has been working fine, but we have recently switched to using salt-ssh, and it is producing the following error:
TypeError encountered executing example_token: 'FunctionWrapper' object is not callable
This mine function is set up in my pillar as follows:
mine_functions:
  example_token:
    - mine_function: cp.get_file_str
    - file:///tmp/example.txt
This is called in the state using
salt['mine.get'](minion_host_name, 'example_token')[minion_host_name]
As I mentioned, this has always worked when calling salt '*' state.apply, but it fails after switching to salt-ssh -i '*' state.apply.
Switching to salt-ssh was out of my hands and going back is not an option. I have also tried declaring the functions in the roster rather than the pillar, but it produces the same result.
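For context, a minimal sketch of the kind of state that consumes this mine entry; the minion id minion-a and the target path are illustrative:
# hypothetical token.sls consuming the mine entry above
{% set host = 'minion-a' %}
{% set token = salt['mine.get'](host, 'example_token')[host] %}
write_token:
  file.managed:
    - name: /tmp/token.txt
    - contents: {{ token | yaml_encode }}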

Grakn error when trying to load the schema for the "phone_calls" example

I am trying to run the example Grakn migration "phone_calls" (using Python and JSON files).
Before getting there, I need to load the schema, but I am having trouble getting it loaded, as shown here: https://dev.grakn.ai/docs/examples/phone-calls-schema
System:
- macOS 10.15
- grakn-core 1.8.3
- Python 3.7.3
The Grakn server is started. I checked that TCP port 48555 is open, so I don't think there is a firewall issue. The schema file is in the same folder (phone_calls) as the JSON data files for the next step. I am using a virtual environment. The error is below:
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn server start
Storage is already running
Grakn Core Server is already running
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn console --keyspace phone_calls --file phone_calls/schema.gql
Unable to create connection to Grakn instance at localhost:48555
Cause: io.grpc.StatusRuntimeException
UNKNOWN: Could not reach any contact point, make sure you've provided valid addresses (showing first 1, use getErrors() for more: Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5f59fd46): com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] init query OPTIONS: error writing ). Please check server logs for the stack trace.
I would appreciate any help! Thanks!
Never mind -- I found the solution, in case anyone else runs into a similar problem. The server configuration file needs to be edited: point the data directory to your project data files (here: the phone_calls data files) and change the server IP address to your own.
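A sketch of the kind of edit described, with key names assumed from Grakn 1.8 (verify against your own install before editing):
# conf/grakn.properties -- key names are an assumption, check your install
server.host=127.0.0.1            # your machine's address
data-dir=/path/to/phone_calls/   # your project's data directory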

csync/sqlite error when running ownCloud command

I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or SQLite, so I am a bit in the dark, and although I can find talk of this issue through googling, I can't find a fix. The data in this case can be dumped to start over, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have read talk about changing PRAGMA settings to ignore the error (or the check), but I can't see how to do that either.
Is anyone able to show me how to clear out the corruption?
*The local path is a mounted AWS S3 bucket, but I think this is irrelevant because it works fine on other systems.
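For anyone debugging the same thing, the journal can be inspected directly with the sqlite3 CLI; a sketch, assuming the client keeps its journal as a hidden ._sync_*.db file in the local sync directory (the exact name varies by client version):
cd /path/to/local/sync/dir
ls -a ._sync_*.db
sqlite3 ._sync_*.db "PRAGMA integrity_check;"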

Provisioning prometheus with saltstack

Using this formula:
https://github.com/bechtoldt/saltstack-prometheus-formula.git
for provisioning Prometheus, I can't achieve convergence.
It fails pretty early on:
# salt prometheus-master state.apply test=True
prometheus-master:
Data failed to compile:
----------
No matching sls found for 'prometheus' in env 'base'
ERROR: Minions returned with non-zero exit code
I have 'prometheus' defined in both the states and pillars top.sls files.
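For reference, the top.sls entries are along these lines (paths assume the default /srv/salt and /srv/pillar roots):
# /srv/salt/top.sls
base:
  'prometheus-master':
    - prometheus
# /srv/pillar/top.sls
base:
  'prometheus-master':
    - prometheus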
bechtoldt's formula requires additional code from his GitHub repository to work: formhelper (https://github.com/bechtoldt/salt-modules/blob/master/_modules/formhelper.py, part of https://github.com/bechtoldt/salt-modules), which is his own way of managing pillars that gives him more flexibility in managing versions.
It is certainly not as straightforward as plain pillars, and you will need to understand it to operate the Prometheus formula, so unfortunately it is not going to work out of the box.

SVN - SQLite - disk I/O error

When trying to commit to my SVN repository, I got the following error:
Working copy 'Z:\prace-pj\projects\other\CopyRT' locked.
So I ran the cleanup command, and then the commit succeeded, but at the end of the response message there was the following error:
Error bumping revisions post-commit (details follow):
disk I/O error, executing statement 'RELEASE s11'
Now when I try to e.g. update the working copy, it says that it is still locked. When I clean up and try to update again, I get an error like this:
disk I/O error, executing statement 'RELEASE s2'
sqlite: disk I/O error
What should I do to fix this?
For others' reference: I just had this same error and found that one of my log files was taking up all my disk space, so nothing could be written to the HDD.
Run this to make sure you have enough disk space:
df -h
Then I just needed to run:
svn cleanup
This resolved the error for me.
Have you tried:
svn unlock --force path/to/workingcopy
It seems it can also be pointed at a URL if the problem is in the repository itself. I've only used an unlock operation via the TortoiseSVN GUI before, but I assume it just wraps the svn command anyway.
Hope that helps.
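The URL form would be something like this sketch (svn info --show-item needs svn 1.9+; untested here):
svn unlock --force "$(svn info --show-item url path/to/workingcopy)"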
