Using this formula:
https://github.com/bechtoldt/saltstack-prometheus-formula.git
for provisioning Prometheus, I can't achieve convergence. It fails pretty early on:
# salt prometheus-master state.apply test=True
prometheus-master:
Data failed to compile:
----------
No matching sls found for 'prometheus' in env 'base'
ERROR: Minions returned with non-zero exit code
I have 'prometheus' defined in both the states and pillars top.sls.
bechtoldt's formula requires additional code from his GitHub repository to work: the formhelper module (https://github.com/bechtoldt/salt-modules/blob/master/_modules/formhelper.py) from https://github.com/bechtoldt/salt-modules, which is his own way of managing pillars that gives him more flexibility in managing versions.
It is certainly not as straightforward as plain pillars, and you will need to understand that approach to operate the prometheus formula, so unfortunately it is not going to work out of the box.
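To at least get the custom module onto the minion, something like the following should work. This is only a sketch: it assumes the default file_roots of /srv/salt and a checkout done on the master, and you still have to lay out your pillars the way formhelper expects.
# sketch: assumes file_roots defaults to /srv/salt on the master
git clone https://github.com/bechtoldt/salt-modules.git
mkdir -p /srv/salt/_modules
cp salt-modules/_modules/formhelper.py /srv/salt/_modules/
# push the custom module out to the minion
salt 'prometheus-master' saltutil.sync_modules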
I have a SaltStack state which requires accessing the salt mine for it to execute correctly. This has been working fine, but we recently switched to using salt-ssh and it now produces the following error:
TypeError encountered executing example_token: 'FunctionWrapper' object is not callable
This mine function is set up in my pillar as follows
mine_functions:
  example_token:
    - mine_function: cp.get_file_str
    - file:///tmp/example.txt
This is called in the state using
salt['mine.get'](minion_host_name, 'example_token')[minion_host_name]
Like I mentioned, this has always worked when calling salt '*' state.apply, but it fails with the error above after switching to salt-ssh -i '*' state.apply.
Switching to salt-ssh was out of my hands and going back is not an option. I have also tried declaring the functions in the roster rather than the pillar, but that produces the same result.
I'm getting the following error message when I do a state.apply:
[ERROR   ] Data passed to highstate outputter is not a valid highstate return: {'sonia9': ['Pillar failed to render with the following messages:', "Rendering SLS 'users' failed. Please see master log for details."]}
Is it possible to see the actual rendering and where it failed?
I've already tried:
log_level: garbage in /etc/salt/master, restarted daemon
salt-call -l debug state.apply on the minion
I get the same unhelpful error message, and no more detail about the actual rendering.
Sometimes it can happen that a minion has a stale cache. I know the frustration when Salt reports that something failed to render, but that "something" is no longer listed in the top.sls files and the salt master log doesn't say anything at all.
What can help in this case is to refresh the grains on the affected minion (which also refreshes pillars by default):
salt <target_host_pattern> saltutil.refresh_grains
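If refreshing grains alone doesn't help, a few heavier-handed follow-ups (just a sketch, use your own target pattern) are to refresh the pillar explicitly, clear the minion-side cache, or re-sync everything:
salt <target_host_pattern> saltutil.refresh_pillar
salt <target_host_pattern> saltutil.clear_cache
salt <target_host_pattern> saltutil.sync_all refresh=True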
I have found that if your pillar has duplicates it can fail to compile while giving no reason. In my case the same package was listed twice in a long YAML list. So, to shorten the answer: you may just have to clean up your pillar and 1980's-debug the file.
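As a quick sanity check for the duplicate case, a grep/sort/uniq pass can surface list entries that appear more than once; the pillar path below is only a hypothetical example:
grep -E '^[[:space:]]*- ' /srv/pillar/packages.sls | sort | uniq -d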
Looks like users.sls under your pillar location (usually /srv/pillar) is not correctly formed.
Run salt sonia9 pillar.items or salt <minion> state.sls <filename> to check it.
I've found the sbt-groovy plugin, and it properly compiles both the test and the main sources just fine. However, the definedTests key is always empty; sbt never discovers any Groovy tests. I've verified this with a very simple single src/test/groovy/Test.groovy containing a single method annotated @Test, which should be picked up by the junit-interface.
I think the root of the issue is that the sbt-groovy plugin needs to define the definedTests task in its own plugin source code. This task provides a Seq[TestDefinition].
Looking at how sbt itself populates the sequence reveals that it uses additional output from the Scala compiler (which also happens to compile Java files, so it works out of the box for Java) in an Analysis class, which is populated by output from the IncrementalCompiler.
I've fiddled around with the taskdef, but I'm not sure I'm even on the right path. Documentation on this stuff is pretty sparse, or heavily connected to the IncrementalCompiler.
What code do I need in sbt-groovy to produce a Seq[TestDefinition] that satisfies SBT so that I can run tests (picked up by the junit-interface) that are written in Groovy?
The test detection code is in Tests.discover, which you might be interested in.
It seems like all you need is the list of methods with annotations and list of subclasses.
If you have some way of finding them out, you probably could mimic what's going on in the code.
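For example, if you already know the fully-qualified names of your Groovy test classes, one crude way to mimic it is to hand-build the TestDefinition entries instead of relying on discovery. This is only a sketch against the sbt 0.13 API; the class name Foo and the junit-interface annotation fingerprint are assumptions, not something the plugin provides:
// in build.sbt (sbt 0.13): manually register a compiled Groovy test class
// using the JUnit annotation fingerprint that junit-interface understands
definedTests in Test += new TestDefinition(
  "Foo",                                   // fully-qualified name of the compiled Groovy test class
  new sbt.testing.AnnotatedFingerprint {
    def isModule(): Boolean = false
    def annotationName(): String = "org.junit.Test" // the annotation junit-interface looks for
  },
  false,                                   // not explicitly specified on the command line
  Array[sbt.testing.Selector](new sbt.testing.SuiteSelector)
)
Since sbt-groovy already puts the compiled classes on the test classpath, test should then hand the class to junit-interface.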
The discovery code, as you mentioned, relies on the Analysis datatype, which is the inner gut of the incremental compiler. You might be able to take advantage of the fact that it is sbt (not the Scala or Java compiler) that is responsible for incremental compilation.
For Java compilation, AnalyzingJavaCompiler.compile calls the compiler and then does the analysis.
In theory, you could define an AnalyzingGroovyCompiler that uses the same mechanism
as the one used for the Java compilation. This is not exactly a walk in the park since
some of the parts are hidden behind private[sbt].
Long story short, I put together a hacky proof-of-concept that wrangles the incremental compiler into generating Analysis for Groovy code, and was able to detect a test.
https://github.com/eed3si9n/sbt-groovy-test
I've only tested it with one simple use case:
import org.junit.Test
import org.junit.Assert

class Foo {
  @Test
  public void foo() {
    Assert.assertEquals(1, 2)
  }
}
Running test from sbt yields the following output:
> test
[info] Start Compiling Test Groovy sources : /Users/xxx/sbt-2167-groovy/src/test/groovy
[error] Test Foo.foo failed: expected:<1> but was:<2>, took 0.062 sec
[error] Failed: Total 1, Failed 1, Errors 0, Passed 0
[error] Failed tests:
[error] Foo
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 1 s, completed Aug 23, 2015 5:05:01 AM
It might not work with future versions of sbt. Caveat emptor.
I have an R package that sends out a job to the OpenMPI cluster I have running by means of the Rmpi package. All works as expected within an R session run from the console. However, when I try to execute the relevant function from my OpenCPU server like this (details changed to protect the innocent):
curl -XPOST http://99.999.999.99/ocpu/library/MyPackage/R/my_cluster_function
I get this error:
R call failed: process died.
(Other, non-cluster-calling functions within the package work as expected via OpenCPU.) I noticed in /var/log/kern.log a variety of requests being DENIED by apparmor, and I have been able to resolve most of them by adding entries to /etc/apparmor.d/opencpu.d/custom to allow OpenMPI to access the files it needs. However, I cannot resolve these two issues (again, IP address changed) related to "open" requests for location "/":
Oct 26 03:49:58 99.999.999.99 kernel: [142952.551234] type=1400 audit(1414295398.849:957): apparmor="DENIED" operation="open" profile="opencpu-main" name="/" pid=22486 comm="orted" requested_mask="r" denied_mask="r" fsuid=33 ouid=0
Oct 26 03:49:58 99.999.999.99 kernel: [142952.556422] type=1400 audit(1414295398.857:958): apparmor="DENIED" operation="open" profile="opencpu-main" name="/" pid=22485 comm="apache2" requested_mask="r" denied_mask="r" fsuid=33 ouid=0
Adding this to my apparmor rules did not help:
/* r,
Two questions:
Why is opencpu trying to read from my root level directory (or does this mean something else)?
More urgently, how can I resolve this apparmor issue?
Thanks.
You might need to add both apparmor rules:
/ r,
/* r,
The first rule allows directory listing of / and the second rule allows read access to any file under /.
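In case it helps, the rules go into the custom include file mentioned in the question, and apparmor has to re-read its profiles before they take effect. A sketch (the reload command may vary by distribution):
# append both rules to the OpenCPU custom include (path taken from the question)
echo "/ r," | sudo tee -a /etc/apparmor.d/opencpu.d/custom
echo "/* r," | sudo tee -a /etc/apparmor.d/opencpu.d/custom
# reload the apparmor profiles so the change takes effect
sudo service apparmor reload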
I don't understand why Rmpi wants to read /, or why you were getting a "process died" error instead of "access denied". Are you sure the problem is completely resolved?
I'm really just looking to pick the community's brain for some leads in figuring out what is going on with the issue I'm having.
I'm writing an MR job with RHadoop (rmr2, v3.0.0) and things are great -- IO with HDFS, mapping, reducing. No problems. Life is great.
I'm trying to schedule the job with Apache Oozie, and am running into some issues:
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
I've read the rmr2 debugging guide, but nothing is really getting to the stderr because the job fails before anything even gets scheduled.
In my head, everything points to a difference in environments. However, Oozie is running the job as the same user that I'm able to run everything with via the CLI, and all of the R environment variables (fetched with Sys.getenv()) are the same, except that there's some additional classpath stuff set by Oozie.
I can post more of the OS or Hadoop versions and config details, but sleuthing some version-specific bugs seems like a bit of a red herring as everything runs fine at the command line.
Anybody have any thoughts what might be some helpful next steps in hunting this beast down?
UPDATE:
I overrode the system function in the base package to log the user, the host name of the node, and the command being executed before the internal call to system. So before any system call is actually executed, I get something like the following on stderr:
user@host.name
/usr/bin/hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.2.0.2.0.6.0-102.jar ...
When run with Oozie, the command printed to stderr fails with an exit status of 1. When I run the command as user@host.name, it runs successfully. So essentially the EXACT same command with the SAME user on the SAME node fails under Oozie, but runs successfully from the CLI.
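For reference, the override described in the update can be sketched roughly like this (the real code differed in details; rebinding functions in the base namespace is purely a debugging hack):
# rough sketch of the logging wrapper around base::system used for debugging
original_system <- base::system

logged_system <- function(command, ...) {
  # log who/where/what to stderr before delegating to the real system()
  message(sprintf("%s@%s", Sys.info()[["user"]], Sys.info()[["nodename"]]))
  message(command)
  original_system(command, ...)
}

# rebind system in the base namespace so package-internal calls (e.g. from rmr2) hit the wrapper
base_ns <- asNamespace("base")
unlockBinding("system", base_ns)
assign("system", logged_system, envir = base_ns)
lockBinding("system", base_ns)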