How to make Salt execution fail if a file exists - salt-stack

Let's say in the Salt call below, I want Salt to stop executing any further states if a file exists.
In the example below, it should execute run-aerospike-mem_disk-check, but once check_bad_conf detects that the file exists, it should not execute run-aerospike-config and should show up RED (failed) in the Salt output. How can I achieve this?
run-aerospike-mem_disk-check:
  cmd.wait:
    - name: /var/local/aero_dmcheck.sh
    - watch:
      - file: /var/local/aero_config

check_bad_conf:
  file.exists:
    - name: /tmp/badconf
    - failhard: True

run-aerospike-config:
  cmd.wait:
    - name: /var/local/aero_config.pl
    - watch:
      - file: /var/local/aero_config.yml

Please make this clearer.
Why are you so set on getting RED?
Do you want disk-check to run only if the bad file exists?
And if so, aerospike-config must not run?
Or should it run once, regardless of whether the file still exists?
cmd.wait accepts an onlyif argument in which you can run a command to assert file absence/presence/anything (it is exit-code driven).
However, you need to know that if onlyif is not satisfied, the state turns GREEN, not RED.
There is also the creates argument, which skips the command when the given file already exists.
Refer to the cmd state manual.
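For illustration, a guard along those lines, using unless (the complement of onlyif) with the paths from the question, could look roughly like this:

run-aerospike-config:
  cmd.wait:
    - name: /var/local/aero_config.pl
    - unless: test -f /tmp/badconf
    - watch:
      - file: /var/local/aero_config.yml

Again, when the guard is not satisfied the state is simply skipped and reported GREEN, not RED.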
As you probably want aerospike-config to run only if the disk-check was not run, maybe it is sufficient to use file.missing like so:
check:
  file.missing:
    - name: /tmp/badconf

run-aerospike-config:
  cmd.wait:
    - name: /var/local/aero_config.pl
    - watch:
      - file: /var/local/aero_config.yml
    - require:
      - file: check
Read more about Salt requisites here
If you want your run-aerospike-mem_disk-check script to run exactly once, why don't you use cmd.script and add the stateful argument to prohibit further executions?

I have actually never really used it, but you can use some kind of "if statement". Check this out:
Check file exists and create a symlink
I think in your case file.exists or file.absent would fit.
Hope that helps.

Related

Reusing salt state snippets

In my Salt state files I have several occurrences of a pattern which consists of defining a remote repository and importing its GPG key, e.g.
import_packman_gpg_key:
  cmd.run:
    - name: rpm --import http://packman.inode.at/gpg-pubkey-1abd1afb.asc
    - unless: rpm -q gpg-pubkey-1abd1afb-54176598

packman-essentials:
  pkgrepo.managed:
    - baseurl: http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/Essentials/
    - humanname: Packman (Essentials)
    - refresh: 1
    - require:
      - cmd: import_packman_gpg_key
I would like to abstract these away as a different state, e.g.
packman-essentials:
  repo_with_key.managed:
    - gpg_key_id: 1abd1afb-54176598
    - gpg_key_src: http://packman.inode.at/gpg-pubkey-1abd1afb.asc
    - repo_url: http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/Essentials/
    - repo_name: Packman (Essentials)
which would in turn expand to the initial declarations above. I've looked into custom Salt states (see https://docs.saltstack.com/en/latest/ref/states/writing.html#example-state-module), but I only found references on how to create one using Python. I'm looking for one based only on state definitions, as writing code for my specific problem seems like overkill.
How can I create a custom state which reuses the template I've been using to manage package repositories?
This is what macros are for.
Here is an example of simple macros for some constructs I use heavily.
However, in your example, why do you use cmd.run to import the key?
pkgrepo.managed seems to support a gpgkey option to download the key.
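For illustration, a rough sketch of such a macro applied to your repository pattern could look like this (the file name, macro name and import path are made up, not an existing Salt state module):

{# repo_with_key.sls -- illustrative macro, not a built-in state #}
{% macro repo_with_key(name, key_id, key_src, repo_url, repo_name) %}
import_{{ name }}_gpg_key:
  cmd.run:
    - name: rpm --import {{ key_src }}
    - unless: rpm -q gpg-pubkey-{{ key_id }}

{{ name }}:
  pkgrepo.managed:
    - baseurl: {{ repo_url }}
    - humanname: {{ repo_name }}
    - refresh: 1
    - require:
      - cmd: import_{{ name }}_gpg_key
{% endmacro %}

{# usage in another state file #}
{% from "repo_with_key.sls" import repo_with_key %}
{{ repo_with_key("packman-essentials",
                 "1abd1afb-54176598",
                 "http://packman.inode.at/gpg-pubkey-1abd1afb.asc",
                 "http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/Essentials/",
                 "Packman (Essentials)") }}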

Use SQLite with biicode

So far I have been able to successfully use Boost, cereal and gtest with biicode, but I am having trouble with SQLite. I am trying to use it by doing the following:
#include <sqlite3.h>
So I edited my biicode.conf to include these lines, including the aliasing for the header:
[requirements]
sqlite/sqlite:9
[includes]
sqlite.h: sqlite/sqlite/sqlite3/sqlite3.h
But when I call bii cpp:build, it does the following:
WARN: Removing unused reference to "sqlite/sqlite: 9" from myuser/test "requirements"
Then I ended up with the expected:
database_impl.cpp:(.text+0x516): undefined reference to `sqlite3_exec'
Surprisingly, the compilation succeeds even though sqlite3.h is obviously not included, but that may be because the call to SQLite is from a template function.
I have looked at the example, but the CMakeLists.txt does not seem to add any additional include directories. For example, for Boost I had to add:
SET(Boost_USE_STATIC_LIBS OFF)
bii_find_boost(COMPONENTS chrono system filesystem log thread REQUIRED)
target_include_directories(${BII_BLOCK_TARGET} INTERFACE ${Boost_INCLUDE_DIRS})
target_link_libraries(${BII_BLOCK_TARGET} INTERFACE ${Boost_LIBRARIES})
But the two examples I found here and here don't seem to add anything to the include directories, not even a link folder. I suppose SQLite has to be compiled with your sources, so how do I make biicode add those files to my project automatically?
There are several problems. In the [includes] section you wrote sqlite.h instead of sqlite3.h, and you should only write the prefix, sqlite/sqlite/sqlite3, rather than the full dependency path.
You can solve it like this:
[requirements]
sqlite/sqlite: 9
[includes]
sqlite3.h: sqlite/sqlite/sqlite3
Or, you could try the SQLite version uploaded by the fenix user:
[requirements]
fenix/sqlite: 0
[includes]
sqlite3.h: fenix/sqlite
Don't worry about a "WARN" message saying biicode is ignoring the sqlite3.c file; it's hardcoded into the block's CMakeLists.txt to pick up this file ;)
Note: you should write your external #includes with double quotes instead of <>, because the latter refer to system headers, and biicode could end up using some system dependency without you realizing it.
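For example, the include from the question would then be written with quotes, so biicode resolves the dependency itself rather than falling back to a system header:

// quoted include: resolved by biicode, not taken from the system
#include "sqlite3.h"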

grunt updating options in one task so subsequent tasks can use them

I need to run grunt-bump, which bumps the version number in package.json, and then run grunt-xmlpoke to update a config file with the new version number.
So I have tried a couple of things. Inside the grunt.initConfig I run bump, then I run xmlpoke.
1) xmlpoke takes grunt.file.readJSON('package.json').version
or
2) after bump I run a custom task that adds the new version to a grunt option and xmlpoke takes a value of grunt.options("versionNumber")
In both of these versions the XML result is the pre-bump version. So xmlpoke is getting its values before the tasks are run and then uses them when its task is called. But I need it to take the value that is the result of a previous task.
Is there any way to do this?
OK, I have figured out the, somewhat obvious, solution.
grunt-bump updates package.json, and it can also update the config object that is often read into the variable pkg at the beginning of initConfig. So in the setup of the bump task you specify
{
  updateConfigs: ['pkg']
}
Then in the xmlpoke I can do
{ xpath:'myxpath', value:'blablabla/<%=pkg.version%>'}
and this works. What I was doing before was
{ xpath:'myxpath', value:'blablabla/' + grunt.options.versionNumber}
where I had set the versionNumber in a previous task after the bump. Or
{ xpath:'myxpath', value:'blablabla/'+ grunt.file.readJSON('package.json').version}
Neither of those worked. I guess I was just getting too smart for my own good, as <%= %> is the more common and typical way of accessing parameters from within initConfig.
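For reference, a minimal Gruntfile sketch of that setup (the xpath, target names and file paths are illustrative, not from the question):

module.exports = function (grunt) {
  grunt.initConfig({
    // read package.json once; grunt-bump refreshes this object
    pkg: grunt.file.readJSON('package.json'),

    bump: {
      options: {
        // refresh the 'pkg' config object after bumping,
        // so later tasks see the new version
        updateConfigs: ['pkg']
      }
    },

    xmlpoke: {
      version: {
        options: {
          xpath: '/configuration/version',       // illustrative xpath
          value: '<%= pkg.version %>'            // evaluated when the task runs
        },
        files: { 'app.config': 'app.config' }    // illustrative file
      }
    }
  });

  grunt.loadNpmTasks('grunt-bump');
  grunt.loadNpmTasks('grunt-xmlpoke');

  grunt.registerTask('release', ['bump', 'xmlpoke:version']);
};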
Anyway, there you have it. Or I have it.

How to get and set the default output directory in Robot Framework (RIDE) at run time

I would like to move all my output files to a custom location: a Run directory created based on the date and time at run time. The datetime-based output folder is created in the test setup.
I have a function "Process_Output_files" which moves the files to the Run folder (Run1, Run2, Run3 folders).
I have tried using the -d argument and using the function "Process_Output_files" as a suite teardown to move the output files to the respective Run directory.
But I get the following error: "The process cannot access the file because it is being used by another process". I know this is because Robot Framework (RIDE) is currently using the files.
If I don't use the -d argument, the output files get saved in temp folders:
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\output.xml
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\log.html
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\report.html
My question is: is there a way to move the files to a custom location at run time within Robot Framework?
You can use the following syntax in RIDE (Arguments:) to create the output in new folders dynamically:
--outputdir C:/AutomationLogs/%date:~-4,4%%date:~-10,2%%date:~-7,2% --timestampoutputs
The above syntax gives you the output in the folders below:
Output: C:\AutomationLogs\20151125\output-20151125-155017.xml
Log: C:\AutomationLogs\20151125\log-20151125-155017.html
Report: C:\AutomationLogs\20151125\report-20151125-155017.html
Hope this helps :)
I understand that the end result you want is to have your output files in their custom folders. If so, this can be accomplished at runtime and you won't have to move them as part of your post-processing. Unfortunately, this will not work in RIDE, since the folder structure is created dynamically. I have two options for you.
Option 1: Use a script to kick off your tests
RIDE is awesome, but in my humble opinion one shouldn't be using it to run one's tests, only to build and debug them. Scripts are far more powerful and flexible.
Assuming you have a test, test2.txt, that you wish to run, the script you use to do this could be something like:
from time import gmtime, strftime
import os

# strftime returns a string representation of a date-time tuple.
# gmtime returns the date-time tuple representing Greenwich Mean Time.
dts = strftime("%Y.%m.%d.%H.%M.%S", gmtime())
cmd = "pybot -d Run%s test2" % (dts,)
os.system(cmd)
As an aside, if you do intend to do post-processing of your files using rebot, be aware that you may not need to create intermediate log and report files. The output.xml file contains everything you need, so if you don't want to create superfluous files, use --log NONE --report NONE.
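For example, a run followed by a separate rebot step could look roughly like this (directory names are illustrative):

pybot --log NONE --report NONE -d Run1 test2
rebot --outputdir Run1 --log log.html --report report.html Run1/output.xml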
Option 2: Use a listener to do post processing
A listener is a program you write that responds to events (x_start, x_end, etc.). The close() event is akin to a teardown function and is the last thing called. So, assuming you have a function moveFiles(), you simply need to create a listener class (myListener), define the close() method to call your moveFiles() function, and tell your test run that it should report to a listener with the argument --listener myListener.
This option should be compatible with RIDE though I admit I have never tried to use listeners with the IDE.
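A minimal sketch of such a listener; moveFiles() is just a placeholder for whatever actually relocates the output files:

# myListener.py
class myListener(object):
    ROBOT_LISTENER_API_VERSION = 2

    def moveFiles(self):
        # placeholder: move output.xml / log.html / report.html
        # into the desired Run folder here
        pass

    def close(self):
        # close() is the last listener method called, after the
        # whole test execution has finished
        self.moveFiles()

It would then be attached to the run with something like --listener myListener.myListener.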
Alternatively, you can write a custom run script that handles moving the files after the test execution. In that case the files are no longer in use by pybot.

How can I aggregate SaltStack command results?

Is it possible to run a SaltStack command that, say, looks to see if a process is running on a machine, and aggregate the results of running that command on multiple minions?
Essentially, I'd like to see all the results that are returned from the minions displayed in something like an ASCII table. Is it possible to have an uber-result formatter that waits for all the results to come back, then applies the format? Perhaps there's another approach?
If you want to do this entirely within Salt, I would recommend creating an "outputter" that displays the data how you want.
A "highstate" outputter was recently created that might give you a good starting point. The highstate outputter creates a small summary table of the returned data. It can be found here:
https://github.com/saltstack/salt/blob/develop/salt/output/highstate.py
I'd recommend perusing the code of the other outputters as well.
If you want to use another tool to create this report, I would recommend adding "--out json" to your command at the CLI. This will cause Salt to return the data in JSON format, which you can then pipe to another application for processing.
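For example (the target, the command and the downstream tool are only illustrative; any program that reads JSON on stdin would do):

# collect results as JSON and hand them to another program
salt '*' cmd.run 'pgrep -c salt-minion' --out json | python -m json.tool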
This was asked a long time ago, but I stumbled across it more than once, and I thought another approach might be useful – use the survey Salt runner:
$ salt-run survey.hash '*' cmd.run 'dpkg -l python-django'
|_
  ----------
  pool:
      - machine2
      - machine4
      - machine5
  result:
      dpkg-query: no packages found matching python-django
|_
  ----------
  pool:
      - machine1
      - machine3
  result:
      Desired=Unknown/Install/Remove/Purge/Hold
      | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
      |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
      ||/ Name           Version       Architecture  Description
      +++-==============-=============-=============-=================================
      ii  python-django  1.4.22-1+deb  all           High-level Python web development
