Is it possible to run a SaltStack command that, say, looks to see if a process is running on a machine, and aggregate the results of running that command on multiple minions?
Essentially, I'd like to see all the results that are returned from the minions displayed in something like an ASCII table. Is it possible to have an uber-result formatter that waits for all the results to come back, then applies the format? Perhaps there's another approach?
If you want to do this entirely within Salt, I would recommend creating an "outputter" that displays the data how you want.
A "highstate" outputter was recently created that might give you a good starting point. The highstate outputter creates a small summary table of the returned data. It can be found here:
https://github.com/saltstack/salt/blob/develop/salt/output/highstate.py
I'd recommend perusing the code of the other outputters as well.
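For illustration, here is a minimal sketch of what such an outputter could look like (proc_table is a hypothetical name; see the Salt docs on custom outputter modules for where to put it and how to sync it). It assumes Salt hands the outputter the aggregated {minion_id: return_value} dict:

# proc_table.py -- hypothetical custom outputter sketch that renders
# the {minion_id: return_value} dict as a simple two-column ASCII table.
def output(data, **kwargs):
    if not data:
        return ''
    width = max(len(minion) for minion in data)
    lines = []
    for minion in sorted(data):
        # Collapse multi-line returns so each minion stays on one row.
        value = ' '.join(str(data[minion]).split())
        lines.append('| %s | %s |' % (minion.ljust(width), value))
    return '\n'.join(lines)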
If you want to use another tool to create this report, I would recommend adding "--out json" to your command at the CLI. This will cause Salt to return the data in JSON format, which you can then pipe to another application for processing.
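For example, something along these lines (an untested sketch; it assumes jq is installed and uses Salt's --static flag so the CLI prints one combined JSON object rather than one object per returning minion):
salt '*' cmd.run 'pgrep -x nginx >/dev/null && echo running || echo stopped' --out json --static | jq -r 'to_entries[] | [.key, .value] | @tsv'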
This was asked a long time ago, but I stumbled across it more than once, and I thought another approach might be useful: the survey Salt runner:
$ salt-run survey.hash '*' cmd.run 'dpkg -l python-django'
|_
  ----------
  pool:
      - machine2
      - machine4
      - machine5
  result:
      dpkg-query: no packages found matching python-django
|_
  ----------
  pool:
      - machine1
      - machine3
  result:
      Desired=Unknown/Install/Remove/Purge/Hold
      | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
      |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
      ||/ Name           Version      Architecture Description
      +++-==============-============-============-=================================
      ii  python-django  1.4.22-1+deb all          High-level Python web development
As input, consider a DB dump (from DBeaver) having this format:
{
  "select": [
    {<row1>},
    {<row2>}
  ],
  "select": {}
}
Say I'm debugging a bigger script and just want to see the first few rows from the first statement. How can I do that efficiently in a rather huge file?
The filter
jq 'keys[0] as $k|.[$k]|limit(1;.[])' dump
isn't really great, as it needs to fetch all the keys first. The filter
jq '.[0]|limit(1;.[])' dump
sadly does not seem to be a valid one, and
jq 'first(.[])|limit(1;.[])' dump
does not seem to have any performance benefit.
What would be the best way to access just the first field in the object, without actually testing its name or caring about the rest of the fields?
One strategy would be to use the --stream command-line option. It's a bit tricky to use, but if you want to use jq or gojq, it's the way to go for a space-time efficient solution for a large input.
Far easier to use would be my jm script, which is intended precisely to achieve the kind of objective you describe. In particular, please note its --limit option. E.g. you could start with:
jm -s --limit 1
See
https://github.com/pkoppstein/jm
See also: How to read a 100+GB file with jq without running out of memory
Given that weird object with identical keys, you can use the --stream option to access all items before the JSON processor eliminates the duplicates, fromstream and truncate_stream to dissect the input, and limit to reduce the output to just a few items:
jq --stream -cn 'limit(5; fromstream(2|truncate_stream(inputs)))' dump.json
{<row1>}
{<row2>}
{<row3>}
{<row4>}
{<row5>}
I have the following issue:
I use SaltStack to manage my minions, which run in different datacenters. But the package repositories are not consistent: not all of them have the latest version of Salt. With SaltStack I can of course work around that, so I added this to the top.sls:
'not G@saltversion:3003.1':
  - match: compound
  - fixes.saltversion
But I don't like that up there. I've tried several variants, but couldn't manage to select minions whose grain is less than a specific version. In this case: select all minions running a version older than 3003.1, to apply a state to them that gets the package directly from a different repo.
How do I select "less than" of a Grain?
I've googled around already and didn't find anything matching my case. The docs are also not helpful here. I've read about custom matchers: but do I really need to implement a custom matcher for that?
Thanks in advance for your help everyone
Have a look at the following grain: saltversioninfo.
This grain is a list: [ majorversion, patchversion ].
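To see what the grain looks like on your minions first, you can run, e.g.:
salt '*' grains.get saltversioninfo
which should return something like [3003, 1] on a 3003.1 minion.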
You can target minions with releases later than Fluorine (2019.2.0) like this:
'P@saltversioninfo:0:\b(?:3[0-9]{2}[0-2])\b or ( G@saltversioninfo:0:3003 and G@saltversioninfo:1:0 )':
  - match: compound
  - fixes.saltversion
This compound match will target minions with releases between 3000 and 3003.0.
This is a bit static, and you will need to modify it after a new release. But I hope this will help you.
EDIT:
The matcher above is untested; I don't have minions with older versions. You should test the matcher first with the following salt command:
salt -C 'P@saltversioninfo:0:\b(?:3[0-9]{2}[0-2])\b or ( G@saltversioninfo:0:3003 and G@saltversioninfo:1:0 )' test.ping
I am attempting to write a tool that will automate the generation of a Visual Studio test playlist based on the failed tests from the SpecFlow report. We recently increased our testThreadCount to 4, and when using the LivingDocumentation plugin to generate the TestExecution.json file, it only generates a result for 1 in 4 tests; I think this is because, with that thread count, 4 tests are being seen as a single execution.
My aim is to generate a fully qualified test name for each of the failed tests using the TestExecution file, but that will not work if I am only generating 25% of the results. Could I ask if anyone has an idea of a workaround for this?
<Execution stopAfterFailures="0" testThreadCount="4" testSchedulingMode="Sequential" retryFor="Failing" retryCount="0" />
This is our current execution settings in the .srprofile
We made this possible with the latest version of SpecFlow and the SpecFlow+ LivingDoc Plugin.
You can configure the filename for the TestExecution.json via specflow.json.
Here is an example:
{
  "livingDocGenerator": {
    "enabled": true,
    "filePath": "TestExecution_{ProcessId}_{ThreadId}.json"
  }
}
ProcessId and ThreadId will be replaced with actual values, so you get a separate TestExecution.json for every thread.
You can then give the livingdoc CLI tool or the Azure DevOps task a list of TestExecution.jsons.
Example:
livingdoc test-assembly BookShop.AcceptanceTests.dll -t TestExecution*.json
This generates one LivingDoc with all the test execution results combined.
Documentation links:
https://docs.specflow.org/projects/specflow-livingdoc/en/latest/LivingDocGenerator/Setup-the-LivingDocPlugin.html
https://docs.specflow.org/projects/specflow-livingdoc/en/latest/Guides/Merging-Multiple-test-results.html
I have a Symfony2 project with Behat used for BDD tests.
Most of the tests are tagged, like:
@database @user_management @admin
Scenario: Attempt .....
....
....
@product @admin
Scenario: Login ....
....
....
I would like to be able to list all scenarios tagged with a specific tag before running the entire test suite. Is that possible?
I mean, I could write a small script which analyzes all the feature files, but I hope there is some Behat magic switch/flag, already implemented but not documented, which does what I need.
I'm not personally aware of a built-in way, partly because it depends on what you expect as an output.
One way to get something similar would be to dry-run the scenarios with the pretty format, so the full scenarios are displayed, and then grep for 'scenario' to get the names/summaries:
behat --format=pretty --tags '@domains' --dry-run | grep -i scenario
I would like to move all my output files to a custom location: a Run directory created based on the date and time at run time. The output folder named by datetime is created in the test setup.
I have a function "Process_Output_files" which will move the files to the Run folder (Run1, Run2, Run3 folders).
I have tried using the -d argument and using the function "Process_Output_files" as suite teardown to move the output files to the respective Run directory.
But I get the following error: "The process cannot access the file because it is being used by another process". I know this is because Robot Framework (RIDE) is currently using them.
If I don't use the -d argument, the output files are saved in temp folders:
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\output.xml
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\log.html
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\report.html
My question is: is there a way to move the files to a custom location at run time, within Robot Framework?
You can use the following syntax in RIDE (Arguments:) to create the output in new folders dynamically:
--outputdir C:/AutomationLogs/%date:~-4,4%%date:~-10,2%%date:~-7,2% --timestampoutputs
The above syntax gives you the output in below folder:
Output: C:\AutomationLogs\20151125\output-20151125-155017.xml
Log: C:\AutomationLogs\20151125\log-20151125-155017.html
Report: C:\AutomationLogs\20151125\report-20151125-155017.html
Hope this helps :)
I understand the end result you want is to have your output files in their custom folders. If this is your desire, it can be accomplished at runtime and you won't have to move them as part of your post processing. This will not work in RIDE, unfortunately, since the folder structure is created dynamically. I have two options for you.
Option 1: Use a script to kick off your tests
RIDE is awesome, but in my humble opinion, one shouldn't be using it to run one's tests, only to build and debug them. Scripts are far more powerful and flexible.
Assuming you have a test, test2.txt, that you wish to run, the script could be something like:
from time import gmtime, strftime
import os

# strftime returns a string representation of a date-time tuple;
# gmtime returns the date-time tuple for Greenwich Mean Time.
dts = strftime("%Y.%m.%d.%H.%M.%S", gmtime())
cmd = "pybot -d Run%s test2" % (dts,)
os.system(cmd)
As an aside, if you do intend to do post processing of your files using rebot, be aware you may not need to create intermediate log and report files. The output.xml files contain everything you need, so if you don't want to create superfluous files, use --log NONE --report NONE
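Continuing the sketch above, that could look like this (rebot is only needed later, when you actually want the log and report):

# Run without generating log.html/report.html; output.xml is enough.
cmd = "pybot --log NONE --report NONE -d Run%s test2" % (dts,)
os.system(cmd)
# Later, rebuild the log and report from the saved output.xml:
os.system("rebot --outputdir Run%s Run%s/output.xml" % (dts, dts))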
Option 2: Use a listener to do post processing
A listener is a program you write that responds to events (start_test, end_test, etc.). The close() event is akin to the teardown function and is the last thing called. So, assuming you have a function moveFiles(), you simply need to create a listener class (myListener), define the close() method to call your moveFiles() function, and tell your test run to report to the listener with the argument --listener myListener.
This option should be compatible with RIDE though I admit I have never tried to use listeners with the IDE.
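Here is a minimal sketch of such a listener (untested; the myListener name and the Run directory naming mirror the examples above):

# myListener.py -- Robot Framework instantiates the class because it
# shares the module's name; take it into use with --listener myListener.py
import os
import shutil
from time import gmtime, strftime

class myListener:
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.paths = []

    # These methods fire when the respective result files are ready.
    def output_file(self, path):
        self.paths.append(path)

    def log_file(self, path):
        self.paths.append(path)

    def report_file(self, path):
        self.paths.append(path)

    # close() runs at the very end, when the files are no longer held
    # open, so they can be moved safely (this is the moveFiles() idea).
    def close(self):
        run_dir = "Run" + strftime("%Y.%m.%d.%H.%M.%S", gmtime())
        os.makedirs(run_dir)
        for path in self.paths:
            shutil.move(path, run_dir)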
At the very least, you can write a custom run script that handles moving the files after the test execution. At that point the files are no longer in use by pybot.