I'm trying to understand how Salt orders and prioritizes matched minions in a top.sls file for a Pillar.
I want Salt to prioritize my entries in the Pillar, but I get a seemingly random sort order (not first, not last, not alphabetical as far as I can tell). I have had a look at the order option but would prefer not to use it (if it is even available in Pillars?).
/srv/pillar/top.sls
base:
  '*':
    - users
  'office-london-*':
    - office.general.london
  'office-ny-*':
    - office.general.ny
  'office-*-cust-*':
    - office.cust
  'office-*-cust-ntp*':
    - office.cust-ntp
Minions, and the pillar file I want to take priority for each:
office-london-cust -> office.general.london
office-london-cust-server1 -> office.cust
office-london-cust-ntp-server1 -> office.cust-ntp
office-ny-cust -> office.general.ny
office-ny-cust-server1 -> office.cust
office-ny-cust-ntp-server1 -> office.cust-ntp
Here are some GitHub issues and pull requests I've looked at without figuring this out:
https://github.com/saltstack/salt/pull/1287
https://github.com/saltstack/salt/issues/13657
https://github.com/saltstack/salt/issues/1432
https://github.com/saltstack/salt/issues/14723
The minion will look at each match line in order and add the corresponding sls file to its list of sls files to apply. The minion will then compile that list of sls files into a data structure in the same order they were defined in the top.sls.
Require statements and similar requisites can modify the order of execution.
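To make that concrete, take office-london-cust-ntp-server1 from the list above: it matches '*', 'office-london-*', 'office-*-cust-*' and 'office-*-cust-ntp*', so all four SLS files are compiled for it, in the order they appear in top.sls. A minimal sketch (the ntp_server key and its values are made up here, and this assumes the default pillar merge behaviour, where a source rendered later can override a key set by an earlier one):

office/cust.sls (hypothetical)

ntp_server: pool.ntp.org

office/cust-ntp.sls (hypothetical)

ntp_server: ntp1.cust.example.com

Because 'office-*-cust-ntp*' is listed after 'office-*-cust-*', the minion should end up with the value from office.cust-ntp. You can check what a minion actually receives with:

salt 'office-london-cust-ntp-server1' pillar.item ntp_server
salt 'office-london-cust-ntp-server1' pillar.items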
I have input files:
Bob_1.fastq.gz
Bob_2.fastq.gz
Bob_3.fastq.gz
Bob_4.fastq.gz
Ron_1.fastq.gz
Ron_2.fastq.gz
Ron_3.fastq.gz
Ron_4.fastq.gz
I am running demultiplexing and trimming steps in one snakefile, like this:
workdir: "/path/to/dir/"

(SAMPLES,) = glob_wildcards('/path/to/dir/raw/{sample}.fastq.gz')

rule all:
    input:
        expand("demultiplex/{sample}.fastq.gz", sample=SAMPLES),
        expand("trimmed/{sample}.trimmed.fastq.gz", sample=SAMPLES)

rule sabre:
    input:
        infile="/path/to/dir/raw/{sample}.fastq.gz",
        barcodefile="files/{sample}.txt"
    output:
        unknownfile=temp("demultiplex/unknown_barcode_{sample}.fastq.gz"),
    shell:
        """
        /Tools/sabre-master2/sabre se -f {input.infile} -b {input.barcodefile} -u {output.unknownfile}
        """

rule trimmomatic_se:
    input:
        r="{sample}.fastq.gz"
    output:
        r="trimmed/{sample}.trimmed.fastq.gz"
    threads: 10
    shell:
        """java -jar /Tools/Trimmomatic-0.36/trimmomatic-0.36.jar SE -threads {threads} {input.r} {output.r} ILLUMINACLIP:/Tools/Trimmomatic-0.36/adapters/TruSeq3-SE.fa:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36"""
The demultiplex output files are like this:
Bob_1_CL1.fastq.gz.... Bob_1_CL345.fastq.gz
Bob_2_CL1.fastq.gz.... Bob_1_CL248.fastq.gz
Ron_1_dad1.fastq.gz... Ron_1_dad67.fastq.gz
and so on
So, if I do not specify the demultiplex output files, the program creates them by itself. My problem is how to specify/introduce a new wildcard from the output of the previous rule into the next trimming step, as the wildcards now differ from the initial sample names.
Wildcards just need to be consistent within a rule, not across the whole workflow. The issue here is that you have a rule generating outputs that are not known in advance ('unknown' outputs) which you need to process further. For that you need to use checkpoints.
Read through the second block of code about aggregating in the Snakemake checkpoint documentation (data-dependent conditional execution). Your checkpoint will be the demultiplexing, and if you don't have any other steps, all will be your aggregate step that calls checkpoints.demultiplex.get. If you search for checkpoint on Stack Overflow you will find lots of examples; it's a hard feature to use at first!
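Here is one way that pattern could look for this workflow. This is only a sketch under several assumptions: the checkpoint is named demultiplex and writes all of its per-barcode files (plus the unknown-barcode file) into a per-sample directory demultiplex/{sample}, the barcode files are set up so that sabre actually drops its outputs there, and trim_demux is just the question's trimmomatic_se rule adapted to the new file layout; none of these names come from the original Snakefile.

# The set of demultiplexed files is only known after sabre has run,
# so demultiplexing becomes a checkpoint with a directory() output.
checkpoint demultiplex:
    input:
        infile="/path/to/dir/raw/{sample}.fastq.gz",
        barcodefile="files/{sample}.txt"
    output:
        directory("demultiplex/{sample}")
    shell:
        """
        mkdir -p {output}
        /Tools/sabre-master2/sabre se -f {input.infile} -b {input.barcodefile} \
            -u {output}/unknown_barcode_{wildcards.sample}.fastq.gz
        """

# Trimming is keyed on the demultiplexed file name ("unit"), not on the raw sample name.
rule trim_demux:
    input:
        "demultiplex/{sample}/{unit}.fastq.gz"
    output:
        "trimmed/{sample}/{unit}.trimmed.fastq.gz"
    threads: 10
    shell:
        "java -jar /Tools/Trimmomatic-0.36/trimmomatic-0.36.jar SE -threads {threads} "
        "{input} {output} ILLUMINACLIP:/Tools/Trimmomatic-0.36/adapters/TruSeq3-SE.fa:2:30:10 "
        "LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36"

def trimmed_files(wildcards):
    # Wait for the checkpoint, then list what it actually produced.
    ckpt_dir = checkpoints.demultiplex.get(sample=wildcards.sample).output[0]
    (units,) = glob_wildcards(ckpt_dir + "/{unit}.fastq.gz")
    units = [u for u in units if not u.startswith("unknown_barcode")]
    return expand("trimmed/{sample}/{unit}.trimmed.fastq.gz",
                  sample=wildcards.sample, unit=units)

# The aggregate rule requests every trimmed file the checkpoint made possible.
rule aggregate:
    input:
        trimmed_files
    output:
        touch("aggregated/{sample}.done")

rule all would then request expand("aggregated/{sample}.done", sample=SAMPLES) instead of listing the demultiplexed and trimmed files directly.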
I am trying to work through the Salt Formulas documentation and seem to be having a fundamental misunderstanding of what a salt formula really is.
Understandably, this question may seem like a duplicate of these questions, but because I am failing to grasp the basic concepts, I am also struggling to make use of the answers to those questions.
I thought that a salt formula is basically just a package that implements extra functions, a lot like
#include <string.h>
in C, or
import numpy as np
in Python. Thus, I thought, I could download the salt-formula-linux to /srv/formulas/salt-formula-linux/, add that to file_roots, restart the master (all as per the docs), and then write a file like swapoff.sls containing
disable_swap:
  linux:
    storage:
      swap:
        file:
          enabled: False
(the above is somewhat similar to the examples in the repo's root) in the hope that the formula would then handle removing the swap entry from /etc/fstab and running swapoff -a for me. Needless to say, this didn't work, clearly because I'm not understanding what a salt formula is meant to be.
So, what is a salt formula and how do I use it? Can I make use of it as a library of functions too?
This answer might not be fully correct in all technicalities, but this is what solved my problem.
A salt formula is not a library of functions. It is, rather, a collection of state files. While a state file can often be very simple, such as some of my user-defined ones:
--> top.sls <--

base:
  '*':
    - docker

--> docker.sls <--

install_docker_1703:
  pkgrepo.managed:
    # stuff
  pkg.installed:
    - name: docker-ce
creating a state file like
--> swapoff.sls <--

disable_swap:
  linux.storage.swap: # and so on
is, perhaps, not the way to go. Well, at least, maybe not for a beginner who is still lacking knowledge.
Instead, add an item to top.sls:
- linux.storage.swap
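In context, the top file from before would then look something like this (the docker entry is just the earlier example):

--> top.sls <--

base:
  '*':
    - docker
    - linux.storage.swap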
This is not enough, however. Most formulas (or the state files within them, if you will) are highly parametrizable, i.e. they're full of placeholders with variable names, such as {{ swap.device }}. If there's nothing to fill this gap, the state file will not be able to do anything. These gaps are filled from pillars.
All that remains is to create a file like swap.sls in /srv/pillar/ that would contain something like this (as per the examples of that formula):
linux:
  storage:
    enabled: true
    swap:
      file:
        enabled: true
        engine: file
        device: /swapfile
        size: 1024
and also /srv/pillar/top.sls with
base:
  '*':
    - swap
Perhaps /srv/pillar should also be included in pillar_roots in /etc/salt/master.
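For reference, a sketch of the relevant bits of /etc/salt/master, assuming the paths used above (the master needs a restart after changing them):

file_roots:
  base:
    - /srv/salt
    - /srv/formulas/salt-formula-linux

pillar_roots:
  base:
    - /srv/pillar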
So now /srv/salt/top.sls runs /srv/formulas/salt-formula-linux/linux/storage/swap.sls, which, guided by /srv/pillar/top.sls, pulls its parameters from /srv/pillar/swap.sls and enables a swap file.
I have a set of data on which I ran the MultiStorage command on column 'type', and now I have these paths in HDFS: "/output/type1/", "/output/type2/", "/output/type3/", etc.
Now, every day I run a script with a MultiStorage command on column 'type' to produce "/tmp/type1/", "/tmp/type2/", "/tmp/type3/", etc.
(The types here may be fewer than or the same as the types already present in the master output.)
Since Pig doesn't allow me to use an already existing directory as the output path, the script that runs every day writes to /tmp/.
Is there a way to combine /tmp/ with /output/, under the right 'type' subdirectories?
I expect /tmp/type1/file to end up under /output/type1/ as /output/type1/file, and so on. That way I can delete /tmp and run the script again.
Any help is appreciated. Thanks in advance.
Pig cannot manipulate directories beyond invoking fs commands. Mapping the temporary directories onto the final directories requires more than what Pig can do; you can use the FileSystem API in a small Java program and run it separately, or in an Oozie workflow.
In addition, you need to ensure that the files being added have different filenames from the existing ones. This is not the default behaviour, but you can achieve it with:
%declare timestamp `date +"%s"`
SET mapreduce.output.basename '$timestamp'
/* the timestamp makes the output filenames unique */
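If the set of 'type' subdirectories is known up front, the move itself can be done with fs commands from within the Pig script. This is only a sketch; if the types are not known in advance, you are back to needing the FileSystem API or an external script:

/* repeat for each known type */
fs -mkdir -p /output/type1;
fs -mv /tmp/type1/* /output/type1/;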
I've got a few configuration values in my application that are tied to IP, MAC address, server name, etc., and I am trying to come up with a modular state that picks the correct value from my pillar with a pillar.get. I'm not finding any info on how to use grain values in pillar.get.
pillar example:
account:
  qa:
    server1:
      appUsername: 'user2'
      appPassword: 'password2'
    server2:
      appUsername: 'user2'
      appPassword: 'password2'
  prod:
    server3:
      appUsername: 'user3'
      appPassword: 'password3'
    server4:
      appUsername: 'user4'
      appPassword: 'password4'
Lines from my template:
keyUser={{ salt['pillar.get']('account:grains['env']:grains['id']:appUsername', 'default_user') }}
keyPass={{ salt['pillar.get']('account:grains['env']:grains['id']:appPassword', 'default_pass') }}
This just seems so natural, but whatever I try either errors out, or skips the actual grain lookup and gives me the defaults. I also can't find anything on Google. Anybody have a solution? Should I dynamically set the appUsername and appPassword values in the pillar instead? I like the pillar layout as it is, because it's a great, easy-to-read lookup table without a ton of conditional Jinja.
First, you can't just embed grains['env'] into the pillar lookup string; you'll need to concatenate. Second, your Jinja assignment looks wrong. Try this:
{% set keyUser = pillar.get('account:'~grains['env']~':'~grains['id']~':appUsername', 'default_user') %}
~ is the concatenate operator in Jinja.
Also, salt['pillar.get']('blah') is the same as pillar.get('blah').
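Applied to the two template lines from the question, that would look something like this (note that env is not a built-in grain, so this assumes you have already set it on your minions):

keyUser={{ salt['pillar.get']('account:' ~ grains['env'] ~ ':' ~ grains['id'] ~ ':appUsername', 'default_user') }}
keyPass={{ salt['pillar.get']('account:' ~ grains['env'] ~ ':' ~ grains['id'] ~ ':appPassword', 'default_pass') }}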
However! It's difficult to be sure without the actual error and/or the full template.
I am new to Robot Framework and wanted to see if I can get some simple code for a custom report; I am also fine with an answer to my underlying problem. I went through all the questions related to reports but could not find a specific answer. Currently my report contains the log information, and I want to remove it from the report and save the report in a specific location. I just want PASS/FAIL information in my report. Can anyone give me an example of how to do this? I also need to know how I can save my report in a different location. Any example would be helpful. Thank you in advance.
There is a tool called Rebot which is part of Robot Framework.
By default, Robot Framework creates XML reports. The XML reports are automatically converted into HTML reports by Rebot.
You can set the location of the output files in the execution by specifying the parameter --outputdir (and thus set a different base directory for outputs).
From the documentation:
All output files can be set using an absolute path, in which case they are created to the specified place, but in other cases, the path is considered relative to the output directory. The default output directory is the directory where the execution is started from, but it can be altered with the --outputdir (-d) option. The path set with this option is, again, relative to the execution directory, but can naturally be given also as an absolute path. Regardless of how a path to an individual output file is obtained, its parent directory is created automatically, if it does not exist already.
You can call Rebot yourself to control this conversion.
You can also run Rebot after the test run in order to create new output in a different location.
See documentation in:
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#post-processing-outputs
The following example shows how to store the HTML reports in a different location and including only partial data:
rebot --include smoke --name Smoke_Tests c:\results\output.xml --outputdir c:\version1.0\reports
In the example above, we process the file c:\results\output.xml, create a new report called Smoke_Tests that includes only tests with the tag smoke, and save it to the output folder c:\version1.0\reports.
In addition, you can also set the location of the log file (HTML) from the execution.
The command line option --log (-l) determines where log files are created.
The command line option --report (-r) determines where report files are created.
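For the original question (a report with only PASS/FAIL information, no log, stored in a specific place), something along these lines should work; the paths are only examples:

rebot --log NONE --report c:\version1.0\reports\report.html c:\results\output.xml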
Removing log lines can be done a bit differently. If you run rebot --help you'll get the following options:
--removekeywords all|passed|for|wuks|name: * Remove keyword data
from all generated outputs. Keywords containing
warnings are not removed except in `all` mode.
all: remove data from all keywords
passed: remove data only from keywords in passed
test cases and suites
for: remove passed iterations from for loops
wuks: remove all but the last failing keyword
inside `BuiltIn.Wait Until Keyword Succeeds`
name:: remove data from keywords that match
the given pattern. The pattern is matched
against the full name of the keyword (e.g.
'MyLib.Keyword', 'resource.Second Keyword'),
is case, space, and underscore insensitive,
and may contain `*` and `?` as wildcards.
Examples: --removekeywords name:Lib.HugeKw
--removekeywords name:myresource.*
--flattenkeywords for|foritem|name: * Flattens matching keywords
in all generated outputs. Matching keywords get all
log messages from their child keywords and children
are discarded otherwise.
for: flatten for loops fully
foritem: flatten individual for loop iterations
name:: flatten matched keywords using same
matching rules as with
`--removekeywords name:`
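For example, to keep the log but strip the keyword details out of the generated outputs (paths again only illustrative):

rebot --removekeywords all --outputdir c:\version1.0\reports c:\results\output.xml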