Indenting a map from a YAML template in Terraform - dictionary

I am on Terraform 0.14 and I am trying to add labels to my template file, which should generate YAML:
Template File:
labels:
${labels}
Code:
locals {
  labels = merge(
    var.labels,
    map(
      "module", basename(abspath(path.module)),
      "module_version", var.module_version
    )
  )

  prometheus_config = templatefile("${path.module}/prometheus.tmpl", {
    labels = indent(8, yamlencode(local.labels))
  })
}
When I try to add the labels with an indent of 8, the rendered template contains the following, which causes YAML errors:
Error Output:
labels:
"module": "my_module"
        "module_version": "1.0"
As you can see, the module_version line has an indent of 8, which is correct, but the module line is not indented.
I tried many things, like moving ${labels} around at the beginning of the line with different amounts of indentation, but nothing seems to work.

The indent function only adds spaces to the lines after the first one, which is why the first key of the yamlencode result is left unindented. It is for this reason that the templatefile documentation recommends using yamlencode for the entire data structure, rather than trying to concatenate bits of YAML together using just string templates. That way the yamlencode function can guarantee you a correctly-formatted result and you only have to produce a suitable data structure:
In your case, that would involve replacing the template contents with the following:
${yamlencode({
  labels = labels
})}
...and then replacing the prometheus_config definition with the following:
locals {
  prometheus_config = templatefile("${path.module}/prometheus.tmpl", {
    labels = local.labels
  })
}
Notice that the yamlencode call is now inside the template, and it covers the entire YAML document rather than just a fragment of it.
With a simple test configuration I put together with some hard-coded values for the variables you didn't show, I got the following value for local.prometheus_config:
"labels":
"module": "example"
"module_version": "1.0"
If this was a full example of the YAML you are aiming to generate, then I might also consider simplifying further by just inlining the yamlencode call directly inside the local value definition, and not having a separate template file at all:
locals {
  prometheus_config = yamlencode({
    labels = local.labels
  })
}
If the real YAML is much larger or is likely to grow larger later then I'd probably still keep that templatefile call, but I just wanted to note this for completeness, since there's more than one way to get this done.

So, using Terraform 0.14, I was not able to transform lists or maps into YAML with yamlencode. Every option I tried from the suggested answer produced results with the wrong indentation, maybe due to the many indentation levels in the file... I am not sure. So I dropped yamlencode from the solution.
Solution:
I decided to use an inline solution, so for lists I transform them into strings with join, and for maps I use jsonencode:
# var.list is a list in Terraform
# local.labels is a map in Terraform
thanos_config = templatefile("${path.module}/thanos.tmpl", {
  urls   = join(",", var.list),
  labels = jsonencode(local.labels)
})
The resulting plan output is a bit ugly but it works.
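For what it's worth, the jsonencode approach works inside YAML because JSON is a subset of YAML, so the encoded map can sit on a single line and no indentation is needed. A minimal sketch of how the template might consume these values (the thanos.tmpl contents were not shown, so the surrounding keys are assumptions):
# hypothetical fragment of thanos.tmpl
urls: "${urls}"
labels: ${labels}
# labels renders as e.g.: {"module":"my_module","module_version":"1.0"}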


What is the best way to use expand() with one unknown variable in Snakemake?

I am currently using Snakemake for a bioinformatics project. Given a human reference genome (hg19) and a bam file, I want to be able to specify that there will be multiple output files with the same name but different extensions. Here is my code
rule gridss_preprocess:
    input:
        ref=config['ref'],
        bam=config['bamdir'] + "{sample}.dedup.downsampled.bam",
        bai=config['bamdir'] + "{sample}.dedup.downsampled.bam.bai"
    output:
        expand(config['bamdir'] + "{sample}.dedup.downsampled.bam{ext}", ext = config['workreq'], sample = "{sample}")
Currently config['workreq'] is a list of extensions that start with "."
For example, I want to be able to use expand to indicate the following files
S1.dedup.downsampled.bam.cigar_metrics
S1.dedup.downsampled.bam.computesamtags.changes.tsv
S1.dedup.downsampled.bam.coverage.blacklist.bed
S1.dedup.downsampled.bam.idsv_metrics
I want to be able to do this for multiple sample files, S_. Currently I am not getting an error when I try to do a dry run. However, I am not sure if this will run properly.
Am I doing this right?
expand() defines a list of files. If you're using two parameters, the Cartesian product will be used. Thus, your rule will define as output ALL files with your extension list for ALL samples. Since you define a wildcard in your input, I think what you want is all files with your extensions for ONE sample, and this rule will be executed as many times as the number of samples.
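To illustrate the Cartesian product behaviour with a toy example (the sample names and extensions here are made up):
# expand() combines every value of every keyword argument
expand("{sample}.bam{ext}", sample=["S1", "S2"], ext=[".cigar_metrics", ".idsv_metrics"])
# yields the four combinations:
# S1.bam.cigar_metrics, S1.bam.idsv_metrics, S2.bam.cigar_metrics, S2.bam.idsv_metrics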
You're mixing up wildcards and placeholders for the expand() function. You can define a wildcard inside an expand() by doubling the curly braces:
rule all:
    input: expand(config['bamdir'] + "{sample}.dedup.downsampled.bam{ext}", ext = config['workreq'], sample=SAMPLELIST)

rule gridss_preprocess:
    input:
        ref=config['ref'],
        bam=config['bamdir'] + "{sample}.dedup.downsampled.bam",
        bai=config['bamdir'] + "{sample}.dedup.downsampled.bam.bai"
    output:
        expand(config['bamdir'] + "{{sample}}.dedup.downsampled.bam{ext}", ext = config['workreq'])
This expand function will expand into the following list:
{sample}.dedup.downsampled.bam.cigar_metrics
{sample}.dedup.downsampled.bam.computesamtags.changes.tsv
{sample}.dedup.downsampled.bam.coverage.blacklist.bed
{sample}.dedup.downsampled.bam.idsv_metrics
and thus defines the wildcard sample to match the files in the input.
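For completeness, SAMPLELIST is not defined in the snippet above; it would hold the sample names, taken for example from the config or discovered from the BAM files on disk (the config key "samples" below is an assumption):
# hypothetical ways to build SAMPLELIST
SAMPLELIST = config["samples"]                # e.g. ["S1", "S2", "S3"]
# or discover the samples from the existing BAM files:
SAMPLELIST, = glob_wildcards(config['bamdir'] + "{sample}.dedup.downsampled.bam")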

How to append / add layers to geopackages in PyQGIS

For a project I am creating different layers which should all be written into one geopackage.
I am using QGIS 3.16.1 and the Python console inside QGIS which runs on Python 3.7
I tried many things but cannot figure out how to do this. This is what I used so far.
vl = QgsVectorLayer("Point", "points1", "memory")
vl2 = QgsVectorLayer("Point", "points2", "memory")

pr = vl.dataProvider()
pr.addAttributes([QgsField("DayID", QVariant.Int), QgsField("distance", QVariant.Double)])
vl.updateFields()

f = QgsFeature()
for x in range(len(tag_temp)):
    f.setGeometry(QgsGeometry.fromPointXY(QgsPointXY(lon[x], lat[x])))
    f.setAttributes([dayID[x], distance[x]])
    pr.addFeature(f)
vl.updateExtents()

# I'll do the same for vl2 but with other data

uri = "D:/Documents/QGIS/test.gpkg"
options = QgsVectorFileWriter.SaveVectorOptions()
context = QgsProject.instance().transformContext()
QgsVectorFileWriter.writeAsVectorFormatV2(vl, uri, context, options)
QgsVectorFileWriter.writeAsVectorFormatV2(vl2, uri, context, options)
The problem is that in 'test.gpkg' a layer is created called 'test' and not 'points1' or 'points2'.
And the second QgsVectorFileWriter.writeAsVectorFormatV2() also overwrites the output of the first one instead of appending the layer into the existing geopackage.
I also tried to create single GeoPackages and then use the 'Package Layers' processing tool (processing.run("native:package")) to merge all layers into one GeoPackage, but then the attribute types are unfortunately all converted into strings.
Any help is much appreciated. Many thanks in advance.
You need to change the SaveVectorOptions, in particular the actionOnExistingFile mode, after creating the .gpkg file:
options = QgsVectorFileWriter.SaveVectorOptions()
# options.driverName = "GPKG"
options.layerName = vl.name()
QgsVectorFileWriter.writeAsVectorFormatV2(vl, uri, context, options)

# switch mode to append layers instead of overwriting the file
options.actionOnExistingFile = QgsVectorFileWriter.CreateOrOverwriteLayer
options.layerName = vl2.name()
QgsVectorFileWriter.writeAsVectorFormatV2(vl2, uri, context, options)
The documentation is here: SaveVectorOptions
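To check the result, the layers can be loaded back from the GeoPackage by name (a quick sanity check, reusing the uri and layer names from above):
check1 = QgsVectorLayer(uri + "|layername=points1", "points1", "ogr")
check2 = QgsVectorLayer(uri + "|layername=points2", "points2", "ogr")
print(check1.isValid(), check2.isValid())  # both should print True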
Regarding the attempt to create single GeoPackages and then merge them with the 'Package Layers' processing tool (processing.run("native:package")), only to have the attribute types converted into strings: that is definitely the recommended way, so please consider reporting the bug.
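For reference, a call to that algorithm would look roughly like this; a sketch, assuming the 'Package Layers' parameter names of QGIS 3.16 and reusing the layer variables from the question:
import processing
processing.run("native:package", {
    'LAYERS': [vl, vl2],                       # the layers to package together
    'OUTPUT': 'D:/Documents/QGIS/test.gpkg',   # destination GeoPackage
    'OVERWRITE': True,
    'SAVE_STYLES': False
})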

Snakemake: how to use one integer from list each call as input to script?

I'm trying to practice writing workflows in snakemake.
The contents of my Snakefile:
configfile: "config.yaml"

rule get_col:
    input:
        expand("data/{file}.csv", file=config["datname"])
    output:
        expand("output/{file}_col{param}.csv", file=config["datname"], param=config["cols"])
    params:
        col=config["cols"]
    script:
        "scripts/getCols.R"
The contents of config.yaml:
cols:
  [2,4]
datname:
  "GSE3790_expression_data"
My R script:
getCols = function(input, output, col) {
  dat = read.csv(input)
  dat = dat[, col]
  write.csv(dat, output, row.names=F)
}
getCols(snakemake@input[[1]], snakemake@output[[1]], snakemake@params[['col']])
It seems like both columns are being called at once. What I'm trying to accomplish is one column being called from the list per output file.
Since the second output never gets a chance to be created (both columns are used to create the first output), Snakemake throws an error:
Waiting at most 5 seconds for missing files.
MissingOutputException in line 3 of /Users/rebecca/Desktop/snakemake-tutorial/practice/Snakefile:
Job completed successfully, but some output files are missing.
On a slightly unrelated note, I thought I could write the input as:
'"data/{file}.csv"'
But that returns:
WildcardError in line 4 of /Users/rebecca/Desktop/snakemake-tutorial/practice/Snakefile:
Wildcards in input files cannot be determined from output files:
'file'
Any help would be much appreciated!
Looks like you want to run your R script twice per file, once for every value of col. In this case, the rule needs to be called twice as well.
The use of expand is also a bit too much here, in my opinion. expand fills your wildcards with all possible values and returns a list of the resulting files. So the output for this rule would be all possible combinations between files and cols, which the simple script cannot create in one run.
This is also the reason why file cannot be inferred from the output - it gets expanded there.
Instead, try writing your rule for just one file and column, and expand on the resulting output in a rule which needs this output as input. If that is the final output of your workflow, put it as input into a rule all to tell the workflow what the ultimate goal is.
rule all:
    input:
        expand("output/{file}_col{param}.csv",
               file=config["datname"], param=config["cols"])

rule get_col:
    input:
        "data/{file}.csv"
    output:
        "output/{file}_col{param}.csv"
    params:
        col=lambda wc: wc.param
    script:
        "scripts/getCols.R"
Snakemake will infer from rule all (or any other rule that further uses the output) what needs to be done and will call the rule get_col accordingly.
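One small caveat, not part of the original answer: wildcard values are strings, so wc.param reaches the R script as "2" rather than 2, and indexing a data frame with the character "2" selects a column named "2" instead of the second column. Converting in the rule avoids that:
    params:
        col=lambda wc: int(wc.param)   # pass the column index as an integer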

Set CSS for dynamic element CoffeeScript

I am trying to write code that sets CSS for some dynamic elements added on clicking a link.
As per the example code in the CoffeeScript tutorial, it should work with the following code.
temp = temp + 1
$ '.box_' + temp
  .css 'background', 'white'
Here temp is an integer variable.
I tried with static values like
$ '.box_1'
  .css 'background', 'white'
but it compiles to something like this, with a ".css is not a function" error:
$('.box_1'.css('left', 100));
Just add parens to remove ambiguity and save yourself the headache.
temp = temp + 1
$('.box_' + temp)
  .css('background', 'white')
Sexy function calls are optional syntactic sugar, not required. You should not be using language features if doing so makes your code less clear for humans or machines (or in this case, both!)

Cucumber: In which order are the feature tags followed in a Cucumber script?

I am facing an issue where I need to run a script with three features. Let's say we have 3 feature files with the tag names @smoke1, @smoke2 and @smoke3, and I want these to be executed in that sequence.
The issue is that smoke3 is executing first and the rest of them afterwards.
This is my script:
@Cucumber.Options(
    glue = { "com.abc", "cucumber.runtime.java.spring.hooks" },
    features = "classpath:",
    format = { "json", "json:target/cucumber.json" },
    tags = "@smoke1, @smoke2, @smoke3"
)
public class ex_Test extends AbstractTest { }
Warning: This only works in older versions of Cucumber.
Cucumber feature files are executed in alphabetical order by path and filename. The execution order is not based on tags.
However, if you explicitly specify the features, they should be run in the order declared.
For example:
@Cucumber.Options(features={"first_smoke.feature", "another_smoke.feature"})
should run first_smoke and then another_smoke (compared to the default, which is to run them in the other order).
OK, we got it. We can have multiple tags for a single scenario, like this: @tag1 @tag2 @tag3.
You cannot define the order in the way shown below:
@Cucumber.Options(features={"first_smoke.feature", "another_smoke.feature"})
Cucumber only determines the order alphabetically, and even then only by the first letter of the word.
You can have as many tags as you want in a feature file, but if you want to trigger a feature file more than once, it does not work to add the same tag multiple times, or to add more tags from the feature, like:
tags = {"@Reports,@Reports"}
And the tests are triggered in alphabetical order; it checks the tags, not the feature file name.
