How to chain together two calls to dirname in qmake?

Given the following in a qmake file:
SOME_PATH = "/path/to/folder/with/file.txt"
message("parent dir: $$dirname(SOME_PATH)")
message("grandparent dir: $$dirname($$dirname(SOME_PATH))")
It prints:
Project MESSAGE: parent dir: /path/to/folder/with
Project MESSAGE: grandparent dir:
How do I chain together two calls to dirname without needing to use an intermediate variable? Other replacement functions seem to work when chained, but this one does not.
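For reference, here is a minimal sketch of the intermediate-variable workaround the question is trying to avoid (implied by the question to work, following the same call pattern as the parent-dir example):
SOME_PATH = "/path/to/folder/with/file.txt"
PARENT_DIR = $$dirname(SOME_PATH)
message("grandparent dir: $$dirname(PARENT_DIR)")  # /path/to/folder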

Related

Snakemake wildcards: Using wildcarded files from directory output

I'm new to Snakemake and am trying to use specific files in a rule, taken from the directory() output of another rule that clones a git repo.
Currently, this gives me the error Wildcards in input files cannot be determined from output files: 'json_file', and I don't understand why. I have previously worked through the tutorial at https://carpentries-incubator.github.io/workflows-snakemake/index.html.
The difference between my workflow and the tutorial workflow is that I want to create the data I use later in the first step, whereas in the tutorial the data was already there.
Workflow description in plain text:
Clone a git repository to path {path}
Run a script {script} on every single JSON file in the directory {path}/parsed/ in parallel to produce the aggregate result {result}
GIT_PATH = config['git_local_path'] # git/
PARSED_JSON_PATH = f'{GIT_PATH}parsed/'
GIT_URL = config['git_url']

# A single parsed JSON file
PARSED_JSON_FILE = f'{PARSED_JSON_PATH}{{json_file}}.json'
# Build a list of parsed JSON file names
PARSED_JSON_FILE_NAMES = glob_wildcards(PARSED_JSON_FILE).json_file
# All parsed JSON files
ALL_PARSED_JSONS = expand(PARSED_JSON_FILE, json_file=PARSED_JSON_FILE_NAMES)

rule all:
    input: 'result.json'

rule clone_git:
    output: directory(GIT_PATH)
    threads: 1
    conda: f'{ENVS_DIR}git.yml'
    shell: f'git clone --depth 1 {GIT_URL} {{output}}'

rule extract_json:
    input:
        cmd='scripts/extract_json.py',
        json_file=PARSED_JSON_FILE
    output: 'result.json'
    threads: 50
    shell: 'python {input.cmd} {input.json_file} {output}'
Running only clone_git works fine (if I set GIT_PATH as the input of rule all).
Why do I get the error message? Is it because the JSON files don't exist when the workflow is started?
Also - I don't know if this matters - this is a subworkflow used with module.
What you need seems to be a checkpoint rule: the checkpoint is executed first, and only then does Snakemake determine which .json files are present and run your extract/aggregate rules. Here's an adapted example.
I'm struggling to fully understand the file and folder structure you get after cloning your git repo, so I have fallen back to Snakemake's best practice of using resources for downloaded files and results for created files.
You'll need to re-adjust those paths to match your case again:
GIT_PATH = config["git_local_path"] # git/
GIT_URL = config["git_url"]
checkpoint clone_git:
output:
git=directory(GIT_PATH),
threads: 1
conda:
f"{ENVS_DIR}git.yml"
shell:
f"git clone --depth 1 {GIT_URL} {{output.git}}"
rule extract_json:
input:
cmd="scripts/extract_json.py",
json_file="resources/{file_name}.json",
output:
"results/parsed_files/{file_name}.json",
shell:
"python {input.cmd} {input.json_file} {output}"
def get_all_json_file_names(wildcards):
git_dir = checkpoints.clone_git.get(**wildcards).output["git"]
file_names = glob_wildcards(
"resources/{file_name}.json"
).file_name
return expand(
"results/parsed_files/{file_name}.json",
file_name=file_names,
)
# Rule has checkpoint dependency: Only after the checkpoint is executed
# the rule is executed which then evaluates the function to determine all
# json files downloaded from the git repo
rule aggregate:
input:
get_all_json_file_names
output:
"result.json",
default_target: True
shell:
# TODO: Action which combines all JSON files
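For the TODO, one possibility (my assumption, not part of the original answer; it requires jq and yields a single JSON array as the result) would be:
    shell:
        "jq -s '.' {input} > {output}"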
edit: Moved the expand(...) from rule aggregate into get_all_json_file_names.

Add an extension in a .pro variable

I'm trying to print a message with QMake but I have problems with extensions:
lib_name = $$1
message("test1: $$MYPATH/$$lib_name/src/$$lib_name.pri");
message("test2: $$MYPATH/$$lib_name/src/$$lib_name");
For some reason, test1 doesn't print the correct path: it only prints the path up to src/. But test2 is OK; it prints everything up to and including the value in $$1.
Any workaround?
QMake supports variables (objects) with members that can be accessed using the dot (.) operator, e.g. target.path for INSTALLS. So, in your case, $$lib_name.pri means that you're accessing the member pri of lib_name, which doesn't exist, so nothing is printed for it.
You need to enclose the variable in curly braces for QMake to distinguish it from the surrounding text, i.e. $${lib_name}.pri.
Example:
message("test1: $$MYPATH/$$lib_name/src/$${lib_name}.pri");
# ~~~~~~~~~~~~
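A minimal standalone sketch (with hypothetical values, since $$1 comes from the enclosing function in the question) showing the difference:
MYPATH = /opt/libs
lib_name = mylib
message("without braces: $$MYPATH/$$lib_name/src/$$lib_name.pri")   # stops after src/
message("with braces: $$MYPATH/$$lib_name/src/$${lib_name}.pri")    # /opt/libs/mylib/src/mylib.pri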
For more examples of objects, see Adding Custom Target and Adding Compilers sections of QMake's Advanced Usage page.
Here's another relevant SO thread: QMake - How to add and use a variable into the .pro file

How to force robot framework to pick robot files in sequential order?

I have robot files in a folder (tests) as shown below:
tests
1_robotfile1.robot
2_robotfile2.robot
3_robotfile3.robot
4_robotfile4.robot
5_robotfile5.robot
6_robotfile6.robot
7_robotfile7.robot
8_robotfile8.robot
9_robotfile9.robot
10_robotfile10.robot
11_robotfile11.robot
Now if I execute pybot root/user1/tests (from /root/users1/power), the robot files run in the following order:
tests
1_robotfile1.robot
10_robotfile10.robot
11_robotfile11.robot
2_robotfile2.robot
3_robotfile3.robot
4_robotfile4.robot
5_robotfile5.robot
6_robotfile6.robot
7_robotfile7.robot
8_robotfile8.robot
9_robotfile9.robot
I want to force Robot Framework to pick the robot files up in sequential order, like 1, 2, 3, 4, 5, ...
Is there an option for this?
If you have the option of renaming your files, you just need to make sure that the prefix sorts as text. For numbers, that means they should all have the same number of digits.
I recommend renaming your files to use three or four digits for the prefix:
001_robotfile1.robot
002_robotfile2.robot
003_robotfile3.robot
004_robotfile4.robot
005_robotfile5.robot
006_robotfile6.robot
007_robotfile7.robot
008_robotfile8.robot
009_robotfile9.robot
010_robotfile10.robot
011_robotfile11.robot
...
With that, they will sort in the order that you expect.
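If there are many files, a short script can do the renaming. A sketch, assuming the layout from the question (files in tests/ with a "<number>_" prefix):
import re
from pathlib import Path

# Zero-pad the numeric prefix of every .robot file to three digits.
for path in Path("tests").glob("*.robot"):
    m = re.match(r"(\d+)(_.*)", path.name)
    if m:
        path.rename(path.with_name(f"{int(m.group(1)):03d}{m.group(2)}"))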
Following Emna's answer, the RF docs (http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#execution-order) provide some solutions.
So what you could do:
rename all the files to have consecutive, computer-sortable numbering (001-test.robot instead of 1-test.robot). This may break internal references to other files (resources), makes it hard to add a test in between, and is error-prone when the execution order needs to change
tag the tests, as Emna suggests
an idea from the RF docs: write a script that creates an argument file keeping the suites in the proper order, and pass it to the robot execution (see the sketch after this list). Even for 1000+ files this should not take more than a few seconds.
design the tests not to depend on execution order, and use a suite setup instead.
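A sketch of such a generator script ("tests/" and the "<number>_" prefix are assumptions taken from the question):
import re
from pathlib import Path

# List the suites in numeric order and write them to an argument file.
suites = sorted(
    Path("tests").glob("*.robot"),
    key=lambda p: int(re.match(r"\d+", p.name).group()),
)
Path("order.args").write_text("\n".join(str(p) for p in suites) + "\n")
Then run it with: pybot --argumentfile order.args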
good luck ;)
Tag the tests as foo and bar so you can run each set separately:
pybot -i foo tests
or
pybot -i bar tests
and decide the order you want (the second run starts after the first finishes, regardless of its result):
pybot -i bar tests; pybot -i foo tests

Compile LESS files with source maps

How can I compile a LESS file to output a source map file (.css.map) in addition to a CSS file? Is there a way to do it on both command line (NodeJS's lessc) and on any GUI-based programs?
Update: New shortest answer
The docs have been updated! As new features hit LESS, the docs sometimes lag behind a bit, so if you're looking for bleeding-edge features, you're still probably better off running lessc (see the longer answer) and checking what pops out of the help text.
http://lesscss.org/usage/
Short answer
You're looking for any number of the following options from the command line:
--source-map[=FILENAME] Outputs a v3 sourcemap to the filename (or output filename.map)
--source-map-rootpath=X adds this path onto the sourcemap filename and less file paths
--source-map-basepath=X Sets sourcemap base path, defaults to current working directory.
--source-map-less-inline puts the less files into the map instead of referencing them
--source-map-map-inline puts the map (and any less files) into the output css file
--source-map-url=URL the complete url and filename put in the less file
As I write this, I'm not aware of any GUI options that generate maps (source maps were only added to LESS in the last few months) -- sorry to not have better news. I'm sure they'll add support as they update over the next year.
Longer answer
If you run lessc from the command line without any parameters, it will print all the options, with all the most recent map flags included. (In my experience, this is more up to date than the documentation, so it'll at least get you pointed in the right direction.)
The easiest combo to use for dev is --source-map-less-inline --source-map-map-inline as that will give you your source maps embedded in your output css.
If you'd like a separate map file, you can use --source-map, which from my.less will output my.css and my.css.map.
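For example (assuming lessc is on your PATH and a file named my.less):
lessc --source-map-less-inline --source-map-map-inline my.less my.css   # map embedded in my.css
lessc --source-map my.less my.css                                       # writes my.css and my.css.map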
For reference: when I run my copy (v 1.6.1 at the moment) I get
usage: lessc [option option=parameter ...] <source> [destination]
If source is set to `-' (dash or hyphen-minus), input is read from stdin.
options:
-h, --help Print help (this message) and exit.
--include-path=PATHS Set include paths. Separated by `:'. Use `;' on Windows.
-M, --depends Output a makefile import dependency list to stdout
--no-color Disable colorized output.
--no-ie-compat Disable IE compatibility checks.
--no-js Disable JavaScript in less files
-l, --lint Syntax check only (lint).
-s, --silent Suppress output of error messages.
--strict-imports Force evaluation of imports.
--insecure Allow imports from insecure https hosts.
-v, --version Print version number and exit.
-x, --compress Compress output by removing some whitespaces.
--clean-css Compress output using clean-css
--clean-option=opt:val Pass an option to clean css, using CLI arguments from
https://github.com/GoalSmashers/clean-css e.g.
--clean-option=--selectors-merge-mode:ie8
and to switch on advanced use --clean-option=--advanced
--source-map[=FILENAME] Outputs a v3 sourcemap to the filename (or output filename.map)
--source-map-rootpath=X adds this path onto the sourcemap filename and less file paths
--source-map-basepath=X Sets sourcemap base path, defaults to current working directory.
--source-map-less-inline puts the less files into the map instead of referencing them
--source-map-map-inline puts the map (and any less files) into the output css file
--source-map-url=URL the complete url and filename put in the less file
-rp, --rootpath=URL Set rootpath for url rewriting in relative imports and urls.
Works with or without the relative-urls option.
-ru, --relative-urls re-write relative urls to the base less file.
-sm=on|off Turn on or off strict math, where in strict mode, math
--strict-math=on|off requires brackets. This option may default to on and then
be removed in the future.
-su=on|off Allow mixed units, e.g. 1px+1em or 1px*1px which have units
--strict-units=on|off that cannot be represented.
--global-var='VAR=VALUE' Defines a variable that can be referenced by the file.
--modify-var='VAR=VALUE' Modifies a variable already declared in the file.
-------------------------- Deprecated ----------------
-O0, -O1, -O2 Set the parser's optimization level. The lower
the number, the less nodes it will create in the
tree. This could matter for debugging, or if you
want to access the individual nodes in the tree.
--line-numbers=TYPE Outputs filename and line numbers.
TYPE can be either 'comments', which will output
the debug info within comments, 'mediaquery'
that will output the information within a fake
media query which is compatible with the SASS
format, and 'all' which will do both.
--verbose Be verbose.
If the command line doesn't suit you, Grunt is great at this type of thing. You can configure the grunt-contrib-less plugin to generate inline maps with a config like this:
less: {
    options: {
        sourceMap: true,
        outputSourceFiles: true
    },
    lessFiles: {
        expand: true,
        flatten: false,
        src: ['**/*.less'],
        dest: 'dist/',
        ext: '.css'
    }
},
https://github.com/gruntjs/grunt-contrib-less
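With that in place (and the plugin loaded via grunt.loadNpmTasks('grunt-contrib-less') in your Gruntfile), running grunt less should compile every .less file under src into dist/ with inline source maps.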
Example: create a map and CSS file from a Less file
Install the latest Node.js, then go to a command prompt and run npm install less; Less is now installed.
In the command prompt, move to the folder containing the Less file you want to compile.
For example, I am going to compile HelloWorld [Less file], so in the command prompt I go to C:\Project\CSS (use the correct path for your case in the command below).
Run the following command:
lessc HelloWorld.less HelloWorld.css --source-map=HelloWorld.css.map --verbose
The CSS and map files are now generated in that folder.
For more reference, check the link: royalarun.blogspot.com

How to specify wildcards in a filename for an Amazon EMR job

If I run an EMR job and specify wildcards in the directory path, it all works fine,
e.g.: s3n://mybucket/*/*/*/fileName.gz --- picks all files named fileName.gz under subdirectories of mybucket.
However, when I specify wildcards in the fileName, the EMR logs show an error that no match was found. It seems to treat the '*' character as a literal part of the fileName instead of as a wildcard,
e.g.: s3n://mybucket/Dir1/fileName.*.gz
gives back an error that no matches were found for fileName.*.gz in that directory.
How do we specify wildcards in the filename for an Amazon EMR job?
Just went through this myself. It is very useful to pass NON-globbed wildcard expressions from the start script to spark/pyspark, because the distribution mechanism inside the Spark program can be efficient when presented with something like this; note the globbing at both the directory level and the filename level:
df = spark.read.json('s3://my-bucket/archive/*/2014/7/G.*.json.bz2')
Not to mention, of course, that almost all the time you want globbing to occur on the remote resource, not in your local launch environment.
The trick is to ensure that the initial shell variable does not get globbed when created, and is also protected when presented to aws emr add-steps. Here is a simple launch script that assumes a cluster has been created. To show it can be done, the newlines are also escaped to make the args easier to see. Be careful, however, NOT to re-introduce extra whitespace when doing this!
# Use single quotes to stop globbing at the var level:
DATA_URI='s3://my-bucket/archive/*/2014/7/G.*.json.bz2'

# DO NOT add a trailing slash to the output_uri. S3 will
# automatically create subdirs under it, e.g.
#   --output_uri s3://$SRC_BUCKET/V4_t
# will be created and populated with many part-0000-... files.
# If you are not renaming or deleting the output_uri for each run,
# make sure your spark program uses overwrite mode for dataframe output, e.g.
#   dfx.write.mode("overwrite").json(output_uri)

# Careful to protect the DATA_URI arg by wrapping it in single quotes:
aws emr add-steps \
    --cluster-id j-3CDMYEF3NJGHR \
    --steps Type=Spark,\
Name="myAnalytics",\
ActionOnFailure=CONTINUE,\
Args=[\
s3://$SRC_BUCKET/blunders.py,\
--game_data,\'$DATA_URI\',\
--output_uri,s3://$SRC_BUCKET/V4_t]
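As I understand it (my reading, not spelled out above): the \' escapes become literal quote characters inside the step's Args, and because the expanded pattern matches nothing on the local filesystem, the shell passes the glob through untouched; the globbing then happens on the S3 side when Spark reads the path.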
