I'm trying to switch a project over to meson/ninja, but it takes multiple executions of ninja to complete successfully, and at first glance it appears that the early failures are caused by incomplete build steps that are supposed to precede them. The steps that seem to run out of order usually involve a sed called through custom_target to fix up some auto-generated output during the compile of a library (lib-b) that depends on another one (lib-a). This is essentially what I have for lib-b:
lib_b = shared_library('lib-b-' + apiversion, lib_b_sources,
  link_args: [ '-Wl,--version-script' ],
  vala_header: 'lib-b.h',
  vala_args: lib_b_vala_args,
  vala_vapi: 'lib-b-@0@.vapi'.format(apiversion),
  vala_gir: 'Lib-@0@.gir'.format(apiversion),
  dependencies: lib_b_deps,
  c_args: lib_b_args,
  soversion: soversion,
  install: true,
  install_dir: [ true, true, true, false ],
)
custom_target('LibB-@0@.gir'.format(apiversion),
  command: [ sed,
    '-e', 's|Lib[.]|LibB.|g',
    '-e', 's|namespace name="Lib"|namespace name="LibB"|g',
    '-e', 's|LibB[.]Foo|LibA.Foo|g',
    '-e', 's|<package name="lib-b-@0@"/>|<include name="LibA" version="@0@"/><package name="lib-b-@0@"/>|'.format(apiversion),
    join_paths(meson.current_build_dir(), 'Lib-@0@.gir'.format(apiversion)),
  ],
  output: 'LibB-@0@.gir'.format(apiversion),
  capture: true,
  install: true,
  install_dir: dir_gir,
)
if g_ir_compiler.found()
  custom_target('LibB-@0@.typelib'.format(apiversion),
    command: [ g_ir_compiler,
      '--shared-library', lib_b.full_path(),
      '--includedir', lib_a_girdir,
      '--output', '@OUTPUT@',
      join_paths(meson.current_build_dir(), 'LibB-@0@.gir'.format(apiversion)),
    ],
    output: 'LibB-@0@.typelib'.format(apiversion),
    depends: lib_b,
    install: true,
    install_dir: dir_typelib,
  )
endif
After the first run of ninja I get errors like this:
FAILED: src/lib/b/LibB-1.0.gir
/usr/bin/python3 /usr/bin/meson --internal exe /home/user/proj/_build/meson-private/meson_exe_sed_3429069bbdfaffa2d113b782ce02a55d5fd96973.dat
/usr/bin/sed: can't read /home/user/proj/_build/src/lib/b/Lib-1.0.gir: No such file or directory
Where Lib-1.0.gir is supposed to be one of the outputs of the shared_library command. If I run ninja again, the file gets created and the build makes it further before failing with a similar error. I run it again, and it gets further still but errors on other internal projects that depend on the file that isn't being created. I run it one last time and it completes.
It would surprise me if meson/ninja were not capable of executing build steps in a sequence where one must precede the other. Have I done something gross/stupid here? I'm basically porting over an autotools setup that works, but is slooooow.
You have to put the outputs of earlier targets into the input keyword of your custom_targets so Meson understands the dependency and runs them in the correct order.
Also you should be using the gnome module to generate gir files: http://mesonbuild.com/Gnome-module.html#gnomegenerate_gir
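A minimal sketch of how that could look with the question's own variable names (the generated Lib-@0@.gir is a by-product of the library build rather than a declared output, so the sed step falls back to depends: for that edge; the typelib step takes the sed target through the input keyword as suggested above):
lib_b_gir = custom_target('LibB-@0@.gir'.format(apiversion),
  command: [ sed,
    '-e', 's|Lib[.]|LibB.|g',
    join_paths(meson.current_build_dir(), 'Lib-@0@.gir'.format(apiversion)),
  ],
  output: 'LibB-@0@.gir'.format(apiversion),
  capture: true,
  depends: lib_b,   # make sure the library (and its .gir) is built first
  install: true,
  install_dir: dir_gir,
)
custom_target('LibB-@0@.typelib'.format(apiversion),
  command: [ g_ir_compiler,
    '--shared-library', lib_b.full_path(),
    '--includedir', lib_a_girdir,
    '--output', '@OUTPUT@',
    '@INPUT@',
  ],
  input: lib_b_gir,   # the fixed-up .gir from the target above
  output: 'LibB-@0@.typelib'.format(apiversion),
  install: true,
  install_dir: dir_typelib,
)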
I'm new to Snakemake and am trying to use specific files in a rule, taken from the directory() output of another rule that clones a git repo.
Currently, this gives me an error Wildcards in input files cannot be determined from output files: 'json_file', and I don't understand why. I have previously worked through the tutorial at https://carpentries-incubator.github.io/workflows-snakemake/index.html.
The difference between my workflow and the tutorial workflow is that I want to create the data I use later in the first step, whereas in the tutorial, the data was already there.
Workflow description in plain text:
Clone a git repository to path {path}
Run a script {script} on every single JSON file in the directory {path}/parsed/ in parallel to produce the aggregate result {result}
GIT_PATH = config['git_local_path'] # git/
PARSED_JSON_PATH = f'{GIT_PATH}parsed/'
GIT_URL = config['git_url']
# A single parsed JSON file
PARSED_JSON_FILE = f'{PARSED_JSON_PATH}{{json_file}}.json'
# Build a list of parsed JSON file names
PARSED_JSON_FILE_NAMES = glob_wildcards(PARSED_JSON_FILE).json_file
# All parsed JSON files
ALL_PARSED_JSONS = expand(PARSED_JSON_FILE, json_file=PARSED_JSON_FILE_NAMES)
rule all:
    input: 'result.json'

rule clone_git:
    output: directory(GIT_PATH)
    threads: 1
    conda: f'{ENVS_DIR}git.yml'
    shell: f'git clone --depth 1 {GIT_URL} {{output}}'

rule extract_json:
    input:
        cmd='scripts/extract_json.py',
        json_file=PARSED_JSON_FILE
    output: 'result.json'
    threads: 50
    shell: 'python {input.cmd} {input.json_file} {output}'
Running only clone_git works fine (if I set the all rule's input to GIT_PATH).
Why do I get the error message? Is this because the JSON files don't exist when the workflow is started?
Also - I don't know if this matters - this is a subworkflow used with module.
What you need seems to be a checkpoint rule, which is executed first; only then does Snakemake determine which .json files are present and run your extract/aggregate rules. Here's an adapted example:
I'm struggling to fully understand the file and folder structure you get after cloning your git repo, so I have fallen back to Snakemake's best practice of using resources/ for downloaded files and results/ for created files.
You'll need to adjust those paths to match your case:
GIT_PATH = config["git_local_path"] # git/
GIT_URL = config["git_url"]
checkpoint clone_git:
    output:
        git=directory(GIT_PATH),
    threads: 1
    conda:
        f"{ENVS_DIR}git.yml"
    shell:
        f"git clone --depth 1 {GIT_URL} {{output.git}}"

rule extract_json:
    input:
        cmd="scripts/extract_json.py",
        json_file="resources/{file_name}.json",
    output:
        "results/parsed_files/{file_name}.json",
    shell:
        "python {input.cmd} {input.json_file} {output}"

def get_all_json_file_names(wildcards):
    git_dir = checkpoints.clone_git.get(**wildcards).output["git"]
    file_names = glob_wildcards(
        "resources/{file_name}.json"
    ).file_name
    return expand(
        "results/parsed_files/{file_name}.json",
        file_name=file_names,
    )
# Rule has a checkpoint dependency: only after the checkpoint has been
# executed is this rule run, which then evaluates the function to determine
# all JSON files downloaded from the git repo
rule aggregate:
    input:
        get_all_json_file_names
    output:
        "result.json",
    default_target: True
    shell:
        # TODO: Action which combines all JSON files
edit: Moved the expand(...) from rule aggregate into get_all_json_file_names.
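For the TODO in rule aggregate's shell action, one possibility (an assumption on my part; it simply slurps all parsed files into a single JSON array and requires jq to be available) would be:
    # slurp every parsed JSON file into one array; assumes jq is installed
    shell:
        "jq -s '.' {input} > {output}"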
I'm new to VS Code and am trying to use R there. I think I have it set up correctly with all required sub-programs installed. When attempting to print "Hello World", the output tab produces messages that make me think VS Code is recognizing the R code. Also, I can hover over my R code in VS Code and hints pop up explaining the R functions, so I did something right, I think.
I noticed that the bottom right of the VS Code screen says "R: (not attached)"
When I click on that and try to attach it, I get this:
PS C:\Users\Bob > .vsc.attach()
At line:1 char:13
+ .vsc.attach()
+ ~
An expression was expected after '('.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : ExpectedExpression
In the terminal, if I click the down arrow next to Powershell, I get these options:
PowerShell(Default)
Command Prompt
JavaScript Debug Terminal
Split Terminal
Configure Terminal Settings
Select Default Profile
To get to my JSON file, I go to View >> Command Palette, then type "Open Settings JSON". It contains this:
{
    "workbench.colorTheme": "Default Light+",
    "r.alwaysUseActiveTerminal": true,
    "r.rpath.windows": "C:\\Program Files\\R\\R-4.1.2\\bin\\x64\\R.exe",
    "security.workspace.trust.untrustedFiles": "open",
    "files.associations": {
        "*.R": "r"
    },
    "r.lsp.debug": true,
    "r.rterm.windows": "C:\\Program Files\\R\\R-4.1.2\\bin\\x64\\R.exe",
    "r.bracketedPaste": true,
    "r.source.focus": "terminal",
    "r.rterm.option": [
        "--no-save",
        "--no-restore",
        "--r-binary=C:\\Program Files\\R\\R-4.1.2\\bin\\R.exe"
    ],
    "terminal.external.windowsExec": "C:\\Program Files\\R\\R-4.1.2\\bin\\R.exe"
}
I am not using radian right now. I want to get basic R working in VS Code first; then I'll connect it with radian.
It keeps trying to run my R script using PowerShell and I think this is the main problem. What do I do to get it to run using R? Thanks.
Based on what is in your JSON settings, it should work if you remove the \\x64 portion of your r.rpath.windows and r.rterm.windows paths:
"r.rpath.windows": "C:\\Program Files\\R\\R-4.1.2\\bin\\R.exe",
"r.rterm.windows": "C:\\Program Files\\R\\R-4.1.2\\bin\\R.exe",
// truncated
After changing that, you should be able to switch to an R terminal by using the drop-down menu in the terminal window.
Additionally, you can set R to be your default terminal by adding an additional line to your JSON Settings:
"terminal.integrated.defaultProfile.windows": "R",
// truncated
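If the terminal drop-down doesn't offer an "R" profile yet, you can define one yourself in the same settings file (a sketch; the path simply mirrors the one already used in your settings):
"terminal.integrated.profiles.windows": {
    "R": {
        "path": "C:\\Program Files\\R\\R-4.1.2\\bin\\R.exe",
        "args": ["--no-save", "--no-restore"]
    }
},
"terminal.integrated.defaultProfile.windows": "R",
// truncated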
As discussed in q/66678305, newer Jupyter versions store, in addition to each cell's source code and output, an ID for purposes such as linking to a cell.
However, these IDs aren't stable; they often change even when the cell's source code was not touched. As a result, if you have the .ipynb file under version control with e.g. git, the commits end up containing lots of rather funny-sounding "changed lines" that don't correspond to any actual change made in the commit. Like,
{
"cell_type": "code",
"execution_count": null,
- "id": "respected-breach",
+ "id": "incident-winning",
"metadata": {},
"outputs": [],
Is there a way to prevent this?
Answer for Git on Linux. Probably also works on MacOS, but not Windows.
It is good practice not to put the .ipynb files as saved by Jupyter under version control, but instead a filtered version that does not contain all the volatile information. For this purpose, various git hooks are available; the one I'm using is based on https://github.com/toobaz/ipynb_output_filter/blob/master/ipynb_output_filter.py.
Strangely enough, it turns out this script cannot be modified to remove the "id" field from cells. Namely, if you try to remove that field in the filtering loop, like with
for field in ("prompt_number", "execution_number", "id"):
    if field in cell:
        del cell[field]
then the write function from jupyter_nbformat will just put an id back in. It is possible to merely change the id to something constant, but then Jupyter will complain about nonunique ids.
As a hack to circumvent this, I now use this filter with a simple grep to delete the ID:
#!/bin/bash
grep -v '^ *"id": "[a-z\-]*",$'
Store that in e.g. ~/bin/ipynb_output_filter.sh, make it executable (chmod +x ~/bin/ipynb_output_filter.sh) and ensure you have the following ~/.gitattributes file:
*.ipynb filter=dropoutput_ipynb
and in your git config (either global ~/.gitconfig or project)
[core]
    attributesfile = ~/.gitattributes
[filter "dropoutput_ipynb"]
    clean = ~/bin/ipynb_output_filter.sh
    smudge = cat
If you want to use a standard python filter in addition to that, you can invoke it before the grep in ~/bin/ipynb_output_filter.sh, like
#!/bin/bash
~/bin/ipynb_output_filter.py | grep -v '^ *"id": "[a-z\-]*",$'
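To verify that git actually applies the filter to your notebooks, git check-attr can be queried (shown for a hypothetical notebook.ipynb):
$ git check-attr filter -- notebook.ipynb
notebook.ipynb: filter: dropoutput_ipynb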
I'm writing an openembedded/bitbake recipe for openembedded-classic. My recipe RDEPENDS on keyutils, and everything seems to work, except for one thing:
I want to append a single line to the /etc/request-key.conf file installed by the keyutils package. So I added the following to my recipe:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ..' >> ${sysconfdir}/request-key.conf
}
However, the intended added line is missing in my resulting image.
My recipe inherits update-rc.d if that makes any difference.
My main question is: how do I debug this? Currently I am building an entire rootfs image and then poking around in it to see if the changes show up. Surely there is a better way?
UPDATE:
Changed recipe to:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ...' >> ${D}${sysconfdir}/request-key.conf
}
But still no luck.
As far as I know, postinst runs at rootfs creation time, and only runs at first boot if it fails at rootfs time.
So there is an easy way to execute something only at first boot. Just check for $D, like this:
pkg_postinst_stuff() {
    #!/bin/sh -e
    if [ x"$D" = "x" ]; then
        : # do something at first boot here
    else
        exit 1
    fi
}
postinst scripts are run at rootfs time, so ${sysconfdir} is /etc on your host. Use $D${sysconfdir} to write to the file inside the rootfs being generated.
OE-Classic is pretty ancient so you really should update to oe-core.
That said, do postinsts run at first boot? I'm not sure. Also look in the recipe's work directory under the tmp directory and read the log and run files to see if there are any clues there.
One more thing. If foo RDEPENDS on bar that just means "when foo is installed, bar is also installed". I'm not sure it makes assertions about what is installed during the install phase, when your postinst is running.
If using $D doesn't fix the problem, try editing your postinst to copy the existing file you're trying to edit somewhere else, so you can verify that it exists in the first place. It's possible that you're appending to a file that doesn't exist yet, and that the package that installs the file later replaces it.
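Putting the two suggestions together for the recipe in the question, a sketch of the postinst might look like this (hypothetical; it appends inside the image when $D is set at rootfs time, and on the real filesystem otherwise, i.e. at first boot):
pkg_postinst_${PN} () {
    if [ -n "$D" ]; then
        # rootfs generation time: append to the copy inside the image
        echo 'create ... more stuff ...' >> $D${sysconfdir}/request-key.conf
    else
        # first boot on the target
        echo 'create ... more stuff ...' >> ${sysconfdir}/request-key.conf
    fi
}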
I am using command line options in my grunt script: http://kurst.co.uk/transfer/Gruntfile.js
However, the command grunt --vers:0.0.1 always returns 'undefined' when I try to get the option:
var version = grunt.option('vers') || '';
Can you help me get this working?
I tried different (CLI) commands:
grunt vers:asd
grunt -vers:asd
grunt vers=asd
as well as using :
grunt.option('-vers');
grunt.option('--vers');
But no luck so far. Hopefully I am missing something simple.
This is my package.json file:
{
    "name": "",
    "version": "0.1.0",
    "description": "Kurst EventDispatcher / Docs Demo ",
    "devDependencies": {
        "grunt": "~0.4.1",
        "grunt-contrib-yuidoc": "*",
        "grunt-typescript": "~0.1.3",
        "uglify-js": "~2.3.5",
        "grunt-lib-contrib": "~0.6.0",
        "grunt-contrib-uglify": "*"
    }
}
The proper syntax for specifying a command line argument in Grunt is:
grunt --option1=myValue
Then, in the grunt file you can access the value and print it like this:
console.log( grunt.option( "option1" ) );
Also, another reason you are probably having issues with --vers is that it's already a grunt option that returns the version:
★ grunt --vers
grunt-cli v0.1.7
grunt v0.4.1
So it would probably be a good idea to switch to a different option name.
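For example (a sketch using a hypothetical release option name to sidestep the collision):
// Gruntfile.js
module.exports = function (grunt) {
  // read --release=1.2.3 from the command line, falling back to a default
  var release = grunt.option('release') || '0.0.1';
  grunt.log.writeln('Building release ' + release);
};
which you would then invoke as grunt --release=1.2.3.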
It is worth mentioning that as the number of command line arguments you want to use grows, you will run into collisions with some arguments that grunt uses internally.
I got around this problem with nopt-grunt
From the Plugin author:
Grunt is awesome. Grunt's support for using additional command line options is not awesome. The current documentation is misleading in that they give examples of using boolean flags and options with values, but they don't tell you that it only works that way with a single option. Try and use more than one option and things fall apart quickly.
It's definitely worth checking out