I've got this section in the meson.build at my project's root:
if get_option('gen_py_bindings')
message('told to build py bindings')
custom_target('py_bindings',
  command: ['env',
            '_MESON_MODULE_NAME=' + meson.project_name(),
            '_MESON_MODULE_VERSION=' + meson.project_version(),
            './py3_bindings/setup.py', 'build'],
  depends: [mainlib],
  depend_files: files(['py3_bindings/module.c', 'py3_bindings/setup.py']),
  input: ['py3_bindings/setup.py'],
  install: false,
  output: 'sharedextension.so')
endif
It's a custom target that runs a setup.py script to build python bindings for my project's library.
The problem is that it always appears to be up to date. I've used the depends keyword argument to specify that it depends on another build target in the project, and the depend_files keyword argument to specify that it depends on the C source file that the script uses to build the extension, as well as the actual script that is run as the command. I've also used the input keyword argument, even though I don't understand the difference between it and depend_files.
I can only get the custom target to regenerate if I make a change to meson.build (the message() call is displayed successfully).
No other change will do. I've tried updating all files listed in the custom target, but it always results in: ninja: no work to do. Even when other out-of-date targets get rebuilt, relinked, etc.
I'm using ninja 1.9.0 and meson 0.52.1 on linux.
I am also well aware of the build_always_stale keyword argument, but I don't want to use it unless necessary. (Update: setting it to true still doesn't result in the target rebuilding; it looks like there's something more at play here, but I can't figure it out.)
By default, custom targets don't get built when running plain ninja and thus the build_by_default keyword argument needs to be passed and set to true, e.g.
custom_target('target', build_by_default: true)
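Applied to the target in the question, that would look something like this (a sketch; everything except the added build_by_default line is taken unchanged from the original target):
custom_target('py_bindings',
  build_by_default: true,
  command: ['env',
            '_MESON_MODULE_NAME=' + meson.project_name(),
            '_MESON_MODULE_VERSION=' + meson.project_version(),
            './py3_bindings/setup.py', 'build'],
  depends: [mainlib],
  depend_files: files(['py3_bindings/module.c', 'py3_bindings/setup.py']),
  input: ['py3_bindings/setup.py'],
  install: false,
  output: 'sharedextension.so')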
I have set up GitHub Actions CI for an R package on Windows, macOS, and Linux. Only on Windows does the following warning occur (and it is turned into an error):
* checking top-level files ... WARNING
Conversion of 'README.md' failed:
[WARNING] This document format requires a nonempty <title> element.
Please specify either 'title' or 'pagetitle' in the metadata,
e.g. by using --metadata pagetitle="..." on the command line.
Falling back to 'README'
Here is the full report.
It seems pandoc is called by R CMD check directly, hence I cannot change the call to pandoc.
I also tried to include a pandoc argument in the README.Rmd header, as suggested here:
output:
  github_document:
    pandoc_args: "--number-offset=1,0"
    toc: true
    pagetitle: covid19ita
It has no effect.
NOTE: I would not remove error_on = "warning" from R CMD check.
What can I do to make the check pass on Windows too?
It seems that the main issue is related to the inability to fetch badges in the rendered document. That can be seen here (pass for R 3.5 but not for R 3.6), or here (pass for R 3.6 but not for R 3.5), and here (where the missing badge is no longer the AppVeyor one but the CodeCov one).
So my current solution is to make sure all the badges are ready before the check action takes place, or to change them afterwards (e.g. using only GitHub Actions).
I can run a Julia script with arguments from PowerShell as > julia test.jl 'a' 'b'. I can run a script from the REPL with include("test.jl"), but include accepts just one argument: the path to the script.
From playing around with include, it seems that it runs a script as a code block with all the variables referencing the current(?) scope, so if I explicitly redefine the ARGS variable in the REPL, it catches on and the script displays the corresponding results:
julia> ARGS = "c", "d"
julia> include("test.jl") # prints its arguments in a loop
c
d
This, however, gives a warning for redefining ARGS and doesn't seem to be the intended way of doing it. Is there another way to run a script from the REPL (or from another script) while stating its arguments explicitly?
You probably don't want to run a self-contained script by include-ing it. There are two options (sketches of both follow below):
If the script isn't in your control and calling it from the command line is the canonical interface, just call it in a separate Julia process: run(`$JULIA_HOME/julia path/to/script.jl arg1 arg2`). See running external commands for more details.
If you have control over the script, it'd probably make more sense to split it up into two parts: a library-like file that just defines Julia functions (but doesn't run any analyses) and a command-line file that parses the arguments and calls the functions defined by the library. Both the command-line interface and the second script you're writing now can include the library, or better yet, make the library-like file a full-fledged package.
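A rough sketch of the first option on a recent Julia, where Base.julia_cmd() refers to the currently running julia executable (the script name test.jl and its arguments are placeholders):
# Run the script in a fresh Julia process, passing its arguments on the command line.
run(`$(Base.julia_cmd()) test.jl a b`)
And a minimal sketch of the second option; the file names mylib.jl and cli.jl and the greet function are made up for illustration:
# mylib.jl -- library-like file: only defines functions, runs nothing on its own
module MyLib
export greet
greet(name) = println("Hello, $name")
end

# cli.jl -- thin command-line wrapper: reads ARGS and calls into the library
include("mylib.jl")
using .MyLib
for name in ARGS
    greet(name)
end
From the REPL you can include("mylib.jl") and call greet(...) directly, with no ARGS involved, while julia cli.jl Alice Bob still works from the shell.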
This solution is not clean or the Julia way of doing things, but if you insist:
To avoid the warning when messing with ARGS, use the original ARGS but mutate its contents, like the following:
empty!(ARGS)
push!(ARGS,"argument1")
push!(ARGS,"argument2")
include("file.jl")
And this question is also a duplicate of, or related to, juliapassing-argument-to-the-includefile-jl, as @AlexanderMorley pointed out.
Not sure if it helps, but it took me a while to figure this:
On your path "C:\Users\<username>\.julia\config\" there may be a .jl file called startup.jl.
The trick is that the Julia setup will not always create this. So, if neither the directory nor the .jl file exists, create them.
Julia will treat this .jl file as a list of commands to be executed every time you start the REPL. It is very handy for setting the directory of your projects (e.g. C:\MyJuliaProject\MyJuliaScript.jl using cd("")) and for frequently used libs (like using Pkg, using LinearAlgebra, etc.).
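For example, a startup.jl along those lines might contain something like the following (the project path and the packages are just placeholders):
# startup.jl -- executed automatically at the start of every REPL session
using Pkg
using LinearAlgebra
cd("C:\\MyJuliaProject")   # jump straight to the usual project directory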
I wanted to share this because I didn't find anyone explicitly saying that this directory might not exist in your Julia installation. It took me longer than it should have to figure this out.
I'm trying to get package references resolved during a build, using GNAT Programming Suite (hosted on Win XP). In the Builder Results, I get errors like this one:
file "ac_configuration_s.ada" not found
Clicking on the error takes me to a line like this:
with
Ac_Configuration,
Dispense_Timer,
...
The first item (Ac_Configuration) isn't resolved, but the second item (Dispense_Timer) is. I have several others that do or don't resolve. All of the files in question (spec and body) are identified as source files.
When I hover my mouse over the line with the error, a popup shows up that offers this:
(Cross-references info not up to date. This is a guess.)
Ac_Configuration
local package declared at D_Ac_Config_S.Ada:85
The guess is correct, but I don't know how to use this. How do I get this to correctly build?
Update
Here is the call to gcc:
gcc -c "-gnatec=C:\Source\build\GNAT-TEMP-000001.TMP" -I- -gnatA
-x ada "-gnatem=C:\Source\build\GNAT-TEMP-000002.TMP" "C:\Source\C_Cbt_Main_B.Ada"
I don't see a reference to the "minimal" switch.
In this case, there is no corresponding body file for D_Ac_Config_S.Ada, so there is no body file to compile separately.
When I right-click on the package reference inside the with clause, I can go to the declaration of Ac_Configuration and every other package name that is the source of an error. So these references are being resolved somehow.
By the way, I have not used Ada before, so I'm still trying to understand everything.
It looks as though you're using _s.ada as the suffix for specs, and I'm guessing _b.ada for bodies?
GNAT may have difficulty with this naming convention. It's possible, using a GNAT project file (.gpr), to alter GNAT's default convention ({unit-name}.ads for specs, {unit-name}.adb for bodies), but the rules (see "Spec_Suffix") say "It cannot start with an underscore followed by an alphanumeric character". (I haven't tried this, but you can see that such a suffix would confuse the issue if you had a package Foo_S, for example.)
LATER: It turns out that GNAT (GPL, 4.7, 4.8) is quite happy with your suffixes!
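For reference, this is roughly how such a convention would be declared in a project file (My_Project and the source directory are placeholders, and per the note above recent GNAT accepts these suffixes despite the documented restriction):
project My_Project is
   for Source_Dirs use ("src");
   package Naming is
      for Spec_Suffix ("Ada") use "_s.ada";
      for Body_Suffix ("Ada") use "_b.ada";
   end Naming;
end My_Project;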
If the package Ac_Configuration is really a local package declared at line 85 of D_Ac_Config_S.Ada, then there's your problem; you can only with a library unit, which in this case would be D_Ac_Config.
with D_Ac_Config;
...
package Foo is
   ...
   Bar : D_Ac_Config.Ac_Configuration.Baz;
end Foo;
I wonder whether D_Ac_Config_S.Ada (for example) actually contains multiple Ada units? (if so, compiling that file should result in a compilation error such as end of file expected, file can have only one compilation unit). GNAT doesn't support this at compile time, providing instead a utility gnatchop.
Would it be possible to just gnatchop all the source and be done with it?
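If so, something along these lines would split each file into one compilation unit per output file (-w overwrites existing files; the trailing src directory is a placeholder for wherever the chopped sources should go):
gnatchop -w D_Ac_Config_S.Ada src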
Hm, it sounds like the compiler has a bad set of objects/ALIs to work with, hence the "cross-references info not up to date" message. (Usually the compiler is good about keeping things up to date, but you may want to check whether the "minimal recompilation" switch is set for the project.)
Have you tried compiling just the ["owning"] file D_Ac_Config_S.Ada? (i.e. if it were a spec, go to the corresponding body and compile that.) That should force its ALI/object files to be updated.
Then try building as normal.
-- PS: you might have to clean first.
While I used to compile a single source file with Cmd+K in prior versions of Xcode, how does one do the same in Xcode 4? (Note that this is different than preprocessing or showing the disassembly of the file.) If compiling from a command line is proposed then it must be such that the project's settings, include paths, preprocessor definitions, etc., are all included.
An example use case is where I make a header file change but only want to test the change's effect with respect to a single source file, not all of the files that depend upon that header.
The command has been moved to the Perform Action submenu. Look under
Product > Perform Action > Compile filename.cpp
To assign Cmd+K to it, go to
File > Preferences > Key Bindings > Product Menu group
and you'll find Compile File where you can assign a key. Cmd+K is assigned to Clear Console now by default, so be sure to remove that binding to avoid conflicts.
One way that I have found to do this is to use the following menu commands:
Product -> Generate Output -> Generate Preprocessed File
Product -> Generate Output -> Generate Assembly File
This may not be exactly what you want, but it will compile the single file.
When you build a project, Xcode runs a compilation command for each file. You can check the build log, search for your file, and copy-paste that command into Terminal. It will compile only the file whose command you pasted.
If your file is a C (or C++) file, then simply open your terminal, go to the folder in which the file resides, and type
gcc -o outputFile inputFile.c
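If you only want to check that the file compiles against the project's headers rather than produce an executable, a compile-only invocation is closer to what Xcode itself does (the include path here is just a placeholder):
gcc -c inputFile.c -o inputFile.o -I path/to/project/headers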
I am not that familiar with Objective-C, but GCC might work for it as well, since Objective-C is a superset of C, much as C++ is.
Hope that was helpful :)
The keyboard shortcut Cmd+K on Xcode 3 and before has been remapped to Cmd+B on Xcode 4
Along the same lines, Cmd+Return was remapped to Cmd+R (in case you ever used that)
The common reason for single-file compilation (at least for me) is checking it for syntax errors. Since Xcode 4 highlights syntax errors as you type, it seems Apple removed that feature.
I have a custom robot framework library which accepts an argument to initialize it.
*** Settings ***
Library    NotifyUsers    ${max_messages}
This works just fine when executed from the command line using pybot:
pybot --variable max_messages:4 my_test
However, this variable doesn't exist in Ride when it imports the library at startup. I've tried defining it in the Arguments field on the Run tab, but that isn't instantiated until you run a test.
If I replace the variable and hard code an integer argument, it works fine in Ride.
Apologies to Bryan Oakley! I somehow managed to delete his answer which pointed me in the right direction.
Adding an entry to the variable table does not resolve this issue; however, using a variable file does! It appears that Ride will import variable files when it starts up. Variable tables contained in a test suite are not resolved until run time.
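A minimal sketch of that arrangement, with the hypothetical file name variables.py: the variable file is a plain Python module whose top-level names become Robot Framework variables,
# variables.py
max_messages = 4
and the suite then imports it in the Settings table before the library:
*** Settings ***
Variables    variables.py
Library      NotifyUsers    ${max_messages}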