In Artifactory, I have a build foo, which uses dependencies produced by build bar.
I want to list the files of bar that were used as dependencies to build foo at job number 42.
How do I request this in Artifactory Query Language?
So far I tried this:
items.find(
{
"dependency.module.build.name":"foo",
"dependency.module.build.number":"42"
}
)
which appears to return the dependencies of build "foo" in general, but it returns far more dependencies than it should (I get over 200, when I know that foo only has 10 dependencies in total, all of them from bar).
Additionally, I notice that I can't display build name for these dependencies for some reason:
adding .include("artifact.module.build.name") to my request, like in this answer, causes the response to be empty.
EDIT: for this last issue, it looks like I needed to use .include("#build.name") instead.
Using
"dependency.module.build.name":"foo",
"dependency.module.build.number":"42"
will produce all dependencies of the build foo, not just those that were created by bar.
So I'm guessing you want something similar to:
"dependency.module.build.name":"foo",
"dependency.module.build.number":"42",
"artifact.module.build.name":"bar"
Basically, this asks for all artifacts that were used as dependencies of build foo and were produced by build bar.
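Putting that together with the query from the question, the full request might look like this (an untested sketch; the include fields are optional):
items.find(
    {
        "dependency.module.build.name": "foo",
        "dependency.module.build.number": "42",
        "artifact.module.build.name": "bar"
    }
).include("name", "repo", "path")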
I am trying to configure a Julia project correctly.
The project had initially:
One module in its own directory
Another "kind of module" in another directory, but there was no module keyword and it was patched together with help of includes.
A few files in the root to execute it all by just running "julia run.jl" for example.
No packages, no Project.toml, no Manifest.toml - there is a "packages.jl", which manually calls "Pkg.add" for a preset list of dependencies.
Includes were not the only thing used to put it all together; there was some fiddling with LOAD_PATH.
Logically, the project contains three parts, which I would think of as "packages" in, for example, the Python world:
One "Common" module with basic util functions shared by all interested modules.
One module A, which depends on "Common".
One module B, which depends on A and "Common".
What I did: I created three modules in their own separate directories. These modules make sense internally and there is no real reason to expose them outside; the whole code is ultimately an executable, and probably only one function, which runs everything, would be exposed. To put it all together I created a loader file which includes all three module files. That way I got rid of LOAD_PATH, and references in the IDE started to work. This works for our purposes, but still isn't ideal. I have been reading quite a lot about Julia project structure and the possibilities (modules, packages), but I still don't fully understand them. And Revise doesn't work.
Is it correct to have modules like this? I like modules, as they clearly set boundaries through their export lists.
I would also like to make the code as IDE-compatible as possible (the LOAD_PATH settings didn't work for VS Code and references to functions were broken), and also compatible with Revise.
What is the typical structure for this?
How to clearly separate code while keeping development easy?
How to make this work with Revise?
I expect it's a good idea to make a package for this, but should I make one for the whole project? That would mean one "project" module and 3 submodules A, B and Common.
Or should it be 3 (or 4) packages, one per module?
Thanks for any input. A comparison of some of the principles to Python/Java/Kotlin/C#/JavaScript modules/packages would be helpful.
In CubicEoS.jl I've made a single package which provides two modules: CubicEoS with general algorithms and CubicEoS.BrusilovskyEoS with an implementation of the interface necessary to make CubicEoS work on a concrete case. The latter module depends on the former. Revise works well. As a user (not developer) of CubicEoS, I have scripts which run some calculations.
So, in your case, I would create a single package with four modules. The fourth module is an umbrella for the others: Common, A and B. The file structure might look like this:
src/
TheHood.jl
Common/Common.jl
Module_A/Module_A.jl
Module_B/Module_B.jl
test/
...
examples/ # these may live outside the package, but the important examples may go in test/
...
Project.toml
And the module structure might look like this:
# TheHood.jl
module TheHood
export bar
include("Common/Common.jl")
include("Module_A/Module_A.jl")
include("Module_B/Module_B.jl")
using .Module_B: bar   # bring Module_B's bar into TheHood so the export above works
end
# Common/Common.jl
module Common
export util_1, util_2
using LinearAlgebra
util_1(x) = "util_1's implementation"
util_2(x) = "util_2's implementation"
end
# Module_A/Module_A.jl
module Module_A
import ..Common
export foo
foo(x) = "foo's implementation"
end
# Module_B/Module_B.jl
module Module_B
import ..Common
import ..Module_A
export bar
bar(x) = "bar's implementation"
end
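With bar re-exported from TheHood as above, using it from the REPL might look like this:
julia> using TheHood
julia> bar(1)
"bar's implementation"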
Now, I'll answer the questions
What is the typical structure for this?
If the modules are not used independently, I would use the structure above. For small projects I've found the "one package = one module" strategy painful when a core package updates.
How to clearly separate code while keeping development easy?
In Julia, a module is effectively a namespace. In the inner modules (like A, B, Common), I usually import the modules under development, but use the modules a dev-module depends on (see using LinearAlgebra vs import ..Common above). That's my preference, to keep names clear.
How to make this work with Revise?
Turn the code into a package. That's preferable, because Revise-ing standalone modules is buggy (at least, in my experience). I usually use Revise like this:
% cd where_the_package_lives   # actually, where the Project.toml is
% julia --project=.            # or plain julia and then pkg> activate .
julia> using Revise
julia> using ThePackage
After that I can usually edit source code and call the updated methods without restarting the REPL. But Revise has some limitations.
I expect it's a good idea to make a package for this, but should I make one for the whole project? That would mean one "project" module and 3 submodules A, B and Common.
Or should it be 3 (or 4) packages, one per module?
You should separate "scripty" code (throwaway or command-line scripts) from core (reusable) code. The core I would put in a single package. The scripty files (like examples/ or CLI programs) should stand alone and just use the package. The scripty files may define modules or whatever, but their users are end users, not developers (e.g. running a script involves an I/O operation).
Just to end this.
The main thing I was missing, and that would have saved me a lot of time had I known it:
If you have your own package, the file src/YourPackage.jl will be "imported/included" once the "using" keyword is used, and everything just works. No need for any kind of import/LOAD_PATH magic.
Just doing that fixed 95% of my problems. I ended up following the patterns in the accepted answer and also those from the comment recommending DrWatson.
The only small thing is that "scripts" don't work well with VS Code: "find all references", "go to definition" and autocomplete don't work. They do work perfectly for everything within "YourPackage.jl" and its imports, and so does Revise. It's a really tiny thing, since scripts are usually like 3 lines of code.
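For illustration, such a script might be as short as this (the package and function names are hypothetical):
# run.jl -- hypothetical driver script
using YourPackage          # loads src/YourPackage.jl, no LOAD_PATH tricks needed
YourPackage.main()         # whatever entry point the package exposes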
I need to include a file when I build a package, but the file itself is not required at all at runtime.
I have been told to look at how to do it using a do_ function, but have been unable to find a suitable function in any documentation. I would assume this could be done trivially, by simply specifying which file to add.
Again, I'm not looking for a way to add the files to the final image. This is just for building.
Here is the full documentation about tasks. They all start with do_, so you were advised to extend one of them. Also (as written in the docs) packages may have additional tasks that come from classes (.bbclass files), which are applied to your package's recipe with the inherit keyword. To get the full list of your package's tasks, use the command:
bitbake -c listtasks <your package name>
Actually, this way you just call your package's do_listtasks task; do_listtasks is itself also a task.
So, first you need to understand which task you need to attach your file to. Tasks have pretty straightforward names, so to use your file during compilation you need the do_compile task, and so on.
Now, it is not clear how you are actually going to add your file to the build, but it looks like it will also be done through your package's recipe. That means you should place your file in a folder named files (there are options for naming this folder) next to your recipe, then add the file to the SRC_URI variable like this:
SRC_URI = " \
.....
file://your-file-in-folder-files.ext \
.....
"
By doing this, you tell BitBake to copy this file into WORKDIR during the do_unpack task. More on this here, especially in section 3.3.5.
Now to your actual question. Suppose you need to use your file during the do_compile task, but before the actual compilation actions. Then, in your recipe, extend do_compile like this:
do_compile_prepend() {
cp ${WORKDIR}/your-file-in-folder-files.ext ${S}/some/subpath/inside/sources
}
Similarly, to do something with your file after a task's actual actions, create a function with the _append suffix, for example do_configure_append or do_compile_append.
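For example, a hypothetical do_compile_append could use the extra file right after compilation (the body of the function is purely illustrative):
do_compile_append() {
    # hypothetical: dump the extra file's contents into the build log after compilation
    cat ${WORKDIR}/your-file-in-folder-files.ext
}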
Here WORKDIR is a folder under the Yocto build directory where everything needed to build your package is placed, S is a special folder under WORKDIR containing the downloaded and unpacked source code, and do_compile_prepend (and the others) is just a shell script.
Also consider the folder D (it is also located under WORKDIR). It contains the files that will actually go into the resulting image during image creation. So if your file somehow finds its way into the resulting image, remove it directly from D like this:
do_install_append() {
rm -f ${D}/path/to/your/file/in/resulting/rootfs/your-file-in-folder-files.ext
}
Well... this is more of an overview than an exact answer, but I hope it helps!
Let's say we have a symbol array of packages packages::Vector{Symbol} = [...] and we want to create a sys image using PackageCompiler.jl. We could simply use
using PackageCompiler
create_sysimage(packages; incremental = false, sysimage_path = "custom_sys.dll")
but without a precompile_execution_file, this isn't going to be worth it.
Note: sysimage_path = "custom_sys.so" on Linux and "custom_sys.dylib" on macOS...
For the precompile_execution_file, I thought running the test for each package might do it so I did something like this:
precompilation.jl
packages = [...]
@assert typeof(packages) == Vector{Symbol}
import Pkg
m = Module()
try Pkg.test.(Base.require.(m, packages)) catch ; end
The try/catch is there because some tests give an error, and we don't want that to make the whole thing fail.
Then, executing the following in a Julia session,
using PackageCompiler
import Pkg
packages = [...]
Pkg.add.(String.(packages))
Pkg.update()
Pkg.build.(String.(packages))
create_sysimage(packages; incremental = false,
sysimage_path = "custom_sys.dll",
precompile_execution_file = "precompilation.jl")
produced a sysimage dynamic library which loaded without a problem. When I did using Makie there was no delay, so that part's fine, but when I did some plotting with Makie there was still the first-time plot delay, so I am guessing the precompilation script didn't do what I thought it would.
Also, when using tab to get suggestions in the REPL, it would freeze the first time, but I am guessing this is an expected side effect.
There are a few problems with your precompilation.jl script, that make tests throw errors which you don't see because of the try...catch.
But, although running the tests for each package might be a good idea to exercise precompilation, there are deeper reasons why I don't think it can work so simply:
Pkg.test spawns a new process in which tests actually run. I don't think that PackageCompiler can see what happens in this separate process.
To circumvent that, you might want to simply include() every package's test/runtests.jl file. But this is likely to fail too, because of missing test-specific dependencies.
So I would say that, for this to work reliably and systematically for all packages, you'd have to re-implement (or re-use, if you can) some of the internal logic of Pkg.test in order to add all test-specific dependencies to the current environment.
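For what it's worth, a rough sketch of the include() approach might look like the script below; it assumes each package's test-specific dependencies are already available in the current environment, which is exactly the part that usually breaks:
# precompilation.jl -- rough sketch, not a drop-in solution
import Pkg
packages = [...]   # the same Vector{Symbol} as before
for pkg in packages
    mod = Base.require(Main, pkg)                      # load the package in this process
    runtests = joinpath(pkgdir(mod), "test", "runtests.jl")
    try
        include(runtests)                              # runs in-process, so PackageCompiler can trace it
    catch err
        @warn "precompilation run failed" pkg err
    end
end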
That being said, some packages come with ready-to-use precompilation scripts that help do just this. This is the case for Makie, whose documentation suggests using the following file to build system images:
joinpath(pkgdir(Makie), "test", "test_for_precompile.jl")
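So, assuming Makie is installed in the active environment, building the image could look roughly like this (adjust the sysimage suffix for your platform, as noted in the question):
using PackageCompiler, Makie
create_sysimage([:Makie];
                sysimage_path = "custom_sys.so",   # .dll on Windows, .dylib on macOS
                precompile_execution_file = joinpath(pkgdir(Makie), "test", "test_for_precompile.jl"))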
I've got this section in my project's root's meson.build:
if get_option('gen_py_bindings')
message('told to build py bindings')
custom_target('py_bindings',
command: ['env', '_MESON_MODULE_NAME=' + meson.project_name(), '_MESON_MODULE_VERSION=' + meson.project_version(), './py3_bindings/setup.py', 'build'],
depends: [mainlib],
depend_files: files(['py3_bindings/module.c', 'py3_bindings/setup.py']),
input: ['py3_bindings/setup.py'],
install: false, output: 'sharedextension.so')
endif
It's a custom target that runs a setup.py script to build python bindings for my project's library.
The problem is that it always appears to be up-to-date. I've used the depends keyword argument to specify that it depends on another build target in the project, and the depend_files keyword argument to specify that it depends on the C source file that the script uses to build the extension, as well as on the actual script being run as the command. I've also used the input keyword argument, even though I don't understand the difference between it and depend_files.
I can only get the custom target to regenerate if I make a change to meson.build (the message() call is displayed successfully).
No other change will do. I've tried updating all files listed in the custom target, but it always results in "ninja: no work to do.", even when other out-of-date targets get rebuilt/relinked/etc.
I'm using ninja 1.9.0 and meson 0.52.1 on linux.
I am also well aware of the build_always_stale keyword argument, but I don't want to use it unless necessary. (Update: setting it to true still doesn't result in the target being rebuilt; it looks like there's something more at play here, but I can't figure it out.)
By default, custom targets don't get built when running plain ninja and thus the build_by_default keyword argument needs to be passed and set to true, e.g.
custom_target('target', build_by_default: true)
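Applied to the target from the question, that would look roughly like this (command and other arguments unchanged):
custom_target('py_bindings',
  command: ['env', '_MESON_MODULE_NAME=' + meson.project_name(), '_MESON_MODULE_VERSION=' + meson.project_version(), './py3_bindings/setup.py', 'build'],
  depends: [mainlib],
  depend_files: files(['py3_bindings/module.c', 'py3_bindings/setup.py']),
  input: ['py3_bindings/setup.py'],
  install: false,
  output: 'sharedextension.so',
  build_by_default: true)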
I have a library foo that defines a function bar that I would like to use as a custom error handler. In order to replace the default error handler with bar, I would run options(error = bar), so long as bar has been imported.
I would like bar to automatically be set as the default error handler whenever I begin an R session. To that end, I put the following code in my .Rprofile file:
options(defaultPackages=c(getOption("defaultPackages"), "foo"),
        error = bar)
Unfortunately, this won't work. During startup, R first runs everything in .Rprofile and only then runs .First.sys to load the default packages. This means that foo is loaded and bar imported after the options are set, so an error is thrown stating that bar could not be found.
One work around to this is to manually load foo by instead using the following:
if (require(foo))
  options(error = bar)
Is this a safe solution? It seems that automatically loading foo immediately when starting R might cause unforeseen issues when installing a library.
Are there any other solutions besides this? The documentation on the start-up of R says that .First.sys is the very last thing to be run, meaning the default packages will not be loaded until after all other start-up code has been run.