In Hydra, can I interpolate config from a file without using the defaults list?

Say I have a directory utility_configs that has a bunch of different configurations for different things that are useful in different situations. And then I want to be able to use these different configs in different places. For instance, maybe my model has many different places where I need something that is an "encoder" (really just a bit of network that maps an input to an output). I might have very different encoders in my utility_configs directory, and I would like to be able to specify any of them anyplace I need an encoder (possibly then adjusting the number of input and output channels or other parameters). I am not seeing how to do this straightforwardly, since it seems like the only way you can get data from a different file is using the defaults list. But that's not really a good fit here, since I might need multiple different things from utility_configs, in multiple different places (including subconfigs).

You cannot interpolate into a file. Interpolation works on the current config object.
Hydra can compose the config for you, after which you can use interpolation.
You have multiple options:
1. Have more than one primary config (with a defaults list). You can override which primary config to use via the command line (--config-name | -cn).
2. Construct your defaults list in an ad-hoc manner via the command line using the +GROUP=OPTION notation.
About using a config in different places, take a look at config packages, which let you relocate the content of a config within the composed config object.
I recommend going with 1.
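
As a rough sketch of how options 1 and 2 combine with config packages, assuming a recent Hydra (the group@package defaults-list notation needs 1.1+, version_base needs 1.2+). The directory layout and the option names enc_a/enc_b are invented for this example; only utility_configs comes from the question:

# Assumed layout (hypothetical option names):
#   conf/config.yaml                  <- primary config with a defaults list
#   conf/utility_configs/enc_a.yaml
#   conf/utility_configs/enc_b.yaml
#
# conf/config.yaml can pull the same group into several packages:
#   defaults:
#     - utility_configs@model.encoder: enc_a
#     - utility_configs@model.decoder.encoder: enc_b
#     - _self_
#   model:
#     encoder:
#       out_channels: 128   # adjust the relocated config's parameters here
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base=None, config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # Composition has already happened here, so interpolation such as
    # ${model.encoder.out_channels} works anywhere in the composed config.
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()

# Option 1: switch the primary config on the command line:
#   python app.py --config-name some_other_primary_config
# Option 2: extend the defaults list ad hoc, packaging the option where needed:
#   python app.py +utility_configs@model.extra_encoder=enc_b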

Related

How do you use guards(1) with quilt(1)

One of the ancillary tools bundled with quilt is guards, which processes a list of guards and a configuration file that matches guards to files, and outputs the list of files whose guard specifications match the provided guards.
However, I can't figure out how they're supposed to fit together: quilt(1) doesn't show any way to invoke a command to generate series files, I didn't find examples in the man pages or the working copy, and the internets are less than helpful (all the hits talk about bedding).
I feel like guards has to be manually invoked whenever its "dependencies" change, and the series file overwritten; is that the case? If so, how is data fed back the other way around, e.g. when adding a new patch to the series, does it have to be manually synchronised to the guard file?
Background: a few years back I used mq quite a bit, but it integrates guards natively so the synchronisation back and forth is not an issue at all.

How do I save a dynamically generated Lisp system in external files?

Basically, I want to be able to generate class definitions, compile the system, and save it for reuse. Would that involve a code walker, or is there a simpler option?
(save-lisp-and-die "isn't going to work for me")
Expanding to explain. I'm generating systems based on OpenAPI definitions, so a system roughly corresponds to an API client.
There will be dozens, if not hundreds, of these.
The idea is NOT to keep them all in the image, but to load them at run time as required.
I see two possible routes here, and to some extent, I suspect they mainly differ in "the last mile" (as it were).
1. The route you seem to have settled on: run-time definition of classes and functions.
2. A route whereby you generate your function/class forms, but don't go all the way to getting them "live" in the image, and instead emit the form(s) to a file.
I suspect that most of the generating code could be shared between the two: for the first route, a wrapping macro that effectively returns a PROGN; for the second, a call to a function that pretty-prints what the macro would have returned onto a stream.
Saying that, building a tailored environment and saving it to a "core" file is a pretty good way of getting excellent startup times.
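
To make the shared-generator idea above concrete, here is a loose sketch written in Python rather than Common Lisp (the PROGN/pretty-printing details are only paraphrased in the comments); every name in it is hypothetical:

# Hypothetical sketch of the two routes sharing one generator. In Common Lisp
# the generator would build defclass/defun forms; route 1 would wrap them in a
# PROGN and evaluate or compile them, while route 2 would pretty-print the same
# forms to a source file for loading later.

def generate_client_source(class_name, operations):
    # Shared part: turn an (assumed) OpenAPI-derived description into source text.
    methods = "\n".join(
        f"    def {op}(self, **kwargs):\n        raise NotImplementedError"
        for op in operations
    ) or "    pass"
    return f"class {class_name}:\n{methods}\n"

def define_live(class_name, operations, namespace):
    # Route 1: make the definitions "live" in the running image right away.
    exec(generate_client_source(class_name, operations), namespace)

def emit_to_file(class_name, operations, path):
    # Route 2: emit the generated definitions to a file for later loading.
    with open(path, "w") as handle:
        handle.write(generate_client_source(class_name, operations))

Either way, only the last step differs: define_live leaves a usable class in the given namespace immediately, while emit_to_file produces a source file that can be loaded on demand, which matches the goal of not keeping every client in the image.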

Julia: packaging things into modules vs include()-ing them

I'm building a simulation in Julia and I have my code split across a bunch of files. Are there any benefits to wrapping everything in modules versus simply include()-ing them in the runscript?
I have something like the following at the top of my runscript right now:
# load every git-tracked source file on all workers
for filename in split(readall(`git ls-files`))
    @everywhere include(filename)
end
I'm not planning to use the code outside of this immediate project, but I am running the simulation in parallel. Is there any benefit in creating modules?
I would say that the most important benefit is modularity :)
If you have different files that deal with different things, splitting the code into modules lets you keep track of the dependencies between the modules:
Which functions are purely implementation details of the given module and subject to change?
Which modules depend on which other modules?
It also lets you reuse the same name for different things in different modules if you need to, as long as you're a little careful about what you export. (You can still access those names from the outside as qualified names.)
For an example of such organisation, you can look at my repo https://github.com/toivoh/Debug.jl

Multiple, aggregate transforms of web.config

In short, is it possible to have this kind of nested transformation hierarchy for transforming web.config at build time?
web.config
|_ web.config.release
   |_ web.config.release.live
   |_ web.config.release.stage
So in other words, web.config.release applies xdt:transforms to web.config, and then the last two apply transformations to the output of that.
I'm trying to avoid the repetition that I'd otherwise have in all the transforms. The last two in my example here for instance would do little more than insert keys, connection strings or account passwords.
Have I missed something obvious - is this possible?
So it turns out this sort of works. It does what I want, although I have no idea how it knows which file to use in which circumstance; but given simply these files in VS2013, the web.debug and web.release transforms are applied as normal on build. Secondly, the Web.IIS Localhost xxxx configs are applied to the output of that when published. The latter files only need to contain the few lines that differ from the previous transformation output.
Publishing is good for debug, which is what I wanted it for, but I don't publish live web sites that way. I'd have thought that perhaps there was some build event task in my csproj that I could edit, but it seems to be either built into VS or defined elsewhere.
So I'm getting the multiple transform task that I wanted, however I haven't found a way to build nested transformations in general as outlined in the question.

Build system that supports multiple outputs per target

I work in the field of bioinformatics. My daily work processes several data files (DNA sequences, alignments, etc.) and produces many result files, so I want to use something like Unix make to automate the whole process, especially to resolve the dependencies between different data.
However, Unix make only supports one output per target, as it is designed for software builds, which typically generate one object file from several source files, or one executable from several object files. If you use custom virtual targets, they won't benefit from timestamp checking. Is there any build system that supports multiple output files per target? If there aren't any, I'm going to make the wheel.
Have a look at Drake, which is a replacement for make designed for data workflow management ("make for data").
Another option is makepp, which is an improved make. Among other features, it supports multiple targets.
