How to call sub-makefiles without hardcoding their paths? - gnu-make

I have a folder structure like so:
./Makefile
./template
./template/Makefile
./template/html/
./template/html/Makefile
./template/latex/
./template/latex/Makefile
./template/latex/foo/
./template/latex/foo/Makefile
./template/latex/bar/
./template/latex/bar/Makefile
./... # a lot more
(The folder structure may be nested even more).
How can I call the ./template/latex/bar/ Makefile (specifically its all target) without hard-coding the path to it?
Best case would be to be able to call it like so:
make latex/bar # for the latex/bar template
make latex # for all latex templates
make html # for all html templates
make # for all templates
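One possible approach (a rough, untested sketch of a top-level Makefile, assuming GNU Make and that every sub-Makefile exposes an all target) is to turn each directory under ./template that contains its own Makefile into a phony target and recurse with $(MAKE) -C:
# Every directory below ./template that has its own Makefile, named
# relative to ./template (e.g. "html", "latex", "latex/bar").
TEMPLATES := $(patsubst template/%/Makefile,%,$(shell find template -mindepth 2 -name Makefile))

.PHONY: all $(TEMPLATES)

# Plain `make` builds every template via ./template/Makefile.
# Recipe lines must be indented with a TAB character.
all:
	$(MAKE) -C template all

# `make latex/bar`, `make latex`, `make html`, ... recurse into the matching directory.
$(TEMPLATES):
	$(MAKE) -C template/$@ all
With this sketch, make latex/bar recurses into ./template/latex/bar, while make latex and make html rely on ./template/latex/Makefile and ./template/html/Makefile to build their own subdirectories.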

Related

Snakemake: wildcards do not expand in script line of rule

I am running a pipeline and was trying to optimize it by declaring the paths in a config file (config.yaml). The config.yaml file contains the path to the scripts that are run inside the pipeline, but when I expand the wildcard for the path, the pipeline does not run the script. The script itself runs fine.
To explain my problem:
rule with_script:
    input: someinput
    output: someoutput
    script: expand("{script_path}/scriptfile", script_path = config[scriptpath])
input, output or rule all do not contain the script's path wildcard, so here is the first time I'm declaring it. The config.yaml line that contains the path looks like this:
scriptpath: /path/to/the/script
Is there a way to keep the wildcard and the config-file path (to make it easier for others to make changes if needed) and have the script work? As it is, Snakemake doesn't even enter the script file. Or maybe it is possible to declare global wildcards outside of rule all?
Thank you for your help!
P.S.: I'm sorry if this question has already been answered, but I couldn't find anything to help me with this.
You cannot call a function like expand() in the script section; Snakemake expects a path to your script.
Like the documentation states:
The script path is always relative to the Snakefile containing the directive (in contrast to the input and output file paths, which are relative to the working directory). It is recommended to put all scripts into a subfolder "scripts"
If you need to define different paths to your scripts, you can always do it in Python outside of your rules. Don't forget that all Python code outside of rules is executed before the DAG is built, so you can define whatever variables you want and use them in your rules.
SCRIPTSPATH = config["scriptpath"]

rule with_script:
    input: someinput
    output: someoutput
    script: "{SCRIPTSPATH}/scriptfile"
Note:
Do not mix wildcards and "variables". In an expand() call such as
expand("{script_path}/scriptfile", script_path = config[scriptpath])
{script_path} is not a wildcard but just a placeholder for the values given in the second parameter of the function.
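You can see this by evaluating the expand() call on its own; it simply substitutes the keyword-argument values into the placeholder (a quick illustration using the path from the question's config.yaml):
# expand() is plain string substitution over the named placeholders;
# nothing here is a Snakemake wildcard inferred from inputs/outputs.
from snakemake.io import expand

print(expand("{script_path}/scriptfile", script_path="/path/to/the/script"))
# ['/path/to/the/script/scriptfile']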

Adding custom CSS file to Dash in Julia

For Python there is an option to add custom CSS to a Dash app. The method seems quite straightforward; the documentation says:
Just create a folder named assets in the root of your app directory
and include your CSS and JavaScript files in that folder. Dash will
automatically serve all of the files that are included in this folder.
By default the url to request the assets will be /assets but you can
customize this with the assets_url_path argument to dash.Dash
source: https://dash.plotly.com/external-resources
However, when I try to do the same in Julia, nothing happens.
Does this feature exist in Julia? If not, how can I achieve the same thing?
I found a hack, though I have no idea if this is the correct way...
Essentially, I tried to find the keyword arguments of app = dash() by running methods(dash):
julia> methods(dash)
# 1 method for generic function "dash":
[1] dash(; external_stylesheets, external_scripts, url_base_pathname, requests_pathname_prefix, routes_pathname_prefix, assets_folder, assets_url_path, assets_ignore, serve_locally, suppress_callback_exceptions, prevent_initial_callbacks, eager_loading, meta_tags, index_string, assets_external_path, include_assets_files, show_undo_redo, compress, update_title) in Dash at C:\Users\<User>\.julia\packages\Dash\Weukk\src\app\dashapp.jl:291
where I noticed an assets_folder argument. Passing in the absolute path seems to work, so the full code looks something like this:
using Dash
app = dash(assets_folder="/absolute/path/to/assets")
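If you want to avoid hard-coding an absolute path, one option (a sketch, assuming an assets/ folder with your CSS sits next to the script and a recent Dash.jl that exports html_div and run_server) is to resolve the path relative to the file itself:
using Dash

# Resolve assets/ relative to this script instead of hard-coding the path.
app = dash(assets_folder = joinpath(@__DIR__, "assets"))
app.layout = html_div("Styled by the CSS files found in assets/")

run_server(app, "127.0.0.1", 8050)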

Avoid rendering of specific .md files from blogdown::serve_site()

I have a file located at
content/post/data_for_posts/my_file.md
I have it there because it's quite easy to do htmltools::includeMarkdown("data_for_posts/my_file.md") and recycle this file in different posts.
My problem is that when I serve_site(), this creates a public/post/data_for_posts/index.html, which means it gets published to my website (dated January 1 of year 0001). I guess I could change the date to the year 10000, but I would rather handle it the way I handle the .Rmd and other files, as suggested here.
I have tried to modify my config.toml but have not managed to solve the issue.
ignoreFiles = ["\\.Rmd$", "\\.Rmarkdown$", "_files$", "_cache$", "content/post/data_for_posts/my_file.md"]
Here are a couple of techniques that I use to do this:
Rename data_for_posts/my_file.md so it uses a file extension that Hugo does not interpret as a known markup language, for example change .md to .markd or .mdn.[*]
Rename data_for_posts/my_file.md so it includes a string that you will never use in a real content file, for example data_for_posts-UNPUBLISHED/my_file.md. Then add that string (UNPUBLISHED or whatever) to your config ignoreFiles list.[**]
[*] In the content/ directory, a file with one of the following file extensions will be interpreted by hugo as containing a known markup language: .ad, .adoc, .asciidoc, .htm, .html, .markdown, .md, .mdown, .mmark, .pdc, .pandoc, .org, or .rst (this is an excerpt of something I wrote).
[**] The strings listed in ignoreFiles seem to be case-sensitive, so I like to use all-upper-case characters in my ignored file names (because I never use upper-case characters in real content file names). Also note that there is no need to specify the path, and in my experience path delimiters (/ or \) cause problems.
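For the second technique, the config change might look like this (a sketch based on the ignoreFiles line from the question, with UNPUBLISHED as the marker string and the folder renamed to data_for_posts-UNPUBLISHED/):
ignoreFiles = ["\\.Rmd$", "\\.Rmarkdown$", "_files$", "_cache$", "UNPUBLISHED"]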

Is there a way to avoid recursive make with nobase?

I've got the following directory structure:
Makefile.am
src/
    mymod/
        mod.cc
        submod/
            submod.cc
inc/
    Makefile.am
    mymod/
        mod.hh
        submod/
            submod.hh
Using autotools, I'd like to distribute both a library made from src and the headers in inc. The top level Makefile.am looks something like
lib_LTLIBRARIES = mylib.la
mylib_la_SOURCES=./mymod/mod.cc\
./mymod/submod/submod.cc
SUBDIRS=inc
Then inc/Makefile.am has
mymod_includedir=$(includedir)
nobase_mymod_include_HEADERS=mymod/mod.hh\
mymod/submod/submod.hh
This works OK. I end up with whatever library stuff, and my headers get installed appropriately. However, I'd like to eliminate the recursion involved in the Makefile. The problem is that if I move the lines in inc/Makefile.am to the root directory, then I have to update the paths as follows:
mymod_includedir=$(includedir)
nobase_mymod_include_HEADERS=inc/mymod/mod.hh\
inc/mymod/submod/submod.hh
This results in my headers getting dumped as $PREFIX/include/inc/mymod/mod.hh and not $PREFIX/include/mymod/mod.hh like I want. I know I could do something like
mymodincludedir=$(includedir)/mymod
mymod_HEADERS=inc/mymod/mod.hh
mysubmodincludedir=$(includedir)/mymod/submod
mysubmod_HEADERS=inc/mymod/submod/submod.hh
but that's pretty painful, because there's a lot of subdirectories, and more subdirectories within the subdirectories (we're distributing a 3rd party's code that our own headers need). What I'd like to be able to do is either tell automake to just copy the directories in /inc to $(includepath) along with every subdirectory it encounters within, or tell it to only strip part of the path from the header files I'm listing. Is this possible?
I think the closest you can get is Karel Zak's Makemodule.am approach, with which nobase_ would work as you need.

Printing hard copies of code

I have to hand in a software project that requires either a paper or .pdf copy of all the code included.
One solution I have considered is grouping classes by context and running cat *.extension > out.txt for each group, then catting those text files together so I end up with a single text file with the classes grouped by context. This is not an ideal solution; there will be no page breaks.
Another idea I had was a shell script to inject LaTeX page breaks between the files to be joined; this would be more acceptable, although I'm not too adept at scripting or LaTeX.
Are there any tools that will do this for me?
Take a look at enscript (or nenscript), which will convert to Postscript, render in columns, add headers/footers and perform syntax highlighting. If you want to print code in a presentable fashion, this works very nicely.
e.g. here's my setting (within a zsh function):
# -2 = 2 columns
# -G = fancy header
# -E = syntax filter
# -r = rotated (landscape)
# syntax is picked up from .enscriptrc / .enscript dir
enscript -2GrE $*
For a quick solution, see a2ps, followed by ps2pdf. For a nicer, more complex solution I would go for a simple script that puts each file in a LaTeX listings environment and combines the result.
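A minimal sketch of that second approach (untested; it assumes pdflatex with the listings and geometry packages, takes the source files as arguments, and uses an example script name):
#!/bin/sh
# Wrap each source file in a LaTeX listings environment, one file per
# page, and compile the result to code.pdf.
out=code.tex
{
  printf '\\documentclass{article}\n'
  printf '\\usepackage{listings}\n'
  printf '\\usepackage[margin=2cm]{geometry}\n'
  printf '\\begin{document}\n'
  for f in "$@"; do
    # \detokenize keeps underscores in file names from breaking LaTeX.
    printf '\\section*{\\texttt{\\detokenize{%s}}}\n' "$f"
    printf '\\lstinputlisting[breaklines=true,basicstyle=\\ttfamily\\small]{%s}\n' "$f"
    printf '\\newpage\n'
  done
  printf '\\end{document}\n'
} > "$out"
pdflatex "$out"
Saved as, say, print-code.sh, it could be run as sh print-code.sh src/*.cc to produce code.pdf with one file per page.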
