SCons positively refuses to build into a variant_dir directory

I have been porting a project that used Make to SCons. Generally, I am pleased by how easy SCons is to use relative to Make. However, there is one thing that has resisted several hours of attempts.
The files in my project are contained in a tree rooted at ProjectHome; the sources are in several subdirectories under ProjectHome/src/. I have a SConstruct file in ProjectHome which defines the build environment and then calls a SConscript (also in ProjectHome) which builds the object files; the SConstruct then puts these into a library in ProjectHome/lib.
Everything works fine, except that I would like to separate where the .o files are kept from where the source files are.
So here's what I have:
#SConstruct.py
...
# The environment is defined above, no issues
cppobj, chfobj=SConscript('./SConscript.py', 'env', variant_dir='build', src_dir='.', duplicate=False)
env.Install('lib/'+str(Dim)+'D', env.SharedLibrary(target='Grade'+str(n), source=cppobj+chfobj))
and this is the SConscript.py:
#SConscript.py
import platform
import os
import sys
def getSubdirs(abs_path_dir):
    """Return a sorted list of the subdirectories in abs_path_dir."""
    lst = [x[0] for x in os.walk(abs_path_dir)]
    lst.sort()
    return lst
Dirs = getSubdirs(os.getcwd() + '/src')  # gives me the list of directories in src
CppNodes = []
ChFNodes = []
Import('env')
for directory in Dirs[2:3]:
    CppNodes += Glob(directory + '/*.cpp')
    ChFNodes += Glob(directory + '/*.ChF')
# env.Object can work on lists
ChFobj = env.SharedObject(ChFNodes)
# This builder likes to work one file at a time:
# it builds an internal representation of the _F.H headers,
# so that when an #include is encountered, SCons looks
# at this list too, and not just at what IncDirs specifies.
if len(ChFNodes) == 1:  # this is ridiculous, but having only one ChF file causes trouble
    os.system('touch dummyF.ChF')
    ChFNodes.append('dummyF.ChF')
ChFHeader = []
for file in ChFNodes:
    ChFHeader += env._H(source=file)
Cppobj = env.SharedObject(CppNodes)
Return('Cppobj ChFobj')
However, for the life of me, build is ignored completely. I have tried different combinations, even placing SConscript.py in the build directory and calling SConscript('build/SConscript.py', 'env', ...), you name it: SCons stubbornly refuses to do anything with build. Any help is appreciated. To be clear, the build does create the libraries; it is just that it places the intermediate object files in the src directories.
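One thing that may matter here: SCons only remaps paths that are relative to the SConscript's own directory, so Glob calls built from os.getcwd() absolute paths, as above, can keep pointing into the source tree. For reference, a minimal sketch of the layout in which variant_dir remapping normally takes effect (names hypothetical):

# SConstruct -- minimal variant_dir sketch (hypothetical names)
env = Environment()
# The SConscript lives in src/; SCons maps src/ onto build/
objs = SConscript('src/SConscript', exports='env',
                  variant_dir='build', duplicate=False)
env.SharedLibrary(target='Grade', source=objs)

# src/SConscript -- a relative Glob, so the object nodes land in build/
Import('env')
objs = env.SharedObject(Glob('*.cpp'))
Return('objs')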

Related

Return one folder above current directory in Julia

In Julia, I can get the current directory from
@__DIR__
For example, when I run the above in the "Current" folder, it gives me
"/Users/jtheath/Dropbox/Research/Projects/Coding/Current"
However, I want it to return one folder above the present folder; i.e.,
"/Users/jtheath/Dropbox/Research/Projects/Coding"
Is there an easy way to do this in a Julia script?
First, please note that @__DIR__ generally expands to the directory of the current source file (it does, however, return the current working directory if there are no source files involved, e.g. when run from the REPL). To reliably get the current working directory, you should use pwd() instead.
Now to your real question: I think the easiest way to get the path to the parent directory would be to simply use dirname:
julia> dirname("/Users/jtheath/Dropbox/Research/Projects/Coding/Current")
"/Users/jtheath/Dropbox/Research/Projects/Coding"
Note that AFAIU this only uses string manipulation and does not care whether the paths involved actually exist in the filesystem (which is why the example above works on my system although I do not have the same filesystem structure as you). dirname is also relatively sensitive to the presence or absence of a trailing slash (which shouldn't be a problem if you feed it something that comes directly from pwd() or @__DIR__).
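To illustrate the trailing-slash caveat (the path is made up; dirname is pure string manipulation and never touches the filesystem):

julia> dirname("/Users/jtheath/Dropbox/Research/Projects/Coding/Current/")
"/Users/jtheath/Dropbox/Research/Projects/Coding/Current"

With a trailing slash, dirname strips only the empty last component, so it returns the same folder rather than its parent.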
I sometimes also use something like this, in the hope that it might be more robust when I want to work with paths that actually exist in the filesystem:
julia> curdir = pwd()
"/home/francois"
julia> abspath(joinpath(curdir, ".."))
"/home/"

(ASDF 3) Is it possible to recursively load systems in subdirectories?

I know about using :modules, but what about when systems get nested? Suppose I have the following structure, relative to some unknown user directory:
foo/
-foo.asd
-bar/
--bar.asd
This could arise, for example, when using Git submodules. How shall I configure the (defsystem) call in foo.asd to load bar as a dependency, without modifying a config file outside of foo/ or demanding particular placement for the foo/ tree itself? Feels like it should be simple.
3 Feb. 2020: From @Svante's answer, it sounds like my question is really 'How do I dynamically ensure that foo/ and bar/ both get into the *source-registry*?' The ASDF manual makes me think this should do the trick:
(asdf:initialize-source-registry
  '(:source-registry
     (:tree "«absolute-path-to-foo»/")
     :inherit-configuration))
though I have not seen an example of that usage.
26 Mar. 2020: The technique above seems to work fine, so I'm closing this question. ASDF 3 is excellent.
ASDF doesn't care about relative locations of .asd files. ASDF systems and their dependencies are completely orthogonal to file/directory structure and oblivious to any source version control.
It just looks in several locations for .asd files; each such file may then contain definitions for systems. It will generally recurse into the configured folders, so any .asd file in a Git submodule would usually also be found.
The definitions inside an .asd file, e.g. of components, then work relative to the location of that file.
In your example, if you give a :depends-on ("bar") option to the "foo" system, it would just work, no matter where bar.asd resides (as long as it is somewhere where ASDF finds it).
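For instance, a minimal foo.asd along those lines might look like this (the component list is an assumption):

(defsystem "foo"
  :depends-on ("bar")        ; found wherever bar.asd lives in the registry
  :components ((:file "foo")))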
A bit more awareness would be required if you have several versions of a library. This might happen if you work on "foo" and "bar" at the same time, while a stable version of "bar" is also available, e.g. in a Quicklisp dist. Then the lookup order comes into play, but usually your “personal” directories have precedence over “system” directories, so again, it would just work. For more control, you might want to look into qlot.

Is there a way to avoid recursive make with nobase?

I've got the following directory structure:
Makefile.am
src/
    mymod/
        mod.cc
        submod/
            submod.cc
inc/
    Makefile.am
    mymod/
        mod.hh
        submod/
            submod.hh
Using autotools, I'd like to distribute both a library made from src and the headers in inc. The top level Makefile.am looks something like
lib_LTLIBRARIES = mylib.la
mylib_la_SOURCES = ./mymod/mod.cc \
                   ./mymod/submod/submod.cc
SUBDIRS = inc
Then inc/Makefile.am has
mymod_includedir = $(includedir)
nobase_mymod_include_HEADERS = mymod/mod.hh \
                               mymod/submod/submod.hh
This works OK. I end up with whatever library stuff, and my headers get installed appropriately. However, I'd like to eliminate the recursion involved in the Makefile. The problem is that if I move the lines in inc/Makefile.am to the root directory, then I have to update the paths as follows:
mymod_includedir = $(includedir)
nobase_mymod_include_HEADERS = inc/mymod/mod.hh \
                               inc/mymod/submod/submod.hh
This results in my headers getting dumped at $PREFIX/include/inc/mymod/mod.hh and not $PREFIX/include/mymod/mod.hh as I want. I know I could do something like
mymodincludedir = $(includedir)/mymod
mymodinclude_HEADERS = inc/mymod/mod.hh
mysubmodincludedir = $(includedir)/mymod/submod
mysubmodinclude_HEADERS = inc/mymod/submod/submod.hh
but that's pretty painful, because there are a lot of subdirectories, and more subdirectories within the subdirectories (we're distributing a third party's code that our own headers need). What I'd like to be able to do is either tell automake to just copy the directories in inc/ to $(includedir) along with every subdirectory it encounters within, or tell it to strip only part of the path from the header files I'm listing. Is this possible?
I think the closest you can find is Karel Zak's Makemodule.am approach, for which nobase_ would work as you need.

Gulp — how to get lazy, ‘make’-like building?

I am using gulp for css and js processing. Sometimes I miss the good old laziness of the unix make command:
only generate transformed files (whatever the transformation, e.g. compilation) from original files that have actually changed (based on time stamps).
this holds from stage 1 to 2 (.cpp -> .o), from stage 2 to 3 (linking or other stuff), and so on through whatever your dependency graph gives...
Make is not limited to source code: you can do image manipulation in several steps (efficiently ‘lazy’ generation of downscaled thumbs, for example) or much else. All based on the fairly simple rule: „is at least one of the source files newer with respect to the current output file(s)?“
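For comparison, the make idiom being described here is just a timestamp-checked pattern rule; e.g. a thumbnail step might look like this (the directory names and the ImageMagick convert invocation are illustrative):

# Makefile: regenerate a thumbnail only when the original is newer
thumbs/%.jpg: full/%.jpg
	convert $< -resize 200x200 $@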
Unlike gulp, every step generates (more or less temporary) files, not a continuous pipe.
Is there a way to get the same kind of laziness in gulp, i.e. when generating css?
only transform those (less|sass|stylus) files ➝ css if something changed (in that very file)
same for adding in browser prefixes, concat, minify
Admittedly, beyond the first 1 or 2 steps, the output is most likely already a single stream, so any change means ‘touched’. Still, when playing for example with minify options, I'd rather be lazy about the early transpile, prefixing, and concat stages (drawing prior results from a temp file). The same goes for the javascript side (TypeScript, ...).
lazypipe and gulp-cache sound tempting, but are something else if I understand correctly. Saying .watch() is also only a partial answer, for the very first stage.
Is there a more generic approach?
If you're set on using Gulp, then the incremental-build recipe based on the gulp-cached and gulp-remember plugins would seem to be the way to do it.
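A minimal sketch of that recipe, assuming a Less pipeline and gulp 4-style tasks (the paths and cache names are made up):

const gulp = require('gulp');
const cached = require('gulp-cached');
const remember = require('gulp-remember');
const less = require('gulp-less');
const concat = require('gulp-concat');

function css() {
  return gulp.src('src/css/**/*.less')
    .pipe(cached('styles'))    // pass through only files changed since the last run
    .pipe(less())              // compile just those
    .pipe(remember('styles'))  // re-add the unchanged, previously compiled files
    .pipe(concat('site.css'))
    .pipe(gulp.dest('dist'));
}

exports.css = css;
exports.watch = () => gulp.watch('src/css/**/*.less', css);

Note that gulp-cached keeps its cache in memory for the lifetime of the gulp process, so this pays off most inside a watch task rather than across separate runs.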

cmake: Working with multiple output configurations

I'm busy porting my build process from msbuild to cmake, to be better able to deal with the gcc toolchain (which generates much faster code for some of the numeric stuff I'm doing).
Now, I'd like cmake to generate several versions of the output: say, one version with sse2, another for x64, and so on. However, cmake seems to work most naturally if you simply have a bunch of flags (say, "sse2_enable" and "platform") and then generate one output based on those flags.
What's the best way to work with multiple output configurations like this? Intuitively, I'd like to iterate over a large number of flag combinations and rerun the same CMakeLists.txt files for each combination - but of course, you can't express that within the CMakeLists.txt files (AFAIK).
The recommended way to do this is to have multiple build directories: from each one you call cmake with the required settings.
For example you could do, starting in the base source directory (using Linux shell syntax but the idea is the same):
mkdir build-sse2 && cd build-sse2
cmake .. -DENABLE_SSE2=ON # or whatever enables it in your CMakeLists.txt
make
cd ..
mkdir build-x64 && cd build-x64
cmake .. -DENABLE_X64=ON # or whatever again...
make
This way, each build directory is completely separate from the others.
This allows you to have one directory for Debug, another for Release, and another for cross-compiling.
There hasn't been much activity here, so I've come up with a workable solution myself. It's probably not ideal, so if you have a better idea, please do add it!
Now, it's hard to iterate over build configs in cmake because cmake's crucial variables don't live in function scope - so, for instance, if you do include_directories(X), the X directory will remain in the include list even after the function exits.
Directories do have scope - and while normally each input directory corresponds to one output directory, you can have multiple output directories.
So, my solution looks like this:
project(FooAllConfigs)

set(FooVar 2)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-2b")

set(FooVar 5)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-5c")

set(FooVar 3)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-3b")

set(FooVar 3)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-3c")
The normal project dir then contains a CMakeLists.txt file with code to set up the appropriate includes and compiler options given the global variables set in the FooAllConfigs project; it also determines a build suffix that's appended to all build outputs - any output, even an indirectly generated one (e.g. from add_executable), must have a unique name.
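To make that concrete, project_dir/CMakeLists.txt might look something like this (a hypothetical sketch; the variable and target names are assumptions, not the poster's actual code):

# project_dir/CMakeLists.txt
set(suffix "${FooVar}${FooAnotherVar}")
add_library(foo-${suffix} STATIC foo.cpp)
target_compile_definitions(foo-${suffix} PRIVATE FOO_VAR=${FooVar})
# every target name embeds the suffix, so the four
# add_subdirectory() calls can coexist in one build tree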
This works fine for me.
