Randomize Make goals for a target

I have a C++ library and it has a few C++ static objects. The library could suffer from the C++ static initialization order fiasco. I'm trying to flush out unforeseen translation unit dependencies by randomizing the order of the *.o files during a build.
I visited 2.3 How make Processes a Makefile in the GNU manual and it tells me:
Goals are the targets that make strives ultimately to update. You can override this behavior using the command line (see Arguments to Specify the Goals) ...
I also followed the link to 9.2 Arguments to Specify the Goals, but a treatment of randomization was not provided. That did not surprise me.
Is it possible to have Make randomize its goals? If so, then how do I do it?
If not, are there any alternatives? This is in a test environment, so I have more tools available to me than just GNUmake.
Thanks in advance.

This is really implementation-defined, but GNU Make will process targets from left to right.
Say you have an OBJS variable with the objects whose order you want to randomize; you could write something like this (using e.g. shuf):
RAND_OBJS := $(shell shuf -e -- $(OBJS))
random_build: $(RAND_OBJS)
This holds as long as you're not using parallel make (the -j option). If you are, the order will still be randomized, but it will also depend on the number of jobs, system load, the current phase of the moon, etc.
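For concreteness, a minimal self-contained sketch (the object names and the link step are invented for illustration):

# hypothetical object list
OBJS := foo.o bar.o baz.o qux.o
# shuffle once per make invocation (shuf comes from GNU coreutils)
RAND_OBJS := $(shell shuf -e -- $(OBJS))

# both the build order and the link line use the shuffled order,
# so the static initialization order varies from run to run
random_build: $(RAND_OBJS)
	$(CXX) -shared -o libdemo.so $(RAND_OBJS)

%.o: %.cpp
	$(CXX) -fPIC -c $< -o $@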

The next release of GNU make will have a --shuffle mode. It will allow you to execute prerequisites in random order, to shake out missing dependencies, by running $ make --shuffle.
The feature was recently added in https://savannah.gnu.org/bugs/index.php?62100 and so far is available only in GNU make's git tree.
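As implemented there, --shuffle also accepts an optional argument, which helps when reproducing a failure (a sketch based on that patch; check make --help in your build for the exact spelling):

$ make --shuffle           # random order, random seed
$ make --shuffle=12345     # fixed seed, to replay a failing order
$ make --shuffle=reverse   # deterministic reverse order
$ make --shuffle=none      # disable shuffling again (e.g. to override MAKEFLAGS)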


clearmake doesn't like my MAKEFLAGS=j12 values

I use both GNU Make and - woe is me - ClearCase's clearmake.
Now, GNU make respects a variable named MAKEFLAGS, which for me is set to j20 on this multi-core machine I'm on. Unfortunately, clearmake also recognizes this variable, yet doesn't accept this value. It tells me:
clearmake: Error: Bad option (j)
clearmake: Error: Bad option (2)
clearmake: Error: Bad option (0)
Questions:
Why is this happening? Should clearmake accommodate GNU Make's usage?
How can I get around it, other than turning the flag off and on repeatedly?
It's been 15 years or so since I used clearmake, but assuming it doesn't support the GNU make-specific GNUMAKEFLAGS variable you can use:
export GNUMAKEFLAGS=-j20
and leave MAKEFLAGS unset.
The "BUILDING SOFTWARE WITH CLEARCASE" clearly states in its T"unsupported Gnu make features" that this option is indeed not supported.
–j [JOBS]
--jobs=[JOBS]
Maybe a clearmake -C -J can help (for testing): there should then be no limit to the number of parallel builds.
Are you calling GNU make from a clearmake build script? Or are you trying to create a single makefile that will support both build tools? I think the GNUMAKEFLAGS EV is safer for GNU-make-specific values. I would also use
CCASE_MAKEFLAGS for any makeflags that are specific to clearmake.
CCASE_CONC to set the concurrency value. While clearmake no longer passes the -J in MAKEFLAGS, it used to, and if you're using an older clearmake (somewhere in the 7.x releases, as I recall), you could upset "child" GNU make sessions, since they like -J about as much as clearmake likes -j.
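Putting that together, a sketch of the environment setup (the values are examples; check the env_ccase man page of your release for the exact variable semantics):

export GNUMAKEFLAGS=-j20   # read by GNU make only
export CCASE_CONC=20       # clearmake concurrency, instead of -J on each call
unset MAKEFLAGS            # keep the shared variable free of tool-specific flags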
Finally, check the env_ccase man page for the behavior mentioned in CCASE_MAKEFLAGS_V6_OBSOLETE. If you pass MAKEFLAGS explicitly in the build script, like
$(MAKE) $(MAKEFLAGS) TARGET=x
and had started clearmake like this:
clearmake -C gnu TARGET=Y
you'll actually get both TARGET macro definitions on the command line. Setting the mentioned EV (at all) avoids the "pass defined macros in MAKEFLAGS" behavior. The switch exists because some people have makefiles that DEPEND on this behavior, while others have ones BROKEN BY this behavior...
Assuming for the sake of argument that your company has a support agreement with either IBM or HCL, this is a good time to use your support channels to bring up clearmake concerns.

Default sequence of debhelper

I am trying to get a better understanding of debhelper's dh tool. As I understand it, dh is a frontend for the various dh_* helper tools, which can be called standalone or invoked automatically by dh. Usually a debian/rules file is created which invokes dh and possibly overrides certain dh_* invocations. dh then seems to know which of the dh_* tools it needs to invoke, and in which sequence.
The example under /usr/share/doc/debhelper/examples/rules.tiny contains the following as an example for a debian/rules file:
#!/usr/bin/make -f
%:
	dh $@
What is the sequence of dh_* helper tools that gets executed by dh as a result of this rules file? And more importantly, how does dh determine this sequence, and where is this documented?
The sequence of helper tools that will get executed depends on a few things:
what build target is being passed; these include build-arch, build-indep, build, clean, install-indep, install-arch, install, binary-arch, binary-indep, and binary. The meanings of (most of) these are discussed in Debian Policy §4.9.
the Debhelper compat level (as found in the debian/compat file)
your version of Debhelper (although an effort is made to make different versions work the same given the same compat level)
what helper commands have already been run since the last clean (in debhelper compat levels 9 and lower)
what addons are being used (the --with and --without options)
what override targets exist in the makefile (e.g. override_dh_auto_test)
As you can see, it could be confusing to document which commands are run, in which order, for all the possible build targets and configuration arrangements (or even just for the most common ones). The way to know, therefore, is to use the --no-act argument to dh, with your build directory set up the way you want it.
Here is an example run with the binary target in a dummy build directory I've just made using dh_make, put into compat level 10. The exact commands or ordering you see will likely differ slightly:
~/dh-demo$ dh binary --no-act
dh_testdir
dh_update_autotools_config
dh_autoreconf
dh_auto_configure
dh_auto_build
dh_auto_test
dh_testroot
dh_prep
dh_installdirs
dh_auto_install
dh_install
dh_installdocs
dh_installchangelogs
dh_installexamples
dh_installman
dh_installcatalogs
dh_installcron
dh_installdebconf
dh_installemacsen
dh_installifupdown
dh_installinfo
dh_systemd_enable
dh_installinit
dh_systemd_start
dh_installmenu
dh_installmime
dh_installmodules
dh_installlogcheck
dh_installlogrotate
dh_installpam
dh_installppp
dh_installudev
dh_installgsettings
dh_bugfiles
dh_ucf
dh_lintian
dh_gconf
dh_icons
dh_perl
dh_usrlocal
dh_link
dh_installwm
dh_installxfonts
dh_strip_nondeterminism
dh_compress
dh_fixperms
dh_strip
dh_makeshlibs
dh_shlibdeps
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
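For reference, the override targets listed among the factors above hook into this sequence by name. A minimal debian/rules sketch (the override body is just an example):

#!/usr/bin/make -f
%:
	dh $@

# runs in place of the dh_auto_test step shown above
override_dh_auto_test:
	@echo "skipping tests in this example"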

AFP Dijkstra's Shortest Path Algorithm

For the AFP entry Dijkstra's Shortest Path Algorithm, both the proof outline and the proof document were nonexistent.* Unfortunately, I did not find an IsaMakefile either, to build those documents locally. What is the best way to get those documents?
Another question: since Dijkstra.thy depends on a lot of other theories, is there a way to load everything faster?
*) It is fixed now.
(There seems to be something broken at AFP right now, please tell the editors about it.)
In general, you can download the sources of AFP entries and produce the documents yourself like this:
Get and unpack all AFP sources -- downloading separate entries is offered as well, but then you have to disentangle dependencies manually.
Invoke isabelle build like this:
isabelle build -d afp-2013-03-02 -o document=pdf -v Dijkstra_Shortest_Path
Here afp-2013-03-02 is the directory that was obtained by unpacking the current AFP sources.
See also the Isabelle System manual about "Isabelle sessions and build management", which is all new in Isabelle2013.
See isabelle build -b there to make things load faster, by producing persistent heap images from sessions.
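For example, to build and store a heap image for this entry (using the same directory as above):

isabelle build -d afp-2013-03-02 -b Dijkstra_Shortest_Path

Later sessions based on Dijkstra_Shortest_Path can then start from the saved image instead of rechecking all of its theories.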
The links in the AFP entry were indeed broken and should now be fixed again, sorry about that.
As Makarius writes, the AFP now uses Isabelle's new build system, i.e. it has a ROOT file for each entry that can be used to check the associated theories and build the document.
Makarius' answer is pretty much the official way to do it, although I would additionally recommend setting up the AFP as a component. This gives you the following steps:
Download the AFP to e.g. ~/afp
Set it up as a component, e.g. by adding ~/afp to ~/.isabelle/Isabelle2013/components (see also AFP as a component)
Build the entry with
isabelle afp_build Dijkstra_Shortest_Path
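Assuming the AFP was unpacked to ~/afp as in step 1, steps 2 and 3 look like this in the shell (the components file is created if missing):

mkdir -p ~/.isabelle/Isabelle2013
echo "$HOME/afp" >> ~/.isabelle/Isabelle2013/components
isabelle afp_build Dijkstra_Shortest_Path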
You can also have jEdit build the heap image for you. If the AFP is set up as a component (see the other answers for that), just start jEdit with
isabelle jedit -d '$AFP' -l Dijkstra_Shortest_Path
and jEdit will select Dijkstra_Shortest_Path as base logic and (re)build it if necessary.
If you make regular use of the AFP, it might be useful to add the AFP path by default. For this, create a file ROOTS in $ISABELLE_HOME_USER with the line $AFP in it (or add this line, if the file already exists).
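In the shell, that is (the path follows the Isabelle2013 defaults; the single quotes keep $AFP literal, since it is resolved by Isabelle, not by the shell):

mkdir -p ~/.isabelle/Isabelle2013
echo '$AFP' >> ~/.isabelle/Isabelle2013/ROOTS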

How to compile MPI and non-MPI versions of the same program with automake?

I have a C++ code that can be compiled with MPI support depending on a certain preprocessor flag; missing the appropriate flag, the sources compile to a non-parallel version.
I would like to set up the Makefile.am so that it compiles both the MPI-parallel and the sequential version, if an option to ./configure is given.
Here's the catch: MPI has its own C++ compiler wrapper, and insists that sources are compiled and linked using it rather than the standard C++ compiler. If I were to write the Makefile myself, I would have to do something like this:
myprog.seq: myprog.cxx
	$(CXX) ... myprog.cxx

myprog.mpi: myprog.cxx
	$(MPICXX) -DWITH_MPI ... myprog.cxx
Is there a way to tell automake that it has to use $(MPICXX) instead of $(CXX) when compiling the MPI-enabled version of the program?
I have the same problem, and I've found that there's no really good way to get autotools to conditionally use MPI compilers for particular targets. Autotools is good at figuring out which compiler to use based on what language your source is written in (CC, CXX, FC, F77, etc.), but it really isn't good at deciding whether or not to use an MPI compiler for a particular target. You can set MPICC, MPICXX, etc., but you essentially have to rewrite all your Makefile rules for the target (as you've done above) if you use the compiler this way. If you do that, what's the point of writing an automake file?
Someone else suggested using MPI like an external library, and this is the approach I'd advocate, but you shouldn't do it by hand, because different MPI installations have different sets of flags they pass to the compiler, and they can depend on the language you're compiling.
The good thing is that all the currently shipping MPI compilers that I know of support introspection arguments, like -show, -show-compile or -show-link. You can automatically extract the arguments from the scripts.
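For example, with an MPICH-style wrapper (the output shown is illustrative; the exact flags depend on the installation):

$ mpicxx -show
g++ -I/usr/local/mpich/include -L/usr/local/mpich/lib -lmpichcxx -lmpich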
So, what I did to deal with this was to make an m4 script that extracts the defines, includes, library paths, libs, and linker flags from the MPI compilers, then assigns them to variables you can use in your Makefile.am. Here's the script:
lx_find_mpi.m4
This makes MPI work the way automake expects it to. Incidentally, this is the approach CMake uses in their FindMPI module, and I find it works quite well there. It makes the build much more convenient because you can just do something like this for your targets:
bin_PROGRAMS = mpi_exe seq_exe
# This is all you need for a sequential program
seq_exe_SOURCES = seq_exe.C
# For an MPI program you need special LDFLAGS and INCLUDES
mpi_exe_SOURCES = mpi_exe.C
mpi_exe_LDFLAGS = $(MPI_CXXLDFLAGS)
INCLUDES = $(MPI_CXXFLAGS)
There are similar flags for the other languages since, like I said, the particular flags and libraries can vary depending on which language's MPI compiler you use.
lx_find_mpi.m4 also sets some shell variables so that you can test in your configure.ac file whether MPI was found. e.g., if you are looking for MPI C++ support, you can test $have_CXX_mpi to see if the macro found it.
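In configure.ac, that might look roughly like this (a sketch: the macro name is inferred from the file name, it checks the language selected with AC_LANG, and the exact variables and values it sets should be confirmed against the .m4 file itself):

AC_LANG_PUSH([C++])
LX_FIND_MPI
AC_LANG_POP([C++])
AM_CONDITIONAL([HAVE_MPI_CXX], [test "x$have_CXX_mpi" = xyes])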
I've tested this macro with mvapich and OpenMPI, as well as the custom MPICH2 implementation on BlueGene machines (though it does not address all the cross-compiling issues you'll see there). Let me know if something doesn't work. I'd like to keep the macro as robust as possible.
I am sorry that having automake use MPI is so difficult. I have been struggling with this for many months trying to find a good solution. I have a source tree that has one library and then many programs in sub-folders that use the library. Some of the folders are MPI programs, but this is what happens when I try to replace CXX with the MPI compiler in Makefile.am:
if USE_MPI
MPIDIR = $(MPICOMPILE)
MPILIB = $(MPILINK)
CXX=@MPICXX@
F77=@MPIF77@
MPILIBS=$(MPILINK)
endif
I get
CXX was already defined in condition TRUE, which includes condition USE_MPI ...
configure.ac:12: ... `CXX' previously defined here
I don't have a rule that specifies the compiler, so maybe there is a way to do that.
SUBDIRS = .
bin_PROGRAMS = check.ccmr
check_ccmr_SOURCES = check_gen.cpp
check_ccmr_CXXFLAGS = -I$(INCLUDEDIR) $(MPIDIR)
check_ccmr_LDADD = -L$(LIBDIR)
check_ccmr_LDFLAGS = $(MPILIB)
If you have disabled the subdir-objects option to automake, something like this might work:
configure.ac:
AC_ARG_ENABLE([seq], ...)
AC_ARG_ENABLE([mpi], ...)
AM_CONDITIONAL([ENABLE_SEQ], [test $enable_seq = yes])
AM_CONDITIONAL([ENABLE_MPI], [test $enable_mpi = yes])
AC_CONFIG_FILES([Makefile seq/Makefile mpi/Makefile])
Makefile.am:
SUBDIRS =
if ENABLE_SEQ
SUBDIRS += seq
endif
if ENABLE_MPI
SUBDIRS += mpi
endif
sources.am:
ALL_SOURCES = src/foo.c src/bar.cc src/baz.cpp
seq/Makefile.am:
include $(top_srcdir)/sources.am
bin_PROGRAMS = seq
seq_SOURCES = $(ALL_SOURCES)
mpi/Makefile.am:
include $(top_srcdir)/sources.am
CXX = $(MPICXX)
AM_CPPFLAGS = -DWITH_MPI
bin_PROGRAMS = mpi
mpi_SOURCES = $(ALL_SOURCES)
The only thing stopping you from doing both of these in the same directory is the override of $(CXX). You could, for instance, set mpi_CPPFLAGS and automake would handle that gracefully, but the compiler switch makes it a no-go here.
A possible workaround for not using different sources could be:
myprog.seq: myprog.cxx
	$(CXX) ... myprog.cxx

myprog-mpi.cxx: myprog.cxx
	@cp myprog.cxx myprog-mpi.cxx

myprog.mpi: myprog-mpi.cxx
	$(MPICXX) -DWITH_MPI ... myprog-mpi.cxx
	@rm -f myprog-mpi.cxx
for Automake:
bin_PROGRAMS = myprog-seq myprog-mpi
myprog_seq_SOURCES = myprog.c

myprog-mpi.c: myprog.c
	@cp myprog.c myprog-mpi.c

myprog_mpi_SOURCES = myprog-mpi.c
myprog_mpi_LDFLAGS = $(MPI_CXXLDFLAGS)
INCLUDES = $(MPI_CXXFLAGS)
BUILT_SOURCES = myprog-mpi.c
CLEANFILES = myprog-mpi.c
Here is the solution that I came up with for building two static libraries - one with MPI (libmylib_mpi.a) and one without (libmylib.a). The advantage of this method is that there is no need for duplicate source files, a single Makefile.am covers both variants, and you can still use subdirs. You should be able to modify this as needed to produce a binary instead of a library. I build the non-MPI library as normal, then for the MPI variant I leave _SOURCES empty and use _LIBADD instead, specifying an extension of .mpi.o for the object files. I then specify a rule to generate the MPI object files using the MPI compiler.
Overall file / directory structure is something like
configure.ac
Makefile.am
src
mylib1.cpp
mylib2.cpp
...
include
mylib.h
...
configure.ac:
AC_INIT()
AC_PROG_RANLIB
AC_LANG(C++)
AC_PROG_CXX
# test for MPI, define MPICXX, etc. variables, and define HAVE_MPI as a condition that will evaluate to true if MPI is available and false otherwise.
AX_MPI([AM_CONDITIONAL([HAVE_MPI], [test "1" = "1"])],[AM_CONDITIONAL([HAVE_MPI], [test "1" = "2"])]) #MPI optional for xio
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
There is probably a more efficient way to do the conditional check than what I have listed here (suggestions welcome).
Makefile.am:
AUTOMAKE_OPTIONS = subdir-objects
lib_LIBRARIES = libmylib.a
libmylib_a_SOURCES = src/mylib_1.cpp src/mylib_2.cpp ...
#conditionally generate libmylib_mpi.a if MPI is available
if HAVE_MPI
lib_LIBRARIES += libmylib_mpi.a
libmylib_mpi_a_SOURCES = #no sources listed here
#use LIBADD to specify objects to add - use the basic filename with a .mpi.o extension
libmylib_mpi_a_LIBADD = src/mylib_1.mpi.o src/mylib_2.mpi.o ...
endif
AM_CPPFLAGS = -I${srcdir}/include
include_HEADERS = include/mylib.h
# define a rule to compile the .mpi.o objects from the .cpp files with the same name
src/%.mpi.o: ${srcdir}/src/%.cpp ${srcdir}/include/mylib.h
	$(MPICXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -DWITH_MPI=1 -c $(patsubst %.mpi.o,$(srcdir)/%.cpp,$@) -o $@
#define a rule to clean the .mpi.o files
clean-local:
	-rm -f src/*.mpi.o
MPI installations do (usually) ship with compiler wrappers, but there is no requirement that you use them -- MPI does not insist on it. If you want to go your own way you can write your own makefile to ensure that the C++ compiler gets the right libraries (etc). To figure out what the right libraries (etc) are, inspect the compiler wrapper which is, on all the systems I've used, a shell script.
At first sight the compiler wrappers which ship with products such as the Intel compilers are a little daunting but stop and think about what is going on -- you are simply compiling a program which makes use of an external library or two. Writing a makefile to use the MPI libraries is no more difficult than writing a makefile to use any other library.
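Open MPI's wrappers, for instance, can report their compile and link flags separately (other implementations spell this differently, e.g. -show with MPICH), so a hand-written rule might look like:

# ask the wrapper for its flags instead of compiling with it
MPI_COMPILE_FLAGS := $(shell mpicxx --showme:compile)
MPI_LINK_FLAGS    := $(shell mpicxx --showme:link)

myprog.mpi: myprog.cxx
	$(CXX) -DWITH_MPI $(MPI_COMPILE_FLAGS) myprog.cxx $(MPI_LINK_FLAGS) -o $@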

cmake: Working with multiple output configurations

I'm busy porting my build process from msbuild to cmake, to deal better with the gcc toolchain (which generates much faster code for some of the numeric stuff I'm doing).
Now, I'd like cmake to generate several versions of the output: one version with sse2, another with x64, and so on. However, cmake seems to work most naturally if you simply have a bunch of flags (say, "sse2_enable" and "platform") and then generate one output based on those flags.
What's the best way to work with multiple output configurations like this? Intuitively, I'd like to iterate over a large number of flag combinations and rerun the same CMakeLists.txt files for each combination - but of course, you can't express that within the CMakeLists.txt files (AFAIK).
The recommended way to do this is to simply have multiple build directories. From each one you simply call cmake with the required settings.
For example you could do, starting in the base source directory (using Linux shell syntax but the idea is the same):
mkdir build-sse2 && cd build-sse2
cmake .. -DENABLE_SSE2=ON # or whatever enables it in your CMakeLists.txt
make
cd ..
mkdir build-x64 && cd build-x64
cmake .. -DENABLE_X64=ON # or whatever again...
make
This way, each build directory is completely separated from each other.
This allows you to have one directory for Debug, another for Release and another for cross-compiling.
There hasn't been much activity here, so I've come up with a workable solution myself. It's probably not ideal, so if you have a better idea, please do add it!
Now, it's hard to iterate over build configs in cmake because cmake's crucial variables don't live in function scope - so, for instance, if you call include_directories(X), the X directory will remain in the include list even after the function exits.
Directories do have scope - and while normally each input directory corresponds to one output directory, you can have multiple output directories.
So, my solution looks like this:
project(FooAllConfigs)
set(FooVar 2)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-2b")
set(FooVar 5)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-5c")
set(FooVar 3)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-3b")
set(FooVar 3)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-3c")
The normal project dir then contains a CMakeLists.txt file with code to set up the appropriate includes and compiler options, given the global variables set in the FooAllConfigs project. It also determines a build suffix that is appended to all build outputs - every output, even an indirectly generated one (e.g. by add_executable), must have a unique name.
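For illustration, the CMakeLists.txt in project_dir might derive that suffix from the variables set above (FooVar and FooAnotherVar are the hypothetical names from this example):

# inside project_dir/CMakeLists.txt
set(BUILD_SUFFIX "${FooVar}${FooAnotherVar}")
add_definitions(-DFOO_VAR=${FooVar})
add_executable(foo-${BUILD_SUFFIX} main.cpp)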
This works fine for me.
