I am trying to get a better understanding of debhelper's dh tool. As I understand it, dh is a frontend for the various dh_* helper tools, which can either be called standalone or invoked automatically by dh. Usually a debian/rules file is created that invokes dh and possibly overrides certain dh_* invocations. dh then seems to know which of the dh_* tools it needs to invoke, and in which sequence.
The example under /usr/share/doc/debhelper/examples/rules.tiny contains the following as an example for a debian/rules file:
#!/usr/bin/make -f
%:
	dh $@
What is the sequence of dh_* helper tools that gets executed by dh as a result of this rules file? And more importantly, how does dh determine this sequence, and where is this documented?
The sequence of helper tools that will get executed depends on a few things:
what build target is being passed; these include build-arch, build-indep, build, clean, install-indep, install-arch, install, binary-arch, binary-indep, and binary. The meanings of (most of) these are discussed in Debian Policy §4.9.
the Debhelper compat level (as found in the debian/compat file)
your version of Debhelper (although an effort is made to make different versions work the same given the same compat level)
what helper commands have already been run since the last clean (in debhelper compat levels 9 and lower)
what addons are being used (the --with and --without options)
what override targets exist in the makefile (e.g. override_dh_auto_test)
As you can see, it could be confusing to document which commands are run, in which order, for all the possible build targets and configuration arrangements (or even just for the most common ones). The way to know, therefore, is to use the --no-act argument to dh, with your build directory set up the way you want it.
Here is an example run with the binary target in a dummy build directory I've just made using dh_make, put into compat level 10. The exact commands and ordering you see will likely differ slightly:
~/dh-demo$ dh binary --no-act
dh_testdir
dh_update_autotools_config
dh_autoreconf
dh_auto_configure
dh_auto_build
dh_auto_test
dh_testroot
dh_prep
dh_installdirs
dh_auto_install
dh_install
dh_installdocs
dh_installchangelogs
dh_installexamples
dh_installman
dh_installcatalogs
dh_installcron
dh_installdebconf
dh_installemacsen
dh_installifupdown
dh_installinfo
dh_systemd_enable
dh_installinit
dh_systemd_start
dh_installmenu
dh_installmime
dh_installmodules
dh_installlogcheck
dh_installlogrotate
dh_installpam
dh_installppp
dh_installudev
dh_installgsettings
dh_bugfiles
dh_ucf
dh_lintian
dh_gconf
dh_icons
dh_perl
dh_usrlocal
dh_link
dh_installwm
dh_installxfonts
dh_strip_nondeterminism
dh_compress
dh_fixperms
dh_strip
dh_makeshlibs
dh_shlibdeps
dh_installdeb
dh_gencontrol
dh_md5sums
dh_builddeb
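If you need to skip or adjust one of these steps, you add an override target to debian/rules, as mentioned above, and dh will run it in place of the corresponding helper. A minimal sketch (skipping the test suite is just an example):

#!/usr/bin/make -f
%:
	dh $@

# dh runs this target instead of dh_auto_test;
# an empty recipe means the tests are skipped.
override_dh_auto_test: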
I use both GNU Make and - woe is me - ClearCase's clearmake.
Now, GNU make respects a variable named MAKEFLAGS, which for me is set to j20 on the multi-core machine I'm on. Unfortunately, clearmake also reads this variable, yet doesn't accept this value. It tells me:
clearmake: Error: Bad option (j)
clearmake: Error: Bad option (2)
clearmake: Error: Bad option (0)
Questions:
Why is this happening? Should ClearMake accommodate GNU Make's usage?
How can I get around it, other than turning the flag off and on repeatedly?
It's been 15 years or so since I used clearmake, but assuming it doesn't support the GNU make-specific GNUMAKEFLAGS variable you can use:
export GNUMAKEFLAGS=-j20
and leave MAKEFLAGS unset.
The "BUILDING SOFTWARE WITH CLEARCASE" clearly states in its T"unsupported Gnu make features" that this option is indeed not supported.
-j [JOBS]
--jobs=[JOBS]
Maybe a clearmake -C -J can help (for testing): there should then be no limit to the number of parallel builds.
Are you calling GNU make from a clearmake build script? Or are you trying to create a single makefile that will support both build tools? I think the GNUMAKEFLAGS EV is safer for GNU make-specific values. I would also use
CCASE_MAKEFLAGS for any makeflags that are specific to clearmake.
CCASE_CONC to set the concurrency value. While clearmake no longer passes the -J in MAKEFLAGS, it used to, and if you're using an older clearmake (somewhere in the 7's as I recall), you could upset "child" GNU make sessions since they like -J about as much as clearmake likes -j.
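For instance, a hedged sketch of an environment that keeps the tool-specific flags apart (the values are examples):

# Keep tool-specific flags in tool-specific variables,
# so the shared MAKEFLAGS can stay unset:
export GNUMAKEFLAGS=-j20   # read by GNU make; clearmake ignores it
export CCASE_CONC=20       # clearmake reads its concurrency level from here
unset MAKEFLAGS
# CCASE_MAKEFLAGS would similarly hold any clearmake-specific makeflags.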
Finally, check the env_ccase man page for the behavior mentioned in CCASE_MAKEFLAGS_V6_OBSOLETE. If you pass MAKEFLAGS explicitly in the build script like
$(MAKE) $(MAKEFLAGS) TARGET=x
and had started clearmake like this:
clearmake -C gnu TARGET=Y
you'll actually get both TARGET macro definitions on the command line. Setting the mentioned EV (at all) avoids the "pass defined macros in MAKEFLAGS" behavior. The switch exists because some people have makefiles that DEPEND on this behavior, while others have ones BROKEN BY this behavior...
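Since MAKEFLAGS reaches child makes through the environment anyway, the usual fix is simply to drop the explicit reference from the build script:

# MAKEFLAGS is propagated to sub-makes via the environment,
# so there is no need to pass it explicitly:
$(MAKE) TARGET=x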
Assuming for the sake of argument that your company has a support agreement with either IBM or HCL, this is a good time to use your support channels to bring up clearmake concerns.
For running GNAT metric (for Windows, GPL 2017 or CE 2018) I'd like to include the RTL sources as well. There is a "-a" switch, but it seems to be ineffective. When I force visibility of the RTL sources, only ada.ads and system.ads are processed. Guessing it is a "crunched name" issue (RTL file names forced to 8-character names), I've tried other tricks, without success.
My question is: is there a way to get the RTL source metrics (of the source files actually used) with GNAT Metric?
I'm using the command
gnatmetric -a -xs -nt -j0 -Pmyproj.gpr -U somemain.adb
TIA
In the meantime I've found a workaround by using the gnathtml.pl script.
I've customized the script a bit by removing the H1 headers.
The result is a few hundred HTML files with the sources of the units actually used: the script does find all dependencies, recursively, through the .ali files - including the RTL.
Then I group the HTML files together, convert them back to text files, pass them through Adalog's Normalize tool for removing comments and empty lines, count lines with the wc command, and the job is done.
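A rough sketch of that pipeline (the -d switch makes gnathtml.pl convert dependencies recursively; the output file names, the HTML-to-text step, and the Normalize invocation are assumptions to adjust to your setup):

# Generate HTML for the main unit and, recursively, all units it depends on
gnathtml.pl -d somemain.adb

# Convert the generated HTML back to plain text (lynx is one possible tool)
for f in html/*.htm; do
    lynx -dump -nolist "$f" > "${f%.htm}.txt"
done

# Strip comments and empty lines with Adalog's Normalize, then count lines
cat html/*.txt | normalize | wc -l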
I have a C++ library with a few static objects, so it could suffer from the C++ static initialization order fiasco. I'm trying to vet unforeseen translation-unit dependencies by randomizing the order of the *.o files during a build.
I visited 2.3 How make Processes a Makefile in the GNU manual and it tells me:
Goals are the targets that make strives ultimately to update. You can override this behavior using the command line (see Arguments to Specify the Goals) ...
I also followed the link to 9.2 Arguments to Specify the Goals, but no treatment was provided there. It did not surprise me.
Is it possible to have Make randomize its goals? If so, then how do I do it?
If not, are there any alternatives? This is in a test environment, so I have more tools available to me than just GNUmake.
Thanks in advance.
This is really implementation-defined, but GNU Make will process targets from left to right.
Say you have an OBJS variable with the objects you want to randomize; you could write something like this (using e.g. shuf):
RAND_OBJS := $(shell shuf -e -- $(OBJS))
random_build: $(RAND_OBJS)
This holds as long as you're not using parallel make (the -j option). If you are, the order will still be randomized, but it will also depend on the number of jobs, system load, the current phase of the moon, etc.
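Putting it together, a minimal Makefile sketch (the object list and link rule are placeholders):

OBJS := a.o b.o c.o d.o

# Shuffle the object list once, when the makefile is parsed
RAND_OBJS := $(shell shuf -e -- $(OBJS))

# Link in randomized order to shake out initialization-order assumptions
random_build: $(RAND_OBJS)
	$(CXX) -o app $(RAND_OBJS)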
The next release of GNU make will have a --shuffle mode. It will allow you to execute prerequisites in random order, to shake out missing dependencies, by running $ make --shuffle.
The feature was recently added in https://savannah.gnu.org/bugs/index.php?62100 and so far is available only in GNU make's git tree.
For the AFP entry Dijkstra's Shortest Path Algorithm, both the proof outline and the proof document were nonexistent*. Unfortunately, I did not find an IsaMakefile either with which to build those documents locally. What is the best way to get those documents?
Another question: since Dijkstra.thy depends on a lot of other theories, is there a way to load everything faster?
*) It is fixed now.
(There seems to be something broken at AFP right now, please tell the editors about it.)
In general, you can download the sources of AFP entries and produce the documents yourself like this:
Get and unpack all AFP sources -- downloading separate entries is offered as well, but then you have to disentangle dependencies manually.
Invoke isabelle build like this:
isabelle build -d afp-2013-03-02 -o document=pdf -v Dijkstra_Shortest_Path
Here afp-2013-03-02 is the directory that was obtained by unpacking the current AFP sources.
See also the Isabelle System manual about "Isabelle sessions and build management", which is all new in Isabelle2013.
See isabelle build -b there to make things load faster, by producing persistent heap images from sessions.
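For example, to build a persistent heap image for the entry once, reusing the directory from above:

isabelle build -b -d afp-2013-03-02 Dijkstra_Shortest_Path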
The links in the AFP entry were indeed broken and should now be fixed again, sorry about that.
As Makarius writes, the AFP now uses Isabelle's new build system, i.e. it has a ROOT file for each entry that can be used to check the associated theories and build the document.
Makarius' answer is pretty much the official way to do it, although I would additionally recommend setting up the AFP as a component. This gives you the following steps:
Download the AFP to e.g. ~/afp
Set it up as a component, e.g. by adding ~/afp to ~/.isabelle/Isabelle2013/components (see also AFP as a component)
Build the entry with
isabelle afp_build Dijkstra_Shortest_Path
You can also have jEdit build the heap image for you. If the AFP is set up as a component (see the other answers for that), just start jEdit with
isabelle jedit -d '$AFP' -l Dijkstra_Shortest_Path
and jEdit will select Dijkstra_Shortest_Path as base logic and (re)build it if necessary.
If you make regular use of the AFP, it might be useful to add the AFP path by default. For this, create a file ROOTS in $ISABELLE_HOME_USER with the line $AFP in it (or add this line, if the file already exists).
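For example, as a one-line sketch (assuming $ISABELLE_HOME_USER is the usual ~/.isabelle/Isabelle2013; the single quotes keep the shell from expanding $AFP, which Isabelle resolves itself):

echo '$AFP' >> ~/.isabelle/Isabelle2013/ROOTS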
I'm busy porting my build process from msbuild to cmake, to better be able to deal with the gcc toolchain (which generates much faster code for some of the numeric stuff I'm doing).
Now, I'd like cmake to generate several versions of the output: one version with sse2, another with x64, and so on. However, cmake seems to work most naturally if you simply have a bunch of flags (say, "sse2_enable" and "platform") and then generate one output based on those flags.
What's the best way to work with multiple output configurations like this? Intuitively, I'd like to iterate over a large number of flag combinations and rerun the same CMakeLists.txt files for each combination - but of course, you can't express that within the CMakeLists.txt files (AFAIK).
The recommended way to do this is to simply have multiple build directories. From each one you call cmake with the required settings.
For example you could do, starting in the base source directory (using Linux shell syntax but the idea is the same):
mkdir build-sse2 && cd build-sse2
cmake .. -DENABLE_SSE2=ON # or whatever to enable it in your CMakeLists.txt
make
cd ..
mkdir build-x64 && cd build-x64
cmake .. -DENABLE_X64=ON # or whatever again...
make
This way, the build directories are completely separate from each other.
This allows you to have one directory for Debug, another for Release and another for cross-compiling.
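For example, the same pattern with the standard CMAKE_BUILD_TYPE variable:

mkdir build-debug && cd build-debug
cmake .. -DCMAKE_BUILD_TYPE=Debug
make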
There hasn't been much activity here, so I've come up with a workable solution myself. It's probably not ideal, so if you have a better idea, please do add it!
Now, it's hard to iterate over build configs in cmake because cmake's crucial variables don't live in function scope - so, for instance, if you do include_directories(X), the X directory will remain in the include list even after the function exits.
Directories do have scope - and while normally each input directory corresponds to one output directory, you can have multiple output directories.
So, my solution looks like this:
project(FooAllConfigs)
set(FooVar 2)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-2b")
set(FooVar 5)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-5c")
set(FooVar 3)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-3b")
set(FooVar 3)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-3c")
The normal project dir then contains a CMakeLists.txt file with code to set up the appropriate includes and compiler options, given the global variables set in the FooAllConfigs project. It also determines a build suffix that's appended to all build outputs; every output, even an indirectly included one (e.g. as generated by add_executable), must have a unique name.
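For illustration, the CMakeLists.txt in project_dir might look roughly like this (the variable tests and target names are hypothetical):

# project_dir/CMakeLists.txt
# FooVar and FooAnotherVar are set by the parent before each add_subdirectory.
set(suffix "${FooVar}${FooAnotherVar}")

if(FooAnotherVar STREQUAL "b")
  add_definitions(-DUSE_FEATURE_B)
endif()

# Every output name must be unique across configurations.
add_executable(foo-${suffix} main.cpp)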
This works fine for me.