How can I install the ZFP (Zero Footprint) RTS (Run-Time System) for AVR with the Alire package manager for Ada?

My project file, I think correctly, contains:
project Avr is
   for Runtime ("Ada") use "zfp";
   for Target use "avr-elf";
end Avr;
alire.toml hopefully correctly contains:
[[depends-on]]
gnat_avr_elf = ">=11.2.4"
Unfortunately, when running alr build, I get:
gprconfig: can't find a toolchain for the following configuration:
gprconfig: language 'ada', target 'avr-elf', runtime 'zfp'
I found documentation for programming AVR with Ada, but it assumes that I build the toolchain myself rather than having a package manager provide at least the GNU toolchain.
The same applies to Programming Arduino with Ada.

Currently, there is no maintained runtime for AVR. You can get the compiler, but no runtime, through the alr toolchain command (which makes it a bit useless on its own).
I ran the AVR-Ada project until sometime around 2013, and we had an RTS that was close to today's ZFP, with some additional routines for fast integer-to-string conversion and delay until statements. I haven't built the runtime myself in a long time; I didn't know anyone was still interested.
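A hedged sketch of the Alire commands involved (crate names depend on the index in use; the cross compiler can be pulled in either as a regular dependency, as the question already does, or via the toolchain selector):
alr toolchain --select     # interactively pick compiler and gprbuild crates
alr with gnat_avr_elf      # or add the AVR cross compiler as a dependency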

https://github.com/Fabien-Chouteau/bare_runtime is very adaptable and can at least be made to build.
However, the following changes are required:
diff --git a/bare_runtime.gpr b/bare_runtime.gpr
index 1f6059c..69b4dd6 100644
--- a/bare_runtime.gpr
+++ b/bare_runtime.gpr
@@ -13,7 +13,7 @@ project Bare_Runtime is
for Library_Dir use "adalib";
for Object_Dir use "obj";
- for Source_Dirs use ("src");
+ for Source_Dirs use ("src", "config");
package Naming is
for Spec_Suffix ("Asm_CPP") use ".inc";
@@ -39,6 +39,26 @@ project Bare_Runtime is
& ("-g");
for Switches ("raise-gcc.c") use Target_Options.ALL_CFLAGS
& ("-fexceptions");
+ for Switches ("s-stoele.adb") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
+ for Switches ("s-secsta.adb") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
+ for Switches ("s-memtyp.ads") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
+ for Switches ("s-memory.adb") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
+ for Switches ("s-fatsfl.ads") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
+ for Switches ("s-fatllf.ads") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
+ for Switches ("s-fatlfl.ads") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
+ for Switches ("s-fatflt.ads") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
+ for Switches ("a-tags.adb") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
+ for Switches ("a-elchha.adb") use Target_Options.ALL_ADAFLAGS
+ & ("-gnatwn");
-- Don't inline System.Machine_Reset otherwise we can loose our common
-- exit system.
diff --git a/src/s-memory.adb b/src/s-memory.adb
index bcb4494..a3d3f93 100644
--- a/src/s-memory.adb
+++ b/src/s-memory.adb
@@ -31,7 +31,7 @@
-- Simple implementation for use with ZFP
-pragma Restrictions (No_Elaboration_Code);
+-- pragma Restrictions (No_Elaboration_Code);
-- This unit may be linked without being with'ed, so we need to ensure
-- there is no elaboration code (since this code might not be executed).
The issue seems to be that the GCC/GNAT toolchain for AVR has unsupported combinations of word sizes for addresses and/or basic data types.
So far I have not investigated that further, but I still wanted to document the status.
FYI, I have not done any testing so far, so this really just compiles and most probably will not run without issues.
To include it, call alr with --use=./bare_runtime bare_runtime, assuming that you have the code for bare_runtime as a sub-folder in your project.
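Spelled out as a command, under the same assumption that the patched runtime lives in a bare_runtime sub-folder of the crate:
alr with --use=./bare_runtime bare_runtime
This adds bare_runtime as a dependency pinned to that local folder, so a subsequent alr build picks up the patched sources.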

Related

How to make GPR accept multiple sources in the same project having the same file name?

I set out to rewrite an OSS C project piece by piece in Ada, the first step being to replace the build system with GPR. On doing so I stumbled upon a problem: it does not allow multiple sources in the repository with the same name:
duplicate source file name "slp_debug.h"
The file exists in two different directories; one for implementation, and one for test (a stub, I reckon). This should be just fine, since the preprocessor will deterministically choose the source relative to the file including it, or according to the order of include directories. But GPR appears not to be fond of that idea.
How to make GPR accept it?
I had a go with openslp-2.0.0 and managed to make one GPR that builds the library and another, which withs the first, to build the tests - no complaints about the duplicate file.
Both project files are at the top level.
As common for projects like openslp, the configure process generates a lengthy config.h, at the top level, with a shedload of #define HAVE_FOO 1’s.
This is the 'common' project file - I think there may also be an executable in there (slptool), which might have to be a separate GPR since this one builds a library:
project Openslp is
   for Languages use ("c");
   for Source_Dirs use ("common", "libslp", "libslpattr");
   for Excluded_Source_Files use
     (
      "slp_win32.c",        -- not on Darwin! or Linux, ofc
      "libslpattr_tiny.c"   -- I think this must be an alternate version
     );
   for Library_Name use "slp";
   for Library_Kind use "static";
   for Library_Dir use "lib";
   for Object_Dir use "obj-openslp";
   for Create_Missing_Dirs use "true";
   package Compiler is
      for Switches ("c") use
        (
         "-DDARWIN",
         "-DHAVE_CONFIG_H",
         "-I" & project'Project_Dir,   -- for config.h, created by configure
         "-I" & project'Project_Dir & "/common",
         "-I" & project'Project_Dir & "/libslp",
         "-I" & project'Project_Dir & "/libslpattr",
         "-DETCDIR=""" & project'Project_Dir & "/etc"""
        );
   end Compiler;
end Openslp;
and this is the 'test' one:
with "openslp";
project Openslp_Test is
for Languages use ("c");
for Source_Dirs use ("test/**");
for Object_Dir use "obj-openslp_test";
for Exec_Dir use "test/bin";
for Main use ("SLPFindAttrs.c"); -- etc, etc
for Create_Missing_Dirs use "true";
package Compiler is
for Switches ("c") use
("-I" & project'Project_Dir & "/test") -- for the local slp_debug.h
& Openslp.Compiler'Switches ("c");
end Compiler;
end Openslp_Test;
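A usage sketch (assuming both .gpr files sit at the top of the openslp-2.0.0 tree, next to the generated config.h):
gprbuild -P openslp.gpr
gprbuild -P openslp_test.gpr
Since openslp_test.gpr withs openslp.gpr, the second command alone will also rebuild the library as needed.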

New 2020 GNAT Ada installation failing to build an old 2015 project

I recently installed GNAT Ada (2020) and GNAVI GWindows on a new PC.
On trying to build an old project developed under the 2015 version, I get this:
No candidate interpretations match the actuals:
Too many arguments in call to "Put"
expected private type "Printer_Canvas_Type" defined at gwindows-drawing.ads:603
found private type "Canvas_Type" defined at gwindows-drawing.ads:96
This is one of several similar pieces of code that produce the same result (MapCanvas is declared elsewhere as Canvas_Type):
Put (MapCanvas,
     (DATUM_BASE_X + (x * 10 * GRID_MONAD_SPACING)) + (5 * GRID_MONAD_SPACING),
     y - (GRID_MONAD_SPACING + 5),
     Integer'Image (x));
Possibly related, I also get this in regard to the last argument in the call to Put, Integer'Image (x):
expected type "Standard.Wide_String"
found type "Standard.String"
As an experiment, I tried converting the last argument using To_Wide_String (Integer'Image (x)) but the result was the same.
Elsewhere, similar code with a literal compiles ok:
Put (MapCanvas,
     (DATUM_BASE_X - 1 + (GRID_MONAD_SPACING / 2) + (x * 10 * GRID_MONAD_SPACING)),
     y + 20 + (60 * GRID_MONAD_SPACING),
     "0 2 4 6 8 ");
On the previous PC with GNAT Ada 2015, everything compiled. I've compared the declarations
of Put in gwindows-drawing.ads in the old and new installations of GWindows and they are identical.
Unfortunately I can no longer build on the old PC without a lot of backtracking - the old project drive
is in use elsewhere, though I do have it all backed up.
I'd appreciate any ideas on where to look for the cause of the problem.
GWindows has two string modes matching Windows API: ANSI (8-bit character) and Unicode.
The type GString is either a String or a Wide_String.
You can switch the framework's mode with ansi.cmd and unicode.cmd.
Obviously your old project was in ANSI mode.
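If rebuilding GWindows in ANSI mode (ansi.cmd) is not an option, the alternative is to keep Unicode mode and convert at each call site. A hedged sketch, assuming To_GString_From_String from GWindows.GStrings is the conversion routine (check the spec in your installation):
with GWindows.GStrings; use GWindows.GStrings;
...
Put (MapCanvas,
     (DATUM_BASE_X + (x * 10 * GRID_MONAD_SPACING)) + (5 * GRID_MONAD_SPACING),
     y - (GRID_MONAD_SPACING + 5),
     To_GString_From_String (Integer'Image (x)));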

Randomize Make goals for a target

I have a C++ library and it has a few C++ static objects. The library could suffer from the C++ static initialization order fiasco. I'm trying to vet unforeseen translation unit dependencies by randomizing the order of the *.o files during a build.
I visited 2.3 How make Processes a Makefile in the GNU manual and it tells me:
Goals are the targets that make strives ultimately to update. You can override this behavior using the command line (see Arguments to Specify the Goals) ...
I also followed to 9.2 Arguments to Specify the Goals, but a treatment was not provided. It did not surprise me.
Is it possible to have Make randomize its goals? If so, then how do I do it?
If not, are there any alternatives? This is in a test environment, so I have more tools available to me than just GNUmake.
Thanks in advance.
This is really implementation-defined, but GNU Make will process targets from left to right.
Say you have an OBJS variable with the objects you want to randomize; you could write something like this (using e.g. shuf):
RAND_OBJS := $(shell shuf -e -- $(OBJS))
random_build: $(RAND_OBJS)
This holds as long as you're not using parallel make (the -j option). If you are, the order will still be randomized, but it will also depend on the number of jobs, system load, current phase of the moon, etc.
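A minimal end-to-end sketch (the object list and the test_prog target are illustrative, not from the question). Because := is a simple assignment, shuf runs once per make invocation, so the prerequisite list and the link line see the same random order:
OBJS      := a.o b.o c.o
RAND_OBJS := $(shell shuf -e -- $(OBJS))

test_prog: $(RAND_OBJS)
	$(CXX) -o $@ $(RAND_OBJS)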
The next release of GNU make will have a --shuffle mode. It will allow you to execute prerequisites in random order to shake out missing dependencies by running $ make --shuffle.
The feature was recently added in https://savannah.gnu.org/bugs/index.php?62100 and so far is available only in GNU make's git tree.
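A hedged sketch of the intended usage (the option only exists in GNU make's git tree at the time of writing, so the exact syntax may still change; the eventual release is expected to also accept a mode or a seed):
make --shuffle            # random prerequisite order
make --shuffle=reverse    # reverse order
make --shuffle=1234       # fixed seed, to reproduce a particular order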

How can I fine-tune Qt5 for an embedded system?

I am using Yocto and already have a working build of Qt5 in my image. The issue is that it is HUGE.
So, I tried to use a .bbappend recipe in my layer for qt where I experimented with using
PACKAGECONFIG_remove = " qtnetworking qtdeclarative sql-mysql qtscript...etc";
and
EXTRA_OECONF = " -no-accessibility -no-feature-MDIAREA -no-feature-DRAGANDDROP ...etc";
I even removed all the feature disablement config params in EXTRA_OECONF and just added -qconfig minimal or -qconfig medium.
The result is always the same: a compilation failure in Qt's corelib or qtwidgets.
I want to disable networking, printing, and mdi support. How can I do that?
Much appreciated!

How to compile MPI and non-MPI versions of the same program with automake?

I have a C++ code that can be compiled with MPI support depending on a
certain preprocessor flag; missing the appropriate flag, the sources
compile to a non-parallel version.
I would like to setup the Makefile.am so that it compiles both the
MPI-parallel and the sequential version, if an option to
./configure is given.
Here's the catch: MPI has its own C++ compiler wrapper, and insists
that sources are compiled and linked using it rather than the standard
C++ compiler. If I were to write the Makefile myself, I would have to
do something like this:
myprog.seq: myprog.cxx
	$(CXX) ... myprog.cxx

myprog.mpi: myprog.cxx
	$(MPICXX) -DWITH_MPI ... myprog.cxx
Is there a way to tell automake that it has to use $(MPICXX) instead
of $(CXX) when compiling the MPI-enabled version of the program?
I have the same problem, and I've found that there's no really good way to get autotools to conditionally use MPI compilers for particular targets. Autotools is good at figuring out which compiler to use based on what language your source is written in (CC, CXX, FC, F77, etc.), but it really isn't good at figuring out whether or not to use an MPI compiler for a particular target. You can set MPICC, MPICXX, etc., but you essentially have to rewrite all your Makefile rules for your target (as you've done above) if you use the compiler this way. If you do that, what's the point of writing an automake file?
Someone else suggested using MPI like an external library, and this is the approach I'd advocate, but you shouldn't do it by hand, because different MPI installations have different sets of flags they pass to the compiler, and they can depend on the language you're compiling.
The good thing is that all the currently shipping MPI compilers that I know of support introspection arguments, like -show, -show-compile or -show-link. You can automatically extract the arguments from the scripts.
So, what I did to deal with this was to make an m4 script that extracts the defines, includes, library paths, libs, and linker flags from the MPI compilers, then assigns them to variables you can use in your Makefile.am. Here's the script:
lx_find_mpi.m4
This makes MPI work the way automake expects it to. Incidentally, this is the approach CMake uses in their FindMPI module, and I find it works quite well there. It makes the build much more convenient because you can just do something like this for your targets:
bin_PROGRAMS = mpi_exe seq_exe
# This is all you need for a sequential program
seq_exe_SOURCES = seq_exe.C
# For an MPI program you need special LDFLAGS and INCLUDES
mpi_exe_SOURCES = mpi_exe.C
mpi_exe_LDFLAGS = $(MPI_CXXLDFLAGS)
INCLUDES = $(MPI_CXXFLAGS)
There are similar flags for the other languages since, like I said, the particular flags and libraries can vary depending on which language's MPI compiler you use.
lx_find_mpi.m4 also sets some shell variables so that you can test in your configure.ac file whether MPI was found. e.g., if you are looking for MPI C++ support, you can test $have_CXX_mpi to see if the macro found it.
I've tested this macro with mvapich and OpenMPI, as well as the custom MPICH2 implementation on BlueGene machines (though it does not address all the cross-compiling issues you'll see there). Let me know if something doesn't work. I'd like to keep the macro as robust as possible.
I am sorry that having automake use MPI is so difficult. I have been struggling with this for many months trying to find a good solution. I have a source tree that has one library and then many programs in sub-folders that use the library. Some of the folders are MPI programs, but I run into trouble when I try to replace CXX with the MPI compiler in Makefile.am:
if USE_MPI
MPIDIR = $(MPICOMPILE)
MPILIB = $(MPILINK)
CXX=@MPICXX@
F77=@MPIF77@
MPILIBS=$(MPILINK)
endif
I get
CXX was already defined in condition TRUE, which includes condition USE_MPI ...
configure.ac:12: ... `CXX' previously defined here
I don't have a rule that specifies the compiler, so maybe there is a way to do that.
SUBDIRS = .
bin_PROGRAMS = check.cmr
check_ccmr_SOURCES = check_gen.cpp
check_ccmr_CXXFLAGS = -I$(INCLUDEDIR) $(MPIDIR)
check_ccmr_LDADD = -L$(LIBDIR)
check_ccmr_LDFLAGS = $(MPILIB)
If you have disabled the subdir-objects option to automake, something like this might work:
configure.ac:
AC_ARG_ENABLE([seq], ...)
AC_ARG_ENABLE([mpi], ...)
AM_CONDITIONAL([ENABLE_SEQ], [test $enable_seq = yes])
AM_CONDITIONAL([ENABLE_MPI], [test $enable_mpi = yes])
AC_CONFIG_FILES([Makefile seq/Makefile mpi/Makefile])
Makefile.am:
SUBDIRS =
if ENABLE_SEQ
SUBDIRS += seq
endif
if ENABLE_MPI
SUBDIRS += mpi
endif
sources.am:
ALL_SOURCES = src/foo.c src/bar.cc src/baz.cpp
seq/Makefile.am:
include $(top_srcdir)/sources.am
bin_PROGRAMS = seq
seq_SOURCES = $(ALL_SOURCES)
mpi/Makefile.am:
include $(top_srcdir)/sources.am
CXX = $(MPICXX)
AM_CPPFLAGS = -DWITH_MPI
bin_PROGRAMS = mpi
mpi_SOURCES = $(ALL_SOURCES)
The only thing stopping you from doing both of these in the same directory is the override of $(CXX). You could, for instance, set mpi_CPPFLAGS and automake would handle that gracefully, but the compiler switch makes it a no-go here.
A possible workaround, to avoid maintaining two copies of the sources, could be:
myprog.seq: myprog.cxx
	$(CXX) ... myprog.cxx

myprog-mpi.cxx: myprog.cxx
	@cp myprog.cxx myprog-mpi.cxx

myprog.mpi: myprog-mpi.cxx
	$(MPICXX) -DWITH_MPI ... myprog-mpi.cxx
	@rm -f myprog-mpi.cxx
for Automake:
bin_PROGRAMS = myprog-seq myprog-mpi
myprog_seq_SOURCES = myprog.c

myprog-mpi.c: myprog.c
	@cp myprog.c myprog-mpi.c

myprog_mpi_SOURCES = myprog-mpi.c
myprog_mpi_LDFLAGS = $(MPI_CXXLDFLAGS)
INCLUDES = $(MPI_CXXFLAGS)
BUILT_SOURCES = myprog-mpi.c
CLEANFILES = myprog-mpi.c
Here is the solution that I came up with for building two static libraries - one with MPI (libmylib_mpi.a) and one without (libmylib.a). The advantage of this method is that there is no need for duplicate source files, a single Makefile.am serves both variants, and subdirs can still be used. You should be able to modify this as needed to produce a binary instead of a library. I build the non-MPI library as normal; then, for the MPI variant, I leave _SOURCES empty and use _LIBADD instead, specifying an extension of .mpi.o for the object files. I then specify a rule to generate the MPI object files using the MPI compiler.
The overall file / directory structure is something like:
configure.ac
Makefile.am
src/
    mylib1.cpp
    mylib2.cpp
    ...
include/
    mylib.h
    ...
configure.ac:
AC_INIT()
AC_PROG_RANLIB
AC_LANG(C++)
AC_PROG_CXX
# test for MPI, define MPICXX, etc. variables, and define HAVE_MPI as a condition that will evaluate to true if MPI is available and false otherwise.
AX_MPI([AM_CONDITIONAL([HAVE_MPI], [test "1" = "1"])],[AM_CONDITIONAL([HAVE_MPI], [test "1" = "2"])]) #MPI optional for xio
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
There is probably a more efficient way to do the conditional check than I have listed here (suggestions are welcome).
Makefile.am:
AUTOMAKE_OPTIONS = subdir-objects
lib_LIBRARIES = libmylib.a
libmylib_a_SOURCES = src/mylib_1.cpp src/mylib_2.cpp ...
#conditionally generate libmylib_mpi.a if MPI is available
if HAVE_MPI
lib_LIBRARIES += libmylib_mpi.a
libmylib_mpi_a_SOURCES = #no sources listed here
#use LIBADD to specify objects to add - use the basic filename with a .mpi.o extension
libmylib_mpi_a_LIBADD = src/mylib_1.mpi.o src/mylib_2.mpi.o ...
endif
AM_CPPFLAGS = -I${srcdir}/include
include_HEADERS = include/mylib.h
# define a rule to compile the .mpi.o objects from the .cpp files with the same name
src/%.mpi.o: ${srcdir}/src/%.cpp ${srcdir}/include/mylib.h
	$(MPICXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -DWITH_MPI=1 -c $(patsubst %.mpi.o,$(srcdir)/%.cpp,$@) -o $@

# define a rule to clean the .mpi.o files
clean-local:
	-rm -f src/*.mpi.o
MPI installations do (usually) ship with compiler wrappers, but there is no requirement that you use them -- MPI does not insist on it. If you want to go your own way you can write your own makefile to ensure that the C++ compiler gets the right libraries (etc). To figure out what the right libraries (etc) are, inspect the compiler wrapper which is, on all the systems I've used, a shell script.
At first sight the compiler wrappers which ship with products such as the Intel compilers are a little daunting but stop and think about what is going on -- you are simply compiling a program which makes use of an external library or two. Writing a makefile to use the MPI libraries is no more difficult than writing a makefile to use any other library.
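A hedged sketch of that do-it-yourself route, assuming an Open MPI style wrapper that understands --showme:compile and --showme:link (MPICH-style wrappers expose the same information through -compile-info / -link-info or plain -show):
MPI_COMPILE := $(shell mpicxx --showme:compile)
MPI_LINK    := $(shell mpicxx --showme:link)

myprog.mpi: myprog.cxx
	$(CXX) -DWITH_MPI $(MPI_COMPILE) myprog.cxx $(MPI_LINK) -o $@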
