Consider the code below. No error is shown when I compile and run it with AddressSanitizer. But there should be an error, right, i.e. assigning to / accessing an out-of-bounds memory location? Why doesn't AddressSanitizer detect it?
#include <stdio.h>

int arr[30];

int main(){
    arr[40] = 34;            /* out-of-bounds write, 10 elements past the end */
    printf("%d", arr[40]);   /* out-of-bounds read */
}
Thanks!
clang -fsanitize=address -fno-omit-frame-pointer test.c
./a.out
This is described by the following entry in the AddressSanitizer FAQ:
Q: Why didn't ASan report an obviously invalid memory access in my code?
A1: If your error is too obvious, the compiler might have already optimized it out by the time ASan runs.
A2: Another, C-only option is accesses to global common symbols, which are not protected by ASan (you can use -fno-common to disable generation of common symbols and hopefully detect more bugs).
I get the error below and cannot find a solution. Does anybody know how to fix it?
rafael@ubuntu:~/avr/projeto$ clang -fsyntax-only -Os -I /usr/lib/avr/include -D__AVR_ATmega328P__ -DARDUINO=100 -Wno-ignored-attributes -Wno-attributes serial_tree.c
In file included from serial_tree.c:3:
In file included from /usr/lib/avr/include/util/delay.h:43:
/usr/lib/avr/include/util/delay_basic.h:108:5: error: invalid output constraint '=w' in asm
    : "=w" (__count)
      ^
1 error generated.
util/delay.h is a header from avr-libc, and the feature you are using (some of the delay functions) is implemented with inline asm. avr-libc is written to be used with avr-gcc and relies on avr-gcc features such as machine-dependent constraints in inline asm. LLVM does not recognize the constraint "w", so you have to use a different approach, such as compiling with avr-gcc or porting that code to LLVM.
avr-gcc also provides the built-in function __builtin_avr_delay_cycles, which wastes a specified number of clock cycles. If LLVM properly mimics avr-gcc's built-ins, you can use that function as a starting point.
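If the builtin is available, a minimal sketch of using it instead of the util/delay.h helpers might look like this (the 16 MHz clock and the cycle count are example values only, and the argument must be a compile-time constant):
#include <avr/io.h>

int main(void)
{
    /* waste roughly 1 ms worth of cycles on a 16 MHz part (example value) */
    __builtin_avr_delay_cycles(16000);
    return 0;
}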
Is there a way to make the Arduino IDE treat compiler warnings as errors?
Or any generic way to set GCC compiler options?
I have looked at the ~/.arduino/preferences.txt file, but I found nothing that indicates fine-tuned control. I also looked if I could set GCC options via environment variables, but I did not find anything.
I don't want verbose compiler output (which you can enable in the IDE); that is far too much distracting, non-essential information, and I don't want to waste my time reading through it.
I want compilation to stop on a warning so the code can be cleaned up. My preference would be to be able to set -Werror= options, but a generic -Werror will do for the small code size of .ino projects.
Addendum:
Based on the suggestion in the selected answer, I implemented an avr-g++ wrapper script and put it in the PATH before the normal avr-g++. For that I changed the arduino startup script as follows:
-export PATH="${APPDIR}/java/bin:${PATH}"
+export ORGPATH="${APPDIR}/java/bin:${PATH}"
+export PATH="${APPDIR}/extra:${ORGPATH}"
And in the new directory extra in ${APPDIR} (where the arduino command is located), I have an avr-g++ that is a Python script:
#!/usr/bin/env python
import os
import sys
import subprocess

werr = '-Werror'
wall = '-Wall'
# run the real compiler with the original arguments
cmd = ['avr-g++'] + sys.argv[1:]
# restore the PATH without the "extra" directory, so 'avr-g++' resolves to the real one
os.environ['PATH'] = os.environ['ORGPATH']
fname = sys.argv[-2][:]
if cmd[-2].startswith('/tmp'):
    #print fname, list(fname) # this looks strange
    for i, c in enumerate(cmd):
        if c == '-w':
            cmd[i] = wall   # replace -w (suppress warnings) with -Wall
            break
    cmd.insert(1, werr)     # and add -Werror
subprocess.call(cmd)
So the script replaces the first element of the command with the original compiler name and resets the environment so that the extra directory is excluded when the real compiler is looked up.
The fname value is actually strange: if you print it, it shows only abc.cpp, but its length is much larger and it actually starts with /tmp. So I check for that to decide whether to add/update the compile options.
It looks like you are on Linux. arduino is a script, so you can set PATH in that script so that a directory containing your own avr-g++ program comes first. The Java stuff should then pick up the compiler from there, should it not?
That program then calls the normal /usr/bin/avr-g++ with the extra options.
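A minimal sketch of such a wrapper, assuming the real compiler lives at /usr/bin/avr-g++ and you only want to force -Werror, could be:
#!/bin/sh
# hypothetical wrapper script named avr-g++, placed earlier in PATH than the real one
exec /usr/bin/avr-g++ -Werror "$@"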
One option you have is to compile your sketches from the command line. Take a look at this makefile.
I'm (once again) struggling with the creation of precompiled headers in conjunction with gcc and Qt on the Apple platform.
When creating my precompiled header, I now use a code section (based on good old "PCHSupport_26.cmake") to extract the compile flags, as follows:
STRING(TOUPPER "CMAKE_CXX_FLAGS_${CMAKE_BUILD_TYPE}" _flags_var_name)
SET(_args ${CMAKE_CXX_FLAGS} ${${_flags_var_name}})
GET_DIRECTORY_PROPERTY(DIRINC INCLUDE_DIRECTORIES )
FOREACH(_item ${DIRINC})
LIST(APPEND _args "-I${_item}")
ENDFOREACH(_item)
GET_DIRECTORY_PROPERTY(_defines_global COMPILE_DEFINITIONS)
LIST(APPEND _defines ${_defines_global})
STRING(TOUPPER "COMPILE_DEFINITIONS_${CMAKE_BUILD_TYPE}" _defines_for_build_name)
GET_DIRECTORY_PROPERTY(_defines_build ${_defines_for_build_name})
LIST(APPEND _defines ${_defines_build})
FOREACH(_item ${_defines})
LIST(APPEND _args "-D${_item}")
ENDFOREACH(_item ${_defines})
LIST(APPEND _args -c ${CMAKE_CURRENT_SOURCE_DIR}/${PRECOMPILED_HEADER} -o ${_gch_filename})
SEPARATE_ARGUMENTS(_args)
Unfortunately, the compiler flags collected above are missing important parameters that CMake does generate when using its built-in compiler rules, among them:
-DQT_DEBUG
and when compiling with the generated precompiled header, I get errors like the following:
file.h: not used because QT_DEBUG is defined.
I need your help with the following:
Is the above way to retrieve the compiler flags correct?
Is there a better, easier, simpler way to do this?
Why does -DQT_DEBUG not show up in COMPILE_DEFINITIONS or COMPILE_DEFINITIONS_${CMAKE_BUILD_TYPE}?
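A hypothetical stop-gap would be to append the missing define by hand in the same code section for debug builds, e.g.:
IF(CMAKE_BUILD_TYPE STREQUAL "Debug")
    LIST(APPEND _args "-DQT_DEBUG")
ENDIF()
but I would rather understand where the define is supposed to come from.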
If you are using Xcode, it is simply:
SET_TARGET_PROPERTIES(${target} PROPERTIES XCODE_ATTRIBUTE_GCC_PRECOMPILE_PREFIX_HEADER YES)
SET_TARGET_PROPERTIES(${target} PROPERTIES XCODE_ATTRIBUTE_GCC_PREFIX_HEADER "${target}/std.h")
I'm trying to use precompiled headers with raw gcc on Linux through CMake myself, but I still haven't figured it out. It looks like it's working, but I don't see any speed improvement.
Edit:
I managed to use pch with gcc finally, with the macro you can find here: http://www.mail-archive.com/cmake@cmake.org/msg04394.html
I have C++ code that can be compiled with MPI support depending on a certain preprocessor flag; without the appropriate flag, the sources compile to a non-parallel version.
I would like to setup the Makefile.am so that it compiles both the
MPI-parallel and the sequential version, if an option to
./configure is given.
Here's the catch: MPI has its own C++ compiler wrapper, and insists
that sources are compiled and linked using it rather than the standard
C++ compiler. If I were to write the Makefile myself, I would have to
do something like this:
myprog.seq: myprog.cxx
	$(CXX) ... myprog.cxx

myprog.mpi: myprog.cxx
	$(MPICXX) -DWITH_MPI ... myprog.cxx
Is there a way to tell automake that it has to use $(MPICXX) instead
of $(CXX) when compiling the MPI-enabled version of the program?
I have the same problem, and I've found that there's no really good way to get autotools to conditionally use MPI compilers for particular targets. Autotools is good at figuring out which compiler to use based on what language your source is written in (CC, CXX, FC, F77, etc.), but it really isn't good at figuring out whether or not to use an MPI compiler for a particular target. You can set MPICC, MPICXX, etc., but you essentially have to rewrite all your Makefile rules for your target (as you've done above) if you use the compiler this way. And if you do that, what's the point of writing an automake file?
Someone else suggested using MPI like an external library, and this is the approach I'd advocate, but you shouldn't do it by hand, because different MPI installations have different sets of flags they pass to the compiler, and they can depend on the language you're compiling.
The good thing is that all the currently shipping MPI compilers that I know of support introspection arguments, like -show, -show-compile or -show-link. You can automatically extract the arguments from the scripts.
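For example, for an MPICH-style wrapper (the option name and the output below are only an illustration and will differ between installations):
mpicxx -show
g++ -I/usr/include/mpich -L/usr/lib/mpich -lmpichcxx -lmpich -lpthread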
So, what I did to deal with this was to make an m4 script that extracts the defines, includes, library paths, libs, and linker flags from the MPI compilers, then assigns them to variables you can use in your Makefile.am. Here's the script:
lx_find_mpi.m4
This makes MPI work the way automake expects it to. Incidentally, this is the approach CMake uses in their FindMPI module, and I find it works quite well there. It makes the build much more convenient because you can just do something like this for your targets:
bin_PROGRAMS = mpi_exe seq_exe
# This is all you need for a sequential program
seq_exe_SOURCES = seq_exe.C
# For an MPI program you need special LDFLAGS and INCLUDES
mpi_exe_SOURCES = mpi_exe.C
mpi_exe_LDFLAGS = $(MPI_CXXLDFLAGS)
INCLUDES = $(MPI_CXXFLAGS)
There are similar flags for the other languages since, like I said, the particular flags and libraries can vary depending on which language's MPI compiler you use.
lx_find_mpi.m4 also sets some shell variables so that you can test in your configure.ac file whether MPI was found. e.g., if you are looking for MPI C++ support, you can test $have_CXX_mpi to see if the macro found it.
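For example, a configure.ac sketch (I am assuming here that the macro is called LX_FIND_MPI, after the file name, and that it respects the current AC_LANG; check the comments in the m4 file for the exact interface):
AC_LANG([C++])
LX_FIND_MPI
AM_CONDITIONAL([HAVE_CXX_MPI], [test "x$have_CXX_mpi" = xyes])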
I've tested this macro with mvapich and OpenMPI, as well as the custom MPICH2 implementation on BlueGene machines (though it does not address all the cross-compiling issues you'll see there). Let me know if something doesn't work. I'd like to keep the macro as robust as possible.
I am sorry to say that getting automake to use MPI is so difficult. I have been struggling with this for many months, trying to find a good solution. I have a source tree that has one library and then many programs in sub-folders that use the library. Some of the folders are MPI programs, but when I try to replace CXX with the MPI compiler in Makefile.am, like this:
if USE_MPI
MPIDIR = $(MPICOMPILE)
MPILIB = $(MPILINK)
CXX=#MPICXX#
F77=#MPIF77#
MPILIBS=$(MPILINK)
endif
I get
CXX was already defined in condition TRUE, which includes condition USE_MPI ...
configure.ac:12: ... `CXX' previously defined here
I don't have a rule that specifies the compiler, so maybe there is a way to do that.
SUBDIRS = .
bin_PROGRAMS = check.cmr
check_cmr_SOURCES = check_gen.cpp
check_cmr_CXXFLAGS = -I$(INCLUDEDIR) $(MPIDIR)
check_cmr_LDADD = -L$(LIBDIR)
check_cmr_LDFLAGS = $(MPILIB)
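Maybe the substitution has to happen in configure.ac rather than in Makefile.am, something like this untested sketch (it assumes MPICXX and MPIF77 have already been detected there):
AC_ARG_ENABLE([mpi], [AS_HELP_STRING([--enable-mpi], [build with MPI support])])
AS_IF([test "x$enable_mpi" = xyes], [CXX="$MPICXX" F77="$MPIF77"])
AM_CONDITIONAL([USE_MPI], [test "x$enable_mpi" = xyes])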
If you have disabled the subdir-objects option to automake, something like this might work:
configure.ac:
AC_ARG_ENABLE([seq], ...)
AC_ARG_ENABLE([mpi], ...)
AM_CONDITIONAL([ENABLE_SEQ], [test $enable_seq = yes])
AM_CONDITIONAL([ENABLE_MPI], [test $enable_mpi = yes])
AC_CONFIG_FILES([Makefile seq/Makefile mpi/Makefile])
Makefile.am:
SUBDIRS =
if ENABLE_SEQ
SUBDIRS += seq
endif
if ENABLE_MPI
SUBDIRS += mpi
endif
sources.am:
ALL_SOURCES = src/foo.c src/bar.cc src/baz.cpp
seq/Makefile.am:
include $(top_srcdir)/sources.am
bin_PROGRAMS = seq
seq_SOURCES = $(ALL_SOURCES)
mpi/Makefile.am:
include $(top_srcdir)/sources.am
CXX = $(MPICXX)
AM_CPPFLAGS = -DWITH_MPI
bin_PROGRAMS = mpi
mpi_SOURCES = $(ALL_SOURCES)
The only thing stopping you from doing both of these in the same directory is the override of $(CXX). You could, for instance, set mpi_CPPFLAGS and automake would handle that gracefully, but the compiler switch makes it a no-go here.
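For illustration, if only the flags differed, a per-target line like the following (using the mpi program from the sketch above) would be all you need, but there is no per-target equivalent for CXX:
mpi_CPPFLAGS = -DWITH_MPI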
A possible workaround that avoids maintaining duplicate sources could be:
myprog.seq: myprog.cxx
	$(CXX) ... myprog.cxx

myprog-mpi.cxx: myprog.cxx
	@cp myprog.cxx myprog-mpi.cxx

myprog.mpi: myprog-mpi.cxx
	$(MPICXX) -DWITH_MPI ... myprog-mpi.cxx
	@rm -f myprog-mpi.cxx
for Automake:
bin_PROGRAMS = myprog-seq myprog-mpi
myprog_seq_SOURCES = myprog.c

myprog-mpi.c: myprog.c
	@cp myprog.c myprog-mpi.c

myprog_mpi_SOURCES = myprog-mpi.c
myprog_mpi_LDFLAGS = $(MPI_CXXLDFLAGS)
INCLUDES = $(MPI_CXXFLAGS)
BUILT_SOURCES = myprog-mpi.c
CLEANFILES = myprog-mpi.c
Here is the solution that I came up with for building two static libraries: one with MPI (libmylib_mpi.a) and one without (libmylib.a). The advantage of this method is that there is no need for duplicate source files, a single Makefile.am covers both variants, and subdirs can be used. You should be able to modify this as needed to produce a binary instead of a library. I build the non-MPI library as normal; for the MPI variant I leave _SOURCES empty and use _LIBADD instead, specifying an extension of .mpi.o for the object files. I then specify a rule to generate the MPI object files using the MPI compiler.
Overall file / directory structure is something like
configure.ac
Makefile.am
src/
    mylib_1.cpp
    mylib_2.cpp
    ...
include/
    mylib.h
    ...
configure.ac:
AC_INIT()
AC_PROG_RANLIB
AC_LANG(C++)
AC_PROG_CXX
# test for MPI, define MPICXX, etc. variables, and define HAVE_MPI as a condition that will evaluate to true if MPI is available and false otherwise.
AX_MPI([AM_CONDITIONAL([HAVE_MPI], [test "1" = "1"])],[AM_CONDITIONAL([HAVE_MPI], [test "1" = "2"])]) #MPI optional for xio
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
There is probably a more efficient way to do the conditional check than what I have listed here (I'm open to suggestions).
Makefile.am:
AUTOMAKE_OPTIONS = subdir-objects
lib_LIBRARIES = libmylib.a
libmylib_a_SOURCES = src/mylib_1.cpp src/mylib_2.cpp ...
#conditionally generate libmylib_mpi.a if MPI is available
if HAVE_MPI
lib_LIBRARIES += libmylib_mpi.a
libmylib_mpi_a_SOURCES = #no sources listed here
#use LIBADD to specify objects to add - use the basic filename with a .mpi.o extension
libmylib_mpi_a_LIBADD = src/mylib_1.mpi.o src/mylib_2.mpi.o ...
endif
AM_CPPFLAGS = -I${srcdir}/include
include_HEADERS = include/mylib.h
# define a rule to compile the .mpi.o objects from the .cpp files with the same name
src/%.mpi.o: ${srcdir}/src/%.cpp ${srcdir}/include/mylib.h
	$(MPICXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -DWITH_MPI=1 -c $(patsubst %.mpi.o,$(srcdir)/%.cpp,$@) -o $@

#define a rule to clean the .mpi.o files
clean-local:
	-rm -f src/*.mpi.o
MPI installations do (usually) ship with compiler wrappers, but there is no requirement that you use them; MPI does not insist on it. If you want to go your own way, you can write your own makefile to ensure that the C++ compiler gets the right libraries (etc.). To figure out what the right libraries (etc.) are, inspect the compiler wrapper, which is, on all the systems I've used, a shell script.
At first sight the compiler wrappers that ship with products such as the Intel compilers are a little daunting, but stop and think about what is going on: you are simply compiling a program that makes use of an external library or two. Writing a makefile to use the MPI libraries is no more difficult than writing a makefile to use any other library.
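As a sketch, with the include path and library names taken from one particular (made-up) installation; substitute whatever your wrapper's -show (or equivalent) output reports:
# hypothetical values copied from the wrapper's -show output
MPI_CPPFLAGS = -I/usr/lib/openmpi/include
MPI_LIBS     = -L/usr/lib/openmpi/lib -lmpi_cxx -lmpi

myprog.mpi: myprog.cxx
	$(CXX) $(MPI_CPPFLAGS) -DWITH_MPI myprog.cxx $(MPI_LIBS) -o myprog.mpi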