Is there a way to set the compiler warnings to be interpreted as an error in the Arduino IDE?
Or any generic way to set GCC compiler options?
I have looked at the ~/.arduino/preferences.txt file, but I found nothing that indicates fine-tuned control. I also looked at whether I could set GCC options via environment variables, but I did not find anything.
I don't want verbose compiler output (which you can enable in the IDE): that is far too much distracting, non-essential information, and I don't want to waste my time reading through it.
I want compilation to stop on a warning, so the code can be cleaned up. My preference would be to be able to set specific -Werror= options, but a generic -Werror will do for the small code size of .ino projects.
Addendum:
Based on the suggestion in the selected answer, I implemented an avr-g++ wrapper script and put it in the PATH before the normal avr-g++. For that I changed the Arduino command as follows:
-export PATH="${APPDIR}/java/bin:${PATH}"
+export ORGPATH="${APPDIR}/java/bin:${PATH}"
+export PATH="${APPDIR}/extra:${ORGPATH}"
And in the new directory extra in APPDIR (where the Arduino command is located), I have an avr-g++ which is a Python script:
#!/usr/bin/env python
import os
import sys
import subprocess

werr = '-Werror'
wall = '-Wall'
# forward all arguments to the real avr-g++
cmd = ['avr-g++'] + sys.argv[1:]
# restore the original PATH so the wrapper does not call itself
os.environ['PATH'] = os.environ['ORGPATH']
fname = sys.argv[-2][:]
if cmd[-2].startswith('/tmp'):  # only adjust options when compiling the sketch
    #print fname, list(fname)  # this looks strange
    for i, c in enumerate(cmd):
        if c == '-w':
            cmd[i] = wall  # replace "suppress warnings" with "all warnings"
            break
    cmd.insert(1, werr)
subprocess.call(cmd)
So the script replaces the first element of the command with the original compiler name, and resets the PATH used so that it excludes the extra directory.
The fname is actually strange. If you print it, it shows only abc.cpp, but its length is much larger and it actually starts with /tmp. So I check for that prefix to decide whether to add/update the compile options.
It looks like you are on Linux. Arduino is a script, so you can prepend a directory containing a program named avr-g++ to the PATH set in that script. The Java code should then take the compiler from there, should it not?
That program then calls the normal /usr/bin/avr-g++ with the extra options.
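A minimal sketch of such a wrapper (assuming the real compiler lives at /usr/bin/avr-g++) could be:

#!/usr/bin/env python
# minimal wrapper sketch: call the real avr-g++ with -Werror prepended
import subprocess
import sys
sys.exit(subprocess.call(['/usr/bin/avr-g++', '-Werror'] + sys.argv[1:]))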
One option you have is to compile your sketches from the command line. Take a look at this makefile.
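For instance, with the community Arduino-Makefile project, a sketch-level Makefile might look roughly like this (the variable names and the install path of Arduino.mk are assumptions that depend on your setup):

# sketch Makefile using Arduino-Makefile conventions (paths are assumptions)
BOARD_TAG    = uno
ARDUINO_DIR  = /usr/share/arduino
CXXFLAGS    += -Wall -Werror
include /usr/share/arduino/Arduino.mk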
Related
I have a daily use case where I need to work with projects on different versions of Java (8, 11, ...).
I would like to have the version displayed in the right-side prompt in my shell (zsh with Oh My Zsh). I know a naive (computationally expensive) way to do it: just capture java --version in a variable and display that. I would like it to be cached until I next source a file (a specific project file that sets the env vars for a different Java version).
Do you have any ideas how to do this efficiently?
The PROMPT and RPROMPT variables can have both static and dynamic parts, so you can set the version there when you source the project file, and it will only be calculated one time. The trick is to get the quoting right.
This line goes in the project file that sets the env variables, somewhere after setting PATH to include the preferred java executable:
RPROMPT="${${=$(java --version)}[1,3]}"
The pieces:
RPROMPT= - variable for the right-side prompt.
"..." - the critical part. Variables in double quotes will be expanded then and there, so the commands within this will only be executed when the project file is sourced.
${...[1,3]} - selects the first three words of the enclosed expression. On my system, java --version returns three lines of data, which is way too big for a prompt; this reduces it to something manageable.
${=...} - splits the enclosed value into words.
$(java --version) - jre version info.
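Putting it together, a project file might look like this (the paths here are hypothetical):

# project-env: source this file to switch Java versions (hypothetical paths)
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export PATH="$JAVA_HOME/bin:$PATH"
# expanded once, at source time, so the prompt adds no per-command cost
RPROMPT="${${=$(java --version)}[1,3]}"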
I would like to know how, in the Julia language, I can determine whether file.jl is run as a script, as in the call:
bash$ julia file.jl
Only in this case should it start a function main, for example. That way I could use include("file.jl") without actually executing the function.
To be specific, I am looking for something similar answered already in a python question:
def main():
    pass  # does something

if __name__ == '__main__':
    main()
Edit:
To be more specific, the method Base.isinteractive (see here) does not solve the problem when using include("file.jl") from within a non-interactive (e.g. script) environment.
The global constant PROGRAM_FILE contains the script name passed to Julia from the command line (it does not change when include is called).
On the other hand, the @__FILE__ macro gives you the name of the file where it appears.
For instance, if you have the files:
a.jl
println(PROGRAM_FILE)
println(@__FILE__)
include("b.jl")
b.jl
println(PROGRAM_FILE)
println(@__FILE__)
You have the following behavior:
$ julia a.jl
a.jl
D:\a.jl
a.jl
D:\b.jl
$ julia b.jl
b.jl
D:\b.jl
In summary:
PROGRAM_FILE tells you the name of the file that Julia was started with;
@__FILE__ tells you in which file the macro was actually called.
tl;dr version:
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
Explanation:
There seems to be some confusion. Python and Julia work very differently in terms of their "modules" (even though the two use the same term, in principle they are different).
In Python, a source file is either a module or a script, depending on how you chose to "load" / "run" it: the boilerplate exists to detect the environment in which the source code is run, by querying the __name__ of the embedding module at the time of execution. E.g. if you have a file called mymodule.py and you import it normally, then within the module definition the variable __name__ automatically gets set to the value mymodule; but if you ran it as a standalone script (effectively "dumping" the code into the "main" module), the __name__ variable is that of the global scope, namely __main__. This difference gives you the ability to detect how a Python file was run, so you can act slightly differently in each case, and this is exactly what the boilerplate does.
In Julia, however, a module is defined explicitly in code. Running a file that contains a module declaration will load that module regardless of whether you did using or include; however, in the former case the module will not be reloaded if it's already in the workspace, whereas in the latter case it's as if you "redefined" it.
Modules can have initialisation code via the special __init__() function, whose job is to run only the first time a module is loaded (e.g. when imported via a using statement). So one thing you could do is have a standalone script which you could either include directly, to run as a standalone script, or include within the scope of a module definition, and have it detect the presence of module-specific variables so that it behaves differently in each case. But it would still have to be a standalone file, separate from the main module definition.
If you want the module to do stuff that the standalone script shouldn't, this is easy: you just have something like this:
module MyModule
__init__() = nothing  # do module-specific initialisation stuff here
include("MyModule_Implementation.jl")
end
If you want the reverse situation, you need a way to detect whether you're running inside the module or not. You can do this, e.g., by detecting the presence of a suitable __init__() function belonging to that particular module. For example:
### in file "MyModule.jl"
module MyModule
export fun1, fun2;
__init__() = print("Initialising module ...");
include("MyModuleImplementation.jl");
end
### in file "MyModuleImplementation.jl"
fun1(a,b) = a + b;
fun2(a,b) = a * b;
main() = print("Demo of fun1 and fun2. \n" *
               " fun1(1,2) = $(fun1(1,2)) \n" *
               " fun2(1,2) = $(fun2(1,2)) \n");
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
If MyModule is loaded as a module, the main function in MyModuleImplementation.jl will not run.
If you run MyModuleImplementation.jl as a standalone script, the main function will run.
So this is a way to achieve something close to the effect you want; but it's very different from running a module-defining file as either a module or a standalone script. I don't think you can simply "strip" the module instruction from the code and run the module's "contents" in such a manner in Julia.
The answer is available at the official Julia docs FAQ. I am copy/pasting it here because this question comes up as the first hit on some search engines. It would be nice if people found the answer on the first-hit site.
How do I check if the current file is being run as the main script?
When a file is run as the main script using julia file.jl, one might want to activate extra functionality like command-line argument handling. A way to determine that a file is run in this fashion is to check whether abspath(PROGRAM_FILE) == @__FILE__ is true.
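In code, the pattern is (a minimal sketch, assuming your entry point is called main):

# run main() only when this file is the script Julia was started with
if abspath(PROGRAM_FILE) == @__FILE__
    main()
end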
I want to use R's mathematical functions, as provided in libRmath, from OCaml. I successfully installed the library via brew tap homebrew/science && brew install --with-librmath-only r. I end up with a .dylib in /usr/local/lib and a .h in /usr/local/include. Following the OCaml ctypes tutorial, I do this in utop:
#require "ctypes.foreign";;
open Ctypes;;
open Foreign;;
let test_pow = foreign "pow_di" (float #-> int #-> returning float);;
This complains that it can't find the symbol. What am I doing wrong? Do I need to open the dynamic library first? Set some environment variables? After googling, I also did this:
nm -gU /usr/local/lib/libRmath.dylib
which gives a bunch of symbols, all with a leading underscore, including 00000000000013ff T _R_pow_di. In the header file, pow_di is defined via some #define directive from _R_pow_di. I did try variations of the name, like "R_pow_di", etc.
Edit: I tried compiling a simple C program using Rmath with Xcode. After manually adding /usr/local/include to the include path, Xcode can find the header file Rmath.h. However, inside the header file there is an include of R_ext/Boolean.h, which does not seem to exist. Xcode flags this error and compilation stops.
Noob alert: this may be totally obvious to a C programmer...
In order to use an external library, you still need to link against it. There are at least two different ways: either link using the compiler, or link even more dynamically using dlopen.
For the first method use the following command (as an initial approximation):
ocamlbuild -pkg ctypes.foreign -lflags -cclib,-lRmath yourapp.native
under the premise that your code is in the file yourapp.ml.
The second method is to use the ctypes interface to dlopen to open the library. Using the correct types and name for the C function, this goes like this:
let library = Dl.dlopen ~filename:"libRmath.dylib" ~flags:[]
let test_pow = foreign ~from:library "R_pow_di" (double @-> int @-> returning double)
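With the handle open, the bound function behaves like an ordinary OCaml function; a quick sanity check in utop might be (assuming the symbol resolved):

(* R_pow_di computes x^n for an integer n, so this should print 1024. *)
Printf.printf "%f\n" (test_pow 2.0 10);;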
I'm a newbie to NaCl, and I've found that there are many 0-byte files in the directory (nacl_sdk/pepper_38/toolchain/win_*/bin).
When I change the project platform to NaCl64 and compile hello_nacl_cpp, I get an error:
(error MSB6006: "D:\nacl_sdk\pepper_38\toolchain\win_x86_newlib\bin\x86_64-nacl-gcc.exe" exited with code -1)
But I can debug the example hello_world_gles with the PPAPI platform, so I'm not sure whether the environment is OK.
Can anyone tell me something? Thanks!
Answering my own question.
As @binji says, we should use cygtar.py (which is in the directory sdk_tools) to extract the file.
Here we go:
Open cygtar.py in your text editor; you will find a class named CygTar, which is the real worker.
Move down, and insert a snippet of code below the Main function.
def MyLogic():
    os.chdir('D:\\nacl_sdk\\sdk')
    # tar = CygTar('naclports.tar.bz2', 'r', True)  # here you must use a Linux-style file path
    tar = CygTar('naclsdk_win.tar.bz2', 'r', True)
    tar.Extract()
Then replace sys.exit(Main(sys.argv)) with sys.exit(MyLogic()) at the end of the file. That's all.
Note: if you have learned Python, you know that code indentation is very important in Python, so be careful.
I have a C++ code that can be compiled with MPI support depending on a
certain preprocessor flag; missing the appropriate flag, the sources
compile to a non-parallel version.
I would like to setup the Makefile.am so that it compiles both the
MPI-parallel and the sequential version, if an option to
./configure is given.
Here's the catch: MPI has its own C++ compiler wrapper, and insists
that sources are compiled and linked using it rather than the standard
C++ compiler. If I were to write the Makefile myself, I would have to
do something like this:
myprog.seq: myprog.cxx
	$(CXX) ... myprog.cxx

myprog.mpi: myprog.cxx
	$(MPICXX) -DWITH_MPI ... myprog.cxx
Is there a way to tell automake that it has to use $(MPICXX) instead
of $(CXX) when compiling the MPI-enabled version of the program?
I have the same problem, and I've found that there's no really good way to get autotools to conditionally use MPI compilers for particular targets. Autotools is good at figuring out which compiler to use based on what language your source is written in (CC, CXX, FC, F77, etc.), but it really isn't good at figuring out whether or not to use the MPI compiler for a particular target. You can set MPICC, MPICXX, etc., but you essentially have to rewrite all your Makefile rules for your target (as you've done above) if you use the compiler this way. And if you do that, what's the point of writing an automake file?
Someone else suggested using MPI like an external library, and this is the approach I'd advocate, but you shouldn't do it by hand, because different MPI installations have different sets of flags they pass to the compiler, and they can depend on the language you're compiling.
The good thing is that all the currently shipping MPI compiler wrappers I know of support introspection arguments, such as -show (MPICH) or -showme:compile and -showme:link (Open MPI). You can automatically extract the arguments from the scripts.
So, what I did to deal with this was to make an m4 script that extracts the defines, includes, library paths, libs, and linker flags from the MPI compilers, then assigns them to variables you can use in your Makefile.am. Here's the script:
lx_find_mpi.m4
This makes MPI work the way automake expects it to. Incidentally, this is the approach CMake uses in their FindMPI module, and I find it works quite well there. It makes the build much more convenient because you can just do something like this for your targets:
bin_PROGRAMS = mpi_exe seq_exe
# This is all you need for a sequential program
seq_exe_SOURCES = seq_exe.C
# For an MPI program you need special LDFLAGS and INCLUDES
mpi_exe_SOURCES = mpi_exe.C
mpi_exe_LDFLAGS = $(MPI_CXXLDFLAGS)
INCLUDES = $(MPI_CXXFLAGS)
There are similar flags for the other languages since, like I said, the particular flags and libraries can vary depending on which language's MPI compiler you use.
lx_find_mpi.m4 also sets some shell variables so that you can test in your configure.ac file whether MPI was found. e.g., if you are looking for MPI C++ support, you can test $have_CXX_mpi to see if the macro found it.
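For reference, the configure.ac side might look roughly like this (the macro and variable names here are assumptions read off the file name and the description above; check the comments in lx_find_mpi.m4 for the exact invocation):

dnl sketch only: names assumed from lx_find_mpi.m4
AC_LANG_PUSH([C++])
LX_FIND_MPI
AC_LANG_POP([C++])
if test "x$have_CXX_mpi" = xyes; then
  AC_MSG_NOTICE([MPI C++ wrapper found; MPI targets will be built])
fi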
I've tested this macro with mvapich and OpenMPI, as well as the custom MPICH2 implementation on BlueGene machines (though it does not address all the cross-compiling issues you'll see there). Let me know if something doesn't work. I'd like to keep the macro as robust as possible.
I am sorry that having automake use MPI is so difficult. I have been struggling with this for many months, trying to find a good solution. I have a source tree with one library and then many programs in sub-folders that use the library. Some of the folders are MPI programs, but when I try to replace CXX with the MPI compiler in Makefile.am:
if USE_MPI
MPIDIR = $(MPICOMPILE)
MPILIB = $(MPILINK)
CXX=@MPICXX@
F77=@MPIF77@
MPILIBS=$(MPILINK)
endif
I get
CXX was already defined in condition TRUE, which includes condition USE_MPI ...
configure.ac:12: ... `CXX' previously defined here
I don't have a rule that specifies the compiler, so maybe there is a way to do that.
SUBDIRS = .
bin_PROGRAMS = check_ccmr
check_ccmr_SOURCES = check_gen.cpp
check_ccmr_CXXFLAGS = -I$(INCLUDEDIR) $(MPIDIR)
check_ccmr_LDADD = -L$(LIBDIR)
check_ccmr_LDFLAGS = $(MPILIB)
If you have disabled the subdir-objects option to automake, something like this might work:
configure.ac:
AC_ARG_ENABLE([seq], ...)
AC_ARG_ENABLE([mpi], ...)
AM_CONDITIONAL([ENABLE_SEQ], [test $enable_seq = yes])
AM_CONDITIONAL([ENABLE_MPI], [test $enable_mpi = yes])
AC_CONFIG_FILES([Makefile seq/Makefile mpi/Makefile])
Makefile.am:
SUBDIRS =
if ENABLE_SEQ
SUBDIRS += seq
endif
if ENABLE_MPI
SUBDIRS += mpi
endif
sources.am:
ALL_SOURCES = src/foo.c src/bar.cc src/baz.cpp
seq/Makefile.am:
include $(top_srcdir)/sources.am
bin_PROGRAMS = seq
seq_SOURCES = $(ALL_SOURCES)
mpi/Makefile.am:
include $(top_srcdir)/sources.am
CXX = $(MPICXX)
AM_CPPFLAGS = -DWITH_MPI
bin_PROGRAMS = mpi
mpi_SOURCES = $(ALL_SOURCES)
The only thing stopping you from doing both of these in the same directory is the override of $(CXX). You could, for instance, set mpi_CPPFLAGS and automake would handle that gracefully, but the compiler switch makes it a no-go here.
A possible workaround that avoids maintaining two copies of the sources could be:
myprog.seq: myprog.cxx
	$(CXX) ... myprog.cxx

myprog-mpi.cxx: myprog.cxx
	@cp myprog.cxx myprog-mpi.cxx

myprog.mpi: myprog-mpi.cxx
	$(MPICXX) -DWITH_MPI ... myprog-mpi.cxx
	@rm -f myprog-mpi.cxx
for Automake:
bin_PROGRAMS = myprog-seq myprog-mpi
myprog_seq_SOURCES = myprog.c

myprog-mpi.c: myprog.c
	@cp myprog.c myprog-mpi.c

myprog_mpi_SOURCES = myprog-mpi.c
myprog_mpi_LDFLAGS = $(MPI_CXXLDFLAGS)
INCLUDES = $(MPI_CXXFLAGS)
BUILT_SOURCES = myprog-mpi.c
CLEANFILES = myprog-mpi.c
Here is the solution that I came up with for building two static libraries: one with MPI (libmylib_mpi.a) and one without (libmylib.a). The advantage of this method is that there is no need for duplicate source files, a single Makefile.am covers both variants, and it can use subdirs. You should be able to modify this as needed to produce a binary instead of a library. I build the non-MPI library as normal; then, for the MPI variant, I leave _SOURCES empty and use _LIBADD instead, specifying an extension of .mpi.o for the object files. I then specify a rule to generate the MPI object files using the MPI compiler.
Overall file / directory structure is something like
configure.ac
Makefile.am
src
mylib1.cpp
mylib2.cpp
...
include
mylib.h
...
configure.ac:
AC_INIT()
AC_PROG_RANLIB
AC_LANG(C++)
AC_PROG_CXX
# test for MPI, define MPICXX, etc. variables, and define HAVE_MPI as a condition that will evaluate to true if MPI is available and false otherwise.
AX_MPI([AM_CONDITIONAL([HAVE_MPI], [test "1" = "1"])],[AM_CONDITIONAL([HAVE_MPI], [test "1" = "2"])]) #MPI optional for xio
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
There is probably a more efficient way to do the conditional check than what I have listed here (suggestions welcome).
Makefile.am:
AUTOMAKE_OPTIONS = subdir-objects
lib_LIBRARIES = libmylib.a
libmylib_a_SOURCES = src/mylib_1.cpp src/mylib_2.cpp ...
#conditionally generate libmylib_mpi.a if MPI is available
if HAVE_MPI
lib_LIBRARIES += libmylib_mpi.a
libmylib_mpi_a_SOURCES = #no sources listed here
#use LIBADD to specify objects to add - use the basic filename with a .mpi.o extension
libmylib_mpi_a_LIBADD = src/mylib_1.mpi.o src/mylib_2.mpi.o ...
endif
AM_CPPFLAGS = -I${srcdir}/include
include_HEADERS = include/mylib.h
# define a rule to compile the .mpi.o objects from the .cpp files with the same name
src/%.mpi.o: ${srcdir}/src/%.cpp ${srcdir}/include/mylib.h
	$(MPICXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -DWITH_MPI=1 -c $(patsubst %.mpi.o,$(srcdir)/%.cpp,$@) -o $@
#define a rule to clean the .mpi.o files
clean-local:
	-rm -f src/*.mpi.o
MPI installations do (usually) ship with compiler wrappers, but there is no requirement that you use them; MPI does not insist on it. If you want to go your own way, you can write your own makefile to ensure that the C++ compiler gets the right libraries (etc.). To figure out what the right libraries (etc.) are, inspect the compiler wrapper, which is, on all the systems I've used, a shell script.
At first sight the compiler wrappers which ship with products such as the Intel compilers are a little daunting but stop and think about what is going on -- you are simply compiling a program which makes use of an external library or two. Writing a makefile to use the MPI libraries is no more difficult than writing a makefile to use any other library.
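For example, the standard introspection flags show exactly what a wrapper would run (Open MPI and MPICH shown; other wrappers have similar options):

$ mpicxx -showme:compile   # Open MPI: flags the wrapper adds when compiling
$ mpicxx -showme:link      # Open MPI: flags the wrapper adds when linking
$ mpicxx -show             # MPICH: the full underlying command line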