We're using Boost-Build to build our software. To facilitate this, we've written a library of rules and actions. Boost-Build allows passing in command-line arguments and will pass along any argument prefixed with --. Currently, to get hold of the arguments and check for flags, we do something like:
import modules ;
local args = [ modules.peek : ARGV ] ;
# or like this
if "--my-flag" in [ modules.peek : ARGV ]
This works for getting and checking values. However, developers who use Boost-Build and our Jam libraries have no idea that these flags are available, and they would like to see some help on these flags whenever they run either bjam -h or bjam --help. I see that Boost-Build has a help module, but I don't see any way to register arguments with the help system.
Is there a way to register command line flags, complete with short documentation, that the help system will pick up?
In response to my own question, I have added the very feature that I was asking about: https://github.com/boostorg/build/pull/23
It uses the existing options plugin system. Whenever you want to create a new command-line option, create a new module within the options directory. Within that new module, create a variable that either holds a value or is empty. Above the variable, add a comment. The comment is used as the command-line documentation, and the value (if given) is used to describe the default value of that variable.
Create a process rule within the new module and register your option with the system. This allows importing the option module and getting the value from a single source. It requires that each variable name be prefixed with the name of the module.
Create a variable named .option-description. Its value is used as the section heading, and its comment is the description of the section.
For example:
# within options/cool.jam
# These are the options for the Cool option plugin
.option-description = Cool Options ;
# This enables option1
.option.--cool-option1 ?= ;
# This enables something cool
.option.--cool-default-option ?= "my default value" ;
rule process (
    command # The option.
    : values * # The values, starting after the "=".
    )
{
    option.set $(command) : $(values) ;
}
When bjam is run with the --help-options flag, it will output all modules within the options directory that follow the above pattern. The output is similar to the following:
Boost.Build Usage:
b2 [ options... ] targets...
* -a; Build all targets, even if they are current.
* -fx; Read 'x' as the Jamfile for building instead of searching for the
Boost.Build system.
* -jx; Run up to 'x' commands concurrently.
* -n; Do not execute build commands. Instead print out the commands as they
would be executed if building.
* -ox; Output the used build commands to file 'x'.
* -q; Quit as soon as a build failure is encountered. Without this option
Boost.Jam will continue building as many targets as it can.
* -sx=y; Sets a Jam variable 'x' to the value 'y', overriding any value that
variable would have from the environment.
* -tx; Rebuild the target 'x', even if it is up-to-date.
* -v; Display the version of b2.
* --x; Any option not explicitly handled by Boost.Build remains available to
build scripts using the 'ARGV' variable.
* --abbreviate-paths; Use abbreviated paths for targets.
* --hash; Shorten target paths by using an MD5 hash.
* -dn; Enables output of diagnostic messages. The debug level 'n' and all
below it are enabled by this option.
* -d+n; Enables output of diagnostic messages. Only the output for debug
level 'n' is enabled.
Debug Levels:
Each debug level shows a different set of information. Usually with higher
levels producing more verbose information. The following levels are supported:
* 0; Turn off all diagnostic output. Only errors are reported.
* 1; Show the actions taken for building targets, as they are executed.
* 2; Show quiet actions and display all action text, as they are executed.
* 3; Show dependency analysis, and target/source timestamps/paths.
* 4; Show arguments of shell invocations.
* 5; Show rule invocations and variable expansions.
* 6; Show directory/header file/archive scans, and attempts at binding to
targets.
* 7; Show variable settings.
* 8; Show variable fetches, variable expansions, and evaluation of 'if'
expressions.
* 9; Show variable manipulation, scanner tokens, and memory usage.
* 10; Show execution times for rules.
* 11; Show parsing progress of Jamfiles.
* 12; Show graph for target dependencies.
* 13; Show changes in target status (fate).
Cool Options:
These are the options for the Cool option plugin
* --cool-option1: This enables option1. Default is disabled.
* --cool-default-option: This enables something cool. "my default value".
Later on, in your own Jam code, you can get hold of the values from the registered options by doing:
import option ;
option1 = [ option.get "--cool-option1" ] ;
if $(option1)
{
    # do something ;
}
There is no way to "register" individual options with the B2 help system, as that's simply not how the help system works. The help system documents B2 modules (i.e. classes) and projects (i.e. jamfiles) only. What you see when you do "b2 --help" is a collection of the project help information and module information. All the data is extracted from the parsed jamfiles.
For modules, you add comments to classes and arguments, and they get formatted for output. For example, take a look at the "container.jam" source. In that file, the second comment is the module help doc, the comment before "class node" is a class help doc, the comment before "rule set" is a method help doc, and the comment after the "value ?" argument is an argument help doc.
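Roughly, the comment placement looks like this; a minimal sketch of a hypothetical module (not container.jam itself), following the conventions just described:
# Copyright 2018 Example. (first comment: copyright, not used as help)

# This module implements a tiny example container (module help doc).

# Holds a single optional value (class help doc).
class example-container
{
    # Sets the value held by the container (rule help doc).
    rule set (
        value ? # The new value; empty clears the container (argument help doc).
        )
    {
        self.value = $(value) ;
    }
}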
For projects, the second comment in the project's jamfile is taken as the help doc. For example, the Boost Jamroot has a large help doc comment for the usage information. This is printed out if you:
cd <boost-root>
b2 --help
If you add such a comment to one of your project jamfiles (note: the first comment is assumed to be a copyright notice and hence skipped; the second comment block is used as the help doc), then cd to that project directory and run b2 --help, you should see it printed.
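For illustration, a minimal project jamfile could look like this (the names are placeholders):
# Copyright 2018 Example Co. (first comment block: treated as the copyright notice and skipped)

# Usage:
#
#   b2 [ options... ] [ targets... ]
#
# Builds the example project; this comment block is what "b2 --help" prints here.

project example ;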
Which type of help doc you want depends, of course, on your project and your intentions.
I am very new to Frama-C and I ran into an issue when trying to open a C source file.
The error shows as
"fatal error: event.h: No such file or directory. Compilation terminated".
[kernel] Parsing FRAMAC_SHARE/libc/__fc_builtin_for_normalization.i (no preprocessing)
[kernel] Parsing WorkSpace/bipbuffer.c (with preprocessing)
[kernel] user error: failed to run: gcc -E -C -I. -dD -D__FRAMAC__ -nostdinc -D__FC_MACHDEP_X86_32 -I/usr/share/frama-c/libc -o '/tmp/bipbuffer.ce6d077.i' '/home/xxx/WorkSpace/bipbuffer.c'
you may set the CPP environment variable to select the proper preprocessor command or use the option "-cpp-command".
[kernel] user error: stopping on file "/home/xxx/WorkSpace/bipbuffer.c" that has errors. Add '-kernel-msg-key pp' for preprocessing command.
So basically I am trying to open a C source file but it returns an error like this. I also tried other very simple C files, like hello world and some slicing examples, and they work well.
I thought it was because I didn't have the dependency providing 'event.h', but it still returns these errors after I installed the libevent dependencies. I am not sure if I need to manually set some path to the dependencies for Frama-C.
Here is part of the C file (Source link: https://memcached.org/) that I would like to open:
#include "stdio.h"
#include <stdlib.h>
/* for memcpy */
#include <string.h>
#include "bipbuffer.h"
static size_t bipbuf_sizeof(const unsigned int size)
{
    return sizeof(bipbuf_t) + size;
}

int bipbuf_unused(const bipbuf_t* me)
{
    if (1 == me->b_inuse)
        /* distance between region B and region A */
        return me->a_start - me->b_end;
    else
        return me->size - me->a_end;
}
......
Thanks,
Compilers and other tools working with C source code need to know where to find header files. There are some standard places where they look automatically, but Frama-C has fewer of those than (and different ones from) a normal compiler.
You need to find out where event.h is installed, then pass something like -cpp-extra-args "-I /path/to/directory/" to Frama-C. Pass the directory name only, not including the name event.h itself.
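So the invocation would be along these lines, where the include path is whatever directory you actually found event.h in:
frama-c -cpp-extra-args "-I /path/to/directory/" bipbuffer.c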
In addition to Isabelle Newbie's answer, I'd like to point out that the Chlorine version of Frama-C, whose beta has been recently announced, features a new option -json-compilation-database that attempts to read the arguments to be passed to the pre-processor from a compilation database.
Such a database can be generated directly by CMake, but there are solutions for make-based projects such as the one you refer to, in particular Bear, which intercepts the commands launched by make to build the database.
Here's a detailed summary of how you could proceed, using the new -json-compilation-database option from Frama-C 17 Chlorine, plus an extra script list_files.py (which is not in the beta, but will be available in the final 17 release, and can be downloaded here):
Get the source files you want to analyze with Frama-C, run ./configure, and if possible try to disable optional dependencies from external libraries; for instance, some code bases include optional dependencies based on availability of libraries/system features, but have fallback options (resorting to standard C library or POSIX functions). The more you give Frama-C, the better the chances of analyzing it well, so if such external libraries are not essential, excluding them might help get a more "POSIXy" code, which should help. This is typically visible in config.h files, in macros commonly named HAVE_*.
Compile and install Build EAR or some equivalent tool to obtain a compile_commands.json file.
Run bear make (or cmake with flag CMAKE_EXPORT_COMPILE_COMMANDS) to get the compile_commands.json file.
Run the aforementioned list_files.py in the directory containing compile_commands.json to obtain the list of C sources used during compilation.
Run Frama-C (17 Chlorine or newer), giving it the list of sources found in the previous step, plus option -json-compilation-database . to parse the compile_commands.json and, hopefully, get the appropriate preprocessing flags.
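Concretely, steps 2-5 boil down to something like the following sketch (the file names are illustrative, and the exact way list_files.py is invoked and prints its result may differ):
bear make                   # produces compile_commands.json next to the Makefile
python list_files.py        # prints the C sources referenced in compile_commands.json
frama-c -json-compilation-database . file1.c file2.c ...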
Ideally, this should suffice, but in practice, this is rarely enough. In particular due to the presence of external libraries and non-C99, non-POSIX functions, the following steps are always needed.
6. Inclusion of external libraries
At this step, Frama-C will complain about the lack of event.h. You'll have to include the headers of this library yourself. Note: copying headers directly from your /usr/include is not likely to work, due to several architecture-specific definitions, especially files such as bits/*.h.
Instead, consider downloading the external libraries and preparing them (e.g. running ./configure at least). Then manually add the extra include directory via -cpp-extra-args="-I <path/to/your/sources/for/libevent.h>/include".
7. Inclusion of missing non-POSIX headers
Some other headers may be missing, in particular GNU- or BSD-specific sources (e.g. sysexits.h). Get these headers and add them when necessary. The error message in this case comes from the preprocessor (gcc) and is similar to this:
memcached.c:51:10: fatal error: sysexits.h: No such file or directory
#include <sysexits.h>
^~~~~~~~~~~~
compilation terminated.
8. Definition of missing non-POSIX types and constants
At this point, all necessary headers should be available, but parsing with Frama-C may still fail. This is due to the usage of non-POSIX type definitions (e.g. caddr_t, struct linger) and non-POSIX constants (e.g. MAXPATHLEN, SOCK_NONBLOCK, NI_MAXSERV). Error messages typically resemble the following:
[kernel] memcached.c:3261: Failure: Cannot resolve variable MAXPATHLEN
Constants are often easy to provide manually, by grepping what's available in your /usr/include.
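For instance, MAXPATHLEN could be supplied in a small stub header included before the other sources; a sketch, assuming the value commonly found in Linux headers (verify against your own /usr/include):
/* fc_stubs.h -- hypothetical stub header for missing non-POSIX constants */
#ifndef MAXPATHLEN
#define MAXPATHLEN 4096  /* typical Linux value; check your system headers */
#endif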
Type definitions, on the other hand, may require some copy-pasting in the right places, especially if they depend on other types which are also missing. This step is hardly automatable, but relatively straightforward once you get used to some specific error messages.
For instance, the following error message is related to a missing type definition (caddr_t):
[kernel] Parsing memcached.c (with preprocessing)
[kernel] memcached.c:1074:
syntax error:
Location: line 1074, between columns 38 and 47, before or at token: c
1072 *hdr++ = 0;
1073 *hdr++ = 0;
1074 assert((void *) hdr == (caddr_t)c->msglist[i].msg_iov[0].iov_base + UDP_HEADER_SIZE);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1075 }
1076
Note that the token just before c is (caddr_t), which has never been defined (it is often defined as either void * or char *).
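So a one-line fix, e.g. in the same kind of stub header as above, could be the following (char * is an assumption here; void * is the other common choice):
/* caddr_t ("core address") is a pre-POSIX BSD type */
typedef char *caddr_t;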
The following error message is related to an incomplete type, i.e., a struct used somewhere but never defined:
[kernel] memcached.c:5811: User Error:
variable `ling' has initializer but incomplete type
It means that variable ling's type, which is struct linger (non-POSIX), has never been defined. In this case, we can copy it from our /usr/include/bits/socket.h:
struct linger
{
int l_onoff; /* Nonzero to linger on close. */
int l_linger; /* Time to linger. */
};
Note: if there are POSIX constants/definitions missing from Frama-C's libc, consider notifying its developers, or proposing pull requests in Frama-C's Github.
9. Fixing incompatible and missing function prototypes
Parsing is likely to succeed after the previous step, but it may still fail due to incompatible function prototypes. For instance, you may get:
[kernel] User Error: Incompatible declaration for usleep:
different integer types int and unsigned int
First declaration was at assoc.c:238
Current declaration is at items.c:1573
This is the consequence of a warning emitted earlier:
[kernel:typing:implicit-function-declaration] slabs.c:1150: Warning:
Calling undeclared function usleep. Old style K&R code?
It means that function usleep is called, but it does not have a prototype, therefore Frama-C uses the pre-C99 convention of "implicit int": it generates such a prototype, but later in the code, an actual declaration of usleep is found, and its type is not int. Hence the error.
To prevent this, you need to ensure usleep's prototype is properly included. Since it is not POSIX.1-2008, you need to either define/undefine the appropriate macros (see unistd.h), or add your own prototype.
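A minimal sketch of the second option; the signature below matches the usual unistd.h declaration, where useconds_t is an unsigned integer type:
/* Own prototype for usleep, for when the feature-test macros hide it in unistd.h */
int usleep(unsigned int usec);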
At the end, this should allow Frama-C to parse the files and build an AST.
However, several prototypes are still missing; we were just lucky that none of them conflicted with actual declarations. Ideally, you'll consider the parsing stage done when there are no more messages such as implicit-function-declaration and similar warnings.
Some of the missing prototypes in memcached, such as getsubopt, are POSIX and should be integrated into Frama-C's standard library. Others might become part of a small library of non-standard stubs, to be reused for other software.
Contributing with results for future reuse
Successful conclusion of the parsing stage for such open source libraries is enough to consider them for integration into this repository of open source case studies, so that future users can start their analyses without having to redo all of these steps. (The repository is oriented towards Eva, but not exclusively: parsing is useful for all of Frama-C plug-ins.)
I would like to know how, in the Julia language, I can determine whether a file.jl is run as a script, as in the call:
bash$ julia file.jl
Only in that case should it start a function main, for example. That way I could use include("file.jl") without actually executing the function.
To be specific, I am looking for something similar answered already in a python question:
def main():
    # does something
    pass

if __name__ == '__main__':
    main()
Edit:
To be more specific, the function Base.isinteractive (see here) does not solve the problem when using include("file.jl") from within a non-interactive (e.g. script) environment.
The global constant PROGRAM_FILE contains the script name passed to Julia from the command line (it does not change when include is called).
On the other hand, the @__FILE__ macro gives you the name of the file in which it appears.
For instance, if you have the files:
a.jl
println(PROGRAM_FILE)
println(@__FILE__)
include("b.jl")
b.jl
println(PROGRAM_FILE)
println(@__FILE__)
You have the following behavior:
$ julia a.jl
a.jl
D:\a.jl
a.jl
D:\b.jl
$ julia b.jl
b.jl
D:\b.jl
In summary:
PROGRAM_FILE tells you the name of the file that Julia was started with;
@__FILE__ tells you in which file the macro was actually called.
tl;dr version:
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
Explanation:
There seems to be some confusion. Python and Julia work very differently in terms of their "modules" (even though the two use the same term, in principle they are different).
In Python, a source file is either a module or a script, depending on how you chose to "load" / "run" it: the boilerplate exists to detect the environment in which the source code was run, by querying the __name__ of the embedding module at the time of execution. E.g. if you have a file called mymodule.py and you import it normally, then within the module definition the variable __name__ automatically gets set to the value mymodule; but if you run it as a standalone script (effectively "dumping" the code into the "main" module), the __name__ variable is that of the global scope, namely __main__. This difference gives you the ability to detect how a Python file was run, so you can act slightly differently in each case, and this is exactly what the boilerplate does.
In Julia, however, a module is defined explicitly as code. Running a file that contains a module declaration will load that module regardless of whether you did using or include; however, in the former case the module will not be reloaded if it's already in the workspace, whereas in the latter case it's as if you "redefined" it.
Modules can have initialisation code via the special __init__() function, whose job is to only run the first time a module is loaded (e.g. when imported via a using statement). So one thing you could do is have a standalone script, which you could either include directly to run as a standalone script, or include it within the scope of a module definition, and have it detect the presence of module-specific variables such that it behaves differently in each case. But it would still have to be a standalone file, separate from the main module definition.
If you want the module to do stuff that the standalone script shouldn't, this is easy: you just have something like this:
module MyModule
__init__() = nothing  # do module-specific initialisation stuff here
include("MyModule_Implementation.jl")
end
If you want the reverse situation, you need a way to detect whether you're running inside the module or not. You could do this, e.g., by detecting the presence of a suitable __init__() function belonging to that particular module. For example:
### in file "MyModule.jl"
module MyModule
export fun1, fun2;
__init__() = print("Initialising module ...");
include("MyModuleImplementation.jl");
end
### in file "MyModuleImplementation.jl"
fun1(a,b) = a + b;
fun2(a,b) = a * b;
main() = print("Demo of fun1 and fun2. \n" *
" fun1(1,2) = $(fun1(1,2)) \n" *
" fun2(1,2) = $(fun2(1,2)) \n");
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
If MyModule is loaded as a module, the main function in MyModuleImplementation.jl will not run.
If you run MyModuleImplementation.jl as a standalone script, the main function will run.
So this is a way to achieve something close to the effect you want, but it's very different from running a module-defining file as either a module or a standalone script; I don't think you can simply "strip" the module instruction from the code and run the module's "contents" in that manner in Julia.
The answer is available at the official Julia docs FAQ. I am copy/pasting it here because this question comes up as the first hit on some search engines. It would be nice if people found the answer on the first-hit site.
How do I check if the current file is being run as the main script?
When a file is run as the main script using julia file.jl, one might want to activate extra functionality like command-line argument handling. A way to determine that a file is run in this fashion is to check whether abspath(PROGRAM_FILE) == @__FILE__ is true.
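In code, with a main function defined as in the question, that check is simply:
if abspath(PROGRAM_FILE) == @__FILE__
    main()
end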
I'm getting this on the console in a QML app:
QFont::setPointSizeF: Point size <= 0 (0.000000), must be greater than 0
The app is not crashing, so I can't use the debugger to get a backtrace for the exception. How do I see where the error originates?
If you know the function the warning occurs in (in this case, QFont::setPointSizeF()), you can put a breakpoint there. Following the stack trace will lead you to the code that calls that function.
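With gdb, for instance, that could look like this (the application name is a placeholder):
gdb ./my-qml-app
(gdb) break QFont::setPointSizeF
(gdb) run
(gdb) backtrace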
If the warning doesn't include the name of the function and you have the source code available, use git grep with part of the warning to get an idea of where it comes from. This approach can be a bit of trial and error, as the code may span more than one line, etc., so you might have to try different parts of the string.
If the warning doesn't include the name of the function, you don't have the source code available and/or you don't like the previous approach, use the QT_MESSAGE_PATTERN environment variable:
QT_MESSAGE_PATTERN="%{function}: %{message}"
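For example, launching the application like this prefixes each message with the emitting function (the application name is a placeholder):
QT_MESSAGE_PATTERN="%{function}: %{message}" ./my-qml-app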
For the full list of variables at your disposal, see the qSetMessagePattern() docs:
%{appname} - QCoreApplication::applicationName()
%{category} - Logging category
%{file} - Path to source file
%{function} - Function
%{line} - Line in source file
%{message} - The actual message
%{pid} - QCoreApplication::applicationPid()
%{threadid} - The system-wide ID of current thread (if it can be obtained)
%{qthreadptr} - A pointer to the current QThread (result of QThread::currentThread())
%{type} - "debug", "warning", "critical" or "fatal"
%{time process} - time of the message, in seconds since the process started (the token "process" is literal)
%{time boot} - the time of the message, in seconds since the system boot if that can be determined (the token "boot" is literal). If the time since boot could not be obtained, the output is indeterminate (see QElapsedTimer::msecsSinceReference()).
%{time [format]} - system time when the message occurred, formatted by passing the format to QDateTime::toString(). If the format is not specified, the format of Qt::ISODate is used.
%{backtrace [depth=N] [separator="..."]} - A backtrace with the number of frames specified by the optional depth parameter (defaults to 5), and separated by the optional separator parameter (defaults to "|"). This expansion is available only on some platforms (currently only platforms using glibc). Names are only known for exported functions. If you want to see the name of every function in your application, use QMAKE_LFLAGS += -rdynamic. When reading backtraces, take into account that frames might be missing due to inlining or tail call optimization.
On an unrelated note, the %{time [format]} placeholder is quite useful to quickly "profile" code by qDebug()ing before and after it.
I think you can use qInstallMessageHandler (Qt5) or qInstallMsgHandler (Qt4) to specify a callback which will intercept all qDebug() / qInfo() / etc. messages (example code is in the link). Then you can just add a breakpoint in this callback function and get a nice callstack.
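A minimal Qt 5 sketch of such a handler; it simply forwards everything to stderr, and you can put the breakpoint anywhere inside it:
#include <QCoreApplication>
#include <QString>
#include <cstdio>

// All qDebug()/qWarning()/qCritical() messages are routed through this handler.
void myMessageHandler(QtMsgType type, const QMessageLogContext &context, const QString &msg)
{
    Q_UNUSED(type);
    // Break here and walk up the call stack to find who emitted the message.
    std::fprintf(stderr, "%s (%s:%d, %s)\n",
                 qPrintable(msg), context.file, context.line, context.function);
}

int main(int argc, char *argv[])
{
    qInstallMessageHandler(myMessageHandler); // install before any logging happens
    QCoreApplication app(argc, argv);
    // ... rest of the application ...
    return app.exec();
}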
Aside from the obvious, searching your code for calls to setPointSize[F], you can try the following depending on your environment (which you didn't disclose):
If you have the debugging symbols of the Qt libs installed and are using a decent debugger, you can set a conditional breakpoint on the first line in QFont::setPointSizeF() with the condition set to pointSize <= 0. Even if conditional breakpoints don't work you should still be able to set one and step through every call until you've found the culprit.
On Linux there's the tool ltrace which displays all calls of a binary into shared libs, and I suppose there's something similar in the M$ VS toolbox. You can grep the output for calls to setPointSize directly, but of course this won't work for calls within the lib itself (which I guess could be the case when it handles the QML internally).
I have a 16-bit MPU which differs from x86_16 in the sizes of size_t, ptrdiff_t, etc. Can anybody give me details and clear instructions on how to customize the machine dependency in Frama-C for my MPU?
There is currently no way to do that directly from the command line: you have to write a small OCaml script that will essentially define a new Cil_types.mach (a record containing the necessary information about your architecture) and register it through File.new_machdep. Assuming you have a file my_machdep.ml looking like this:
let my_machdep = {
  Cil_types.sizeof_short = 2;
  sizeof_int = 2;
  sizeof_long = 4;
  (* ... See `cil_types.mli` for the complete list of fields to define *)
}
let () = File.new_machdep "my_machdep" my_machdep
You will then be able to launch Frama-C that way to use the new machdep:
frama-c -load-script my_machdep.ml -machdep my_machdep [normal options]
If you want to have the new machdep permanently available, you can make it a Frama-C plugin. For that, you need a Makefile of the following form:
FRAMAC_SHARE:=$(shell frama-c -print-share-path)
PLUGIN_NAME=Custom_machdep
PLUGIN_CMO=my_machdep
include $(FRAMAC_SHARE)/Makefile.dynamic
my_machdep must be the name of your .ml file. Be sure to choose a different name for PLUGIN_NAME. Then, create an empty Custom_machdep.mli file (touch Custom_machdep.mli should do the trick). Afterwards, make && make install should compile and install the plug-in so that it will be automatically loaded by Frama-C. You can verify this by launching frama-c -machdep help that should output my_machdep among the list of known machdeps.
UPDATE
If you are using some headers from Frama-C's standard library, you will also have to update $(frama-c -print-share-path)/libc/__fc_machdep.h in order to define appropriate macros (related to limits.h and stdint.h mostly).
We are moving to Scala/SBT from a Java/Gradle stack. Our Gradle builds leveraged a task called processResources and an Ant filter named ReplaceTokens to dynamically replace tokens in a checked-in .properties file without actually changing the .properties file (only changing the output). The Gradle task looks like:
processResources {
    def whoami = System.getProperty( 'user.name' );
    def hostname = InetAddress.getLocalHost().getHostName()
    def buildTimestamp = new Date().format('yyyy-MM-dd HH:mm:ss z')
    filter ReplaceTokens, tokens: [
        "buildsig.version" : project.version,
        "buildsig.classifier" : project.classifier,
        "buildsig.timestamp" : buildTimestamp,
        "buildsig.user" : whoami,
        "buildsig.system" : hostname,
        "buildsig.tag" : buildTag
    ]
}
This task locates all the template files in the src/main/resources directory, performs the requisite substitutions, and outputs the results to build/resources/main. In other words, it transforms src/main/resources/buildsig.properties from...
buildsig.version=#buildsig.version#
buildsig.classifier=#buildsig.classifier#
buildsig.timestamp=#buildsig.timestamp#
buildsig.user=#buildsig.user#
buildsig.system=#buildsig.system#
buildsig.tag=#buildsig.tag#
...to build/resources/main/buildsig.properties...
buildsig.version=1.6.5
buildsig.classifier=RELEASE
buildsig.timestamp=2013-05-06 09:46:52 PDT
buildsig.user=jenkins
buildsig.system=bobk-mbp.local
buildsig.tag=dev
This, ultimately, finds its way into the WAR file at WEB-INF/classes/buildsig.properties. It works like a champ for recording build-specific information in a properties file which gets loaded from the classpath at runtime.
What do I do in SBT to get something like this done? I'm new to Scala/SBT, so please forgive me if this seems a stupid question. At the end of the day, what I need is a means of pulling some information from the environment on which I build and placing that information into a properties file that is classpath-loadable at runtime. Any insights you can give to help me get this done are greatly appreciated.
The sbt-buildinfo plugin is a good option. The README shows an example of how to define custom mappings and mappings that should run on each compile. In addition to the straightforward addition of normal settings like version shown there, you want a section like this:
buildInfoKeys ++= Seq[BuildInfoKey](
  "hostname" -> java.net.InetAddress.getLocalHost().getHostName(),
  "whoami" -> System.getProperty("user.name"),
  BuildInfoKey.action("buildTimestamp") {
    java.text.DateFormat.getDateTimeInstance.format(new java.util.Date())
  }
)
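For reference, wiring the plugin in looks roughly like this; the plugin version below is an assumption, so check the sbt-buildinfo README for the current one:
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-buildinfo" % "0.11.0")

// build.sbt
lazy val root = (project in file("."))
  .enablePlugins(BuildInfoPlugin)
  .settings(
    buildInfoKeys := Seq[BuildInfoKey](name, version, scalaVersion),
    buildInfoPackage := "buildsig" // the generated BuildInfo object ends up in this package
  )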
Would the following be what you're looking for?
sbt-editsource: An SBT plugin for editing files
sbt-editsource is a text substitution plugin for SBT 0.11.x and
greater. In a way, it’s a poor man’s sed(1), for SBT. It provides the
ability to apply line-by-line substitutions to a source text file,
producing an edited output file. It supports two kinds of edits:
Variable substitution, where ${var} is replaced by a value.
sed-like regular expression substitution.
This is from Community Plugins.