Satisfying Proof Obligations for memcpy? [Frama-C]

We've been using Frama-C for 'experimental' static analysis on a commercial project (integrated into our CI, with a few selective blocking checks, on a small section of the overall codebase).
One of the snags that comes up relates to satisfying the proof obligations that the WP plugin generates any time it encounters a memcpy call, specifically the three obligations attached to memcpy's ACSL specification.
From the 'goal' notes, it looks like Frama-C is trying to prove that the destination and source memory regions are valid, and that they are separated.
I've tried adding requires \valid() preconditions, but that doesn't seem to help. In these instances, the memcpy call within the function under test copies data from an input parameter of the function into a local variable (scoped within the function).
To further complicate matters, the local variable that the data is copied into is a field of a packed struct.
Concretely, I'm hoping that someone out there is able to share some real examples of memcpy uses where the goals introduced by WP can be satisfied (e.g. what preconditions must I add to make the call provable?).
If it matters, I'm running Frama-C Magnesium-20151002 (according to apt-get on Ubuntu 16, this is 'up to date'), and invoking with the following parameters:
frama-c -wp -wp-split -wp-dynamic -lib-entry -wp-proof alt-ergo -wp-report
Also related, but missing a clear working example: Frama-c : Trouble understanding WP memory models

As you mentioned in your comment, the proper solution is to use -wp-model "Typed+Cast" in order to let WP accept casts to/from void* (more precisely, it will consider that p and (void*)p are the same thing for any pointer, which will be sufficient for proving the requires of memcpy). Now, as mentioned in the answer to the question you linked to, the main issue of this memory model (and the reason why it is not the default) is that it is inherently unsafe: it relies on hypotheses that by definition cannot be assessed by WP itself. Here is a small example that highlights this issue:
int x;
char *c;

/*@ assigns c;
    ensures c == (char *)&x;
*/
void g(void) {
  c = (char *)&x;
}

/*@ assigns \nothing;
    ensures \separated(&x, c);
*/
void f() {
}

void main() {
  g();
  f();
  //@ assert \false;
}
Basically, the default Typed memory model ensures the separation between the location pointed to by c and x (i.e. the post-condition of f), because int and char are different, and you can neither prove the post-condition of g nor use it as a hypothesis to derive \false in main, because the equality cannot be expressed in the model at all.
Now, if you use Typed+Cast, the post-condition of g is properly understood, and completely trivial to prove. WP won't let you prove at the same time that &x and c are separated, because they are involved together in an assignment. However, in f no such assignment exists, and the post-condition is also easily proved, leading to proving \false in main, since we have two contradictory statements about &x and c.

More generally, WP relies on a local alias analysis to track potential aliases between pointers of different types (a global analysis would defeat the purpose of having a modular analyzer). Passing option -wp-model +Cast can thus be seen as a way to tell WP "Trust me, the program won't create mis-typed aliases". It is however possible to pass alias information by hand (or with the help of, e.g., a yet-to-be-written global alias detection plug-in). For instance, with option -wp-alias-vars x,c the post-condition of f becomes Unknown (i.e. the separation between &x and c is no longer an assumption, even for f).
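Coming back to the original memcpy question, here is the kind of contract that usually lets WP discharge the obligations on such a call. This is a minimal, untested sketch: copy_into_packed and struct frame are hypothetical names, and whether every goal is discharged automatically depends on the Frama-C version, the libc specification of memcpy, and the memory model:

#include <string.h>

/* Hypothetical packed struct, mirroring the question's setup;
   handling of packing may depend on the selected machdep. */
#pragma pack(1)
struct frame {
  unsigned char tag;
  unsigned int payload;
};
#pragma pack()

/*@ requires \valid_read(src + (0 .. sizeof(struct frame) - 1));
    assigns \nothing;
*/
void copy_into_packed(const unsigned char *src) {
  /* The destination is a local variable, so WP can establish its
     validity and its separation from src on its own; the requires
     clause supplies validity of the source over the copied range. */
  struct frame local;
  memcpy(&local, src, sizeof(struct frame));
}

Invoking e.g. frama-c -wp -wp-model "Typed+Cast" file.c then lets WP match the call against memcpy's specification, with +Cast taking care of the void* casts discussed above.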


How to prove integers are finite in Frama-C

I have a little snippet of C that looks like this:
double sum(double a, double b) {
  return a+b;
}
When I run this with eva-wp, I get the error:
non-finite double value. assert \is_finite(\add_double(a, b));
But when I add:
requires \is_finite(\add_double(a, b));
then it gets status unknown.
I feel like I'm making a very simple mistake, but I don't know what it is!
If we consider file.c:
/*@ requires \is_finite(\add_double(a,b)); */
double sum(double a, double b) {
  return a+b;
}
and launch Eva with frama-c -eva -main sum file.c, we indeed have an Unknown status, and the alarm is still there. In fact, these are two separate issues.
First, when there is a requires clause in the main entry point, Eva tries to evaluate it against the generic initial state that it computes anyway, in which double values can be any finite double (or even infinite or NaN, depending on the value of the kernel option -warn-special-float). In this generic state, the requires does not hold, hence the Unknown status.
Second, Eva is not able to take advantage of the requires clause to reduce this initial state to an abstract state in which the property holds: this would require maintaining some kind of relation between a and b, which is not in the scope of the abstract domains currently implemented within Eva. Hence, the analysis is done with exactly the same abstract state as without the requires, and leads to the same alarm.
In order to validate the requires (and make the alarm disappear), you can use a wrapper function, that will build a more appropriate initial state and call the function under analysis, as in e.g.
#include <float.h>
#include "__fc_builtin.h"

/*@ requires \is_finite(\add_double(a,b)); */
double sum(double a, double b) {
  return a+b;
}

void wrapper() {
  double a = Frama_C_double_interval(-DBL_MAX/2, DBL_MAX/2);
  double b = Frama_C_double_interval(-DBL_MAX/2, DBL_MAX/2);
  double res = sum(a,b);
}
Frama_C_double_interval, declared in __fc_builtin.h in $(frama-c -print-share-path)/libc together with similar functions for other scalar types, returns a random double between the bounds given as argument. Thus, from the point of view of Eva, the result is in the corresponding interval. By launching frama-c -eva -main wrapper file.c, we don't have any alarm (and the requires is valid). More generally, such wrappers are usually the easiest way to build an initial state for conducting an analysis with Eva.

What's a cterm?

The Isabelle implementation manual says:
Types ctyp and cterm represent certified types and terms, respectively. These are abstract datatypes that guarantee that its values have passed the full well-formedness (and well-typedness) checks, relative to the declarations of type constructors, constants etc. in the background theory.
My understanding is: when I write cterm t, Isabelle checks that the term is well-built according to the theory in which it lives.
The abstract types ctyp and cterm are part of the same inference kernel that is mainly responsible for thm. Thus syntactic operations on ctyp and cterm are located in the Thm module, even though theorems are not yet involved at that stage.
My understanding is: if I want to modify a cterm at the ML level I will use operations of the Thm module (where can I find that module?)
Furthermore, it looks like cterm t is an entity that converts a term of the theory level to a term of the ML-level. So I inspect the code of cterm in the declaration:
ML_val ‹
  some_simproc @{context} @{cterm "some_term"}
›
and get to ml_antiquotations.ML:
ML_Antiquotation.value \<^binding>‹cterm› (Args.term >> (fn t =>
"Thm.cterm_of ML_context " ^ ML_Syntax.atomic (ML_Syntax.print_term t))) #>
This line of code is unreadable to me with my current knowledge.
I wonder if someone could give a better low-level explanation of cterm. What is the meaning of the code above? Where are the checks that cterm performs on theory terms located? Where are the manipulations that we can do on cterms (the module Thm above) located?
The ‘c’ stands for ‘certified’ (or ‘checked’? Not sure). A cterm is basically a term that has undergone checking. The @{cterm …} antiquotation allows you to simply write down terms and directly get a cterm in various contexts (in this case probably the ML context, i.e. you directly get a cterm value with the intended content). The same works for regular terms, i.e. @{term …}.
You can manipulate cterms directly using the functions from the Thm structure (which, incidentally, can be found in ~~/src/Pure/thm.ML; most of these basic ML files are in the Pure directory). However, in my experience, it is usually easier to just convert the cterm to a regular term (using Thm.term_of – unlike Thm.cterm_of, this is a very cheap operation) and then work with the term instead. Directly manipulating cterms only really makes sense if you need another cterm in the end, because re-certifying terms is fairly expensive (still, unless your code is called very often, it probably isn't really a performance problem).
In most cases, I would say the workflow is like this: If you get a cterm as an input, you turn it into a regular term. This you can easily inspect/take apart/whatever. At some point, you might have to turn it into a cterm again (e.g. because you want to instantiate some theorem with it or use it in some other way that involves the kernel) and then you just use Thm.cterm_of to do that.
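A small round-trip illustrating that workflow (a sketch; the term and the value names are arbitrary):

ML_val ‹
  val ct  = @{cterm "1 + (1::nat)"};   (* a certified term *)
  val t   = Thm.term_of ct;            (* cheap: unwrap to a plain term *)
  (* inspect or transform t as an ordinary term here *)
  val ct' = Thm.cterm_of @{context} t; (* re-certify against the context *)
›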
I don't know exactly what the @{cterm …} antiquotation does internally, but I would imagine that at the end of the day, it just parses its parameter as an Isabelle term and then certifies it with the current context (i.e. @{context}) using something like Thm.cterm_of.
To gather my findings about cterms, I am posting an answer.
This is what a cterm looks like in Pure:
abstype cterm =
  Cterm of {cert: Context.certificate,
            t: term, T: typ,
            maxidx: int,
            sorts: sort Ord_List.T}
(To be continued)

In (Free) Pascal, can a function return a value that can be modified without dereference?

In Pascal, I understand that one could create a function returning a pointer which can be dereferenced and then assign a value to that, such as in the following (obnoxiously useless) example:
type ptr = ^integer;

var d: integer;

function f(x: integer): ptr;
begin
  f := @x;
end;

begin
  f(d)^ := 4;
end.
And now d is 4.
(The actual usage is to access part of a quite complicated array of records data structure. I know that a class would be better than an array of nested records, but it isn't my code (it's TeX: The Program) and was written before Pascal implementations supported object-orientation. The code was written using essentially a language built on top of Pascal that added macros which expand before the compiler sees them. Thus you could define some macro m that takes an argument x and expands into thearray[x + 1].f1.f2 instead of writing that every time; the usage would be m(x) := somevalue. I want to replicate this functionality with a function instead of a macro.)
However, is it possible to achieve this functionality without the ^ operator? Can a function f be written such that f(x) := y (no caret) assigns the value y to x? I know that this is stupid and the answer is probably no, but I just (a) don't really like the look of it and (b) am trying to mimic exactly the form of the macro I mentioned above.
References are not first class objects in Pascal, unlike languages such as C++ or D. So the simple answer is that you cannot directly achieve what you want.
Using a pointer as you illustrated is one way to achieve the same effect although in real code you'd need to return the address of an object whose lifetime extends beyond that of the function. In your code that is not the case because the argument x is only valid until the function returns.
You could use an enhanced record with operator overloading to encapsulate the pointer, and so encapsulate the pointer dereferencing code. That may be a good option, but it very much depends on your overall problem, of which we do not have sight.

Reflecting on a Type parameter

I am trying to create a function
import Language.Reflection
foo : Type -> TT
I tried it by using the reflect tactic:
foo = proof
  {
    intro t
    reflect t
  }
but this reflects on the variable t itself:
*SOQuestion> foo
\t => P Bound (UN "t") (TType (UVar 41)) : Type -> TT
Reflection in Idris is a purely syntactic, compile-time only feature. To predict how it will work, you need to know about how Idris converts your program to its core language. Importantly, you won't be able to get ahold of reflected terms at runtime and reconstruct them like you would with Lisp. Here's how your program is compiled:
Internally, Idris creates a hole that will expect something of type Type -> TT.
It runs the proof script for foo in this state. We start with no assumptions and a goal of type Type -> TT. That is, there's a term being constructed which looks like ?rhs : Type -> TT . rhs. The ?foo : ty . body syntax shows that there's a hole called foo whose eventual value will be available inside of body.
The step intro t creates a function whose argument is t : Type - this means that we now have a term like ?foo_body : TT . \t : Type => foo_body.
The reflect t step then fills the current hole by taking the term on its right-hand side and converting it to a TT. That term is in fact just a reference to the argument of the function, so you get the variable t. reflect, like all other proof script steps, only has access to the information that is available directly at compile time. Thus, the result of filling in foo_body with the reflection of the term t is P Bound (UN "t") (TType (UVar (-1))).
If you could do what you are wanting here, it would have major consequences both for understanding Idris code and for running it efficiently.
The loss in understanding would come from the inability to use parametricity to reason about the behavior of functions based on their types. All functions would effectively become potentially ad-hoc polymorphic, because they could (say) run differently on lists of strings than on lists of ints.
The loss in performance would come from representing enough type information to do the reflection. After Idris code is compiled, there is no type information left in it (unlike in a system such as the JVM or .NET or a dynamically typed system such as Python, where types have a runtime representation that code can access). In Idris, types can be very large, because they can contain arbitrary programs - this means that far more information would have to be maintained, and computation occurring at the type level would also have to be preserved and repeated at runtime.
If you're wanting to reflect on the structure of a type for further proof automation at compile time, take a look at the applyTactic tactic. Its argument should be a function that takes a reflected context and goal and gives back a new reflected tactic script. An example can be seen in the Data.Vect source.
So I suppose the summary is that Idris can't do what you want, and it probably never will be able to, but you might be able to make progress another way.

Why is code unreachable in Frama-C Value Analysis?

When running Frama-C value analysis on some benchmarks, e.g. susan from http://www.eecs.umich.edu/mibench/automotive.tar.gz, we noticed that a lot of blocks are considered dead code or unreachable. However, in practice, this code is executed, as we printed out some debug information from these blocks. Has anybody else noticed this issue? How can we solve it?
Your code has a peculiarity which is not in Pascal's list, and which explains some parts of the dead code. Quite a few functions are declared like this:
f(int x, int y);
The return type is entirely missing. The C standard indicates that such functions should return int, and Frama-C follows this convention. When parsing those functions, it indicates that they never return anything on some of their paths:
Body of function f falls-through. Adding a return statement.
On top of the return statement, Frama-C also adds an /*@ assert \false; */ annotation, to indicate that the execution paths of the functions that return nothing should be dead code. In your code, this annotation is always reached (and thus false): those functions were really meant to return void, not int. You should correct your code with the proper return type.
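To illustrate the point (a hypothetical reconstruction, not the benchmark's actual code):

#include <stdio.h>

/* Before: the return type is omitted, so C90 rules (which Frama-C follows)
   make the function return int. The body falls through without returning a
   value, so Frama-C appends a return statement preceded by an
   assert \false annotation. */
f(int x, int y) {
  printf("%d\n", x + y);
}

/* After: the intended return type is explicit; no artificial dead-code
   annotation is generated. */
void f_fixed(int x, int y) {
  printf("%d\n", x + y);
}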
Occurrences of dead code in the results of Frama-C's value analysis boil down to two aspects, and even these two aspects are only a question of human intention: from the point of view of the analyzer they are indistinguishable.
Real bugs that occur with certainty every time a particular statement is reached. For instance, the code after y = 0; x = 100 / y; is unreachable because the program stops at the division every time. Some bugs that should be run-time errors do not always stop execution, for instance writing to an invalid address; consider yourself lucky that they do in Frama-C's value analysis, not the other way round.
Lack of configuration of the analysis context, including: not having provided an informative main() function that sets up variation ranges for the program's inputs with built-in functions such as Frama_C_interval(); missing library functions for which neither specifications nor replacement code are provided; assembly code inside the C program; a missing option -absolute-valid-range when one would be appropriate; ...
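For the second aspect, the usual remedy is a small analysis entry point that builds a representative initial state. A minimal sketch, where process and analysis_main are hypothetical names and Frama_C_interval comes from Frama-C's __fc_builtin.h:

#include "__fc_builtin.h"

int process(int sample);  /* the function under analysis (hypothetical) */

/* Analysis entry point: give the input a realistic variation range so the
   value analysis explores branches that a single arbitrary state would
   leave apparently dead. */
void analysis_main(void) {
  int sample = Frama_C_interval(0, 255);  /* any value in [0, 255] */
  process(sample);
}

Launching the analysis with -main analysis_main then covers the whole input range instead of a single default state.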
