Consider the following C code:
#include <assert.h>

//@ requires p < q;
void f(char *p, char *q)
{
    assert(p <= q-1);
}

//@ requires a < b;
void g(int a, int b)
{
    assert(a <= b-1);
}
Using Alt-Ergo, Frama-C successfully proves that the assertion in g() holds, but fails to prove the same for f(). Why?
Formally, pointers and integers are two very different things. In particular, C semantics states that pointer comparison is well defined only for pointers that point into the same allocated block (or one past the end of that block). This is reflected in the memory model used by the WP plugin of Frama-C in the definition of addr_le and friends (see $(frama-c -print-share-path)/wp/why3/Memory.why), where the pointers are checked to have the same base address before the comparison is done on their offsets.
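Concretely, one way to make the goal on f() provable is to strengthen the precondition so that WP knows both pointers lie in the same allocated block. A minimal sketch (assuming your WP version accepts \base_addr in this position):

//@ requires \base_addr(p) == \base_addr(q) && p < q;
void f(char *p, char *q)
{
    assert(p <= q-1);
}

With that hypothesis, addr_le reduces to a comparison on offsets, exactly as in the purely integer case of g().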
Related:
The closest answer I found may be related to -absolute-valid-range for the Eva plugin, but is that it? Do I have to come up with read/write ACSL predicates to do dummy reads/writes?
Sample code:
#include <stdint.h>

#define BASE_ADDR  0x0e000000
#define BASE_LIMIT 0x0e001000
#define TEST_REG   0x10

/*@ requires BASE_ADDR <= addr < BASE_LIMIT;
    assigns \nothing;
*/
static inline uint32_t mmio_read32(volatile uintptr_t addr)
{
    volatile uint32_t *ptr = (volatile uint32_t *)addr;
    return *ptr;
}

/*@ requires 0 <= offset <= 0x1000;
    assigns \nothing;
*/
static inline uint32_t read32(uintptr_t offset)
{
    return mmio_read32((uintptr_t)BASE_ADDR + offset);
}

void main(){
    uint32_t test;
    test = read32(TEST_REG);
    return;
}
Frama-C command and output:
frama-c -absolute-valid-range 0x0e000000-0x0e001000 -wp mmio2.c
[kernel] Parsing mmio2.c (with preprocessing)
[wp] Warning: Missing RTE guards
[wp] 6 goals scheduled
[wp] [Alt-Ergo] Goal typed_read32_call_mmio_read32_pre : Unknown (Qed:4ms) (51ms)
[wp] Proved goals: 5 / 6
     Qed: 5
     Alt-Ergo: 0 (unknown: 1)
How to discharge goal "typed_read32_call_mmio_read32_pre" or is this expected?
The fact that the proof fails is related to two independent issues, neither of which is related to using absolute addresses.
First, as the argument of mmio_read32 is marked volatile, WP considers that its value can be anything. In particular, none of the hypotheses made on offset are known to hold when evaluating addr. You can see that in the GUI by looking at the generated goal (go to the WP Goals tab at the bottom and double-click in the Script column on the line of the failed proof attempt):
Goal Instance of 'Pre-condition' (call 'mmio_read32'):
Assume {
  Type: is_uint32(offset_0).
  (* Pre-condition *)
  Have: (0 <= offset_0) /\ (offset_0 <= 4095).
}
Prove: (234881024 <= w) /\ (w_1 <= 234885119).
w and w_1 correspond to two read accesses to the volatile content of addr. I'm not sure if you really intended the addr parameter to be volatile (as opposed to a non-volatile pointer to a volatile location), as this would require a pretty weird execution environment.
Assuming that the volatile qualifier should not be present in the declaration of addr, a remaining issue is that the constraint on offset in read32 is too weak: it should read offset < 0x1000, with a strict inequality (or the precondition of mmio_read32 should be made non-strict, but again that would be quite uncommon).
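Under those two assumptions, a sketch of the corrected code would be the following (not tested against this exact Frama-C version):

/*@ requires BASE_ADDR <= addr < BASE_LIMIT;
    assigns \nothing;
*/
/* volatile dropped from the parameter itself; the pointee stays volatile */
static inline uint32_t mmio_read32(uintptr_t addr)
{
    volatile uint32_t *ptr = (volatile uint32_t *)addr;
    return *ptr;
}

/* strict bound: offset == 0x1000 would make addr equal to BASE_LIMIT */
/*@ requires 0 <= offset < 0x1000;
    assigns \nothing;
*/
static inline uint32_t read32(uintptr_t offset)
{
    return mmio_read32((uintptr_t)BASE_ADDR + offset);
}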
Regarding the initial question about physical addressing and volatile: in ACSL (see section 2.12.1 of the manual), there is a specific volatile clause that allows you to specify (C, usually ghost) functions to represent read and write accesses to volatile locations. Unfortunately, support for these clauses is currently only available through a non-publicly distributed plug-in.
I'm afraid that if you want to use WP on code with physical addressing, you do indeed need to use an encoding (e.g. with a ghost array of appropriate size), and/or model volatile accesses with appropriate functions.
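For instance, here is a minimal sketch of the "model volatile accesses with appropriate functions" idea; the helper mmio_read32_model and its contract are assumptions made for this example, not a standard Frama-C API:

/* Left undefined on purpose: its result is unconstrained, which mimics
   the fact that a volatile read may return anything. */
/*@ requires BASE_ADDR <= addr < BASE_LIMIT;
    assigns \nothing;
*/
uint32_t mmio_read32_model(uintptr_t addr);

static inline uint32_t mmio_read32(uintptr_t addr)
{
    /* stands in for *(volatile uint32_t *)addr during verification */
    return mmio_read32_model(addr);
}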
I'm using Frama-C version Silicon-20161101. Every time I reference a pointer value *x in an ensures clause, the preprocessor inserts *\old(x) unnecessarily. For example
// File swap.c:
/*@ requires \valid(a) && \valid(b);
  @ ensures A: *a == \old(*b) ;
  @ ensures B: *\at(\old(b),Post) == \old(*a) ;
  @ assigns *a,*b ;
  @*/
void swap(int *a,int *b)
{
    int tmp = *a ;
    *a = *b ;
    *b = tmp ;
    return ;
}
when processed with frama-c swap.c -print outputs
/* Generated by Frama-C */
/*@ requires \valid(a) ∧ \valid(b);
ensures A: *\old(a) ≡ \old(*b);
ensures B: *\old(b) ≡ \old(*a);
assigns *a, *b;
*/
void swap(int *a, int *b)
{
int tmp;
tmp = *a;
*a = *b;
*b = tmp;
return;
}
Interestingly enough, this is still verified as correct by the WP plugin! I assume this is because *\old(a) dereferences \old(a) in the post state (and \old(a) is still the same pointer, since it hasn't been changed)? Is this a bug? Is there any quick user-side fix for this?
There are two points in your question. First, *\old(a) should not be confused with \old(*a): in both cases, we evaluate the pointer a in the Old state, but in *\old(a), we dereference it in the current state, while in \old(*a), the dereference is also done in the Old state.
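As a small illustration (set_one is a function made up for this example):

/* P1 uses the pointer evaluated in the Old state but dereferences it in
   the Post state, so it holds; P2 talks about the value *a had before
   the call, so it does not hold in general. */
/*@ requires \valid(a);
    assigns *a;
    ensures P1: *\old(a) == 1;
    ensures P2: \old(*a) == 1;
*/
void set_one(int *a)
{
    *a = 1;
}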
Second, formal parameters are a bit special: as mentioned in the ACSL manual, in the post-state of the function they have the same value as in the pre-state, even if they have been modified by the function, as in the following:
/*@
    requires \valid(a + (0 .. 1));
    ensures *a == \old(*a) + 1;
    ensures a[1] == \old(a[1]) + 1;
*/
void f(int *a) {
    *a = *a + 1;
    a++;
    *a = *a + 1;
}
The rationale here is that, since C has call-by-value semantics, the outside world cannot see any internal modification made to the formal (which acts as a local variable for this purpose). In other words, as far as the contract is concerned, the value bound to a in the example above is that of the actual argument passed to it, no matter what happens in the implementation of the function. Note that in the case of a pointer, this of course applies only to the value of the pointer itself, not to the values stored in the memory block it points to.
In order to make this semantics explicit, the type-checker wraps all occurrences of a formal parameter in an ensures clause inside \old.
You may also want to see this answer to a related, albeit not completely similar question.
I've got the following struct:
struct Param
{
double** K_RP;
};
And I want to perform the following operations on K_RP in CUDA:
__global__ void Test(struct Param prop)
{
    int ix = threadIdx.x;
    int iy = threadIdx.y;
    prop.K_RP[ix][iy] = 2.0;
}
If "prop" has the following form, how should I do my "cudaMalloc" and "cudaMemcpy" operations?
int main()
{
    Param prop;
    Param cuda_prop;
    prop.K_RP = alloc2D(Imax, Jmax);
    //cudaMalloc cuda_prop ?
    //cudaMemcpyH2D prop to cuda_prop ?
    Test<<<dim3(1,1), dim3(Imax,Jmax)>>>(cuda_prop);
    //cudaMemcpyD2H cuda_prop to prop ?
    return 0;
}
Questions like this get asked from time to time. If you search on the cuda tag, you'll find a variety of examples with answers. Here's one example.
In general, dynamically allocated data contained within structures or other objects requires special handling. This question/answer explains why and how to do it for the single pointer (*) case.
Handling double pointers (**) is difficult enough that most people would recommend "flattening" the storage so that it can be handled by reference with a single pointer (*). If you really want to see how the double pointer (**) method works, review this question/answer. It's not trivial.
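To make the flattening idea concrete, here is a minimal sketch (error checking omitted; the sizes, the row-major indexing convention, and the Jmax field are choices made for this example, not part of the original code):

#include <cstdlib>
#include <cuda_runtime.h>

struct Param
{
    double *K_RP;  // flattened storage: element (i,j) lives at K_RP[i * Jmax + j]
    int Jmax;
};

__global__ void Test(struct Param prop)
{
    int ix = threadIdx.x;
    int iy = threadIdx.y;
    prop.K_RP[ix * prop.Jmax + iy] = 2.0;
}

int main()
{
    const int Imax = 8, Jmax = 8;
    size_t bytes = Imax * Jmax * sizeof(double);

    double *h_K_RP = (double *)calloc(Imax * Jmax, sizeof(double)); // flat host copy

    struct Param prop;
    prop.Jmax = Jmax;
    cudaMalloc((void **)&prop.K_RP, bytes);   // one allocation for the whole 2D array
    cudaMemcpy(prop.K_RP, h_K_RP, bytes, cudaMemcpyHostToDevice);

    Test<<<dim3(1, 1), dim3(Imax, Jmax)>>>(prop);  // struct passed by value

    cudaMemcpy(h_K_RP, prop.K_RP, bytes, cudaMemcpyDeviceToHost);
    cudaFree(prop.K_RP);
    free(h_K_RP);
    return 0;
}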
I need to write an ACSL specification for the dependencies of a function that takes a pointer as input, and uses its content when the pointer is not NULL.
I think that this specification is correct:
/*@ behavior p_null:
      assumes p == \null;
      assigns \result \from \nothing;
    behavior p_not_null:
      assumes p != \null;
      assigns \result \from *p;
*/
int f (int * p);
but I would rather avoid the behaviors, and I don't know whether this is correct (and equivalent):
//@ assigns \result \from p, *p;
int f (int * p);
Can I use *p in the right-hand part of the \from even if p might be NULL?
Yes, you can. The right part of the \from clause indicates which locations might contribute to the value of \result, but f does not need to read *p to compute \result (and it had better avoid doing so when p is NULL, of course).
As a side note, the question could also have been raised for the assigns clause of the p_not_null behavior, as p != \null does not imply \valid(p) in C.
Nevertheless, the contract without behaviors is slightly less precise than the one with behaviors, which states that when p is NULL a constant value is returned, and that when p is not NULL only the content of *p is used (the assigns clause without behaviors allows returning e.g. *p + (int)p, or even (int)p).
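If you do want to keep the behaviors and make the validity requirement explicit, one possible sketch (the requires clause is an addition of this example, not part of the original contract):

/*@ behavior p_null:
      assumes p == \null;
      assigns \result \from \nothing;
    behavior p_not_null:
      assumes p != \null;
      requires \valid_read(p);
      assigns \result \from *p;
*/
int f (int * p);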
Do recursive lambda functions induce any overhead compared to regular recursive functions (since we have to store them in a std::function so that they can call themselves)?
What is the difference between this function and a similar one using only regular functions ?
#include <functional>
#include <iostream>

int main(int argc, const char *argv[])
{
    std::function<void (int)> helloworld = [&helloworld](int count) {
        std::cout << "Hello world" << std::endl;
        if (count > 1) helloworld(--count);
    };
    helloworld(2);
    return 0;
}
There is overhead when using a lambda recursively by storing it in a std::function, even though lambdas are themselves basically functors. It seems that GCC is not able to optimize this well, as can be seen in a direct comparison.
Implementing the behaviour of the lambda by hand, i.e. creating a functor, enables GCC to optimize again. Your specific example of a lambda could be implemented as
struct HelloWorldFunctor
{
void operator()(int count) const
{
std::cout << "Hello world" << std::endl;
if ( count > 1 )
{
this->operator()(count - 1);
}
}
};
int main()
{
HelloWorldFunctor functor;
functor(2);
}
For the example at hand, the functor I've created can be seen in this second demo.
Even if one introduces calls to impure functions such as std::rand, the performance without a recursive lambda or with a custom functor is still better. Here's a third demo.
Conclusion: with the use of a std::function there is overhead, although it might be negligible depending on the use case. Since this usage prevents some compiler optimizations, it shouldn't be used extensively.
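As an aside (a sketch that is not part of the original answers), one way to keep the recursion inside a lambda without paying for std::function's type-erased dispatch is to pass the lambda to itself, which works with C++14 generic lambdas:

#include <iostream>

int main()
{
    // The lambda receives itself as a parameter, so no std::function
    // (and no virtual call) is involved in the recursion.
    auto helloworld = [](auto&& self, int count) -> void {
        std::cout << "Hello world" << std::endl;
        if (count > 1) self(self, count - 1);
    };
    helloworld(helloworld, 2);
    return 0;
}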
So std::function is implemented polymorphically (via type erasure), meaning your code is roughly equivalent to:
struct B
{
    // 'do' is a C++ keyword, so the method is named 'call' here
    virtual void call(int count) = 0;
};

struct D : B
{
    B* b;
    void call(int count) override
    {
        if (count > 1)
            b->call(count - 1);
    }
};

int main()
{
    D d;
    d.b = &d;
    d.call(10);
}
It is rare to have a tight enough recursion such that a virtual method lookup is a significant overhead, but depending on your application area it is certainly possible.
Lambdas in C++ are equivalent to functors, so calling one is the same as calling the operator() of some class created automatically by the compiler. When you capture the environment, what happens behind the scenes is that the captured variables are passed to the constructor of that class and stored as member variables.
So in short, the performance difference should be very close to zero.
There is some further explanation here; jump to the section "How are Lambda Closures Implemented?":
http://www.cprogramming.com/c++11/c++11-lambda-closures.html
EDIT:
After some research, and thanks to Stefan's answer and code, it turned out that there is overhead for recursive lambdas because of the std::function idiom: since the lambda has to be wrapped in a std::function in order to call itself, every recursive call goes through a virtual call, which adds overhead.
The subject is treated on the comments of this answer:
https://stackoverflow.com/a/14591727/1112142