Part of my input is:
tuple copie_macchine {
int macchina;
int copia1;
int copia2;
int copia3;
}
{copie_macchine} copie = ...;
int macc [I1][J] = ...;
{int} s = {1,2,4,5,7,11,12,13,14,15,16,17,18,19};
macc = [[1, 10],
[1, 1],
[3, 3],
[0, 4]];
copie = {<3,3,4,0>,
<6,7,8,9>,
<8,11,12,0>,
<9,13,14,0>,
<10,15,16,0>,
<20,26,27,28>};
dvar boolean y[I][J][M];
and I write this code in CPLEX: the algorithm assigns 1 to the variable y if the value macc[i][j] is in the set s; otherwise it must choose to assign the value 1 to either copy 1, copy 2, or copy 3:
forall (j in J)
forall (i in I1 : macc [i][j] in s)
forall (i1 in I : i1==i)
forall (m in M)
V22: y[i1][j][m] == 1;
forall (j in J)
forall (c in copie : c.copia3!=0)
forall (i in I1 : macc[i][j] == c.macchina)
forall (i1 in I : i1==i)
forall (m in M)
V23: y[i1][j][c.copia1] == 1 || y[i1][j][c.copia2] == 1 || y[i1][j][c.copia3] == 1;
but I get the error "V23 has never been used". How can I solve it?
Your line that contains V23 is never used: with your data, the filters in your forall statements apparently never select any index, so no constraint labeled V23 is ever generated, and that is what this warning means.
Let me share a smaller example which replicates this warning:
dvar int+ x;
subject to
{
x<=10;
forall(i in 1..2:i>=5)
{
ct:(x==2) ;
}
}
gives
ct has never been used
Related
I am working on the following proof and the invariant result_val is proved with an induction strategy on i using begin as the base case.
The sup case is trying to prove true which holds trivially using Frama-C 24.0. But when I switch to 25.0, it tries to prove a seemingly more complicated condition, which looks closer to a correct inductive inference because it did the weakest precondition computation explicitly.
However, all SMT solvers I tried cannot prove the condition generated by Frama-C 25.0.
I am a bit worried about the correctness of version 24.0's result, because using true as the inductive proof goal seems unlikely to be right. Can anyone give me a hint about what happened? Is this a bug in 24.0, or just some difference in the implementation?
#include <stdbool.h>
#define SIZE 1000
bool data[SIZE] ;
/*@
logic integer count(integer begin, integer end)=
begin >= end ? 0 : (data[begin]==true) ? count(begin+1, end)+1 : count(begin+1, end);
*/
/*@
requires SIZE > begin >= 0;
requires SIZE >= end >= 0;
requires begin <= end;
assigns \nothing;
ensures \result == count(begin, end);
*/
unsigned int occurrences_of(int begin, int end)
{
unsigned int result = 0;
/*@
loop invariant i_bound: begin <= i <= end;
loop invariant result_bound: 0 <= result <= i-begin;
loop invariant result_val: result == count(begin, i);
loop assigns i, result;
loop variant end-i;
*/
for (unsigned int i = begin; i < end; ++i){
result += (data[i] == true) ? 1 : 0;
}
return result;
}
Below is the result from Frama-C 24.0:
Proof:
Goal Invariant 'result_val' (preserved) (Induction: proved)
+ Goal Induction (Base) (proved)
+ Goal Induction (Induction (sup)) (proved)
+ Goal Induction (Induction (inf)) (proved)
Qed.
--------------------------------------------------------------------------------
Goal Induction (Induction (sup)):
Prove: true.
Below is the result from Frama-C 25.0:
--------------------------------------------------------------------------------
Proof:
Goal Invariant 'result_val' (preserved) (Induction: pending)
+ Goal Induction (Base) (proved)
+ Goal Induction (Induction (sup)) (pending)
+ Goal Induction (Induction (inf)) (proved)
End.
--------------------------------------------------------------------------------
Goal Induction (Induction (sup)):
Let x_0 = to_uint32(end#L1).
Let x_1 = to_uint32(tmp#L12).
Let x_2 = data#L1[i#L6].
Let x_3 = result#L6.
Let x_4 = result#L13.
Let x_5 = to_uint32(1 + i#L6).
Assume {
Have: begin#L1 < i#L6.
Have: i#L6 <= end#L1.
Have: i#L6 < x_0.
Have: 0 <= x_3.
Have: x_5 <= end#L1.
Have: begin#L1 <= x_5.
Have: (begin#L1 + x_3) <= i#L6.
Have: (begin#L1 + x_4) <= x_5.
Have: is_uint32(i#L6).
Have: is_bool(x_2).
Have: is_uint32(x_3).
Have: if (x_2 = 1) then (tmp#L12 = 1) else (tmp#L12 = 0).
Have: forall i_0 : Z. let x_6 = L_count(data#L1, begin#L1, i_0) in
let x_7 = to_uint32(1 + i_0) in let x_8 = to_uint32(x_1 + x_6) in
let x_9 = data#L1[i_0] in ((i_0 <= end#L1) -> ((begin#L1 <= i_0) ->
((i_0 < i#L6) -> ((i_0 < x_0) -> ((0 <= x_6) -> ((x_7 <= end#L1) ->
((begin#L1 <= x_7) -> (((begin#L1 + x_6) <= i_0) ->
(((begin#L1 + x_8) <= x_7) -> (is_uint32(i_0) -> (is_bool(x_9) ->
(is_uint32(x_6) ->
((if (x_9 = 1) then (tmp#L12 = 1) else (tmp#L12 = 0)) ->
(L_count(data#L1, begin#L1, x_7) = x_8)))))))))))))).
[...]
Stmt { L6: }
Stmt { tmp = tmp_0; }
Stmt { L12: result = x_4; }
Stmt { L13: }
}
Prove: L_count(data#L1, begin#L1, x_5) = x_4.
Goal id: typed_occurrences_of_loop_invariant_result_val_preserved
Short id: occurrences_of_loop_invariant_result_val_preserved
--------------------------------------------------------------------------------
Prover Alt-Ergo 2.4.2: Timeout (Qed:52ms) (10s).
A bug in the typing of the induction tactic was indeed fixed between Frama-C 24 and 25 (https://git.frama-c.com/pub/frama-c/-/commit/6058453cce2715f7dcf9027767559f95fb3b1679), and the symptom was indeed that the tactic could generate ill-typed formulas, with true in place of a term.
Proving this example is not that easy, for two main reasons:
the function and the definition work in the opposite directions,
the definition does not have an optimal expression for reasoning.
However, one can write a lemma function to solve the problem:
#include <stdbool.h>
#define SIZE 1000
bool data[SIZE] ;
/*@
logic integer count(integer begin, integer end)=
begin >= end ? 0 : ((data[begin]==true) ? count(begin+1, end)+1 : count(begin+1, end));
*/
/*@ ghost
/@ requires begin < end ;
assigns \nothing ;
ensures count(begin, end) == ((data[end-1]==true) ? count(begin, end-1)+1 : count(begin, end-1));
@/
void lemma(bool* d, int begin, int end){
/@ loop invariant begin <= i < end ;
loop invariant count(i, end) == ((data[end-1]==true) ? count(i, end-1)+1 : count(i, end-1));
loop assigns i ;
loop variant i - begin ;
@/
for(int i = end-1 ; i > begin ; i--);
}
*/
/*@
requires SIZE > begin >= 0;
requires SIZE >= end >= 0;
requires begin <= end;
assigns \nothing;
ensures \result == count(begin, end);
*/
unsigned int occurrences_of(int begin, int end)
{
unsigned int result = 0;
/*@
loop invariant i_bound: begin <= i <= end;
loop invariant result_bound: 0 <= result <= i-begin;
loop invariant result_val: result == count(begin, i);
loop assigns i, result;
loop variant end-i;
*/
for (unsigned int i = begin; i < end; ++i){
result += (data[i] == true) ? 1 : 0;
//@ ghost lemma(data, begin, i+1);
}
return result;
}
I'd suggest using the following definition:
/*#
logic integer count(integer begin, integer end)=
begin >= end ? 0 : ((data[end-1]==true) ? 1 : 0) + count(begin, end-1);
*/
It works in the same direction as the function and avoids the duplication of the term count(begin, end-1), which makes reasoning easier.
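With that definition, for begin <= i one unfolding gives count(begin, i+1) == ((data[i]==true) ? 1 : 0) + count(begin, i), which mirrors exactly what the loop body adds to result; that is the sense in which it works in the same direction as the code.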
I'm having trouble showing that recursive functions on a tree class in Dafny decrease and hence terminate. I have the following definitions, which verify.
class RoseTree {
var NodeType: int
var id: string
var children: array<RoseTree>
ghost var nodeSet: set<RoseTree>
constructor(nt: int, id: string, children: array<RoseTree>)
ensures forall x :: 0 <= x < children.Length ==> children[x].nodeSet <= this.nodeSet
ensures forall x :: 0 <= x < this.children.Length ==> this.children[x].nodeSet <= this.nodeSet
{
this.NodeType := nt;
this.id := id;
this.children := children;
if children.Length == 0 {
this.nodeSet := {this};
}else{
this.nodeSet := {this}+childrenNodeSet(children);
}
}
}
function setRosePick(s: set<set<RoseTree>>): set<RoseTree>
requires s != {}
{
var x :| x in s; x
}
function setUnion(setosets: set<set<RoseTree>>) : set<RoseTree>
decreases setosets
{
if setosets == {} then {} else
var x := setRosePick(setosets);
assert x <= x + setUnion(setosets-{x});
x + setUnion(setosets-{x})
}
lemma setUnionDef(s: set<set<RoseTree>>, y: set<RoseTree>)
requires y in s
ensures setUnion(s) == y + setUnion(s - {y})
{
var x := setRosePick(s);
if y == x {
}else{
calc {
setUnion(s);
==
x + setUnion(s - {x});
== {setUnionDef(s - {x}, y); }
x + y + setUnion(s - {x} - {y});
== { assert s - {x} - {y} == s - {y} - {x}; }
y + x + setUnion(s - {y} - {x});
== {setUnionDef(s - {y}, x); }
y + setUnion(s - {y});
}
}
}
lemma setUnionReturns(s: set<set<RoseTree>>)
ensures s == {} ==> setUnion(s) == {}
ensures s != {} ==> forall x :: x in s ==> x <= setUnion(s)
{
if s == {} {
assert setUnion(s) == {};
} else {
forall x | x in s
ensures x <= setUnion(s)
{
setUnionDef(s, x);
assert x <= x + setUnion(s-{x});
}
}
}
function childNodeSets(children: array<RoseTree>): set<set<RoseTree>>
reads children
reads set x | 0 <= x < children.Length :: children[x]
{
set x | 0 <= x < children.Length :: children[x].nodeSet
}
function childNodeSetsPartial(children: array<RoseTree>, index: int): set<set<RoseTree>>
requires 0 <= index < children.Length
reads children
reads set x | index <= x < children.Length :: children[x]
{
set x | index <= x < children.Length :: children[x].nodeSet
}
function childrenNodeSet(children: array<RoseTree>): set<RoseTree>
reads children
reads set x | 0 <= x < children.Length :: children[x]
ensures forall x :: x in childNodeSets(children) ==> x <= childrenNodeSet(children)
ensures forall i :: 0 <= i < children.Length ==> children[i].nodeSet <= childrenNodeSet(children)
{
var y := childNodeSets(children);
setUnionReturns(y);
setUnion(y)
}
In particular I'm trying to define the height function for the tree.
function height(node: RoseTree):nat
reads node
reads node.children
reads set x | 0 <= x < node.children.Length :: node.children[x]
decreases node.nodeSet
{
if node.children.Length == 0 then 1 else 1 + maxChildHeight(node, node.children,node.children.Length-1,0)
}
function maxChildHeight(node: RoseTree, children: array<RoseTree>, index: nat, best: nat) : nat
reads node
reads node.children
reads set x | 0 <= x < node.children.Length :: node.children[x]
requires children == node.children
requires 0 <= index < children.Length
ensures forall x :: 0 <= x <= index < children.Length ==> maxChildHeight(node, children, index, best) >= height(children[x])
decreases node.nodeSet - setUnion(childNodeSetsPartial(children, index)), 1
{
if index == 0 then best else if height(children[index]) >= best then maxChildHeight(node, children, index-1, height(children[index])) else maxChildHeight(node, children, index-1, best)
}
I thought it should be possible to show that the nodeSet of a node is a subset of its parent's nodeSet, or that the union of the child node sets is a subset of the parent's node set, and thus that both functions terminate. My decreases expressions don't prove it to Dafny, and I'm not quite sure how to proceed. Is there another way to prove termination, or can I fix these decreases clauses?
Also, do all instances of a class have the constructor ensure statements applied implicitly or only if explicitly constructed using the constructor?
Edit: updated definitions of childNodeSetsPartial and maxChildHeight
to recurse downward. It still doesn't verify.
Defining mutable linked heap-allocated data structures in Dafny is not very common except as an exercise. So you should consider whether a datatype would serve you better, as in
datatype RoseTree = Node(children: seq<RoseTree>)
function height(r: RoseTree): int
{
if r.children == [] then
1
else
var c := set i | 0 <= i < |r.children| :: height(r.children[i]);
assert height(r.children[0]) in c;
assert c != {};
SetMax(c) + 1
}
If you insist on mutable linked heap-allocated data structures, then there is a standard idiom for doing that. Please read sections 0 and 1 of these lecture notes and check out the modern version of the example code here.
Applying this idiom to your code, we get the following.
class RoseTree {
var NodeType: int
var id: string
var children: array<RoseTree>
ghost var repr: set<object>
predicate Valid()
reads this, repr
decreases repr
{
&& this in repr
&& children in repr
&& (forall i | 0 <= i < children.Length ::
children[i] in repr
&& children[i].repr <= repr
&& this !in children[i].repr
&& children[i].Valid())
}
constructor(nt: int, id: string, children: array<RoseTree>)
requires forall i | 0 <= i < children.Length :: children[i].Valid()
ensures Valid()
{
this.NodeType := nt;
this.id := id;
this.children := children;
this.repr := {this, children} +
(set i | 0 <= i < children.Length :: children[i]) +
(set x, i | 0 <= i < children.Length && x in children[i].repr :: x);
}
}
function SetMax(s: set<int>): int
requires s != {}
ensures forall x | x in s :: SetMax(s) >= x
{
var x :| x in s;
if s == {x} then
x
else
var y := SetMax(s - {x});
assert forall z | z in s :: z == x || (z in (s - {x}) && y >= z);
if x > y then x else y
}
function height(node: RoseTree): nat
requires node.Valid()
reads node.repr
{
if node.children.Length == 0 then
1
else
var c := set i | 0 <= i < node.children.Length :: height(node.children[i]);
assert height(node.children[0]) in c;
assert c != {};
SetMax(c) + 1
}
do all instances of a class have the constructor ensure statements applied implicitly or only if explicitly constructed using the constructor?
I'm not sure if I understand this question. I think the answer is "no", though, since a class might have multiple constructors with different postconditions.
I would like to generate, in an efficient way, a list of integers (preferably ordered)
with the following defining properties:
All integers have the same number of set bits N.
All integers have the same sum of bit indices K.
To be definite, for an integer I
its binary representation is:
$I=\sum_{j=0}^M c_j 2^j$ where $c_j=0$ or $1$
The number of set bits is:
$N(I)=\sum_{j=0}^M c_j$
The sum of bit indices is:
$K(I)=\sum_{j=0}^M j c_j$
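For example, with $N=2$ and $K=3$ the only qualifying integers are $2^1+2^2=6$ and $2^0+2^3=9$.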
I have an inefficient way to generate the list as follows: make a do/for loop over integers, incrementing by use of a "snoob" function (smallest next integer with the same number of set bits), and at each increment check whether it has the correct value of K.
This is grossly inefficient because, in general, starting from an integer with the correct N and K values, the snoob successor of I does not have the correct K, and one has to make many snoob calculations to get the next integer with both N and K equal to the chosen values.
Using snoob gives an ordered list which is handy for dichotomic search but
not absolutely compulsory.
Counting the number of elements in this list is easily done by recursion when viewed as a partition-number counting problem. Here is a recursive function in Fortran 90 doing that job:
=======================================================================
recursive function BoundedPartitionNumberQ(N, M, D) result (res)
implicit none
! number of partitions of N into M distinct integers, bounded by D
! appropriate for Fermi counting rules
integer(8) :: N, M, D, Nmin
integer(8) :: res
Nmin = M*(M+1)/2 ! the Fermi sea
if(N < Nmin) then
res = 0
else if((N == Nmin) .and. (D >= M)) then
res = 1
else if(D < M) then
res = 0
else if(D == M) then
if(N == Nmin) then
res = 1
else
res = 0
endif
else if(M == 0) then
res = 0
else
res = BoundedPartitionNumberQ(N-M,M-1,D-1)+BoundedPartitionNumberQ(N-M,M,D-1)
endif
end function BoundedPartitionNumberQ
========================================================================================
My present solution is inefficient when I want to generate lists with several $10^7$ elements. Ultimately I want to stay within the realm of C/C++/Fortran and reach lists of lengths up to a few $10^9$.
My present F90 code is the following:
program test
implicit none
integer(8) :: Nparticles
integer(8) :: Nmax, TmpL, CheckL, Nphi
integer(8) :: i, k, counter
integer(8) :: NextOne
Nphi = 31 ! word size is Nphi+1
Nparticles = 16 ! number of bit set
print*,Nparticles,Nphi
Nmax = ishft(1_8, Nphi + 1) - ishft(1_8, Nphi + 1 - Nparticles)
i = ishft(1, Nparticles) - 1
counter = 0
! integer CheckL is the sum of bit indices
CheckL = Nparticles*Nphi/2 ! the value of the sum giving the largest list
do while(i .le. Nmax) ! we increment the integer
TmpL = 0
do k=0,Nphi
if (btest(i,k)) TmpL = TmpL + k
end do
if (TmpL == CheckL) then ! we check whether the sum of bit indices is OK
counter = counter + 1
end if
i = NextOne(i) ! a version of "snoob" described below
end do
print*,counter
end program
!==========================================================================
function NextOne (state)
implicit none
integer(8) :: bit
integer(8) :: counter
integer(8) :: NextOne,state,pstate
bit = 1
counter = -1
! find first one bit
do while (iand(bit,state) == 0)
bit = ishft(bit,1)
end do
! find next zero bit
do while (iand(bit,state) /= 0)
counter = counter + 1
bit = ishft(bit,1)
end do
if (bit == 0) then
print*,'overflow in NextOne'
NextOne = not(0)
else
state = iand(state,not(bit-1)) ! clear lower bits i &= (~(bit-1));
pstate = ishft(1_8,counter)-1 ! needed by IBM/Zahir compiler
! state = ior(state,ior(bit,ishft(1,counter)-1)) ! short version OK with gcc
state = ior(state,ior(bit,pstate))
NextOne = state
end if
end function NextOne
Since you mentioned C/C++/Fortran, I've tried to keep this relatively language-agnostic and easily transferable, but I have also included faster builtin alternatives where applicable.
All integers have the same number of set bits N
Then we can also say, all valid integers will be permutations of N set bits.
First, we must generate the initial/min permutation:
uint32_t firstPermutation(uint32_t n){
// Fill the first n bits (on the right)
return (1 << n) -1;
}
Next, we must set the final/max permutation - indicating the 'stop point':
uint32_t lastPermutation(uint32_t n){
// Fill the last n bits (on the left)
return (0xFFFFFFFF >> n) ^ 0xFFFFFFFF;
}
Finally, we need a way to get the next permutation.
uint32_t nextPermutation(uint32_t n){
uint32_t t = (n | (n - 1)) + 1;
return t | ((((t & -t) / (n & -n)) >> 1) - 1);
}
// or with builtins:
uint32_t nextPermutation(uint32_t &p){
uint32_t t = (p | (p - 1));
return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(p) + 1));
}
All integers have the same sum of bit indices K
Assuming these are integers (32bit), you can use this DeBruijn sequence to quickly identify the index of the first set bit - fsb.
Similar sequences exist for other types/bitcounts, for example this one could be adapted for use.
By stripping the current fsb, we can apply the aforementioned technique to identify the index of the next fsb, and so on.
int sumIndices(uint32_t n){
const int MultiplyDeBruijnBitPosition[32] = {
0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
};
int sum = 0;
// Get fsb idx
do sum += MultiplyDeBruijnBitPosition[((uint32_t)((n & -n) * 0x077CB531U)) >> 27];
// strip fsb
while (n &= n-1);
return sum;
}
// or with builtin
int sumIndices(uint32_t n){
int sum = 0;
do sum += __builtin_ctz(n);
while (n &= n-1);
return sum;
}
Finally, we can iterate over each permutation, checking if the sum of all indices matches the specified K value.
uint32_t p = firstPermutation(n);
uint32_t lp = lastPermutation(n);
do {
    // check every permutation from the first to the last, inclusive
    if (sumIndices(p) == k){
        std::cout << "p:" << p << std::endl;
    }
    if (p == lp) break;
    p = nextPermutation(p);
} while (true);
You could easily change the 'handler' code to do something similar starting at a given integer, using its N & K values, as sketched below.
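For illustration, here is a minimal sketch of that idea, reusing the functions defined above; the helper name startFrom and the use of __builtin_popcount to recover N are my own additions:
void startFrom(uint32_t start){
    int n = __builtin_popcount(start); // N of the given integer
    int k = sumIndices(start);         // K of the given integer
    uint32_t lp = lastPermutation(n);
    // Walk the permutations from 'start' up to the last one, inclusive
    for (uint32_t p = start; ; p = nextPermutation(p)){
        if (sumIndices(p) == k){
            std::cout << "p:" << p << std::endl;
        }
        if (p == lp) break;
    }
}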
A basic recursive implementation could be:
void listIntegersWithWeight(int currentBitCount, int currentWeight, uint32_t pattern, int index, int n, int k, std::vector<uint32_t> &res)
{
if (currentBitCount > n ||
currentWeight > k)
return;
if (index < 0)
{
if (currentBitCount == n && currentWeight == k)
res.push_back(pattern);
}
else
{
listIntegersWithWeight(currentBitCount, currentWeight, pattern, index - 1, n, k, res);
listIntegersWithWeight(currentBitCount + 1, currentWeight + index, pattern | (1u << index), index - 1, n, k, res);
}
}
That is not my suggestion, just the starting point. On my PC, for n = 16, k = 248, both this version and the iterative version take almost (but not quite) 9 seconds. Almost exactly the same amount of time, but that's just a coincidence. More pruning can be done:
currentBitCount + index + 1 < n: if the number of set bits cannot reach n with the number of unfilled positions that are left, continuing is pointless.
currentWeight + (index * (index + 1) / 2) < k: if the sum of positions cannot reach k, continuing is pointless.
Together:
void listIntegersWithWeight(int currentBitCount, int currentWeight, uint32_t pattern, int index, int n, int k, std::vector<uint32_t> &res)
{
if (currentBitCount > n ||
currentWeight > k ||
currentBitCount + index + 1 < n ||
currentWeight + (index * (index + 1) / 2) < k)
return;
if (index < 0)
{
if (currentBitCount == n && currentWeight == k)
res.push_back(pattern);
}
else
{
listIntegersWithWeight(currentBitCount, currentWeight, pattern, index - 1, n, k, res);
listIntegersWithWeight(currentBitCount + 1, currentWeight + index, pattern | (1u << index), index - 1, n, k, res);
}
}
On my PC with the same parameters, this only takes half a second. It can probably be improved further.
I've been looking at the use of lemmas in Dafny but am finding them hard to understand, and obviously the example below doesn't verify, quite possibly because Dafny doesn't see the induction, or needs something like a lemma to prove some property of count. Basically, I don't know how or what I need to define to help convince Dafny that counting is inductive. Some of the ensures and invariant specifications are not necessary, but that's not the point. By the way, this was easier in Spec#.
function count(items: seq<int>, item: int): nat
decreases |items|
{
if |items| == 0 then 0 else
(if items[|items| - 1] == item then 1 else 0)
+ count( items[..(|items| - 1)], item )
}
method occurences(items: array<int>, item: int) returns (r: nat)
requires items != null
ensures r <= items.Length
// some number of occurences of item
ensures r > 0 ==> exists k: nat :: k < items.Length
&& items[k] == item
// no occurences of item
ensures r == 0 ==> forall k: nat :: k < items.Length
==> items[k] != item
ensures r == count( items[..], item )
{
var i: nat := 0;
var num: nat := 0;
while i < items.Length
// i is increasing and there could be elements that match
invariant num <= i <= items.Length
invariant num > 0 ==> exists k: nat :: k < i
&& items[k] == item
invariant num == 0 ==> forall k: nat :: k < i
==> items[k] != item
invariant num == old(num) + 1 || num == old(num)
invariant num == count( items[..i], item )
{
if items[i] == item
{ num := num + 1; }
i := i + 1;
}
return num;
}
I would use a definition of count based around a multiset, then everything works:
function count(items: seq<int>, item: int): nat
decreases |items|
{
multiset(items)[item]
}
method occurences(items: array<int>, item: int) returns (r: nat)
requires items != null
ensures r <= items.Length
// some number of occurences of item
ensures r > 0 ==> exists k: nat :: k < items.Length
&& items[k] == item
// no occurences of item
ensures r == 0 ==> forall k: nat :: k < items.Length
==> items[k] != item
ensures r == count(items[..], item)
{
var i: nat := 0;
var num: nat := 0;
while i < items.Length
// i is increasing and there could be elements that match
invariant num <= i <= items.Length
invariant num > 0 ==> exists k: nat :: k < i
&& items[k] == item
invariant num == 0 ==> forall k: nat :: k < i
==> items[k] != item
invariant num == old(num) + 1 || num == old(num)
invariant num == count(items[..i], item)
{
if items[i] == item
{ num := num + 1; }
i := i + 1;
}
assert items[..i] == items[..];
r := num;
}
I would also like to suggest two alternative approaches, and another solution to your original design.
Without changing the implementation, I personally would probably write the specification like this:
function count(items: seq<int>, item: int): nat
decreases |items|
{
multiset(items)[item]
}
method occurences(items: array<int>, item: int) returns (num: nat)
requires items != null
ensures num <= items.Length
ensures num == 0 <==> item !in items[..]
ensures num == count(items[..], item)
{
num := 0;
var i: nat := 0;
while i < items.Length
invariant num <= i <= items.Length
invariant num == 0 <==> item !in items[..i]
invariant num == count(items[..i], item)
{
if items[i] == item
{ num := num + 1; }
i := i + 1;
}
assert items[..i] == items[..];
}
If I were to decide on the implementation too then I would write it like this:
method occurences(items: array<int>, item: int) returns (num: nat)
requires items != null
ensures num == multiset(items[..])[item]
{
num := multiset(items[..])[item];
}
There is a way to get the original to verify by adding an extra assertion. NB. I think that "old" doesn't do what you think it does in a loop invariant.
function count(items: seq<int>, item: int): nat
decreases |items|
{
if |items| == 0 then 0 else
(if items[|items|-1] == item then 1 else 0)
+ count(items[..|items|-1], item )
}
method occurences(items: array<int>, item: int) returns (r: nat)
requires items != null
ensures r <= items.Length
// some number of occurences of item
ensures r > 0 ==> exists k: nat :: k < items.Length
&& items[k] == item
// no occurences of item
ensures r == 0 ==> forall k: nat :: k < items.Length
==> items[k] != item
ensures r == count( items[..], item )
{
var i: nat := 0;
var num:nat := 0;
while i < items.Length
invariant num <= i <= items.Length
invariant num > 0 ==> exists k: nat :: k < i
&& items[k] == item
invariant num == 0 ==> forall k: nat :: k < i
==> items[k] != item
invariant num == count(items[..i], item)
{
assert items[..i+1] == items[..i] + [items[i]];
if items[i] == item
{ num := num + 1; }
i := i + 1;
}
assert items[..i] == items[..];
r := num;
}
As we all know, the simplest algorithm to generate Fibonacci sequence is as follows:
f(n):
if (n <= 0) return 0;
else if (n == 1) return 1;
else return f(n-1) + f(n-2);
But this algorithm has some repetitive calculation. For example, if you calculate f(5), it will calculate f(4) and f(3). When you calculate f(4), it will again calculate both f(3) and f(2). Could someone give me a more time-efficient recursive algorithm?
I have read about some of the methods for calculating Fibonacci numbers with efficient time complexity; the following are some of them.
Method 1 - Dynamic Programming
The substructure here is commonly known, so I'll jump straight to the solution:
static int fib(int n)
{
int f[] = new int[n+2]; // 1 extra to handle case, n = 0
int i;
f[0] = 0;
f[1] = 1;
for (i = 2; i <= n; i++)
{
f[i] = f[i-1] + f[i-2];
}
return f[n];
}
A space-optimized version of the above can be done as follows:
static int fib(int n)
{
int a = 0, b = 1, c;
if (n == 0)
return a;
for (int i = 2; i <= n; i++)
{
c = a + b;
a = b;
b = c;
}
return b;
}
Method 2 - Using power of the matrix {{1,1},{1,0}}
This is an O(n) approach which relies on the fact that if we multiply the matrix M = {{1,1},{1,0}} by itself n times (in other words, calculate power(M, n)), then we get the (n+1)th Fibonacci number as the element at row and column (0, 0) in the resulting matrix. This solution has O(n) time.
The matrix representation gives the following closed expression for the Fibonacci numbers:
$\begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}^n = \begin{pmatrix}F_{n+1} & F_n\\ F_n & F_{n-1}\end{pmatrix}$
static int fib(int n)
{
int F[][] = new int[][]{{1,1},{1,0}};
if (n == 0)
return 0;
power(F, n-1);
return F[0][0];
}
/*multiplies 2 matrices F and M of size 2*2, and
puts the multiplication result back to F[][] */
static void multiply(int F[][], int M[][])
{
int x = F[0][0]*M[0][0] + F[0][1]*M[1][0];
int y = F[0][0]*M[0][1] + F[0][1]*M[1][1];
int z = F[1][0]*M[0][0] + F[1][1]*M[1][0];
int w = F[1][0]*M[0][1] + F[1][1]*M[1][1];
F[0][0] = x;
F[0][1] = y;
F[1][0] = z;
F[1][1] = w;
}
/*function that calculates F[][] raise to the power n and puts the
result in F[][]*/
static void power(int F[][], int n)
{
int i;
int M[][] = new int[][]{{1,1},{1,0}};
// n - 1 times multiply the matrix to {{1,0},{0,1}}
for (i = 2; i <= n; i++)
multiply(F, M);
}
This can be optimized to work in O(log n) time complexity: we can do recursive multiplication to get power(M, n) in the previous method.
static int fib(int n)
{
int F[][] = new int[][]{{1,1},{1,0}};
if (n == 0)
return 0;
power(F, n-1);
return F[0][0];
}
static void multiply(int F[][], int M[][])
{
int x = F[0][0]*M[0][0] + F[0][1]*M[1][0];
int y = F[0][0]*M[0][1] + F[0][1]*M[1][1];
int z = F[1][0]*M[0][0] + F[1][1]*M[1][0];
int w = F[1][0]*M[0][1] + F[1][1]*M[1][1];
F[0][0] = x;
F[0][1] = y;
F[1][0] = z;
F[1][1] = w;
}
static void power(int F[][], int n)
{
if( n == 0 || n == 1)
return;
int M[][] = new int[][]{{1,1},{1,0}};
power(F, n/2);
multiply(F, F);
if (n%2 != 0)
multiply(F, M);
}
Method 3 (O(log n) Time)
Below is one more interesting recurrence formula that can be used to find the nth Fibonacci number in O(log n) time.
If n is even then k = n/2:
F(n) = [2*F(k-1) + F(k)]*F(k)
If n is odd then k = (n + 1)/2
F(n) = F(k)*F(k) + F(k-1)*F(k-1)
How does this formula work?
The formula can be derived from the matrix equation above:
$\begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}^n = \begin{pmatrix}F_{n+1} & F_n\\ F_n & F_{n-1}\end{pmatrix}$
Taking the determinant on both sides, we get
$(-1)^n = F_{n+1}F_{n-1} - F_n^2$
Moreover, since $A^n A^m = A^{n+m}$ for any square matrix $A$, the following identities can be derived (they are obtained from two different coefficients of the matrix product):
$F_m F_n + F_{m-1} F_{n-1} = F_{m+n-1}$
Putting $n = n+1$:
$F_m F_{n+1} + F_{m-1} F_n = F_{m+n}$
Putting $m = n$:
$F_{2n-1} = F_n^2 + F_{n-1}^2$
$F_{2n} = (F_{n-1} + F_{n+1})F_n = (2F_{n-1} + F_n)F_n$ (Source: Wiki)
To get the formula to be proved, we simply need to do the following
If n is even, we can put k = n/2
If n is odd, we can put k = (n+1)/2
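Spelling out the substitution: for even $n = 2k$ the last identity gives $F(n) = F_{2k} = (2F_{k-1} + F_k)F_k$, and for odd $n = 2k-1$ (so $k = (n+1)/2$) the identity $F_{2n-1} = F_n^2 + F_{n-1}^2$ gives $F(n) = F_k^2 + F_{k-1}^2$.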
// Memo table: assumed zero-initialized and large enough for the n being computed.
static int f[] = new int[1000];

public static int fib(int n)
{
if (n == 0)
return 0;
if (n == 1 || n == 2)
return (f[n] = 1);
// If fib(n) is already computed
if (f[n] != 0)
return f[n];
int k = (n & 1) == 1 ? (n + 1) / 2 : n / 2;
// Applying the above formula (n & 1 is 1 if n is odd, else 0)
f[n] = (n & 1) == 1 ? (fib(k) * fib(k) + fib(k - 1) * fib(k - 1))
: (2 * fib(k - 1) + fib(k)) * fib(k);
return f[n];
}
Method 4 - Using a formula
In this method, we directly implement the closed-form formula for the nth term of the Fibonacci series. Time O(1), space O(1).
$F_n = \dfrac{\left((\sqrt5 + 1)/2\right)^n}{\sqrt5}$, rounded to the nearest integer
static int fib(int n) {
double phi = (1 + Math.sqrt(5)) / 2;
return (int) Math.round(Math.pow(phi, n)
/ Math.sqrt(5));
}
Reference: http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/fibFormula.html
Look here for an implementation in Erlang which uses the matrix-power formula. It shows nice, effectively linear behaviour, because in the O(M(n) log n) bound the M(n) part (multiplication of the huge intermediate numbers) dominates. It calculates fib of one million in 2 s, where the result has 208988 digits. The trick is that you can compute the exponentiation in O(log n) multiplications using the following (tail-)recursive formula (tail means O(1) space when a proper compiler is used, or when rewritten as a loop):
% compute X^N
power(X, N) when is_integer(N), N >= 0 ->
power(N, X, 1).
power(0, _, Acc) ->
Acc;
power(N, X, Acc) ->
if N rem 2 =:= 1 ->
power(N - 1, X, Acc * X);
true ->
power(N div 2, X * X, Acc)
end.
where you substitute matrices for X and Acc. X will be initialized with the matrix {{1,1},{1,0}} and Acc with the identity matrix I = {{1,0},{0,1}}.
One simple way is to calculate it iteratively instead of recursively. This will calculate F(n) in linear time.
def fib(n):
    a, b = 0, 1
    for i in range(n):
        a, b = a+b, a
    return a
Hint: One way to achieve faster results is by using Binet's formula: $F_n = \dfrac{\varphi^n - \psi^n}{\sqrt5}$, with $\varphi = \frac{1+\sqrt5}{2} \approx 1.6180339$ and $\psi = \frac{1-\sqrt5}{2} \approx -0.6180339$.
Here is a way of doing it in Python:
from decimal import *

def fib(n):
    return int((Decimal(1.6180339)**Decimal(n) - Decimal(-0.6180339)**Decimal(n)) / Decimal(2.236067977))
You can save your results and reuse them:
public static long[] fibs;
public long fib(int n) {
fibs = new long[n];
return internalFib(n);
}
public long internalFib(int n) {
if (n<=2) return 1;
fibs[n-1] = fibs[n-1]==0 ? internalFib(n-1) : fibs[n-1];
fibs[n-2] = fibs[n-2]==0 ? internalFib(n-2) : fibs[n-2];
return fibs[n-1]+fibs[n-2];
}
F(n) = (φ^n)/√5, rounded to the nearest integer, where φ is the golden ratio.
φ^n can be calculated in O(lg n) time, hence F(n) can be calculated in O(lg n) time.
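A rough C++ sketch of that idea, using exponentiation by squaring (the function names are mine; note that double precision only keeps the rounded result exact up to roughly n = 70):
#include <cmath>
#include <cstdint>

// Exponentiation by squaring: computes base^exp with O(log exp) multiplications.
double fast_pow(double base, unsigned exp) {
    double result = 1.0;
    while (exp > 0) {
        if (exp & 1) result *= base; // fold in the current bit of the exponent
        base *= base;                // square for the next bit
        exp >>= 1;
    }
    return result;
}

// Rounded Binet formula built on fast_pow.
std::uint64_t fib(unsigned n) {
    const double phi = (1.0 + std::sqrt(5.0)) / 2.0;
    return static_cast<std::uint64_t>(std::llround(fast_pow(phi, n) / std::sqrt(5.0)));
}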
// D Programming Language
import std.stdio : write;

void vFibonacci ( const ulong X, const ulong Y, const int Limit ) {
// Equivalent to: if ( Limit != 10 ). The XOR form ( Limit ^ 0xA ) is arguably more efficient.
if ( Limit ^ 0xA ) {
write ( Y, " " ) ;
vFibonacci ( Y, Y + X, Limit + 1 ) ;
}
}
// Call As
// By Default the Limit is 10 Numbers
vFibonacci ( 0, 1, 0 ) ;
EDIT: I actually think Hynek Vychodil's answer is superior to mine, but I'm leaving this here just in case someone is looking for an alternate method.
I think the other methods are all valid, but not optimal. Using Binet's formula should give you the right answer in principle, but rounding to the closest integer will give problems for large values of n. The other solutions unnecessarily recalculate the values up to n every time you call the function, so the function is not optimized for repeated calling.
In my opinion the best thing to do is to define a global array and then to add new values to the array IF needed. In Python:
import numpy

fibo = numpy.array([1, 1])
last_index = fibo.size

def fib(n):
    global fibo, last_index
    if n > 0:
        if n > last_index:
            for i in range(last_index + 1, n + 1):
                fibo = numpy.concatenate((fibo, numpy.array([fibo[i-2] + fibo[i-3]])))
            last_index = fibo.size
        return fibo[n-1]
    else:
        print("fib called for index less than 1")
        quit()
Naturally, if you need to call fib for n>80 (approximately) then you will need to implement arbitrary precision integers, which is easy to do in python.
This will execute faster, in O(n):
def fibo(n):
    a, b = 0, 1
    for i in range(n):
        if i == 0:
            print(i)
        elif i == 1:
            print(i)
        else:
            temp = a
            a = b
            b += temp
            print(b)
n = int(input())
fibo(n)