Prolog recursive arithmetic - math

I am very new to Prolog and am having some issues understanding some basic arithmetic. I want to create a functor that will recursively multiply, i.e. 3*4 = 3+3+3+3 = 12.
I put it through SWIPL's trace command and it fails when decrementing Count.
Here is the code I have so far but it does not work.
multn(_,0,0).
multn(_, Count, Return) :-
    Count is Count-1,
    Return is 0,
    multn(_, Count, Return),
    Return is Return + _.
EDIT: made some new changes based on what you said about the functionality of "is".
multn(_, Count, Return) :-
    Count1 is (Count-1),
    multn(_, Count1, Return1),
    Return is (Return1 + _).
Now it is making it all the way down the recursion chain to the base case, and when it starts its way back up it fails trying to do Return is (Return1 + _). It seems to be changing the _ variable. Here is my trace:
[trace] ?- multn(3,2,X).
Call: (6) multn(3, 2, _G388) ? creep
^ Call: (7) _L142 is 2+ -1 ? creep
^ Exit: (7) 1 is 2+ -1 ? creep
Call: (7) multn(_L160, 1, _L143) ? creep
^ Call: (8) _L163 is 1+ -1 ? creep
^ Exit: (8) 0 is 1+ -1 ? creep
Call: (8) multn(_L181, 0, _L164) ? creep
Exit: (8) multn(_L181, 0, 0) ? creep
^ Call: (8) _L143 is 0+_G461 ? creep
ERROR: is/2: Arguments are not sufficiently instantiated
^ Exception: (8) _L143 is 0+_G461 ? creep
Exception: (7) multn(_L160, 1, _L143) ? creep
Exception: (6) multn(3, 2, _G388) ? creep
Last EDIT: Finally figured it out, using _ was causing the weird change in value. Thanks for your help.

It looks like you don't yet understand how Prolog works.
The key thing to understand is that both occurrences of Count in Count is Count-1 are the same variable, so they must have the same value. It's like variables in algebra: all the Xs in an equation mean the same value. So Count is Count-1 will always fail.
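For instance, with the query multn(3,2,X) the head of your second clause binds Count to 2, so the first goal becomes 2 is 2-1, i.e. an attempt to unify 2 with 1, and that is exactly where the original version fails.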
The same problem applies to the Return variable.
In Prolog you have to introduce new variables to do what you intended, like NewCount is Count-1.
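Putting that together, a minimal working sketch (a named variable X replaces the anonymous _, since each _ is a distinct fresh variable, and a Count > 0 guard keeps the recursive clause off the base case):
multn(_, 0, 0).
multn(X, Count, Return) :-
    Count > 0,
    Count1 is Count - 1,
    multn(X, Count1, Return1),
    Return is Return1 + X.
With this one would expect, for example:
?- multn(3, 4, R).
R = 12.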

How to make Pre and Post conditions for recursive functions in SPARK?

I'm translating an exercise I made in Dafny into SPARK, where one verifies a tail recursive function against a recursive one. The Dafny source (censored, because it might still be used for classes):
function Sum(n: nat): nat
  decreases n
{
  if n == 0 then n else n + Sum(n - 1)
}

method ComputeSum(n: nat) returns (s: nat)
  ensures s == Sum(n)
{
  s := 0;
  // ...censored...
}
What I got in SPARK so far:
function Sum (n : in Natural) return Natural
is
begin
   if n = 0 then
      return n;
   else
      return n + Sum(n - 1);
   end if;
end Sum;

function ComputeSum(n : in Natural) return Natural
with
   Post => ComputeSum'Result = Sum(n)
is
   s : Natural := 0;
begin
   -- ...censored...
   return s;
end ComputeSum;
I cannot seem to figure out how to express the decreases n condition (which now that I think about it might be a little odd... but I got graded for it a few years back so who am I to judge, and the question remains how to get it done). As a result I get warnings of possible overflow and/or infinite recursion.
I'm guessing there is a pre or post condition to be added. I tried Pre => n <= 1, which obviously does not overflow, but I still get the warning. Adding Post => Sum'Result <= n**n on top of that makes the warning go away, but that condition gets a "postcondition might fail" warning, which isn't right, but I guess the prover can't tell. It's also not really the expression I should check against, but I cannot seem to figure out what other Post I'm looking for. Possibly something very close to the recursive expression, but none of my attempts work. I must be missing out on some language construct...
So, how could I express the recursive constraints?
Edit 1:
Following links to this SO answer and this SPARK doc section, I tried this:
function Sum (n : in Natural) return Natural
is
  (if n = 0 then 0 else n + Sum(n - 1))
with
  Pre => (n in 0 .. 2),
  Contract_Cases => (n = 0 => Sum'Result = 0,
                     n >= 1 => Sum'Result = n + Sum(n - 1)),
  Subprogram_Variant => (Decreases => n);
However getting these warnings from SPARK:
spark.adb:32:30: medium: overflow check might fail [reason for check: result of addition must fit in a 32-bits machine integer][#0]
spark.adb:36:56: warning: call to "Sum" within its postcondition will lead to infinite recursion
If you want to prove that the result of some tail-recursive summation function equals the result of a given recursive summation function for some value N, then it should, in principle, suffice to only define the recursive function (as an expression function) without any post-condition. You then only need to mention the recursive (expression) function in the post-condition of the tail-recursive function (note that there was no post-condition (ensures) on the recursive function in Dafny either).
However, as one of SPARK's primary goals is to prove the absence of runtime errors, you do have to prove that overflow cannot occur, and for this reason you need a post-condition on the recursive function. A reasonable choice for such a post-condition is, as @Jeffrey Carter already suggested in the comments, the explicit summation formula for an arithmetic progression:
Sum (N) = N * (1 + N) / 2
The choice is actually very attractive as with this formula we can now also functionally validate the recursive function itself against a well-known mathematically explicit expression for computing the sum of a series of natural numbers.
Unfortunately, using this formula as-is will only bring you somewhere half-way. In SPARK (and Ada as well), pre- and post-conditions are optionally executable (see also RM 11.4.2 and section 5.11.1 in the SPARK Reference Guide) and must therefore themselves be free of any runtime errors. Therefore, using the formula as-is will only allow you to prove that no overflow occurs for any positive number up until
max N s.t. N * (1 + N) <= Integer'Last <-> N = 46340
as in the post-condition, the multiplication is not allowed to overflow either (note that Natural'Last = Integer'Last = 2**31 - 1).
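(A quick sanity check: 46340 * 46341 = 2147441940, which still fits below 2**31 - 1 = 2147483647, whereas 46341 * 46342 = 2147534622 already exceeds it.)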
To work around this, you'll need to make use of the big integers package that has been introduced in the Ada 202x standard library (see also RM A.5.6; this package is already included in GNAT CE 2021 and GNAT FSF 11.2). Big integers are unbounded and computations with these integers never overflow. Using these integers, one can prove that overflow will not occur for any positive number up until
max N s.t. N * (1 + N) / 2 <= Natural'Last <-> N = 65535 = 2**16 - 1
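(Again as a quick check: 65535 * 65536 / 2 = 2147450880 still fits in Natural, while 65536 * 65537 / 2 = 2147516416 does not.)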
The usage of these integers in a post-condition is illustrated in the example below.
Some final notes:
The Subprogram_Variant aspect is only needed to prove that a recursive subprogram will eventually terminate. Such a proof of termination must be requested explicitly by adding an annotation to the function (also shown in the example below and as discussed in the SPARK documentation pointed out by @egilhh in the comments). The Subprogram_Variant aspect is, however, not needed for your initial purpose: proving that the result of some tail-recursive summation function equals the result of a given recursive summation function for some value N.
To compile a program that uses functions from the new Ada 202x standard library, use compiler option -gnat2020.
While I use a subtype to constrain the range of permissible values for N, you could also use a precondition. This should not make any difference. However, in SPARK (and Ada as well), it is generally considered best practice to express constraints using (sub)types as much as possible.
Consider counterexamples as possible clues rather than facts: they are only optionally generated by some solvers and may or may not make sense. See also section 7.2.6 in the SPARK user's guide.
main.adb
with Ada.Numerics.Big_Numbers.Big_Integers;
procedure Main with SPARK_Mode is

   package BI renames Ada.Numerics.Big_Numbers.Big_Integers;
   use type BI.Valid_Big_Integer;

   -- Conversion functions.
   function To_Big (Arg : Integer) return BI.Valid_Big_Integer renames BI.To_Big_Integer;
   function To_Int (Arg : BI.Valid_Big_Integer) return Integer renames BI.To_Integer;

   subtype Domain is Natural range 0 .. 2**16 - 1;

   function Sum (N : Domain) return Natural is
     (if N = 0 then 0 else N + Sum (N - 1))
   with
     Post => Sum'Result = To_Int (To_Big (N) * (1 + To_Big (N)) / 2),
     Subprogram_Variant => (Decreases => N);

   -- Request a proof that Sum will terminate for all possible values of N.
   pragma Annotate (GNATprove, Terminating, Sum);

begin
   null;
end Main;
output (gnatprove)
$ gnatprove -Pdefault.gpr --output=oneline --report=all --level=1 --prover=z3
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
main.adb:13:13: info: subprogram "Sum" will terminate, terminating annotation has been proved
main.adb:14:30: info: overflow check proved
main.adb:14:32: info: subprogram variant proved
main.adb:14:39: info: range check proved
main.adb:16:18: info: postcondition proved
main.adb:16:31: info: range check proved
main.adb:16:53: info: predicate check proved
main.adb:16:69: info: division check proved
main.adb:16:71: info: predicate check proved
Summary logged in [...]/gnatprove.out
ADDENDUM (in response to comment)
So you can add the post condition as a recursive function, but that does not help in proving the absence of overflow; you will still have to provide some upper bound on the function result in order to convince the prover that the expression N + Sum (N - 1) will not cause an overflow.
To check the absence of overflow during the addition, the prover will consider all possible values that Sum might return according to its specification and see if at least one of those values might cause the addition to overflow. In the absence of an explicit bound in the post-condition, Sum might, according to its return type, return any value in the range Natural'Range. That range includes Natural'Last, and that value will definitely cause an overflow. Therefore, the prover will report that the addition might overflow. The fact that Sum never returns that value given its allowable input values is irrelevant here (that's why it reports might). Hence, a more precise upper bound on the return value is required.
If an exact upper bound is not available, then you'll typically fall back on a more conservative bound like, in this case, N * N (or use saturation math as shown in the Fibonacci example from the SPARK user manual, section 5.2.7, but that approach does change your function, which might not be desirable).
Here's an alternative example:
example.ads
package Example with SPARK_Mode is

   subtype Domain is Natural range 0 .. 2**15;

   function Sum (N : Domain) return Natural
     with Post =>
       Sum'Result = (if N = 0 then 0 else N + Sum (N - 1)) and
       Sum'Result <= N * N;  -- conservative upper bound if the closed form
                             -- solution to the recursive function would
                             -- not exist.

end Example;
example.adb
package body Example with SPARK_Mode is

   function Sum (N : Domain) return Natural is
   begin
      if N = 0 then
         return N;
      else
         return N + Sum (N - 1);
      end if;
   end Sum;

end Example;
output (gnatprove)
$ gnatprove -Pdefault.gpr --output=oneline --report=all
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
example.adb:8:19: info: overflow check proved
example.adb:8:28: info: range check proved
example.ads:7:08: info: postcondition proved
example.ads:7:45: info: overflow check proved
example.ads:7:54: info: range check proved
Summary logged in [...]/gnatprove.out
I landed in something that sometimes works, which I think is enough for closing the title question:
function Sum (n : in Natural) return Natural
is
  (if n = 0 then 0 else n + Sum(n - 1))
with
  Pre => (n in 0 .. 10),   -- works with --prover=z3, not Default (CVC4)
  -- Pre => (n in 0 .. 100),   -- not working - "overflow check might fail, e.g. when n = 2"
  Subprogram_Variant => (Decreases => n),
  Post => ((n = 0 and then Sum'Result = 0)
           or (n > 0 and then Sum'Result = n + Sum(n - 1)));
  -- Contract_Cases => (n = 0 => Sum'Result = 0,
  --                    n > 0 => Sum'Result = n + Sum(n - 1));   -- warning: call to "Sum" within its postcondition will lead to infinite recursion
  -- Contract_Cases => (n = 0 => Sum'Result = 0,
  --                    n > 0 => n + Sum(n - 1) = Sum'Result);   -- works
  -- Contract_Cases => (n = 0 => Sum'Result = 0,
  --                    n > 0 => Sum'Result = n * (n + 1) / 2);  -- works and gives good overflow counterexamples for high n, but isn't really recursive
Command line invocation in GNAT Studio (Ctrl+Alt+F), with --counterexamples=on and --prover=z3 being my additions to it:
gnatprove -P%PP -j0 %X --output=oneline --ide-progress-bar --level=0 -u %fp --counterexamples=on --prover=z3
Takeaways:
Subprogram_Variant => (Decreases => n) is required to tell the prover n decreases for each recursive invocation, just like the Dafny version.
Works inconsistently for similar contracts, see commented Contract_Cases.
Default prover (CVC4) fails, using Z3 succeeds.
The counterexample given on a failed check makes no sense.
n = 2 is presented as a counterexample for range 0 .. 100, but not for 0 .. 10.
Possibly related to this mention in the SPARK user guide: However, note that since the counterexample is always generated only using CVC4 prover, it can just explain why this prover cannot prove the property.
Cleaning is required between runs when changing options, e.g. --prover.

Using <> in an assert statement in OCaml causes error

So this might be a stupid question, but I am running into an error in utop right now after just beginning to use OCaml. I am trying to assert that two ints are structurally not equal.
assert 2 <> 3;;
Error: This expression has type int but an expression was expected of type
bool because it is in the condition of an assertion
The entire statement causes an error, but simply typing the expression I am asserting correctly evaluates to true.
2 <> 3;;
- : bool = true
I added parentheses to the original assert statement and that fixes the problem.
assert (2 <> 3);;
- : unit = ()
I am just wondering what exactly happened without the parentheses to cause the error initially. When do you need parentheses typically?
This is an issue with precedence, which determines how "eagerly" a parsing rule is applied. assert has a relatively high precedence, higher than <> and other operations. This means that this expression
assert 2 <> 3
is parsed as
(assert 2) <> 3
and not as
assert (2 <> 3)
You can find the full table of precedence here: https://caml.inria.fr/pub/docs/manual-ocaml/expr.html#sec133
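The same parse issue arises with any other infix operator, not just <>, so the condition of an assert generally needs its own parentheses. A quick sketch (not an exact utop transcript):
assert (2 < 3);;   (* fine: the parenthesized bool is the argument of assert *)
assert 2 < 3;;     (* rejected: parsed as (assert 2) < 3, so 2 would need type bool *)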

SWI Prolog - Recursion with lists

I am new to Prolog and I am having some trouble understanding recursion over lists.
I am stuck on this exercise. Basically, given the vocabulary below, I need to convert a list of Italian numbers to English numbers.
This is my KB.
tran(uno, one).
tran(due, two).
tran(tre, three).
tran(quattro, four).
tran(cinque, five).
tran(sei, six).
tran(sette, seven).
tran(otto, eight).
tran(nove, nine).
listran(L,[]).
listran(L,[H|T]) :- tran(H,E), listran([E|L],T).
This program should give the translated list (in reverse order). However, it only outputs true when I pass:
?- listran(X, [uno, due, tre]).
I've tried to trace it, and it seems that at the end it removes (??) all the elements from my translated list. This is the trace output.
[trace] ?- listran(X,[uno,due,tre]).
Call: (8) listran(_5566, [uno, due, tre]) ? creep
Call: (9) tran(uno, _5820) ? creep
Exit: (9) tran(uno, one) ? creep
Call: (9) listran([one|_5566], [due, tre]) ? creep
Call: (10) tran(due, _5826) ? creep
Exit: (10) tran(due, two) ? creep
Call: (10) listran([two, one|_5566], [tre]) ? creep
Call: (11) tran(tre, _5832) ? creep
Exit: (11) tran(tre, three) ? creep
Call: (11) listran([three, two, one|_5566], []) ? creep
Exit: (11) listran([three, two, one|_5566], []) ? creep
Exit: (10) listran([two, one|_5566], [tre]) ? creep
Exit: (9) listran([one|_5566], [due, tre]) ? creep
Exit: (8) listran(_5566, [uno, due, tre]) ? creep
true.
Can someone help me understanding this little problem?
Thank you in advance.
The problem is in both clauses:
listran(L,[]).
listran(L,[H|T]) :- tran(H,E), listran([E|L],T).
Here you state: translate H, place the result E at the head of L, and continue. This holds for every L. Instead you need to explicitly state that the current head of the first list is E, not add E to it:
listran([],[]).
listran([E|T1],[H|T]) :- tran(H,E), listran(T1,T).
Here you say that the head of the first list is E and continue with the rest, until the base case where both lists are empty.
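With this corrected version the translation comes out in the original order rather than reversed; at the top level one would expect, for example:
?- listran(X, [uno, due, tre]).
X = [one, two, three].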
An interesting (and "prologish") way is to use a DCG:
tran(uno) --> [one].
tran(due) --> [two].
tran(tre) --> [three].
tran(quattro) --> [four].
tran(cinque) --> [five].
tran(sei) --> [six].
tran(sette) --> [seven].
tran(otto) --> [eight].
tran(nove) --> [nine].
listran(In, Out) :-
    phrase(trans(In), Out).

trans([]) --> [].
trans([H|T]) --> tran(H), trans(T).
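The DCG version can be queried the same way (note that the Italian list is now the first argument); one would expect:
?- listran([uno, due, tre], Out).
Out = [one, two, three].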

Backtracking in Standard ML

I have seen in my SML manual the following function, which computes how many coins of a particular kind are needed for a particular change.
For example change [5,2] 16 = [5,5,2,2,2], because with two 5-coins and three 2-coins one gets 16.
The following code is a backtracking approach:
exception Change;
fun change _ 0 = nil
  | change nil _ = raise Change
  | change (coin::coins) amt =
      if coin > amt then change coins amt
      else (coin :: change (coin::coins) (amt-coin))
           handle Change => change coins amt;
It works, but I don't understand how exactly.
I know what backtracking is, I just don't understand this particular function.
What I understood so far: If amt is 0, it means our change is computed, and there is nothing to be cons'd onto the final list.
If there are no more coins in our 'coin-list', we need to go back one step.
This is where I get lost: how exactly does raising an exception help us go back?
As I see it, the handler tries to make a call to the change function, but shouldn't the "coins" parameter be nil, therefore entering an infinite loop? Why does it "go back"?
The last clause is pretty obvious to me: if the coin-value is greater than the amount left to change, we use the remaining coins to build the change. If it is smaller than the amount left, we cons it onto the result list.
This is best seen by writing out how evaluation proceeds for a simple example. In each step, I just replace a call to change by the respective right-hand side (I added extra parentheses for extra clarity):
change [3, 2] 4
= if 3 > 4 then ... else ((3 :: change [3, 2] (4 - 3)) handle Change => change [2] 4)
= (3 :: change [3, 2] 1) handle Change => change [2] 4
= (3 :: (if 3 > 1 then change [2] 1 else ...)) handle Change => change [2] 4
= (3 :: change [2] 1) handle Change => change [2] 4
= (3 :: (if 2 > 1 then change [] 1 else ...)) handle Change => change [2] 4
= (3 :: (raise Change)) handle Change => change [2] 4
At this point an exception has been raised. It bubbles up to the current handler so that evaluation proceeds as follows:
= change [2] 4
= if 2 > 4 then ... else ((2 :: change [2] (4 - 2)) handle Change => change [] 4)
= (2 :: change [2] 2) handle Change => change [] 4
= (2 :: (if 2 > 2 then ... else ((2 :: change [2] (2 - 2)) handle Change => change [] 2))) handle Change => change [] 4
= (2 :: ((2 :: change [2] 0) handle Change => change [] 2)) handle Change => change [] 4
= (2 :: ((2 :: []) handle Change => change [] 2)) handle Change => change [] 4
= (2 :: (2 :: [])) handle Change => change [] 4
= 2 :: 2 :: []
No more failures up to here, so we terminate successfully.
In short, every handler is a backtracking point. At each failure (i.e., raise) you proceed at the innermost handler, which is the last backtracking point. Each handler itself is set up such that it contains the respective call to try instead.
You can rewrite this use of exceptions into using the 'a option type instead. The original function:
exception Change;
fun change _ 0 = []
  | change [] _ = raise Change
  | change (coin::coins) amt =
      if coin > amt
      then change coins amt
      else coin :: change (coin::coins) (amt-coin)
           handle Change => change coins amt;
In the modified function below, instead of the exception bubbling up, it becomes a NONE. One thing that becomes slightly more apparent here is that coin only occurs in one of the two cases (where in the code above it always occurs but is reverted in case of backtracking).
fun change' _ 0 = SOME []
  | change' [] _ = NONE
  | change' (coin::coins) amt =
      if coin > amt
      then change' coins amt
      else case change' (coin::coins) (amt-coin) of
               SOME result => SOME (coin :: result)
             | NONE => change' coins amt
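For the coin example from the question, both versions should agree; a REPL session would look something like this (a sketch, not an actual transcript):
- change [5,2] 16;
val it = [5,5,2,2,2] : int list
- change' [5,2] 16;
val it = SOME [5,5,2,2,2] : int list option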
Another way to demonstrate what happens is by drawing a call tree. This does not gather the result the way Andreas Rossberg's evaluation by hand does, but it does show that only when change takes an else-branch is there a possibility to backtrack, and that if a backtrack occurs (i.e. NONE is returned or an exception is thrown), coin is not included in the result.
(original call ->) change [2,5] 7
\ (else)
`-change [2,5] 5
/ \ (else)
___________________/ `-change [2,5] 3
/ / \ (else)
/ / `-change [2,5] 1
`-change [5] 5 / \ (then)
\ (else) / `-change [5] 1
`-change [] 0 / \ (then)
\ / `-change [] 1
`-SOME [] `-change [5] 3 \ (base)
\ (then) `-NONE
`-change [] 3
\
`-NONE
Source: https://www.cs.cmu.edu/~rwh/introsml/core/exceptions.htm
The expression exp handle match is an exception handler. It is
evaluated by attempting to evaluate exp. If it returns a value, then
that is the value of the entire expression; the handler plays no role
in this case. If, however, exp raises an exception exn, then the
exception value is matched against the clauses of the match (exactly
as in the application of a clausal function to an argument) to
determine how to proceed. If the pattern of a clause matches the
exception exn, then evaluation resumes with the expression part of
that clause. If no pattern matches, the exception exn is re-raised so
that outer exception handlers may dispatch on it. If no handler
handles the exception, then the uncaught exception is signaled as the
final result of evaluation. That is, computation is aborted with the
uncaught exception exn.
In more operational terms, evaluation of exp handle match proceeds by
installing an exception handler determined by match, then evaluating
exp. The previous binding of the exception handler is preserved so
that it may be restored once the given handler is no longer needed.
Raising an exception consists of passing a value of type exn to the
current exception handler. Passing an exception to a handler
de-installs that handler, and re-installs the previously active
handler. This ensures that if the handler itself raises an exception,
or fails to handle the given exception, then the exception is
propagated to the handler active prior to evaluation of the handle
expression. If the expression does not raise an exception, the
previous handler is restored as part of completing the evaluation of
the handle expression.
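As a tiny standalone illustration of this install/de-install behaviour (a sketch, assuming the Change exception declared earlier is in scope):
(1 div 0) handle Div => ~1;    (* evaluates to ~1: the handler supplies the value *)
((raise Change) handle Change => raise Change) handle Change => 42;
(* the inner handler re-raises, so the previously installed outer handler yields 42 *)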

Understanding recursion

I am struggling to understand the recursion used in this dynamic programming example. Can anyone explain how it works? The objective is to find the least number of coins for a value.
//f(n) = 1 + min f(n-d) for all denominations d
Pseudocode:
int memo[128]; //initialized to -1
int min_coin(int n)
{
    if(n < 0) return INF;
    if(n == 0) return 0;
    if(memo[n] != -1)
    int ans = INF;
    for(int i = 0; i < num_denomination; ++i)
    {
        ans = min(ans, min_coin(n - denominations[i]));
    }
    return memo[n] = ans+1; //when does this get called?
}
This particular example is explained very well in this article at Topcoder.
Basically this recursion is using the solutions to smaller problems (least number of coins for a smaller n) to find the solution for the overall problem. The dynamic programming aspect of this is the memoization of the solutions to the sub-problems so they don't have to be recalculated every time.
And yes - there are {} missing as ring0 mentioned in his comment - the recursion should only be executed if the sub-problem has not been solved before.
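For reference, here is a corrected, self-contained sketch of the memoized version; the denominations and the test amount in main are made up for the demo, and the missing braces are handled here with an early return after the memo check:
#include <algorithm>
#include <cstring>
#include <cstdio>

const int INF = 1 << 30;                   // "infinity": larger than any real answer
int denominations[] = {1, 3, 5};           // hypothetical denominations for the demo
int num_denomination = 3;
int memo[128];

int min_coin(int n)
{
    if (n < 0) return INF;                 // impossible: no change for a negative amount
    if (n == 0) return 0;                  // base case: zero coins for amount 0
    if (memo[n] != -1) return memo[n];     // sub-problem already solved: reuse it
    int ans = INF;
    for (int i = 0; i < num_denomination; ++i)
        ans = std::min(ans, min_coin(n - denominations[i]));
    return memo[n] = ans + 1;              // runs once per n, after the loop:
                                           // store and return 1 + best sub-solution
}

int main()
{
    std::memset(memo, -1, sizeof memo);    // "initialized to -1"
    std::printf("%d\n", min_coin(11));     // prints 3 (5 + 5 + 1)
}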
To answer the owner's question when does this get called?: in a recursive program the function keeps calling itself, but it eventually returns. When does it return? From the moment the function stops calling itself:
f(a) {
    if (a > 0) f(a-1);
    display "x"
}
f(5);
f(5) would call f(4), which in turn calls f(3), which calls f(2), which calls f(1), which calls f(0).
In f(0), a is 0, so it does not call f again; it displays "x" and returns. It returns to the previous f(1) which, its call to f(0) now done, also displays "x". Then f(1) ends, f(2) displays "x", ..., up to f(5). You get 6 "x"s.
In other terms, as ring0 has already mentioned: when the program reaches the base case it starts to unwind by going back up the stack (call frames). For a similar case using a factorial example, see this:
#!/usr/bin/env perl
use strict;
use IO::Handle;
use Carp qw(cluck);

STDOUT->autoflush(1);
STDERR->autoflush(1);

sub factorial {
    my $v = shift;
    dummy_func();
    return 1 if $v == 1;
    print "Variable v value: $v and it's address:", \$v, "\ncurrent sub factorial addr:", \&factorial, "\n", "-" x 40;
    return $v * factorial($v - 1);
}

sub dummy_func {
    cluck;
}

factorial(5);
