I'm very new to mathematical logic and have recently started learning Prolog. I wonder whether I can use Prolog to do resolution reasoning, e.g., the following:
knowing that ∀x.(sheep(x)→eatgrass(x))
knowing that ∀x.(deadsheep(x)→¬eatgrass(x))
to prove ∀x.(deadsheep(x)→¬sheep(x))
What I'm trying to do is write code like the following:
eatgrass(X) :- sheep(X).
false :- deadsheep(X), eatgrass(X).
sheep(X).
deadsheep(X).
and get the answer false for the query
?- sheep(a),deadsheep(a).
It seems that in Prolog I cannot express something like the second clause:
false :- deadsheep(X), eatgrass(X).
so I wonder whether there is a way to do the reasoning described above in Prolog. Thanks!
false :- deadsheep(X), eatgrass(X).
is an integrity constraint.
While exploitable in a more resolution-based theorem prover, you cannot have this in Prolog, because it is not a clause Prolog accepts: neither a definite clause (aka Horn clause, i.e. without negation in the body) nor a general clause (i.e. with 'negation as failure' in the body).
(As an example, the Markgraf Karl Resolution Theorem Prover of 1981 indeed could handle integrity constraints)
Integrity constraints can be found in Answer Set Programming systems, which find solutions to logic programs in a way quite different from Prolog: not by SLDNF proof search, but by finding sets of ground facts that form a model of the program (i.e., that make every statement of the program true).
Your program with
sheep(X).
deadsheep(X).
does not make sense (it says "everything is a sheep" and "everything is a dead sheep"), but if you change this to:
eatgrass(X) :- sheep(X).
false :- deadsheep(X), eatgrass(X).
sheep(flopsy).
deadsheep(flopsy).
then this program is a way of asking: is there a set of ground atoms (in the logic sense) based on eatgrass/1, sheep/1, deadsheep/1 that is a model for the program, i.e. for which every statement of the program becomes true?
Well, there is not, because we need
sheep(flopsy).
deadsheep(flopsy).
to be true, so clearly eatgrass(flopsy) needs to be true too, and this violates the integrity constraint.
What you can do is test the integrity constraint as part of your query:
Program:
eatgrass(X) :- sheep(X).
sheep(flopsy).
deadsheep(flopsy).
Query: Is there a sheep X that is not both dead and eating grass?
?- sheep(X),\+ (deadsheep(X),eatgrass(X)).
In this case, no.
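Running this in a Prolog system shows the idea. With an additional live sheep (shaun is an invented name) the query succeeds for exactly that sheep:

```prolog
eatgrass(X) :- sheep(X).

sheep(flopsy).
sheep(shaun).        % an extra, live sheep (made up for illustration)
deadsheep(flopsy).

% ?- sheep(X), \+ (deadsheep(X), eatgrass(X)).
% X = shaun.
```

For flopsy the inner conjunction succeeds, so the negation fails; for shaun deadsheep/1 fails, so the negation succeeds.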
As @David Tonhofer suggested, integrity constraints can be found in Answer Set Programming systems. I found that Potassco's clingo is a great tool for solving my problem, so I put the example code here in case someone needs something similar:
eatgrass(X) :- sheep(X).
:- deadsheep(X), eatgrass(X).
sheep(shawn).
deadsheep(shawn).
The second line is an integrity constraint, which clingo supports. If you run this program using clingo online, clingo will report that it is unsatisfiable, which shows that the reasoning is correct.
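For contrast, a variant in which the sheep is not dead is satisfiable, and clingo then reports an answer set (a sketch, reusing the invented name shawn from above):

```prolog
eatgrass(X) :- sheep(X).
:- deadsheep(X), eatgrass(X).
sheep(shawn).
```

With no deadsheep/1 fact, the integrity constraint is vacuously satisfied, and clingo answers SATISFIABLE with the answer set sheep(shawn) eatgrass(shawn).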
I am working with some code that was in part written by my professor that takes a tableau as input, expands it, and then outputs the complete tableau.
I am having trouble understanding the logic behind the order in which the predicates are being run. What is the deciding factor that would, say, make the program address conjunction before negation if both were present in the original formula? And how does recursion play into this?
The code is as follows:
%negation
expand([[not(not(X))|B]|T], T1) :-
    expand([[X|B]|T], T1).

%conjunction
expand([[(X)+(Y)|B]|T], T1) :-
    expand([[X, Y|B]|T], T1).
expand([[not((X)+(Y))|B]|T], T1) :-
    expand([[not(X)|B], [not(Y)|B]|T], T1).

%disjunction
expand([[(X)/(Y)|B]|T], T1) :-
    expand([[X|B], [Y|B]|T], T1).
expand([[not((X)/(Y))|B]|T], T1) :-
    expand([[not(X), not(Y)|B]|T], T1).

%not sure what the rest is or how it works
expand([[X|B]|T1], T5) :-
    expand([B], T2),
    distribute(X, T2, T3),
    expand(T1, T4),
    append(T3, T4, T5).
expand([[]|T1], [[]|T2]) :-
    expand(T1, T2).
expand([], []).

distribute(X, [B|T], [[X|B]|T1]) :-
    distribute(X, T, T1).
distribute(_, [], []).
Apologies for the vague post; I am unfamiliar with this language.
A Prolog program consists of
Facts, simple statements of truth. Something like this:
mother( jane , alice ) .
mother( jane , john ) .
is two facts that state that Jane is the mother of both Alice and John.
Predicates, more complex assertions of truth. Something like this:
sibling(X,Y) :- same_mother(X,Y), same_father(X,Y) .
half_sibling(X,Y) :- same_mother(X,Y) , \+ same_father(X,Y) .
half_sibling(X,Y) :- \+ same_mother(X,Y) , same_father(X,Y) .
same_mother(X,Y) :- mother(M,X), mother(M,Y) .
same_father(X,Y) :- father(F,X), father(F,Y) .
states that
two people, X and Y, are siblings if they have both the same mother and the same father, and
two people, X and Y, are half-siblings if they have either
the same mother and different fathers, or
different mothers and the same father
Each predicate is a logic proposition written in a restricted form of the predicate calculus, and forms what amounts to a search tree (more of a search graph, really). When you query/execute a predicate, say
sibling(stephen,alice).
Prolog's inference engine essentially traverses the search tree until it either succeeds or fails. If it succeeds, one can backtrack into it, and it will continue the traversal until it either succeeds again or fails.
So, the order in which predicates are "executed" depends entirely on the predicate with which evaluation began, and on its structure.
Note that, depending on which arguments are instantiated when a predicate is evaluated, one can ask questions with more than one answer:
sibling( john, X ). asks the question "Who are John's siblings?"
sibling( X , Y ). asks the question "Who are all the pairs of siblings?"
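These query modes can be tried against a small, invented fact base (mary and tom are made-up names; father/2 facts are added so the predicates above can actually run):

```prolog
mother( jane , alice ) .
mother( jane , john ) .
mother( mary , stephen ) .

father( tom , alice ) .
father( tom , john ) .
father( tom , stephen ) .

sibling(X,Y)      :- same_mother(X,Y) , same_father(X,Y) .
half_sibling(X,Y) :- same_mother(X,Y) , \+ same_father(X,Y) .
half_sibling(X,Y) :- \+ same_mother(X,Y) , same_father(X,Y) .

same_mother(X,Y) :- mother(M,X) , mother(M,Y) .
same_father(X,Y) :- father(F,X) , father(F,Y) .

% ?- sibling(john, X).
% X = alice ;
% X = john.            % note: everyone is their own sibling here
% ?- half_sibling(stephen, john).
% true.
```

Note that with this definition everyone counts as their own sibling; adding a goal such as dif(X,Y) to sibling/2 would exclude that.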
Good texts to start learning Prolog are:
Clocksin+Mellish's Programming in Prolog — it's a pretty good introduction to the language.
Clocksin's Clause and Effect — I've heard good things about this as a beginner's text, but I've not read it.
Once you've gotten the basics down...
Sterling+Shapiro's The Art of Prolog is a most excellent book to take you deeper
And O'Keefe's The Craft of Prolog is yet a deeper dive.
I tried to implement LTL logic syntactically using the axiomatization command, with the purpose of automatically finding proofs for theorems (motivation of proving program properties).
However, the automatic provers (cvc4, z3, E, etc.) all use quantifiers of some sort. For example, using FOL one could prove F(p) --> G(p), which is obviously false.
My question is whether there exists a prover, like the ones mentioned, but made for propositional logic, i.e. one that only has access to MP and the propositional-logic axioms.
I am rather new to Isabelle, so there might be an easier way of doing this that I'm not seeing.
EDIT: I am looking for a Hilbert-style deduction prover and not a SAT solver, as that would defeat the point of implementing it axiomatically.
I think the sat method only uses propositional logic.
However, I would recommend not to use axiomatizations and just define the syntax of LTL using datatypes and the semantics using functions. Maybe you can reuse the formalization from https://www.isa-afp.org/entries/LTL.html
Without axiomatizations you are then free to use any method.
What you want is a SAT solver, such as minisat.
However, the automatic provers (cvc4, z3, E, etc.) all use quantifiers of some sort. For example, using FOL one could prove F(p) --> G(p), which is obviously false.
This is not correct. A first-order theorem prover like iProver, E, or Vampire will not prove forall X. f(X) => g(X).
I am very confused about how CLP works in Prolog. Not only do I find it hard to see the benefits (I do see them in specific cases but find it hard to generalise), but more importantly, I can hardly figure out how to correctly write a recursive predicate. Which of the following would be the correct form in a CLP(R) way?
factorial(0, 1).
factorial(N, F) :-
    { N > 0,
      PrevN = N - 1,
      factorial(PrevN, NewF),
      F = N * NewF }.
or
factorial(0, 1).
factorial(N, F) :-
    { N > 0,
      PrevN = N - 1,
      F = N * NewF },
    factorial(PrevN, NewF).
In other words, I am not sure when I should write code outside the constraints. To me, the first case would seem more logical, because PrevN and NewF belong to the constraints. But if that's true, I am curious to see in which cases it is useful to use predicates outside the constraints in a recursive function.
There are several overlapping questions and issues in your post, probably too many to coherently address to your complete satisfaction in a single post.
Therefore, I would like to state a few general principles first, and then—based on that—make a few specific comments about the code you posted.
First, I would like to address what I think is most important in your case:
LP ⊆ CLP
This means simply that CLP can be regarded as a superset of logic programming (LP). Whether it is a proper superset, or whether it in fact makes more sense to regard them as denoting the same concept, is somewhat debatable. In my personal view, logic programming without constraints is much harder to understand and much less usable than with constraints. Given that even the very first Prolog systems had a constraint like dif/2, and that essential built-in predicates like (=)/2 perfectly fit the notion of "constraint", the boundaries, if they exist at all, seem at least somewhat artificial to me, suggesting that:
LP ≈ CLP
Be that as it may, the key concept when working with CLP (of any kind) is that the constraints are available as predicates, and used in Prolog programs like all other predicates.
Therefore, whether you have the goal factorial(N, F) or { N > 0 } is, at least in principle, the same concept: Both mean that something holds.
Note the syntax: The CLP(ℛ) constraints have the form { C }, which is {}(C) in prefix notation.
Note that the goal factorial(N, F) is not a CLP(ℛ) constraint! Neither is the following:
?- { factorial(N, F) }.
ERROR: Unhandled exception: type_error({factorial(_3958,_3960)},...)
Thus, { factorial(N, F) } is not a CLP(ℛ) constraint either!
Your first example therefore cannot work for this reason alone: the recursive goal appears inside {}/1, which only understands arithmetic constraints.
When you learn to work with a constraint solver, check out the predicates it provides. For example, CLP(ℛ) provides {}/1 and a few other predicates, and has a dedicated syntax for stating relations that hold over floating-point numbers (in this case).
Other constraint solvers provide their own predicates for describing the entities of their respective domains. For example, CLP(FD) provides (#=)/2 and a few other predicates to reason about integers. dif/2 lets you reason about any Prolog term. And so on.
From the programmer's perspective, this is exactly the same as using any other predicate of your Prolog system, whether it is built-in or stems from a library. In principle, it's all the same:
A goal like list_length(Ls, L) can be read as: "The length of the list Ls is L."
A goal like { X = A + B } can be read as: The number X is equal to the sum of A and B. For example, if you are using CLP(Q), it is clear that we are talking about rational numbers in this case.
In your second example, the body of the clause is a conjunction of the form (A, B), where A is a CLP(ℛ) constraint, and B is a goal of the form factorial(PrevN, NewF).
The point is: The CLP(ℛ) constraint is also a goal! Check it out:
?- write_canonical({a,b,c}).
{','(a,','(b,c))}
true.
So, you are simply using {}/1 from library(clpr), which is one of the predicates it exports.
You are right that PrevN and NewF belong to the constraints. However, factorial(PrevN, NewF) is not part of the mini-language that CLP(ℛ) implements for reasoning over floating point numbers. Therefore, you cannot pull this goal into the CLP(ℛ)-specific part.
From a programmer's perspective, a major attraction of CLP is that it blends in completely seamlessly into "normal" logic programming, to the point that it can in fact hardly be distinguished at all from it: The constraints are simply predicates, and written down like all other goals.
Whether you label a library predicate a "constraint" or not hardly makes any difference: All predicates can be regarded as constraints, since they can only constrain answers, never relax them.
Note that both examples you posted are recursive! That's perfectly OK. In fact, recursive predicates are likely to make up the majority of situations in which you use constraints in the future.
However, for the concrete case of factorial, your Prolog system's CLP(FD) constraints are likely a better fit, since they are completely dedicated to reasoning about integers.
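A CLP(FD) rendering of the factorial relation, along the lines of your second example (a sketch; n_factorial is just an illustrative name), could look like:

```prolog
:- use_module(library(clpfd)).

n_factorial(0, 1).
n_factorial(N, F) :-
    N #> 0,              % constraints are ordinary goals
    N1 #= N - 1,
    F #= N * F1,
    n_factorial(N1, F1).

% ?- n_factorial(5, F).
% F = 120.
```

Because the constraints are ordinary goals and the solver can delay what it cannot yet decide, the exact placement of the arithmetic goals relative to the recursive call is far less critical than with plain is/2 arithmetic.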
I have scribbled the term "retracting in OCaml" in a small space in my notebook, and now I can't seem to recollect what it was about, nor can I find anything about it on the internet.
Does this term really exist, or is it my lecturer's own notation for some property of OCaml? My classmates also don't seem to remember what it was about, so I just want to confirm whether I was dreaming or not.
Another possible explanation: in math, a retraction is a left inverse of a morphism (see this definition). In particular, a parser can be seen as a retraction w.r.t. a given pretty-printer: start from an abstract syntax tree (AST) and pretty-print it, then parsing the resulting source code should yield the original AST (while the opposite is not necessarily true). It doesn't have much to do with OCaml per se but it is linked to an algebraic view (of compiling) which is quite common in functional programming.
I am developing a program that solves a more complicated version of the infamous puzzle 'The farmer, the fox, the goose, and the grain', which has eight components instead of four. I've already determined the solution; additionally, I've written out just the necessary states to complete the problem, like this:
move([w,w,w,w,w,w,w,w],[e,w,w,e,w,w,w,w]).
move([e,w,w,e,w,w,w,w],[w,w,w,e,w,w,w,w]).
etc.
My goal now is to have this program follow those states, chaining from one to the next, until it reaches the ultimate goal of [e,e,e,e,e,e,e,e]. To accomplish this, I have defined predicates as such:
solution([e,e,e,e,e,e,e,e], []).
solution(Start, End) :-
    move(Start, NextConfig),
    solution(NextConfig, End).
My query is solution([w,w,w,w,w,w,w,w],[e,e,e,e,e,e,e,e]). However, this results in apparently infinite recursion. What am I missing?
To avoid cycles, try closure0/3:

solution(S) :-
    closure0(move, S, [e,e,e,e,e,e,e,e]).
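Note that closure0/3 is not a built-in predicate. One common definition, sketched here, computes the reflexive-transitive closure of a relation while carrying a list of visited states so that no state is ever revisited:

```prolog
closure0(R_2, X0, X) :-
    closure0_(X0, X, [X0], R_2).

closure0_(X, X, _, _).
closure0_(X0, X, Xs, R_2) :-
    call(R_2, X0, X1),
    non_member(X1, Xs),            % never revisit a state
    closure0_(X1, X, [X1|Xs], R_2).

non_member(_, []).
non_member(X, [Y|Ys]) :-
    dif(X, Y),
    non_member(X, Ys).
```

With this, the query ?- solution([w,w,w,w,w,w,w,w]). terminates even when the move/2 facts contain cycles, because each chain of moves is explored at most once per state.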