Revising for a course on automated reasoning and I don't quite understand how to answer this question:
Show how the notion of pairs (x, y) can be defined in higher-order logic using a lambda abstraction. Define a function π1 that returns the first element of such a pair. Finally, show that π1(x, y) = x.
I've found similar questions on Stack Overflow, but they all deal with Scheme, which I've never used. An explanation in English/relevant mathematical notation would be appreciated.
Here you go
PAIR := λx. λy. λp. p x y
π1 := λp. p (λx. λy. x)
π2 := λp. p (λx. λy. y)
π1 (PAIR a b) => a
π2 (PAIR a b) => b
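To see why π1 returns the first element (the "show that π1(x, y) = x" part of your exercise), just beta-reduce one step at a time:

π1 (PAIR a b)
= (λp. p (λx. λy. x)) ((λx. λy. λp. p x y) a b)
= (λp. p (λx. λy. x)) (λp. p a b)
= (λp. p a b) (λx. λy. x)
= (λx. λy. x) a b
= a

The same steps with the selector λx. λy. y show π2 (PAIR a b) => b.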
Check the wiki entry on Church encoding for some good examples, too
The main topic of this question is to understand how data can be represented as functions. When you're working in other paradigms, the usual way of thinking is "data = something that's stored in a variable" (it could be an array, an object, whatever structure you want).
But when we're in functional programming, we can also represent data as functions.
So let's say you want a function pair(x,y)
This is "pseudo" lisp language:
(function pair x y =
  lambda (pick)
    if pick = 1 return x
    else return y)
That example shows a function that returns a lambda function, which in turn expects a parameter.
(function pi this-is-pair = this-is-pair 1)
this-is-pair should be constructed with the pair function; therefore, the parameter is a function which expects another parameter (pick).
And now, you can test what you need
(pi (pair x y)) should return x
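For comparison, here is the same idea as real code in OCaml (a minimal sketch; pair and pi are just illustrative names):

(* a pair is a closure that remembers x and y and hands one back on demand *)
let pair x y = fun pick -> if pick = 1 then x else y
let pi p = p 1    (* select the first element *)

let () =
  let p = pair 3 4 in
  assert (pi p = 3)    (* (pi (pair x y)) returns x *)

Note that this if-based version forces both components to have the same type; that is a limitation of the encoding in a typed language, not of the idea itself.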
I would highly recommend watching this video about compound data. Most of the examples are in Lisp, but it's great for understanding a concept like this.
Pairs (or tuples) describe the product domain: A × B is the set of all pairs whose first component is an element of A and whose second component is an element of B:
A × B = { (a, b) | a ∈ A, b ∈ B }
Here, A and B may be different types, so in a programming language like C or Java you can have pairs like (String, Integer), (Char, Boolean), (Double, Double).
Now, the function π1 is just a function that takes a pair and returns the first element. This function is usually called fst, and that's how it looks: π1(x, y) = x. On the other hand you have snd, which does the same thing but returns the second element:
fst(a, b) = a
snd(a, b) = b
When I took the course "Characteristics of Programming Languages" in college, our professor recommended this book; see the chapter on the product domain to understand all of these concepts well.
I am learning OCaml. I know that OCaml provides us with both imperative style of programming and functional programming.
I came across this code as part of my course, to compute the nth Fibonacci number in OCaml:
let memoise f =
  let table = ref [] in
  let rec find tab n =
    match tab with
    | [] ->
        let v = f n in
        table := (n, v) :: !table;
        v
    | (n', v) :: t ->
        if n' = n then v else find t n
  in
  fun n -> find !table n
let fibonacci2 = memoise fibonacci1
Where the function fibonacci1 is implemented in the standard way as follows:
let rec fibonacci1 n =
  match n with
  | 0 | 1 -> 1
  | _ -> fibonacci1 (n - 1) + fibonacci1 (n - 2)
Now my question is: how are we achieving memoisation in fibonacci2? table has been defined inside the function fibonacci2, and thus my logic dictates that after the function finishes its computation, the list table should be lost, and the table would get built again and again on each call.
I ran a simple test where I called the function on 35 twice in the OCaml REPL, and the second call returned the answer significantly faster than the first (contrary to my expectations).
I thought that this might be possible if declaring a variable using ref gives it global scope by default.
So I tried this
let f y = let x = ref 5 in y;;
print_int !x;;
But this gave me an error saying that the value x is unbound.
Why does this behave this way?
The function memoise returns a value, call it f. (f happens to be a function). Part of that value is the table. Every time you call memoise you're going to get a different value (with a different table).
In the example, the returned value f is given the name fibonacci2. So, the thing named fibonacci2 has a table inside it that can be used by the function f.
There is no global scope by default, that would be a huge mess. At any rate, this is a question of lifetime not of scope. Lifetimes in OCaml last as long as an object can be reached somehow. In the case of the table, it can be reached through the returned function, and hence it lasts as long as the function does.
In your second example you are testing the scope (not the lifetime) of x, and indeed the scope of x is restricted to the subexpression of its let. (I.e., it is meaningful only in the expression y, where it's not used.) In the original code, all the uses of table are within its let, hence there's no problem.
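You can watch the lifetime-versus-scope distinction in isolation with a smaller example (a sketch; make_counter is just an illustrative name):

(* each call to make_counter allocates a fresh ref cell; the cell stays
   alive for as long as the returned closure can be reached *)
let make_counter () =
  let count = ref 0 in
  fun () -> incr count; !count

let c = make_counter ();;
c ();;  (* 1 *)
c ();;  (* 2: the same cell survives between calls, just like table *)

The scope of count ends at its let, but its lifetime extends as long as c does.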
Although references are a little tricky, the underlying semantics of OCaml come from lambda calculus, and are extremely clean. That's why it's such a delight to code in OCaml (IMHO).
I would like to implement term graphs in Haskell, so that I can implement a term rewriting engine that uses sharing. Something like
data TG f v = Var v | Op f [TG f v] | P (Ptr (TG f v))
And I would want something like the following to make sense:
let
  t' = Op 'f' [Var 'x', Var 'y']
  t = getPointer t'
in
  Op 'g' [P t, P t]
Then during rewriting, I only have to rewrite t once.
However, I noticed two things: (1) the module is called Foreign.Storable, so should it only be used for FFI stuff and (2) there are no instances of Foreign.Storable for any types like lists; why is this?
As pointed out in the comments, if you want to define a normal algebraic datatype in Haskell but gain access to the graph structure, you need to use some variant of observable sharing. Types like ForeignPtr are really for interfacing with external code or low-level memory management and aren't really appropriate for this kind of situation.
All the available techniques for observable sharing require some kind of slightly "unsafe" code - in that the burden is on the user not to misuse it. The issue is that Haskell's semantics aren't intended to allow you to "see" whether two values are the same pointer or not. However in practice the worst that can happen is that you will miss some situation where the user used a single definition, so you will end up with duplication in your internal data structure. Depending on the semantics of your own structure, this may just have a performance impact.
Observable sharing is usually based on lower level primitives for pointer equality - i.e. checking whether two specified Haskell values are actually being stored at exactly the same location in memory, or the more versatile stable names, which represent the location in memory of a single Haskell value and can be stored in a table and compared for equality later on.
Higher level libraries like data-reify help to hide these details from you.
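If you want intuition for what "pointer equality" observes, OCaml exposes the same primitive directly as (==), physical equality (a sketch for intuition only; the Haskell libraries wrap analogous primitives rather than exposing them like this):

let xs = [1; 2; 3]
let a = (xs, xs)                 (* one list, shared *)
let b = ([1; 2; 3], [1; 2; 3])   (* two structurally equal lists *)

let shares (u, v) = u == v       (* same heap object? *)
(* shares a = true, shares b = false, even though u = v holds in both *)

This is exactly the kind of distinction that ordinary pure semantics is not supposed to let you see, hence the slightly "unsafe" flavour of observable sharing.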
The nicest way to use observable sharing is to allow users to write normal values of the algebraic types, e.g. for your example simply:
let t = Op 'f' [Var 'x', Var 'y']
in Op 'g' [P t,P t]
and then have your library use whichever approach to observable sharing to translate that into some kind of explicit graph structure as soon as you receive the values from the user. For example you might translate into a different datatype with explicit pointers, or augment the TG type with them. The explicit pointers would just be some kind of lookup into your own map structure, e.g.
data InternalTG f v = ... | Pointer Int
type TGMap f v = IntMap (InternalTG f v)
If using something like data-reify then InternalTG f v would be the DeRef type for TG f v.
You can then do your rewriting on the resulting graph structure.
As an alternative to using observable sharing at all, if you are willing for your users to use a monad to construct their values and explicitly choose when to use sharing (as suggested by the inclusion of getPointer above), then you can simply use a state monad to build up the graph explicitly:
-- your code
import Control.Monad.State (State, get, put, runState)
import Data.IntMap (IntMap)
import qualified Data.IntMap as IntMap

data TGState f v = TGState { tgMap :: IntMap (TG f v), tgNextSymbol :: Int }

initialTGState :: TGState f v
initialTGState = TGState { tgMap = IntMap.empty, tgNextSymbol = 0 }

type TGMonad f v a = State (TGState f v) a

newtype Ptr tg = Ptr Int -- a "phantom type" just to give some type safety

getPointer :: TG f v -> TGMonad f v (Ptr (TG f v))
getPointer tg = do
  tgState <- get
  let sym = tgNextSymbol tgState
  put $ TGState { tgMap = IntMap.insert sym tg (tgMap tgState),
                  tgNextSymbol = sym + 1 }
  return (Ptr sym)
runTGMonad :: TGMonad f v a -> (a, IntMap (TG f v))
runTGMonad m =
  let (v, tgState) = runState m initialTGState
  in (v, tgMap tgState)
-- user code
do
  let t' = Op 'f' [Var 'x', Var 'y']
  t <- getPointer t'
  return $ Op 'g' [P t, P t]
Once you have the graph by whatever route, there are all sorts of techniques for manipulating it, but these are probably beyond the scope of your original question.
I was required to write a set of functions for problems in class. I think the way I wrote them was a bit more complicated than it needed to be. I had to implement all the functions myself, without using any pre-defined ones. I'd like to know if there are any quick and easy "one line" versions of these answers.
Sets can be represented as lists. The members of a set may appear in any order on the list, but there shouldn't be more than one occurrence of an element on the list.

(a) Define dif(A, B) to compute the set difference of A and B, A - B.

(b) Define cartesian(A, B) to compute the Cartesian product of set A and set B, { (a, b) | a ∈ A, b ∈ B }.

(c) Consider the mathematical-induction proof of the following: if a set A has n elements, then the powerset of A has 2^n elements. Following the proof, define powerset(A) to compute the powerset of set A, { B | B ⊆ A }.

(d) Define a function which, given a set A and a natural number k, returns the set of all the subsets of A of size k.
(* Takes in an element and a list and compares to see if element is in list *)
fun helperMem (x, []) = false
  | helperMem (x, n::y) =
      if x = n then true
      else helperMem (x, y);
(* Takes in two lists and gives back a single list containing unique elements of each *)
fun helperUnion ([], y) = y
  | helperUnion (a::x, y) =
      if helperMem (a, y) then helperUnion (x, y)
      else a :: helperUnion (x, y);
(* Takes in an element and a list of lists. Attaches the element to each inner list *)
fun helperAttach (a, []) = []
  | helperAttach (a, b::y) = helperUnion ([a], b) :: helperAttach (a, y);
(* Problem 1-a *)
fun myDifference ([], y) = []
  | myDifference (a::x, y) =
      if helperMem (a, y) then myDifference (x, y)
      else a :: myDifference (x, y);
(* Problem 1-b *)
fun myCartesian (xs, ys) =
  let fun first (x, []) = []
        | first (x, y::ys) = (x, y) :: first (x, ys)
      fun second ([], ys) = []
        | second (x::xs, ys) = first (x, ys) @ second (xs, ys)
  in second (xs, ys)
  end;
(* Problem 1-c *)
fun power ([]) = [[]]
  | power (a::y) = helperUnion (power y, helperAttach (a, power y));
I never got to problem 1-d, as these took me a while to get. Any suggestions on cutting these shorter? There was another problem that I didn't get, but I'd like to know how to solve it for future tests.
(Staircase problem) You want to go up a staircase of n (> 0) steps. At one time, you can go up by one step, two steps, or three steps. But, for example, if there is one step left to go, you can go only by one step, not by two or three. How many different ways are there to go up the staircase? Solve this problem with SML. (a) Solve it recursively. (b) Solve it iteratively.
Any help on how to solve this?
Your set functions seem nice. I would not change anything fundamental about them, except perhaps their formatting and naming:
fun member (x, []) = false
| member (x, y::ys) = x = y orelse member (x, ys)
fun dif ([], B) = []
| dif (a::A, B) = if member (a, B) then dif (A, B) else a::dif(A, B)
fun union ([], B) = B
| union (a::A, B) = if member (a, B) then union (A, B) else a::union(A, B)
(* Your cartesian looks nice as it is. Here is how you could do it using map: *)
local
  val concat = List.concat
  val map = List.map
in
  fun cartesian (A, B) = concat (map (fn a => map (fn b => (a, b)) B) A)
end
Your power is also very neat. Since helperAttach inserts an element into each of many lists, it deserves a comment saying so; perhaps insertEach or a similar name would be clearer.
On your last task, since this is a counting problem, you don't need to generate the actual combinations of steps (e.g. as lists of steps), only count them. Using the recursive approach, try and write the base cases down as they are in the problem description.
I.e., make a function steps : int -> int where the numbers of ways to take 0, 1 and 2 steps are pre-calculated. For n steps, n > 2, every combination begins with either 1, 2 or 3 steps, so the count is the sum of the numbers of combinations for taking n-1, n-2 and n-3 steps respectively.
Using the iterative approach, start from the bottom and use parameterised counting variables. (Sorry for the vague hint here.)
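If it helps to see the shape of both solutions, here is a sketch in OCaml (your course uses SML, but the translation is mechanical; steps and steps_iter are just illustrative names):

(* (a) recursive: every way up n steps starts with a 1-, 2- or 3-step move *)
let rec steps n =
  match n with
  | 0 -> 1   (* convention: one way to climb zero steps *)
  | 1 -> 1
  | 2 -> 2   (* 1+1 or 2 *)
  | n -> steps (n - 1) + steps (n - 2) + steps (n - 3)

(* (b) iterative: climb from the bottom, carrying the last three counts *)
let steps_iter n =
  let rec go i a b c = if i = n then a else go (i + 1) b c (a + b + c) in
  go 0 1 1 2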
Suppose I have a binary operator f :: "sT => sT => sT". I want to define f so that it implements a 4x4 multiplication table for the Klein four group, shown here on the Wiki:
http://en.wikipedia.org/wiki/Klein_four-group
Here, all I'm attempting to do is create a table with 16 entries. First, I define four constants like this:
consts
  k_1 :: sT
  k_a :: sT
  k_b :: sT
  k_ab :: sT
Then I define my function to implement the 16 entries in the table:
k_1 * k_1 = k_1
k_1 * k_a = k_a
...
k_ab * k_ab = k_1
I don't know how to do any normal-like programming in Isar, and I've seen on the Isabelle user's list where it was said that (certain) programming-like constructs have been intentionally de-emphasized in the language.
The other day, I was trying to create a simple, contrived function, and after finding the use of if, then, else in a source file, I couldn't find a reference to those commands in isar-ref.pdf.
In looking at the tutorials, I see definition for defining functions in a straightforward way, and other than that, I only see information on recursive and inductive functions, which require datatype, and my situation is more simple than that.
If left to my own devices, I guess I would try and define a datatype for those 4 constants shown above, and then create some conversion functions so that I end up with a binary operator f :: sT => sT => sT. I messed around a little with trying to use fun, but it wasn't turning out to be a simple deal.
UPDATE: I add some material here in response to the comment telling me that Programming and Proving is where I'll find the answers. It seems I might be going astray of the ideal Stackoverflow format.
I had done some basic experimenting, mainly with fun, but also with inductive. I gave up on inductive fairly fast. Here's the type of error I got from simple examples:
consts
k1::sT
inductive k4gI :: "sT => sT => sT" where
"k4gI k1 k1 = k1"
--"OUTPUT ERROR:"
--{*Proofs for inductive predicate(s) "k4gI"
Ill-formed introduction rule ""
((k4gI k1 k1) = k1)
Conclusion of introduction rule must be an inductive predicate
*}
My multiplication table isn't inductive, so I didn't see that inductive was what I should spend my time chasing.
"Pattern matching" seems a key idea here, so I experimented with fun. Here's some really messed up code trying to use fun with only a standard function type:
consts
k1::sT
fun k4gF :: "sT => sT => sT" where
"k4gF k1 k1 = k1"
--"OUTPUT ERROR:"
--"Malformed definition:
Non-constructor pattern not allowed in sequential mode.
((k4gF k1 k1) = k1)"
I got that kind of error, and I had read things like this in Programming and Proving:
"Recursive functions are defined with fun by pattern matching over datatype constructors.
That all gives a novice the impression that fun requires a datatype. As for its big brother function, I don't know about that.
It seems here, all I need is a recursive function with 16 base cases, and that would define my multiplication table.
Is function the answer?
In editing this question, I remembered function from the past, and here's function at work:
consts
k1::sT
function k4gF :: "sT => sT => sT" where
"k4gF k1 k1 = k1"
try
The output of try is telling me it can be proved (update: I think it's actually telling me that only one of the proof steps can be proved):
Trying "solve_direct", "quickcheck", "try0", "sledgehammer", and "nitpick"...
Timestamp: 00:47:27.
solve_direct: (((k1, k1) = (k1, k1)) ⟹ (k1 = k1)) can be solved directly with
HOL.arg_cong: ((?x = ?y) ⟹ ((?f ?x) = (?f ?y))) [name "HOL.arg_cong", kind "lemma"]
HOL.refl: (?t = ?t) [name "HOL.refl"]
MFZ.HOL⇣'eq⇣'is⇣'reflexive: (?r = ?r) [name "MFZ.HOL⇣'eq⇣'is⇣'reflexive", kind "theorem"]
MFZ.HOL_eq_is_reflexive: (?r = ?r) [name "MFZ.HOL_eq_is_reflexive", kind "lemma"]
Product_Type.Pair_inject:
(⟦((?a, ?b) = (?a', ?b')); (⟦(?a = ?a'); (?b = ?b')⟧ ⟹ ?R)⟧ ⟹ ?R)
[name "Product_Type.Pair_inject", kind "lemma"]
I don't know what that means. I only know about function because of trying to prove an inconsistency. I only know it doesn't complain as much. If using function like this is how I define my multiplication table, then I'm happy.
Still, being an argumentative type, I didn't learn about function in a tutorial. I learned about it several months ago in a reference manual, and I still don't know much about how to use it.
I have a function which I prove with auto, but the function is probably no good, fortunately. That adds to the function's mystery. There's information on function in Defining Recursive Functions in Isabelle/HOL, and it compares fun and function.
However, I haven't seen one example of fun or function that doesn't use a recursive datatype, such as nat or 'a list. Maybe I didn't look hard enough.
Sorry for being verbose and this not ending up as a direct question, but there's no tutorial with Isabelle that takes a person directly from A to B.
Below, I don't adhere to an "only answer the question" format, but I am responding to my own question, and so everything I say will be of interest to the original poster.
(2nd update begin)
This should be my last update. To be content with "unsophisticated methods", it helps to be able to make comparisons to see the "low tech" way can be the best way.
I finally quit trying to make my main type work with the new type, and I just made me a Klein four-group out of a datatype like this, where the proof of associativity is at the end:
datatype AT4k = e4kt | a4kt | b4kt | c4kt
fun AOP4k :: "AT4k => AT4k => AT4k" where
"AOP4k e4kt y = y"
| "AOP4k x e4kt = x"
| "AOP4k a4kt a4kt = e4kt"
| "AOP4k a4kt b4kt = c4kt"
| "AOP4k a4kt c4kt = b4kt"
| "AOP4k b4kt a4kt = c4kt"
| "AOP4k b4kt b4kt = e4kt"
| "AOP4k b4kt c4kt = a4kt"
| "AOP4k c4kt a4kt = b4kt"
| "AOP4k c4kt b4kt = a4kt"
| "AOP4k c4kt c4kt = e4kt"
notation
AOP4k ("AOP4k") and
AOP4k (infixl "*" 70)
theorem k4o_assoc2:
"(x * y) * z = x * (y * z)"
by(smt AOP4k.simps(1) AOP4k.simps(10) AOP4k.simps(11) AOP4k.simps(12)
AOP4k.simps(13) AOP4k.simps(2) AOP4k.simps(3) AOP4k.simps(4) AOP4k.simps(5)
AOP4k.simps(6) AOP4k.simps(7) AOP4k.simps(8) AOP4k.simps(9) AT4k.exhaust)
The consequence is that I am now content with my if-then-else multiplication function. Why? Because the if-then-else function is very conducive to simp magic. This pattern matching doesn't work any magic in and of itself, not to mention that I would still have to work out the coercive subtyping part of it.
Here's the if-then-else function for the 4x4 multiplication table:
definition AO4k :: "sT => sT => sT" where
  "AO4k x y =
    (if x = e4k then y else
    (if y = e4k then x else
    (if x = y then e4k else
    (if x = a4k & y = c4k then b4k else
    (if x = b4k & y = c4k then a4k else
    (if x = c4k & y = a4k then b4k else
    (if x = c4k & y = b4k then a4k else
      c4k)))))))"
Because of the one nested if-then-else statement, when I run auto, it produces 64 goals. I made 16 simp rules, one for every value in the multiplication table, so when I run auto, with all the other simp rules, the auto proof takes about 90ms.
Low tech is the way to go sometimes; it's a RISC vs. CISC thing, somewhat.
A small thing like a multiplication table can be important for testing things, but it can't be useful if it's gonna slow my THY down because it's in some big loop that takes forever to finish.
(2nd update end)
(Update begin)
(UPDATE: My question above falls under the category "How do I do basic programming in Isabelle, like with other programming languages?" Here, I go beyond the specific question some, but I try to keep my comments about the challenge to a beginner who is trying to learn Isabelle when the docs are at the intermediate level, at least, in my opinion they are.
Specific to my question, though, is that I have need for a case statement, which is a very basic feature of many, many programming languages.
In looking for a case statement today, I thought I had hit gold after doing one more search in the docs, this time in Isabelle - A Proof Assistant for Higher-Order Logic.
On page 5 it documents a case statement, but on page 18, it clarifies that it's only good for datatype, and I seem to confirm that with an error like this:
definition k4oC :: "kT => kT => kT" (infixl "**" 70) where
"k4oC x y = (case x y of k1 k1 => k1)"
--{*Error in case expression:
Not a datatype constructor: "i130429a.k1"
In clause
((k1 k1) ⇒ k1)*}
This is an example of why a person, whether expert or beginner, needs a tutorial that runs through the basic programming features of Isabelle.
If you say, "There are tutorials that do that." I say, "No, there aren't, not in my opinion".
The tutorials emphasize the important, sophisticated features of Isabelle that separate it from the crowd.
That's commentary, but it's commentary meant to tie into the question "How do I learn Isabelle?", to which my original question above is related.
The way you learn Isabelle, without being a PhD student at Cambridge, TUM, or NICTA, is that you struggle for 6 to 12 months or more. If during that time you don't give up, you can reach a level that will allow you to appreciate the intermediate-level instruction available. Experience may vary.
For me, the 3 books that will take me to the next level of proving, weaning me off of auto and metis, when I find time to go through them, are
Isabelle - A Proof Assistant for Higher-Order Logic
Programming and Proving in Isabelle/HOL
Isabelle/Isar --- a versatile environment for human-readable formal proof documents
If someone says, "You've abused the Stackoverflow answer format by engaging in long-winded commentary and opinion."
I say, "Well, I asked for a good way to do some basic programming in Isabelle, where I was hoping for something more sophisticated than a big if-then-else statement. No one provided anything close to what I asked for. In fact, I am who provided a pattern matching function, and what I needed to do it is not even close to being documented. Pattern matching is a simple concept, but not necessarily in Isabelle, due to the proof requirements for recursive functions. (If there's a simple way to do it to replace my if-then-else function below, or even a case statement way, I'd sure like to know.)
Having said that, I am inclined to take some liberties, and there are, at this time, only 36 views for this page anyway, of which probably, at least 10 come from my browser.
Isabelle/HOL is a powerful language. I'm not complaining. It only sounds like it.)
(Update end)
It can count for a lot just to know that something is true or false, in this case being told that function can work with non-inductive types. However, how I end up using function below is not a result of anything I've seen in any one Isabelle document, and I had need for this former SO question on coercive subtyping:
What is an Isabelle/HOL subtype? What Isar commands produce subtypes?
I end up with two ways that I completed a 2x2 part of my multiplication table. I link here to the theory: as ASCII friendly A_i130429a.thy, jEdit friendly i130429a.thy, the PDF, and folder.
The two ways are:
The clumsy but fast and simp friendly if-then-else way. The definition takes 0ms, and the proof takes 155ms.
The pattern matching way using function. Here I could think aloud in public for a long time about this way of doing things, but I won't. I know I'll use what I've learned here, but it's definitely not an elegant solution for a simple multiplication table function, and it's far from obvious that a person would have to do all that to create a basic function that uses pattern matching. Of course, maybe I don't have to do all that. The definition takes 391ms, and the proof takes 317ms.
As to having to resort to using if-then-else, either Isabelle/HOL is not feature rich when it comes to basic programming statements, or these basic statements aren't documented. The if-then-else statement is not even in the Isar Reference Manual index. I think, "If it's not documented, maybe there's a nice, undocumented case statement like Haskell has". Still, I'd take Isabelle over Haskell any day.
Below, I explain the different sections of A_i130429a.thy. It's sort of trivial, but not completely, since I haven't seen an example to teach me how to do that.
I start with a type and four constants, which remain undefined.
typedecl kT
consts
k1::kT
ka::kT
kb::kT
kab::kT
Of note is that the constants remain undefined. That I'm leaving a lot of things undefined is part of why I have problems finding good examples in docs and sources to use as templates for myself.
I do a test to try to use function intelligently on a non-inductive type, but it doesn't work. With my if-then-else function, after I figured out that I wasn't restricting my function's domain, I then saw that the problem with this function was also with the domain. The function k4f0 wants x to be k1 or ka for every x, which obviously is not true.
function k4f0 :: "kT => kT" where
"k4f0 k1 = k1"
| "k4f0 ka = ka"
apply(auto)
apply(atomize_elim)
--"goal (1 subgoal):
1. (!! (x::sT). ((x = k1) | (x = ka)))"
I give up and define me an ugly function with if-then-else.
definition k4o :: "kT => kT => kT" (infixl "**" 70) where
"k4o x y =
(if x = k1 & y = k1 then k1 else
(if x = k1 & y = ka then ka else
(if x = ka & y = k1 then ka else
(if x = ka & y = ka then k1 else (k1)
))))"
declare k4o_def [simp add]
The hard part becomes trying to prove the associativity of my function k4o. But that's only because I'm not restricting the domain. I put an implication into the statement, and the auto magic kicks in; the fastforce magic is there also, and faster, so I use it.
abbreviation k4g :: "kT set" where
"k4g == {k1, ka}"
theorem
"(x \<in> k4g & y \<in> k4g & z \<in> k4g) --> (x ** y) ** z = x ** (y ** z)"
by(fastforce)(*155ms*)
The magic makes me happy, and I'm then motivated to try to get it done with function and pattern matching. Because of the recent SO answer on coercive subtyping, linked to above, I figured out how to fix the domain with typedef. I don't think it's the perfect solution, but I definitely learned something.
typedef kTD = "{x::kT. x = k1 | x = ka}"
by(auto)
declare [[coercion_enabled]]
declare [[coercion Abs_kTD]]
function k4f :: "kTD => kTD => kT" (infixl "***" 70) where
"k4f k1 k1 = k1"
| "k4f k1 ka = ka"
| "k4f ka k1 = ka"
| "k4f ka ka = k1"
by((auto),(*391ms*)
(atomize_elim),
(metis (lifting, full_types) Abs_kTD_cases mem_Collect_eq),
(metis (lifting, full_types) Rep_kTD_cases Rep_kTD_inverse mem_Collect_eq),
(metis (lifting, full_types) Rep_kTD_cases Rep_kTD_inverse mem_Collect_eq),
(metis (lifting, full_types) Rep_kTD_cases Rep_kTD_inverse mem_Collect_eq),
(metis (lifting, full_types) Rep_kTD_cases Rep_kTD_inverse mem_Collect_eq))
termination
by(metis "termination" wf_measure)
theorem
"(x *** y) *** z = x *** (y *** z)"
by(smt
Abs_kTD_cases
k4f.simps(1)
k4f.simps(2)
k4f.simps(3)
k4f.simps(4)
mem_Collect_eq)(*317ms*)
A more or less convenient syntax for defining a "finite" function is the function update syntax: For a function f, f(x := y) represents the function %z. if z = x then y else f z. If you want to update more than one value, separate them with commas: f(x1 := y1, x2 := y2).
So, for example, a function which is addition on 0 and 1, and undefined elsewhere, could be written as:
undefined (0 := undefined(0 := 0, 1 := 1),
1 := undefined(0 := 1, 1 := 2))
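Since f(x := y) just abbreviates %z. if z = x then y else f z, the whole construction is easy to replay in an ordinary functional language if that helps the intuition (an OCaml sketch, purely illustrative):

(* f(x := y) corresponds to: *)
let update f x y = fun z -> if z = x then y else f z
let undef _ = failwith "undefined"

(* addition on {0, 1}, built row by row as in the example above *)
let row0 = update (update undef 0 0) 1 1
let row1 = update (update undef 0 1) 1 2
let add  = update (update undef 0 row0) 1 row1
(* add 0 1 = 1, add 1 1 = 2; anything else raises *)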
Another possibility for defining a finite function is to generate it from a list of pairs, for example with map_of. With f xs y z = the (map_of xs (y, z)), the above function could be written as
f [((0,0),0), ((0,1),1), ((1,0),1), ((1,1),2)]
(Actually, it is not quite the same function, as it might behave differently outside the defined domain.)
Possible Duplicate:
Functional programming and non-functional programming
I'm afraid Wikipedia did not bring me any further.
Many thanks
PS: This past thread is also very good; however, I am happy I asked this question again, as the new answers were great - thanks
First, learn what a Turing machine is (from Wikipedia):
A Turing machine is a device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
This is about the lambda calculus (from Wikipedia):
In mathematical logic and computer science, the lambda calculus, also written as the λ-calculus, is a formal system for studying computable recursive functions, a la computability theory, and related phenomena such as variable binding and substitution.
The functional programming languages use, as their fundamental model of computation, the lambda calculus, while all the other programming languages use the Turing machine as their fundamental model of computation. (Well, technically, I should say functional programming languages vs. imperative programming languages, as languages in other paradigms use other models. For example, SQL uses the relational model, Prolog uses a logic model, and so on. However, pretty much all the languages people actually think about when discussing programming languages are either functional or imperative, so I'll stick with the easy generality.)
What do I mean by “fundamental model of computation”? Well, all languages can be thought of in two layers: one, some core Turing-complete language, and then layers of either abstractions or syntactic sugar (depending upon whether you like them or not) which are defined in terms of the base Turing-complete language. The core language for imperative languages is then a variant of the classic Turing machine model of computation one might call “the C language”. In this language, memory is an array of bytes that can be read from and written to, and you have one or more CPUs which read memory, perform simple arithmetic, branch on conditions, and so on. That’s what I mean by the fundamental model of computation of these languages is the Turing Machine.
The fundamental model of computation for functional languages is the Lambda Calculus, and this shows up in two different ways. First, one thing that many functional languages do is to write their specifications explicitly in terms of a translation to the lambda calculus to specify the behavior of a program written in the language (this is known as "denotational semantics"). And second, almost all functional programming languages implement their compilers to use an explicit lambda-calculus-like intermediate language- Haskell has Core, Lisp and Scheme have their "desugared" representation (after all macros have been applied), OCaml (Objective Categorical Abstract Machine Language) has its lispish intermediate representation, and so on.
So what is this lambda calculus I've been going on about? Well, the basic idea is that, to do any computation, you only need two things. The first thing you need is function abstraction- the definition of an unnamed, single-argument function. Alonzo Church, who first defined the Lambda calculus, used a rather obscure notation: a function is the Greek letter lambda, followed by the one-character name of the argument to the function, followed by a period, followed by the expression which is the body of the function. So the identity function, which, given any value, simply returns that value, would look like "λx.x". I'm going to use a slightly more human-readable approach- I'm going to replace the λ character with the word "fun", the period with "->", and allow white space and multi-character names. So I might write the identity function as "fun x -> x", or even "fun whatever -> whatever". The change in notation doesn't change the fundamental nature. Note that this is the source of the name "lambda expression" in languages like Haskell and Lisp- expressions that introduce unnamed local functions.
The only other thing you can do in the Lambda Calculus is to call functions. You call a function by applying an argument to it. I'm going to follow the standard convention that application is just the two names in a row- so f x is applying the value x to the function named f. We can replace f with some other expression, including a Lambda expression, if we want. When you apply an argument to an expression, you replace the application with the body of the function, with all the occurrences of the argument name replaced with whatever value was applied. So the expression (fun x -> x x) y becomes y y.
The theoreticians went to great lengths to precisely define what they mean by "replacing all occurrences of the variable with the value applied", and can go on at great length about how precisely this works (throwing around terms like "alpha renaming"), but in the end things work exactly like you expect them to. The expression (fun x -> x x) (x y) becomes (x y) (x y)- there is no confusion between the argument x within the anonymous function and the x in the value being applied. This works even across multiple levels- the expression (fun x -> (fun x -> x x) (x x)) (x y) becomes first (fun x -> x x) ((x y) (x y)) and then ((x y) (x y)) ((x y) (x y)). The x in the innermost function ("(fun x -> x x)") is a different x than the other x's.
It is perfectly valid to think of function application as a string manipulation. If I have a (fun x -> some expression), and I apply some value to it, then the result is just some expression with all the x’s textually replaced with the “some value” (except for those which are shadowed by another argument).
As an aside, I will add parentheses where needed to disambiguate things, and also elide them where not needed. The only difference they make is grouping; they have no other meaning.
So that’s all there is too it to the Lambda calculus. No, really, that’s all- just anonymous function abstraction, and function application. I can see you’re doubtful about this, so let me address some of your concerns.
First, I specified that a function only takes one argument- how do you have a function that takes two, or more, arguments? Easy- you have a function that takes one argument, and returns a function that takes the second argument. For example, function composition could be defined as fun f -> (fun g -> (fun x -> f (g x))) - read that as a function that takes an argument f, and returns a function that takes an argument g and returns a function that takes an argument x and returns f (g x).
So how do we represent integers, using only functions and applications? Easily (if not obviously)- the number one, for instance, is the function fun s -> fun z -> s z - given a "successor" function s and a "zero" z, one is then the successor to zero. Two is fun s -> fun z -> s (s z), the successor to the successor to zero; three is fun s -> fun z -> s (s (s z)), and so on.
To add two numbers, say x and y, is again simple, if subtle. The addition function is just fun x -> fun y -> fun s -> fun z -> x s (y s z). This looks odd, so let me run you through an example to show that it does, in fact, work- let's add the numbers 3 and 2. Now, three is just (fun s -> fun z -> s (s (s z))) and two is just (fun s -> fun z -> s (s z)), so then we get (each step applying one argument to one function, in no particular order):
(fun x -> fun y -> fun s -> fun z -> x s (y s z)) (fun s -> fun z -> s (s (s z))) (fun s -> fun z -> s (s z))
(fun y -> fun s -> fun z -> (fun s -> fun z -> s (s (s z))) s (y s z)) (fun s -> fun z -> s (s z))
(fun y -> fun s -> fun z -> (fun z -> s (s (s z))) (y s z)) (fun s -> fun z -> s (s z))
(fun y -> fun s -> fun z -> s (s (s (y s z)))) (fun s -> fun z -> s (s z))
(fun s -> fun z -> s (s (s ((fun s -> fun z -> s (s z)) s z))))
(fun s -> fun z -> s (s (s ((fun z -> s (s z)) z))))
(fun s -> fun z -> s (s (s (s (s z)))))
And at the end we get the unsurprising answer of the successor to the successor to the successor to the successor to the successor to zero, known more colloquially as five. Addition works by replacing the zero (or where we start counting) of the x value with the y value- to define multiplication, we instead diddle with the concept of "successor":
(fun x -> fun y -> fun s -> fun z -> x (y s) z)
I’ll leave it to you to verify that the above code does
Wikipedia says
Imperative programs tend to emphasize the series of steps taken by a program in carrying out an action, while functional programs tend to emphasize the composition and arrangement of functions, often without specifying explicit steps. A simple example illustrates this with two solutions to the same programming goal (calculating Fibonacci numbers). The imperative example is in C++.
// Fibonacci numbers, imperative style
int fibonacci(int iterations)
{
    int first = 0, second = 1; // seed values
    for (int i = 0; i < iterations; ++i) {
        int sum = first + second;
        first = second;
        second = sum;
    }
    return first;
}
std::cout << fibonacci(10) << "\n";
A functional version (in Haskell) has a different feel to it:
-- Fibonacci numbers, functional style
-- describe an infinite list based on the recurrence relation for Fibonacci numbers
fibRecurrence first second = first : fibRecurrence second (first + second)
-- describe fibonacci list as fibRecurrence with initial values 0 and 1
fibonacci = fibRecurrence 0 1
-- describe action to print the 10th element of the fibonacci list
main = print (fibonacci !! 10)
See this PDF also
(A) Functional programming describes solutions mechanically: you define a machine that constantly produces correct output, e.g. with Caml:
let rec factorial = function
| 0 -> 1
| n -> n * factorial(n - 1);;
(B) Procedural programming describes solutions temporally. You describe a series of steps to transform a given input into the correct output, e.g. with Java:
int factorial(int n) {
    int result = 1;
    while (n > 0) {
        result *= n--;
    }
    return result;
}
A functional programming language wants you to do (A) all the time. To me, the most tangible distinguishing feature of purely functional programming is statelessness: you never declare variables independently of what they are used for.