Access elements of data types - isabelle

Is it possible in Isabelle to access the individual elements of a data type? Let's say I have the following data type:
datatype foo = mat int int int int
and (e.g. in a lemma)
fixes A :: foo
Is it possible to access the individual elements of A? Or, alternatively, to fix the individual elements (fix a b c d :: int) and then define A as mat a b c d?

Alternatively, it is possible to define custom extractor functions when specifying a datatype. In your case, for example,
datatype foo = Mat (mat_a : int) (mat_b : int) (mat_c : int) (mat_d : int)
would work.
Then you can access the first element of a foo value x by mat_a x, the second by mat_b x, and so on.
Example:
value "mat_a (Mat 1 2 3 4)"
"1" :: "int"

On a logical level, you can use the case syntax to deconstruct the datatype (i.e. case A of mat a b c d ⇒ …). You can also define your own projection functions using fun or primrec, e.g.
primrec foo1 where "foo1 (mat a b c d) = a"
In a proof, you can access the values using obtain together with the cases proof method, e.g.
obtain a b c d where "A = mat a b c d" by (cases A) auto
As for your questions about definitions, you can make local definitions in Isar proofs like this:
define A where "A = mat a b c d"
and you can then unfold that definition using the theorem A_def.
If you want to use your definition in the premises or goal already (and have it unfolded in the theorem after proving it), you can use defines:
lemma
  defines "A ≡ mat a b c d"
  shows …
Again, this gives you a fact A_def that you can use to unfold the definition.
You can also use let ?A = "mat a b c d" or pattern matching with is to introduce abbreviations. In contrast to the definitions from before, these exist only at the syntactic level: you type ?A, but after parsing you have mat a b c d, and you will also see mat a b c d in the output. is works like this:
lemma
  shows "P (mat a b c d)" (is "P ?A")
proof -
  term ?A
It also works after "assumes".

Related

Swapping Variables by pattern matching?

Assume you have two integer variables a and b.
How would you swap them only if a > b, using a match expression?
If a <= b, do not swap the ints.
In an imperative language:
if (a > b) {
    int temp = a;
    a = b;
    b = temp;
}
Doing the same in OCaml seems surprisingly hard.
I tried
let swap a b =
  match a, b with
  | a, b when a > b -> b, a
  | a, b when a <= b -> a, b
I am trying to do this because in the following function call, I want to make sure that x is the bigger of the two variables.
One easy way:
let swap a b =
  if a > b then (b, a)
  else (a, b)
But this is not equivalent to the C code: the C code swaps the values of the variables in place, which is how imperative languages work.
In OCaml there are no side effects (unless you use references, e.g. an int ref). This swap function instead returns a tuple whose members are always ordered (the first member is always less than or equal to the second).
Without state, you cannot "swap" the values of the variables since the variables are immutable. Your best bet is to use a tuple and introduce new variables in the scope. Example:
let diff a b =
  let (min, max) = if a <= b then (a, b) else (b, a)
  in max - min
You can of course use the same identifiers and shadow the original variables:
let diff a b =
  let (a, b) = if a <= b then (a, b) else (b, a)
  in b - a
It doesn't really help with readability though.
Just for reference, if you'd like to swap the values in two refs, it would look like the following:
let swap a_ref b_ref =
  let a, b = !a_ref, !b_ref in
  a_ref := b;
  b_ref := a
;;
which has the type val swap : 'a ref -> 'a ref -> unit.

OCaml pattern matching for "square" tuple?

In attempting to learn Ocaml and functional languages in general, I have been looking into pattern matching. I was reading this documentation, and decided to try the following exercise for myself:
Make an expression that evaluates to true when an integer 4-tuple is input such that each element in the 4-tuple is equal.
(4, 4, 4, 4) -> true
(4, 2, 4, 4) -> false
I find that pattern matching on the specific values of the elements is not obvious. This is the code I wrote.
let sqr x = match x with
    (a, a, a, a) -> true
  | (_, _, _, _) -> false ;;
Of course, this code throws the following error:
Error: Variable a is bound several times in this matching
How else can I not only enforce that x is a 4-tuple, but also that its elements are integers that are all equal?
(Also, of course a "square" tuple should not allow non-positive integers, but I'm more concerned with the aforementioned problem as of now).
As you found out, unlike some other languages' pattern-matching systems, you can't do this in OCaml. What you can do is match each element of the tuple separately while using guards to only succeed if some property (like equivalence) holds across them:
let sqr x =
  match x with
  | (a, b, c, d) when a = b && b = c && c = d -> `Equal
  | (a, b, c, d) when (a < b && b < c && c < d)
                   || (a > b && b > c && c > d) -> `Ordered
  | _ -> `Boring
There are many ways to do pattern matching; it is not limited to the match keyword:
let fourtuple_equals (a,b,c,d) = List.for_all ((=) a) [b;c;d]
val fourtuple_equals : 'a * 'a * 'a * 'a -> bool = <fun>
Here the pattern match happens directly on the function's parameter in order to access the elements of the four-tuple.
This example uses a list to keep the code concise, but it is not the most efficient approach.

OCaml Understanding Functions and Partial Applications

I am writing a form of transform in OCaml that takes in a function and also accepts a list to transform. I understand something is wrong with my pattern matching in terms of type-checking, as it will not compile and claims the types do not match, but I am not sure what exactly is wrong with my cases.
I receive an actual declaration error underlining the name of the function when I attempt to compile.
let rec convert (f: 'b -> 'c option) (l: 'b list) : 'c list =
  begin match l with
  | [] -> []
  | h::tl -> if f h = Some h then h :: convert f tl
             else convert f tl
  end
I wrote the following test, which should pass in order to ensure the function works properly.
let test () : bool =
  let f = fun x -> if x > 3 then Some (x + 1) else None in
  convert f [-1; 3; 4] = [5]
;; run_test "Add one" test
I am pretty confident the error is somewhere in my second pattern match.
You should provide the exact error message in the future when asking about a compilation error (as well as the position the compiler complains about).
In h :: convert f tl, convert f tl is 'c list, but h is 'b, so you can't combine them like this. Neither does f h = Some h make sense: f h is 'c option and Some h is 'b option. You probably want to match f h instead:
| h::tl -> match f h with
| Some h1 -> ...
| None -> ...

How to define map fusion in the Pure language?

I'm experimenting with the Pure language based on term rewriting.
I want to define "map fusion" using an equation, like this:
> map f (map g list) = map (f . succ . g) list;
(The succ is there to verify that the rule kicks in.)
However, it doesn't seem to work:
> map id (map id [2,3,4]);
[2,3,4]
The Pure manual says that
expressions are evaluated using the “leftmost-innermost” reduction strategy
So I suppose what's happening is that the innermost map id [2,3,4] expression is reduced first, so my rule never kicks in.
How to make map fusion work, then?
Here's a related experiment. The first rule doesn't kick in:
> a (b x) = "foo";
> b x = "bar";
> a (b 5);
a "bar"
I should have read the manual more closely. What I needed to do is to turn the pattern into a macro using the def keyword. This way it works:
> def map f (map g list) = map (f . succ . g) list;
> map id (map id [2,3,4]);
[3,4,5]

Importance of isomorphic functions

Short Question: What is the importance of isomorphic functions in programming (namely in functional programming)?
Long Question: I'm trying to draw some analogies between functional programming and concepts in Category Theory, based on some of the lingo I hear from time to time. Essentially I'm trying to "unpackage" that lingo into something concrete I can then expand on. I'll then be able to use the lingo with an understanding of just what the heck I'm talking about, which is always nice.
One of these terms I hear all the time is Isomorphism, I gather this is about reasoning about equivalence between functions or function compositions. I was wondering if someone could provide some insights into some common patterns where the property of isomorphism comes in handy (in functional programming), and any by-products gained, such as compiler optimizations from reasoning about isomorphic functions.
I take a little issue with the upvoted answer for isomorphism, as the category theory definition of isomorphism says nothing about objects. To see why, let's review the definition.
Definition
An isomorphism is a pair of morphisms (i.e. functions), f and g, such that:
f . g = id
g . f = id
These morphisms are then called "iso"morphisms. A lot of people don't catch that the "morphism" in isomorphism refers to the function and not the object. However, you would say that the objects they connect are "isomorphic", which is what the other answer is describing.
Notice that the definition of isomorphism does not say what (.), id, or = must be. The only requirement is that, whatever they are, they also satisfy the category laws:
f . id = f
id . f = f
(f . g) . h = f . (g . h)
Composition (i.e. (.)) joins two morphisms into one morphism and id denotes some sort of "identity" transition. This means that if our isomorphisms cancel out to the identity morphism id, then you can think of them as inverses of each other.
For the specific case where the morphisms are functions, then id is defined as the identity function:
id x = x
... and composition is defined as:
(f . g) x = f (g x)
... and two functions are isomorphisms if they cancel out to the identity function id when you compose them.
Morphisms versus objects
However, there are multiple ways two objects could be isomorphic. For example, given the following two types:
data T1 = A | B
data T2 = C | D
There are two isomorphisms between them:
f1 t1 = case t1 of
    A -> C
    B -> D

g1 t2 = case t2 of
    C -> A
    D -> B

(f1 . g1) t2 = case t2 of
    C -> C
    D -> D

(f1 . g1) t2 = t2

f1 . g1 = id :: T2 -> T2

(g1 . f1) t1 = case t1 of
    A -> A
    B -> B

(g1 . f1) t1 = t1

g1 . f1 = id :: T1 -> T1

f2 t1 = case t1 of
    A -> D
    B -> C

g2 t2 = case t2 of
    C -> B
    D -> A

f2 . g2 = id :: T2 -> T2
g2 . f2 = id :: T1 -> T1
So that's why it's better to describe the isomorphism in terms of the specific functions relating the two objects rather than the two objects, since there may not necessarily be a unique pair of functions between two objects that satisfy the isomorphism laws.
Also, note that it is not sufficient for the functions to be invertible. For example, the following function pairs are not isomorphisms:
f1 . g2 :: T2 -> T2
f2 . g1 :: T2 -> T2
Even though no information is lost when you compose f1 . g2, you don't return back to your original state, even if the final state has the same type.
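To make that concrete, here is a quick check in the same equational style as above, using the f1 and g2 defined earlier; it shows that f1 . g2 swaps the two constructors of T2 instead of returning them unchanged:
(f1 . g2) C = f1 (g2 C) = f1 B = D
(f1 . g2) D = f1 (g2 D) = f1 A = C
So f1 . g2 is a bijection on T2, but it is not id :: T2 -> T2.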
Also, isomorphisms don't have to be between concrete data types. Here's an example of two canonical isomorphisms that are not between concrete algebraic data types and instead simply relate functions: curry and uncurry:
curry . uncurry = id :: (a -> b -> c) -> (a -> b -> c)
uncurry . curry = id :: ((a, b) -> c) -> ((a, b) -> c)
Uses for Isomorphisms
Church Encoding
One use of isomorphisms is to Church-encode data types as functions. For example, Bool is isomorphic to forall a . a -> a -> a:
f :: Bool -> (forall a . a -> a -> a)
f True = \a b -> a
f False = \a b -> b
g :: (forall a . a -> a -> a) -> Bool
g b = b True False
Verify that f . g = id and g . f = id.
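A sketch of that verification, in the same equational style (the f . g direction is only a sketch, since it relies on a parametricity/free-theorem argument for the polymorphic argument):
(g . f) True  = g (\a b -> a) = (\a b -> a) True False = True
(g . f) False = g (\a b -> b) = (\a b -> b) True False = False
g . f = id :: Bool -> Bool
For f . g, any k :: forall a . a -> a -> a must behave like either \a b -> a or \a b -> b, so f (g k) = f (k True False) = k.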
The benefit of Church encoding data types is that they sometimes run faster (because Church-encoding is continuation-passing style) and they can be implemented in languages that don't even have language support for algebraic data types at all.
Translating Implementations
Sometimes one tries to compare one library's implementation of some feature to another library's implementation, and if you can prove that they are isomorphic, then you can prove that they are equally powerful. Also, the isomorphisms describe how to translate one library into the other.
For example, there are two approaches that provide the ability to define a monad from a functor's signature. One is the free monad, provided by the free package and the other is operational semantics, provided by the operational package.
If you look at the two core data types, they look different, especially their second constructors:
-- modified from the original to not be a monad transformer
data Program instr a where
    Lift  :: a -> Program instr a
    Bind  :: Program instr b -> (b -> Program instr a) -> Program instr a
    Instr :: instr a -> Program instr a

data Free f r = Pure r | Free (f (Free f r))
... but they are actually isomorphic! That means that both approaches are equally powerful and any code written in one approach can be translated mechanically into the other approach using the isomorphisms.
Isomorphisms that are not functions
Also, isomorphisms are not limited to functions. They are actually defined for any Category and Haskell has lots of categories. This is why it's more useful to think in terms of morphisms rather than data types.
For example, the Lens type (from data-lens) forms a category where you can compose lenses and have an identity lens. So using our above data type, we can define two lenses that are isomorphisms:
lens1 = iso f1 g1 :: Lens T1 T2
lens2 = iso g1 f1 :: Lens T2 T1
lens1 . lens2 = id :: Lens T1 T1
lens2 . lens1 = id :: Lens T2 T2
Note that there are two isomorphisms in play. One is the isomorphism that is used to build each lens (i.e. f1 and g1) (and that's also why that construction function is called iso), and then the lenses themselves are also isomorphisms. Note that in the above formulation, the composition (.) used is not function composition but rather lens composition, and the id is not the identity function, but instead is the identity lens:
id = iso id id
Which means that if we compose our two lenses, the result should be indistinguishable from that identity lens.
An isomorphism u :: a -> b is a function that has an inverse, i.e. another function v :: b -> a such that the relationships
u . v = id
v . u = id
are satisfied. You say that two types are isomorphic if there is an isomorphism between them. This essentially means that you can consider them to be the same type - anything that you can do with one, you can do with the other.
Isomorphism of functions
The two function types
(a,b) -> c
a -> b -> c
are isomorphic, since we can write
u :: ((a,b) -> c) -> a -> b -> c
u f = \x y -> f (x,y)
v :: (a -> b -> c) -> (a,b) -> c
v g = \(x,y) -> g x y
You can check that u . v and v . u are both id. In fact, the functions u and v are better known by the names curry and uncurry.
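For completeness, here is a sketch of that check (up to eta-equivalence), in the same equational style:
(u . v) g = u (\(x,y) -> g x y) = \x y -> (\(x,y) -> g x y) (x, y) = \x y -> g x y = g
(v . u) f = v (\x y -> f (x,y)) = \(x,y) -> (\x y -> f (x,y)) x y = \(x,y) -> f (x,y) = f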
Isomorphism and Newtypes
We exploit isomorphism whenever we use a newtype declaration. For example, the underlying type of the state monad is s -> (a,s) which can be a little confusing to think about. By using a newtype declaration:
newtype State s a = State { runState :: s -> (a,s) }
we generate a new type State s a which is isomorphic to s -> (a,s) and which makes it clear when we use it, we are thinking about functions that have modifiable state. We also get a convenient constructor State and a getter runState for the new type.
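The constructor and the field accessor are exactly the two halves of that isomorphism. A minimal sketch (the names wrap and unwrap are just illustrative):
wrap :: (s -> (a, s)) -> State s a
wrap = State

unwrap :: State s a -> (s -> (a, s))
unwrap = runState

-- unwrap . wrap = id and wrap . unwrap = id,
-- so State s a and s -> (a, s) are isomorphic.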
Monads and Comonads
For a more advanced viewpoint, consider the isomorphism using curry and uncurry that I used above. The Reader r a type has the newtype declaration
newtype Reader r a = Reader { runReader :: r -> a }
In the context of monads, a function f producing a reader therefore has the type signature
f :: a -> Reader r b
which is equivalent to
f :: a -> r -> b
which is one half of the curry/uncurry isomorphism. We can also define the CoReader r a type:
newtype CoReader r a = CoReader { runCoReader :: (a,r) }
which can be made into a comonad. There we have a function cobind, or (=>>), which takes a function that takes a coreader and produces a plain value:
g :: CoReader r a -> b
which is isomorphic to
g :: (a,r) -> b
But we already saw that a -> r -> b and (a,r) -> b are isomorphic, which gives us a nontrivial fact: the reader monad (with monadic bind) and the coreader comonad (with comonadic cobind) are isomorphic as well! In particular, they can both be used for the same purpose - that of providing a global environment that is threaded through every function call.
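Here is a hedged sketch of that correspondence, using the Reader and CoReader newtypes from above (the names toReader and fromReader are just for illustration):
toReader :: (CoReader r a -> b) -> (a -> Reader r b)
toReader g = \a -> Reader (\r -> g (CoReader (a, r)))

fromReader :: (a -> Reader r b) -> (CoReader r a -> b)
fromReader f = \(CoReader (a, r)) -> runReader (f a) r

-- These compose to the identity in both directions; they are essentially
-- curry and uncurry dressed up in the two newtypes.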
Think in terms of datatypes. In Haskell, for example, you can consider two data types to be isomorphic if there exists a pair of functions that convert values back and forth between them without losing information. The following three types are isomorphic to each other:
data Type1 a = Ax | Ay a
data Type2 a = Blah a | Blubb
data Maybe a = Just a | Nothing
You can think of the functions that convert between them as isomorphisms. This fits with the categorical idea of isomorphism: if between Type1 and Type2 there exist two functions f and g with f . g = id and g . f = id, then those two functions are isomorphisms between the two types (objects).
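For instance, here is a sketch of one such pair between Type1 and Maybe (the names to1 and from1 are arbitrary):
to1 :: Type1 a -> Maybe a
to1 Ax     = Nothing
to1 (Ay a) = Just a

from1 :: Maybe a -> Type1 a
from1 Nothing  = Ax
from1 (Just a) = Ay a

-- from1 . to1 = id :: Type1 a -> Type1 a
-- to1 . from1 = id :: Maybe a -> Maybe a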
