I've just begun playing around with the math-classes library and I'd like to prove the following lemma:
Require Import
  MathClasses.interfaces.abstract_algebra
  MathClasses.interfaces.vectorspace
  MathClasses.interfaces.canonical_names.
Lemma Munit_is_its_own_negation `{Module R M} : Munit = - Munit.
I was planning to prove this like so:
Add Munit to the right side using right_identity: Munit = - Munit & Munit
Use left_inverse on the right side: Munit = Munit
Use reflexivity.
However, when I try to apply rewrite <- right_inverse, I get the following error:
Error:
Unable to satisfy the following constraints:
In environment:
R : Type
M : Type
Re : Equiv R
Rplus : Plus R
Rmult : Mult R
Rzero : Zero R
Rone : One R
Rnegate : Negate R
Me : Equiv M
Mop : SgOp M
Munit : MonUnit M
Mnegate : Negate M
sm : ScalarMult R M
H : Module R M
?A : "Type"
?B : "Type"
?H : "Equiv (MonUnit M)"
?op : "?A → ?B → MonUnit M"
?inv : "?A → ?B"
?RightInverse : "RightInverse ?op ?inv Munit"
Why is Coq looking for an Equiv (MonUnit M) rather than just an Equiv M or MonUnit M, which are in the environment? Is it possible to complete this proof? If so, how?
Munit is an instance of the parameterized MonUnit typeclass. That means Munit is essentially a record (with exactly one field -- mon_unit), but I think you'd like your statement to be about the unit element of type M, since it usually doesn't make much sense to negate a record.
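For reference, math-classes declares MonUnit roughly as follows (quoting from memory, so the exact form in MathClasses.interfaces.canonical_names may differ slightly); since Munit has type MonUnit M rather than M, the rewrite goes looking for an Equiv (MonUnit M) instance, which is exactly the constraint you see failing.
(* Sketch of the operational class behind Munit; not a verbatim quote. *)
Class MonUnit A := mon_unit : A.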
I believe it's possible, in principle, to make Coq unpack Munit and do the right thing, but why struggle if we can just restate the lemma:
Lemma mon_unit_is_its_own_negation `{Module R M} :
mon_unit = - mon_unit.
Then everything goes just as you've described:
Proof.
  rewrite <- (right_identity (- mon_unit)).
  now rewrite left_inverse.
Qed.
I am currently attempting to use VST to verify the correctness of a project that involves a global array of doubles. However, when attempting to access the array, the head of the array is given as a data_at statement while the rest of the array is given as a sepcon list of mapsto statements, and there does not appear to be any way to prove field_compatible for elements beyond the head of the array.
Trying to access elements beyond offset_val 0 seems to inevitably involve proving a size_compatible statement. This is where I run into a problem: since the alignment of tdouble is set to 4 and the size is set to 8, there seems to be a possibility that the head of the array is at Ptrofs.modulus - 12, making size_compatible false for the next element in the array. Am I going about this the wrong way?
I made a toy example with the same problem that I've mentioned above.
double dbls[] = {0.0, 1.1};
int main() {
  double sum;
  sum = dbls[0] + dbls[1];
  return 0;
}
I will frame my answer in the form of a Coq development:
Require Import VST.floyd.proofauto.
Require Import VST.progs.foo.
#[export] Instance CompSpecs : compspecs. make_compspecs prog. Defined.
Definition Vprog : varspecs. mk_varspecs prog. Defined.
Definition main_spec :=
DECLARE _main
WITH gv : globals
PRE [] main_pre prog tt gv
POST [ tint ] main_post prog gv.
Definition Gprog : funspecs := [ ].
Lemma body_main: semax_body Vprog Gprog f_main main_spec.
Proof.
start_function.
(* Remark 1: it seems to be a bug in VST 2.11.1 (and earlier versions)
that the array is not packaged up into
(data_at Ews (tarray tdouble 2) ...)
the way it ought to be. This seems to work better for integer
arrays, et cetera.*)
(* Remark 2: you are right to be concerned about alignment, but
VST addresses that issue correctly. Any extern global variable
in a C program, such as your [dbls] array, is aligned at the
biggest possible alignment requirement. VST expresses this
with the "headptr" predicate, and for any identifier id,
(gv id) is a headptr. So therefore, *)
assert_PROP (headptr (gv _dbls)) by entailer!.
(* and you can see above the line, H: headptr (gv _dbls). *)
Print headptr.
(* This shows that (gv _dbls) must be at offset zero within
some block, which guarantees alignment at any type.
One useful consequence is proved by the lemma
headptr_field_compatible: *)
Check headptr_field_compatible.
(* And now, let's apply that lemma: *)
pose proof headptr_field_compatible (tarray tdouble 2) nil _
H (eq_refl _) Logic.I ltac:(simpl; rep_lia).
(* So we see that as long as the pesky 'align_compatible_rec' is proved,
the pointer (gv _dbls) should be 'field_compatible' with the array
type that you want. And it's straightforward though tedious to prove
the 'align_compatible_rec' premise, as follows: *)
spec H0.
apply align_compatible_rec_Tarray; intros.
Search align_compatible_rec.
eapply align_compatible_rec_by_value; [ reflexivity | ].
apply Z.divide_add_r.
apply Z.divide_0_r.
apply Z.divide_mul_l.
apply Z.mod_divide; compute; intros; congruence.
(* Normally, VST users shouldn't have to do this 'by hand'.
We should fix the bug (failure to nicely package the precondition).
But in the interim, perhaps this gives what you need for a workaround.*)
I'm using pl in racket: https://pl.barzilay.org/
The download can be found here: http://pl.barzilay.org/pl.plt
(: f1 : -> (Pairof Symbol String))
(define (f1)
(cons 'wwww "aaa"))
Error:
Type Checker: Polymorphic function `cons' could not be applied to arguments:
Argument 1:
Expected: a
Given: 'wwww
Argument 2:
Expected: (Listof a)
Given: String
Result type: (Listof a)
Expected result: (Pairof Symbol String)
in: (cons (quote wwww) "aaa")
What did I do wrong, and how can I fix it?
The #lang pl language that I use in my class is a variant of Typed
Racket. One of the changes it has is that cons has a more restricted
type (as you've seen) which allows it to only construct proper lists.
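For example, a small sketch of what the restricted cons does accept (type names as in Typed Racket; the pl variant may differ slightly): the second argument must already be a list of the first argument's element type.
;; With the list-only cons, consing a Symbol onto a (Listof Symbol) is fine;
;; an improper pair like (cons 'wwww "aaa") is ruled out by its type.
(: f2 : -> (Listof Symbol))
(define (f2)
  (cons 'wwww (list 'aaa 'bbb)))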
(As a sidenote, the reason there is no formal documentation is that this
language is intended to be used in the class, and as such it is subject
to random pedagogic needs rather than being something to be used for
random Racket code... So my class notes are the main place that
"documents" the language.)
This time I'm proving a function that calls another one. vars.c:
int pure0()
{
  return 0;
}

int get0(int* arr)
{
  int z = pure0();
  return 0;
}
The start of my proof, in verif_vars.v:
Require Import floyd.proofauto.
Require Import vars.
Local Open Scope logic.
Local Open Scope Z.
Definition get0_spec :=
  DECLARE _get0
  WITH sh : share, arr : Z -> val, varr : val
  PRE [_arr OF (tptr tint)]
    PROP ()
    LOCAL (`(eq varr) (eval_id _arr);
           `isptr (eval_id _arr))
    SEP (`(array_at tint sh arr 0 100) (eval_id _arr))
  POST [tint] `(array_at tint sh arr 0 100 varr).

Definition pure0_spec :=
  DECLARE _pure0
  WITH sh : share
  PRE []
    PROP ()
    LOCAL ()
    SEP ()
  POST [tint] local (`(eq (Vint (Int.repr 0))) retval).
Definition Vprog : varspecs := nil.
Definition Gprog : funspecs := get0_spec :: pure0_spec ::nil.
Lemma body_pure0: semax_body Vprog Gprog f_pure0 pure0_spec.
Proof.
start_function.
forward.
Qed.
Lemma body_get0: semax_body Vprog Gprog f_get0 get0_spec.
Proof.
start_function.
name arrarg _arr.
forward_call (sh).
entailer!.
Which induces the goal:
2 subgoals, subgoal 1 (ID 566)
Espec : OracleKind
sh : share
arr : Z -> val
varr : val
Delta := abbreviate : tycontext
POSTCONDITION := abbreviate : ret_assert
MORE_COMMANDS := abbreviate : statement
Struct_env := abbreviate : type_id_env.type_id_env
arrarg : name _arr
============================
Forall (closed_wrt_vars (eq _z')) [`(array_at tint sh arr 0 100 varr)]
subgoal 2 (ID 567) is:
DO_THE_after_call_TACTIC_NOW
I suppose it states that the function call does not alter the contents of arr, which is quite obvious to me.
What can I do with this goal? Which tactic applies here, and what exactly does the statement mean? Should I enrich the pure0 spec to somehow point out that it does not modify anything?
FIRST: When writing VST/Verifiable-C questions, please indicate which version of VST you are using. It appears you are using 1.4.
SECOND: I am not sure this answers all your questions, but,
"closed_wrt_vars S P" says that the lifted assertion P is closed with respect to all the variables in the set S. That is, S is a set of C-language identifiers that may stand for nonaddressable local variables ("temps", not "vars"). P is an assertion of the form "environ->mpred", and "closed" means that if you change the "environ" to have different values for any of the variables in set S, then the truth of P will not change.
"Forall" is Coq's standard library predicate to apply a predicate to a list. So,
Forall (closed_wrt_vars (eq _z')) [`(array_at tint sh arr 0 100 varr)]
means, let the set S be the singleton set containing just the variable _z'.
We assert here that all the predicates in the list are closed w.r.t. S.
There's exactly one predicate in the list, and it's "trivially lifted",
that is, for any predicate (P: mpred), the lifted predicate
`(P)
is equivalent to (fun rho:environ => P). Trivially, then, `P doesn't
care what you do to rho, including changing the value of _z'.
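Rendered as Coq, the closed_wrt_vars part of that description reads roughly as follows (a paraphrase of the prose above, from memory, not necessarily VST's verbatim definition; in VST the codomain is more general than mpred):
(* F is closed w.r.t. S: changing the values of any temps in S
   cannot change the meaning of the lifted assertion F. *)
Definition closed_wrt_vars (S : ident -> Prop) (F : environ -> mpred) : Prop :=
  forall rho te',
    (forall i, S i \/ Map.get (te_of rho) i = Map.get te' i) ->
    F rho = F (mkEnviron (ge_of rho) (ve_of rho) te').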
The "auto with closed" tactic (or, just to be sure, "auto 50 with closed")
should take care of this, and you indicate that it does.
So I assume that the rest of your question was, "what's going on here?",
and I hope I answered it.
Solution, used in vst/progs/verif_reverse.v:
auto with closed.
Unfortunately, it answers only half of the questions.
By the way (unrelated to your question), the precondition
`isptr (eval_id _arr) for get0 is probably unnecessary.
It is implied already by `(array_at tint sh arr 0 100) (eval_id _arr).
Furthermore, suppose you did want the `isptr (eval_id _arr) in your precondition; you might consider writing it as,
PROP (isptr varr)
LOCAL (`(eq varr) (eval_id _arr))
SEP (`(array_at tint sh arr 0 100 varr))
which is (in some ways) simpler and more "canonical".
I've only read the standard tutorial and fumbled around a bit, so I may be missing something simple.
If this isn't possible in Idris, please explain why. Furthermore, if it can be done in another language, please provide a code sample and explain what's different about that language's type system that makes it possible.
Here's my approach. Problems first arise in the third section.
Create an empty list of a known type
v : List Nat
v = []
This compiles and manifests in the REPL as [] : List Nat. Excellent.
Generalize to any provided type
emptyList : (t : Type) -> List t
emptyList t = []
v' : List Nat
v' = emptyList Nat
Unsurprisingly, this works and v' == v.
Constrain type to instances of Ord class
emptyListOfOrds : Ord t => (t : Type) -> List t
emptyListOfOrds t = []
v'' : List Nat
v'' = emptyListOfOrds Nat -- !!! typecheck failure
The last line fails with this error:
When elaborating right hand side of v'':
Can't resolve type class Ord t
Nat is an instance of Ord, so what's the problem? I tried replacing the Nats in v'' with Bool (not an instance of Ord), but there was no change in the error.
Another angle...
Does making Ord t an explicit parameter satisfy the type checker? Apparently not, but even if it did, requiring the caller to pass redundant information isn't ideal.
emptyListOfOrds' : Ord t -> (t : Type) -> List t
emptyListOfOrds' a b = []
v''' : List Nat
v''' = emptyListOfOrds' (Ord Nat) Nat -- !!! typecheck failure
The error is more elaborate this time:
When elaborating right hand side of v''':
When elaborating an application of function stackoverflow.emptyListOfOrds':
Can't unify
Type
with
Ord t
Specifically:
Can't unify
Type
with
Ord t
I'm probably missing some key insights about how values are checked against type declarations.
As other answers have explained, this is about how and where the variable t is bound. That is, when you write:
emptyListOfOrds : Ord t => (t : Type) -> List t
The elaborator will see that 't' is unbound at the point it is used in Ord t and so bind it implicitly:
emptyListOfOrds : {t : Type} -> Ord t => (t : Type) -> List t
So what you'd really like to say is something a bit like:
emptyListOfOrds : (t : Type) -> Ord t => List t
This would bind t before the type class constraint, so it's in scope when Ord t appears. Unfortunately, this syntax isn't supported; I see no reason why it shouldn't be, but currently it isn't.
You can still implement what you want, but it's ugly, I'm afraid:
Since classes are first class, you can give them as ordinary arguments:
emptyListOfOrds : (t : Type) -> Ord t -> List t
Then you can use the special syntax %instance to search for the default instance when you call emptyListOfOrds:
v'' = emptyListOfOrds Nat %instance
Of course, you don't really want to do this at every call site, so you can use a default implicit argument to invoke the search procedure for you:
emptyListOfOrds : (t : Type) -> {default %instance x : Ord t} -> List t
v'' = emptyListOfOrds Nat
The default val x : T syntax will fill in the implicit argument x with the default value val if no other value is explicitly given. Giving %instance as the default then is pretty much identical to what happens with class constraints, and actually we could probably change the implementation of the Foo x => syntax to do exactly this... I think the only reason I didn't is that default arguments didn't exist yet when I implemented type classes at first.
You could write
emptyListOfOrds : Ord t => List t
emptyListOfOrds = []
v'' : List Nat
v'' = emptyListOfOrds
Or perhaps if you prefer
v'' = emptyListOfOrds {t = Nat}
If you ask for the type of emptyListOfOrds the way you had written, you get
Ord t => (t2 : Type) -> List t2
Turning on :set showimplicits in the REPL and then asking again gives
{t : Type} -> Prelude.Classes.Ord t => (t2 : Type) -> Prelude.List.List t2
It seems specifying an Ord t constraint introduces an implicit param t, and then your explicit param t gets assigned a new name. You can always explicitly supply a value for that implicit param, e.g. emptyListOfOrds {t = Nat} Nat. As for whether this is the "right" behavior or a limitation for some reason, perhaps you could open an issue about it on GitHub? Perhaps there's some conflict between explicit type params and typeclass constraints? Normally typeclasses are for when you have things implicitly resolved... though I think I remember there being syntax for obtaining an explicit reference to a typeclass instance.
Not an answer, just some thoughts.
The problem here is that (t : Type) introduces a new scope that extends to the right, but Ord t is outside of this scope:
*> :t emptyListOfOrds
emptyListOfOrds : Ord t => (t2 : Type) -> List t2
You can add the class constraint after introducing the type variable:
emptyListOfOrds : (t : Type) -> Ord t -> List t
emptyListOfOrds t o = []
But now you need to specify the class instance explicitly:
instance [natord] Ord Nat where
  compare x y = compare x y
v'' : List Nat
v'' = emptyListOfOrds Nat #{natord}
Maybe it is somehow possible to make the Ord t argument implicit.
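It is; the default-implicit trick from the earlier answer does exactly that. Repeating the idea as a sketch:
-- Let instance search fill in the Ord t argument unless one is given explicitly.
emptyListOfOrds : (t : Type) -> {default %instance x : Ord t} -> List t
emptyListOfOrds t = []

v'' : List Nat
v'' = emptyListOfOrds Nat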
Does "Value Restriction" practically mean that there is no higher-order functional programming?
My problem is that each time I try to do a bit of higher-order programming, I get caught by a value restriction error. Example:
let simple (s: string) = fun rq -> 1
let oops = simple ""

type 'a SimpleType = F of (int -> 'a -> 'a)
let get a = F(fun req -> id)
let oops2 = get ""
and I would like to know whether it is a problem of a particular implementation of the value restriction, or a general problem that has no solution in a mutable, type-inferred language that doesn't include mutation in the type system.
Does "Value Restriction" mean that there is no higher-order functional programming?
Absolutely not! The value restriction barely interferes with higher-order functional programming at all. What it does do is restrict some applications of polymorphic functions—not higher-order functions—at top level.
Let's look at your example.
Your problem is that oops and oops2 both want to be polymorphic values: oops wants type forall 'a . 'a -> int, and oops2 wants type forall 'a . 'a SimpleType. In each case, however, the right-hand side is not a so-called "syntactic value"; it is a function application. (A function application is not allowed to return a polymorphic value because, if it were, you could construct a hacky function using mutable references and lists that would subvert the type system; that is, you could write a terminating function of type forall 'a 'b . 'a -> 'b.)
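For concreteness, here is the classic counterexample, written in F# as a sketch of my own; it is rejected by the compiler, which is exactly the point.
// If the application (ref None) were generalized to forall 'a . 'a option ref,
// the same cell could be written at one type and read back at another:
let r = ref None            // the value restriction keeps r monomorphic
r := Some "hi"              // this pins r to string option ref ...
let n : int = (!r).Value    // ... so using it at int here is a type error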
Luckily in almost all practical cases, the polymorphic value in question is a function, and you can define it by eta-expanding:
let oops x = simple "" x
This idiom looks like it has some run-time cost, but depending on the inliner and optimizer, that can be got rid of by the compiler—it's just the poor typechecker that is having trouble.
The oops2 example is more troublesome because you have to pack and unpack the value constructor:
let oops2 = F(fun x -> let F f = get "" in f x)
This is quite a bit more tedious, but the anonymous function fun x -> ... is a syntactic value, and F is a datatype constructor, and a constructor applied to a syntactic value is also a syntactic value, and Bob's your uncle. The packing and unpacking of F is all going to be compiled into the identity function, so oops2 is going to compile into exactly the same machine code as oops.
Things are even nastier when you want a run-time computation to return a polymorphic value like None or []. As hinted at by Nathan Sanders, you can run afoul of the value restriction with an expression as simple as rev []:
Standard ML of New Jersey v110.67 [built: Sun Oct 19 17:18:14 2008]
- val l = rev [];
stdIn:1.5-1.15 Warning: type vars not generalized because of
value restriction are instantiated to dummy types (X1,X2,...)
val l = [] : ?.X1 list
-
Nothing higher-order there! And yet the value restriction applies.
In practice the value restriction presents no barrier to the definition and use of higher-order functions; you just eta-expand.
I didn't know the details of the value restriction, so I searched and found this article. Here is the relevant part:
Obviously, we aren't going to write the expression rev [] in a program, so it doesn't particularly matter that it isn't polymorphic. But what if we create a function using a function call? With curried functions, we do this all the time:
- val revlists = map rev;
Here revlists should be polymorphic, but the value restriction messes us up:
- val revlists = map rev;
stdIn:32.1-32.23 Warning: type vars not generalized because of
value restriction are instantiated to dummy types (X1,X2,...)
val revlists = fn : ?.X1 list list -> ?.X1 list list
Fortunately, there is a simple trick that we can use to make revlists polymorphic. We can replace the definition of revlists with
- val revlists = (fn xs => map rev xs);
val revlists = fn : 'a list list -> 'a list list
and now everything works just fine, since (fn xs => map rev xs) is a syntactic value.
(Equivalently, we could have used the more common fun syntax:
- fun revlists xs = map rev xs;
val revlists = fn : 'a list list -> 'a list list
with the same result.) In the literature, the trick of replacing a function-valued expression e with (fn x => e x) is known as eta expansion. It has been found empirically that eta expansion usually suffices for dealing with the value restriction.
To summarise, it doesn't look like higher-order programming is restricted so much as point-free programming. This might explain some of the trouble I have when translating Haskell code to F#.
Edit: Specifically, here's how to fix your first example:
let simple (s: string) = fun rq -> 1
let oops = (fun x -> simple "" x) (* eta-expand oops *)

type 'a SimpleType = F of (int -> 'a -> 'a)
let get a = F(fun req -> id)
let oops2 = get ""
I haven't figured out the second one yet because the type constructor is getting in the way.
Here is the answer to this question in the context of F#.
To summarize, in F# passing a type argument to a generic (that is, polymorphic) function is a run-time operation, so it is actually type-safe to generalize (as in, you will not crash at runtime). The behaviour of a value generalized this way can be surprising, though.
For this particular example in F#, one can recover generalization with a type annotation and an explicit type parameter:
type 'a SimpleType = F of (int -> 'a -> 'a)
let get a = F(fun req -> id)
let oops2<'T> : 'T SimpleType = get ""
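To illustrate the surprising part: if I recall correctly, a value with an explicit type parameter behaves like a type function, so its right-hand side is re-evaluated at every use rather than once at definition. A hypothetical sketch (noisy is my own name, not from the question):
// The body below runs once per use of noisy, not once at definition time.
let noisy<'T> : 'T SimpleType =
    printfn "building a SimpleType"
    get ""

let a = noisy<int>       // prints "building a SimpleType"
let b = noisy<string>    // prints it again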