I am working on a theory where I use extensional functions defined in the FuncSet theory quite heavily. I need to work with function-valued functions where both the function and its values are extensional. It is quite annoying that some of my lemmas fail because an undefined function does not map everything to undefined. So the goal
undefined x = undefined
is not provable. I can work around this using restrictions, but it would be much more elegant without those. Is it safe to add a new axiom:
axiomatization where
undefined_at [simp]: "undefined x = undefined"
? I am concerned about this because
1) I'm not sure if I should fiddle around with the logic like this.
2) After I add this axiom, for goals like "undefined ∈ A", nitpick produces the error: Limit reached: too many nested axioms (256).
3) The seemingly equally innocent axiom
axiomatization where
at_undefined [simp]: "f undefined = undefined"
produces weird goals like "P ⟹ undefined".
The constant undefined does not really model the mathematical notion of undefined. Rather, it denotes not being specified, as I have explained in a thread on the Isabelle mailing list.
Back in 2008, undefined actually was specified with the axiom undefined x = undefined, i.e., the function undefined maps everything to undefined. Soon, people realised that this was not what undefined should represent, because it restricted the function undefined to a constant function, which is not an arbitrary function at all. Adding this axiom does not make HOL unsound, but it severely restricts the generality of what is proven, because undefined is used a lot by Isabelle's packages.
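To see how restrictive this is, here is a small sketch (assuming the undefined_at axiomatization from the question is in effect; fun_eq_iff is function extensionality from Main): the axiom forces undefined at every function type to collapse to a constant function.
(* under the assumed axiom undefined_at [simp]: "undefined x = undefined" *)
lemma "undefined = (λx. undefined)"
  by (simp add: fun_eq_iff)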
The other axiom, at_undefined, however leads to an inconsistency. As stated, it means that every function f must be the identity on the unspecified value undefined. Consider the type bool of Booleans. undefined must be either True or False. So if you take negation for f, the axiom requires that ~ True = True or ~ False = False. Obviously, this contradicts the specification of op ~, so the axiom is inconsistent.
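For what it's worth, this argument can be replayed formally. A sketch, assuming the at_undefined axiomatization from the question is in scope (not something you should actually add):
(* under the assumed axiom at_undefined: "f undefined = undefined" *)
lemma "False"
proof (cases "undefined :: bool")
  case True
  then show False using at_undefined[of Not] by simp
next
  case False
  then show False using at_undefined[of Not] by simp
qed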
I'm trying to define a function that takes a set and a relation and returns a bool telling if the relation is reflexive on the set. I tried to define it like this:
definition refl::"'a set⇒('a×'a) set⇒bool" where
"refl A R = (∀x. x∈A⟹(x,x)∈R)"
but Isabelle gives me the following error:
Type unification failed: Clash of types "prop" and "bool"
Type error in application: incompatible operand type
Operator: (=) (refl A R) :: bool ⇒ bool
Operand: ∀x. x ∈ A ⟹ (x, x) ∈ R :: prop
I can't seem to find any function to force a "prop" into a "bool". I also tried changing the definition to set the RHS = True, but I get the same error.
What is the correct way to define my function?
You can't go from prop to bool. But you don't have to: just use the object-level connectives (⟶ and ∀) instead of the meta-logical ones (⟹ and ⋀). They are logically equivalent, so this is not a problem.
The meta-logical connectives should (and usually can) only be used on the ‘outermost level’ of a proposition.
Note however that when you can use the meta-logical ones, it is usually more convenient to do so, because the object-level ones are opaque to Isabelle and the Isar proof language (i.e. they are functions just like any other function), whereas Isar 'knows' what ⟹ and ⋀ mean. For instance, if you have a fact stated with ⟹ and ⋀, you can immediately instantiate variables and discharge assumptions in it using the of/OF attributes.
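For example, with a fact of this shape (the name mem_unionI is made up for illustration):
lemma mem_unionI: "x ∈ A ⟹ x ∈ A ∪ B"
  by simp
mem_unionI[of a] instantiates x to a, and given a fact h: "a ∈ A", mem_unionI[OF h] discharges the assumption and yields "a ∈ A ∪ B".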
You need to write it such that the value is not prop in the first place; there is no conversion. In this case, you used the prop-level implication ⟹ between ∀x. x ∈ A and (x, x) ∈ R. Use the object-level implication ⟶ (ASCII: -->) instead, which is an implication on bools.
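Concretely, the definition from the question becomes (renamed to is_refl here, since Main already provides refl):
definition is_refl :: "'a set ⇒ ('a × 'a) set ⇒ bool" where
  "is_refl A R = (∀x. x ∈ A ⟶ (x, x) ∈ R)"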
This is what I want to happen:
[m | m<-[1..1000], k<-[3,5], m `mod` k == 0]
When I put that code in the console I get the result I want, but when I try to turn it into a generalized function, Haskell just spits out tons of errors and I cannot figure out how to make it work.
This is what I have:
multiplesOfKLessThanN :: Num a => [a] -> a -> [a]
multiplesOfKLessThanN ks n = [m | m<-[1..n], k<-ks, m `mod` k == 0]
problem1 = putStrLn(show(multiplesOfKLessThanN([3,5] 1000)))
main = problem1
One such error I am getting:
Couldn't match expected type 'Integer -> [a0]' with actual type '[Integer]'
I also get other errors, but inconsistently. This is something I have noticed with Haskell: it likes to change error messages even when the code hasn't changed at all.
Your use of multiplesOfKLessThanN is not correct.
multiplesOfKLessThanN([3,5] 1000)
is not interpreted by Haskell as
apply multiplesOfKLessThanN to [3,5] and 1000,
but instead as
apply [3,5] to 1000, then apply multiplesOfKLessThanN to the result.
I think your misconception is in how function application occurs. In many languages function application requires parentheses, e.g. f(x). In Haskell, parentheses only ever mean "do this operation first", and function application is achieved by putting things next to each other. So f(x) works in Haskell because it is the same as f x, but f(x y) is the same as f(x(y)) and tells Haskell to evaluate x y first and then give the result to f.
With your code, Haskell can't apply [3,5] as a function, which is what the error is telling you: it expected a function (in fact, a function of a specific type).
The proper way to write this would be
multiplesOfKLessThanN [3,5] 1000
This should handle the main error you are getting.
After that, there is still a type error: you are trying to use a general Num type a with mod. If you look at the type of mod, it expects an Integral constraint, so changing your constraint from Num to Integral should resolve the issue.
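Putting both fixes together, a working version looks like this (print is just putStrLn . show):
multiplesOfKLessThanN :: Integral a => [a] -> a -> [a]
multiplesOfKLessThanN ks n = [m | m <- [1..n], k <- ks, m `mod` k == 0]

problem1 :: IO ()
problem1 = print (multiplesOfKLessThanN [3, 5] 1000)

main :: IO ()
main = problem1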
I am puzzled by the following results of typeof in the Julia 1.0.0 REPL:
# This makes sense.
julia> typeof(10)
Int64
# This surprised me.
julia> typeof(function)
ERROR: syntax: unexpected ")"
# No answer at all for return example and no error either.
julia> typeof(return)
# In the next two examples the REPL returns the input code.
julia> typeof(in)
typeof(in)
julia> typeof(typeof)
typeof(typeof)
# The "for" word returns an error like the "function" word.
julia> typeof(for)
ERROR: syntax: unexpected ")"
The Julia 1.0.0 documentation says for typeof
"Get the concrete type of x."
The typeof(function) example is the one that really surprised me. I expected a function to be a first-class object in Julia and have a type. I guess I need to understand types in Julia.
Any suggestions?
Edit
Per some comment questions below, here is an example based on a small function:
julia> function test() return "test"; end
test (generic function with 1 method)
julia> test()
"test"
julia> typeof(test)
typeof(test)
Based on this example, I would have expected typeof(test) to return generic function, not typeof(test).
To be clear, I am not a hardcore user of the Julia internals. What follows is an answer designed to be (hopefully) an intuitive explanation of what functions are in Julia for the non-hardcore user. I do think this (very good) question could also benefit from a more technical answer provided by one of the more core developers of the language. Also, this answer is longer than I'd like, but I've used multiple examples to try and make things as intuitive as possible.
As has been pointed out in the comments, function itself is a reserved keyword, and is not an actual function itself per se, and so is orthogonal to the actual question. This answer is intended to address your edit to the question.
Since Julia v0.6+, Function is an abstract supertype, much in the same way that Number is an abstract supertype. All functions, e.g. mean, user-defined functions, and anonymous functions, are subtypes of Function, in the same way that Float64 and Int are subtypes of Number.
This structure is deliberate and has several advantages.
Firstly, for reasons I don't fully understand, structuring functions in this way was the key to allowing anonymous functions in Julia to run just as fast as in-built functions from Base. See here and here as starting points if you want to learn more about this.
Secondly, because each function is its own subtype, you can now dispatch on specific functions. For example:
f1(f::T, x) where {T<:typeof(mean)} = f(x)
and:
f1(f::T, x) where {T<:typeof(sum)} = f(x) + 1
are different dispatch methods for the function f1.
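For example, at the REPL (in Julia 1.0, mean lives in the Statistics standard library):
julia> using Statistics

julia> f1(f::T, x) where {T<:typeof(mean)} = f(x)
f1 (generic function with 1 method)

julia> f1(f::T, x) where {T<:typeof(sum)} = f(x) + 1
f1 (generic function with 2 methods)

julia> f1(mean, [1.0, 2.0, 3.0])
2.0

julia> f1(sum, [1.0, 2.0, 3.0])
7.0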
So, given all this, why does, e.g., typeof(sum) return typeof(sum), especially given that typeof(Float64) returns DataType? The issue here is that, roughly speaking, from a syntactical perspective, sum needs to serve two purposes simultaneously. It needs to be a value, like e.g. 1.0, albeit one that is used to call the sum function on some input. But it also needs to be a type name, like Float64.
Obviously, it can't do both at the same time. So sum on its own behaves like a value. You can write f = sum ; f(randn(5)) to see how it behaves like a value. But we also need some way of representing the type of sum that will work not just for sum, but for any user-defined function, and any anonymous function. The developers decided to go with the (arguably) simplest option and have the type of sum print literally as typeof(sum), hence the behaviour you observe. Similarly if I write f1(x) = x ; typeof(f1), that will also return typeof(f1).
Anonymous functions are a bit more tricky, since they are not named as such. What should we do for typeof(x -> x^2)? What actually happens is that when you build an anonymous function, it is stored as a temporary global variable in the module Main, and given a number that serves as its type for lookup purposes. So if you write f = (x -> x^2), you'll get something back like #3 (generic function with 1 method), and typeof(f) will return something like getfield(Main, Symbol("##3#4")), where you can see that Symbol("##3#4") is the temporary type of this anonymous function stored in Main. A side effect of this is that if you write code that keeps generating the same anonymous function over and over, you will eventually overflow memory, since each one is actually stored as a separate global variable with its own type. However, this does not prevent you from doing something like for n = 1:largenumber ; findall(y -> y > 1.0, x) ; end inside a function, since in that case the anonymous function is only compiled once, at compile time.
Relating all of this back to the Function supertype, you'll note that typeof(sum) <: Function returns true, showing that the type of sum, aka typeof(sum) is indeed a subtype of Function. And note also that typeof(typeof(sum)) returns DataType, in much the same way that typeof(typeof(1.0)) returns DataType, which shows how sum actually behaves like a value.
Now, given everything I've said, all the examples in your question make sense. typeof(function) and typeof(for) return errors as they should, since function and for are reserved syntax. typeof(typeof) and typeof(in) correctly return (respectively) typeof(typeof) and typeof(in), since typeof and in are both functions. Note of course that typeof(typeof(typeof)) returns DataType.
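You can check these claims directly at the REPL:
julia> typeof(sum) <: Function
true

julia> typeof(typeof(sum))
DataType

julia> typeof(typeof(typeof))
DataType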
When I try to write
fun foo :: "nat ⇒ nat"
where "foo = Suc"
Isabelle complains that "Function has no arguments". Why is this? What's wrong with a fun having no arguments? I know that I can change fun to abbreviation or definition and all is fine. But it seems a shame to spoil the uniformity of my .thy file, in which every other definition is declared with fun.
Alex Krauss, the author of the present generation of fun and function in Isabelle/HOL, had particular opinions about that, and probably also good formal reasons to say that a "function" really needs to have arguments. In SML you actually have a similar situation: "constants" without arguments are defined via val, not fun.
In the rare situations where zero-argument functions are needed in Isabelle/HOL, it is sufficiently easy to use definition [simp]: "c = t" to get mostly the same result, apart from the names of the key theorems produced internally: c_def versus c.simps.
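For the foo from the question, that would be:
definition foo :: "nat ⇒ nat" where
  [simp]: "foo = Suc"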
I think the main inconvenience and occasional pitfall of function in this respect is its exposure of the auxiliary c_def, which is not meant to be used in applications: it unfolds the internal construction behind the function specification, not its main characterizing equation.
Since no other answer seems to be on its way, let me repeat and extend my previous comment. In Isabelle/HOL there are three ways of defining functions:
definition for non-recursive functions (which could just be seen as constants that serve as abbreviations for longer statements).
primrec for primitive recursive functions (in the sense that in every recursive call there is a fixed argument where a datatype constructor is removed).
fun for general recursive functions.
Both primrec and fun expect at least one argument. For the former it is automatically checked that one of its arguments corresponds to the syntactic pattern of primitive recursion on datatypes, while for the latter the task of proving "termination" (or rather the well-foundedness of the call graph) is delegated to the user in hard cases.
Anyway, it would of course be possible to let primrec and fun fall back on definition for easy cases without arguments, but at least to me this rather seems to obfuscate things for the user instead of clearing them up.
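Still, to illustrate the three mechanisms side by side, here is a small sketch (the names double, sum_list' and ack are made up for the example):
(* definition: non-recursive *)
definition double :: "nat ⇒ nat" where
  "double n = 2 * n"

(* primrec: the recursion follows the datatype constructors of the list argument *)
primrec sum_list' :: "nat list ⇒ nat" where
  "sum_list' [] = 0"
| "sum_list' (x # xs) = x + sum_list' xs"

(* fun: general recursion; termination is established behind the scenes *)
fun ack :: "nat ⇒ nat ⇒ nat" where
  "ack 0 n = Suc n"
| "ack (Suc m) 0 = ack m 1"
| "ack (Suc m) (Suc n) = ack m (ack m n)"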
I have a rather large term foo. When I type
value "foo"
then Isabelle evaluates foo to a value, say foo_value. I would now like to prove the following lemma.
lemma "foo = foo_value"
What proof method should I use? I tried try, but that timed out. I guess I could proceed manually by unfolding the various definitions that occur in foo, but surely I should be able to tap into whatever mechanism the value command is using, right?
There are three proof methods that correspond to the different evaluation mechanisms of value:
eval uses the code generator; it corresponds to value [code]. The proof succeeds if the generated ML code evaluates to True.
normalization compiles the statement to a symbolic normalisation engine in ML. It mimics value [nbe].
code_simp uses Isabelle's simplifier as an evaluator. It corresponds to value [simp].
The tutorial on code generation describes these proof methods in more detail. eval and normalization act like oracles, i.e., they bypass Isabelle's kernel, whereas every evaluation step of code_simp goes through the kernel. Usually, eval is faster than normalization, and normalization is faster than code_simp.
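As a small example, a trivial goal like the following can be discharged by any of the three methods:
lemma "rev [1, 2, 3 :: nat] = [3, 2, 1]"
  by eval (* likewise: by normalization, by code_simp *)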
I am not sure whether it works in all cases, but you could try:
lemma "foo = foo_value"
by eval
In many cases, by simp should also work, and I guess eval is kind of an oracle (in the sense that it is not fully verified by the kernel; somebody please correct me if I am wrong).