I have made a functor for format-able sets, as follows:
module type POrderedType =
sig
  type t
  val compare : t -> t -> int
  val format : Format.formatter -> t -> unit
end

module type SET =
sig
  include Set.S
  val format : Format.formatter -> t -> unit
end

module MakeSet (P : POrderedType) : SET with type elt = P.t
Implementation of this is straightforward:
module MakeSet (P : POrderedType) =
struct
  include Set.Make(P)

  let format ff s =
    let rec format' ff = function
      | [] -> ()
      | [v] -> Format.fprintf ff "%a" P.format v
      | v :: tl -> Format.fprintf ff "%a,@ %a" P.format v format' tl in
    Format.fprintf ff "@[<4>%a@]" format' (elements s)
end
I wanted to do something similar with maps. POrderedType is fine for keys, but I need a simpler type for values:
module type Printable =
sig
  type t
  val format : Format.formatter -> t -> unit
end
Then I wanted to do something similar to what I had done for sets, but I run into the following problem. Map.S values have type +'a t. I can't figure out a way to include the Map.S definition while constraining the 'a to be a Printable.t. What I want is something like the following (ignoring the fact that it is illegal):
module MakeMap (Pkey : POrderedType) (Pval : Printable) :
MAP with type key = Pkey.t and type 'a t = 'a t constraint 'a = Pval.t
Is there any way to do what I want without copying the entire signature of Map by hand?
I think the cleanest way to provide a printing function for polymorphic maps is to make the map printing function parametric over the value printing function. You can think of it this way:
functor-defined types are defined at the functor level, so providing functions for them is best done by adding new functor parameters (or enriching existing ones)
parametric types are bound (generalized) at the value level, so providing functions for them is best done by adding new parameters to the value
In OCaml, convenience tends to make people favor parametric polymorphism over functorization when possible. Functorization is sometimes necessary to enforce type safety (here it's used to make sure that maps over different comparison functions have incompatible types), but otherwise people tend to prefer polymorphism. So you're actually in the lucky situation here.
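For concreteness, here is a minimal sketch of that parametric approach, reusing the POrderedType signature from the question (it uses Format.pp_print_list for brevity, which assumes OCaml 4.02 or later):

module MakeMap (P : POrderedType) =
struct
  include Map.Make(P)

  (* The value printer is an ordinary argument rather than a functor parameter. *)
  let format format_val ff m =
    let format_binding ff (k, v) =
      Format.fprintf ff "%a ->@ %a" P.format k format_val v in
    Format.fprintf ff "@[<4>%a@]"
      (Format.pp_print_list
         ~pp_sep:(fun ff () -> Format.fprintf ff ",@ ")
         format_binding)
      (bindings m)
end

The resulting format has type (Format.formatter -> 'a -> unit) -> Format.formatter -> 'a t -> unit, so a single map module works for any value type.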
If you really want to have a functor producing monomorphic maps, well, I'm afraid you will have to copy the whole map interface and adapt it to the monomorphic case -- it's not much work.
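For illustration, such a hand-copied monomorphic signature might begin along these lines (a sketch only; the names MONO_MAP and value are made up here, and the omitted operations follow the same pattern):

module type MONO_MAP =
sig
  type key
  type value
  type t
  val empty : t
  val mem : key -> t -> bool
  val add : key -> value -> t -> t
  val find : key -> t -> value
  (* ... the remaining operations of Map.S, with 'a t replaced by t
     and 'a replaced by value ... *)
  val format : Format.formatter -> t -> unit
end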
I'm looking to add a cache to a simple compiler in OCaml. I have created a simpler version of the code that I had that reproduces the same issue. What I need for the cache is to be able to create a Map with A's as keys, so I can lookup the compile output. Here is the code:
module A = struct
  type ineq =
    | LT
    | EQ
    | GT

  type t =
    ...

  module ACacheMapKey = struct
    type t = t

    let compare a b =
      match cmp a b with
      | LT -> -1
      | EQ -> 0
      | GT -> 1
  end

  module CMap = Map.Make(ACacheMapKey)

  let cache_map = CMap.empty

  let cmp a b =
    ...
end
In module A, type t is a recursive AST-like type. cmp a b returns an ineq. The compile function was left out for brevity, but it just uses the cache before running through a computationally expensive process. In order to create a cache map in A, I need a compatible key module. My attempt at that is ACacheMapKey, but type t = t doesn't refer to the parent. The error it gives is Error: The type abbreviation t is cyclic. So, is there a better way to make a cache over A? Or is there an easy way to reference the parent and make my current structure work?
Type definitions, unlike let bindings, are recursive by default. So similarly to how you would make a let binding recursive by using the rec keyword:
let rec f = ...
you can make a type definition non-recursive by using the nonrec keyword:
type nonrec t = t
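Applied to the question, a minimal sketch of the fixed structure might look like this (cmp is assumed to be moved above the submodule so that ACacheMapKey can use it; the ... bodies are the ones elided in the question):

module A = struct
  type ineq = LT | EQ | GT

  type t =
    ...

  let cmp a b =
    ...

  module ACacheMapKey = struct
    (* nonrec makes the t on the right-hand side refer to the enclosing A.t *)
    type nonrec t = t

    let compare a b =
      match cmp a b with
      | LT -> -1
      | EQ -> 0
      | GT -> 1
  end

  module CMap = Map.Make(ACacheMapKey)

  let cache_map = CMap.empty
end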
Let's say there is a type
immutable Foo
  x :: Int64
  y :: Float64
end
and there is a variable foo = Foo(1,2.0). I want to construct a new variable bar using foo as a prototype with field y = 3.0 (or, alternatively non-destructively update foo producing a new Foo object). In ML languages (Haskell, OCaml, F#) and a few others (e.g. Clojure) there is an idiom that in pseudo-code would look like
bar = {foo with y = 3.0}
Is there something like this in Julia?
This is tricky. In Clojure this would work with a data structure, a dynamically typed immutable map, so we simply call the appropriate method to add/change a key. But when working with types we'll have to do some reflection to generate an appropriate new constructor for the type. Moreover, unlike Haskell or the various MLs, Julia isn't statically typed, so one does not simply look at an expression like {foo with y = 1} and work out what code should be generated to implement it.
Actually, we can build a Clojure-esque solution to this, since Julia provides enough reflection and dynamism to treat the type as a sort of immutable map. We can use fieldnames to get the list of "keys" in order (like [:x, :y]) and we can then use getfield(foo, :x) to get field values dynamically:
immutable Foo
  x
  y
  z
end

x = Foo(1,2,3)

with_slow(x, p) =
  typeof(x)(((f == p.first ? p.second : getfield(x, f)) for f in fieldnames(x))...)

with_slow(x, ps...) = reduce(with_slow, x, ps)

with_slow(x, :y => 4, :z => 6) == Foo(1,4,6)
However, there's a reason this is called with_slow. Because of the reflection it's going to be nowhere near as fast as a handwritten function like withy(foo::Foo, y) = Foo(foo.x, y, foo.z). If Foo is parametrised (e.g. Foo{T} with y::T) then Julia will be able to infer that withy(foo, 1.) returns a Foo{Float64}, but won't be able to infer with_slow at all. As we know, this kills performance.
The only way to make this as fast as ML and co is to generate code effectively equivalent to the handwritten version. As it happens, we can pull off that version as well!
# Fields
type Field{K} end

Base.convert{K}(::Type{Symbol}, ::Field{K}) = K
Base.convert(::Type{Field}, s::Symbol) = Field{s}()

macro f_str(s)
  :(Field{$(Expr(:quote, symbol(s)))}())
end

typealias FieldPair{F<:Field, T} Pair{F, T}

# Immutable `with`
for nargs = 1:5
  args = [symbol("p$i") for i = 1:nargs]
  @eval with(x, $([:($p::FieldPair) for p = args]...), p::FieldPair) =
    with(with(x, $(args...)), p)
end

@generated function with{F, T}(x, p::Pair{Field{F}, T})
  :($(x.name.primary)($([name == F ? :(p.second) : :(x.$name)
                         for name in fieldnames(x)]...)))
end
The first section is a hack to produce a symbol-like object, f"foo", whose value is known within the type system. The generated function is like a macro that takes types as opposed to expressions; because it has access to Foo and the field names it can generate essentially the hand-optimised version of this code. You can also check that Julia is able to properly infer the output type, if you parametrise Foo:
@code_typed with(x, f"y" => 4., f"z" => "hello") # => ...::Foo{Int,Float64,String}
(The for nargs line is essentially a manually-unrolled reduce which enables this.)
Finally, lest I be accused of giving slightly crazy advice, I want to warn that this isn't all that idiomatic in Julia. While I can't give very specific advice without knowing your use case, it's generally best to have types with a manageable (small) set of fields and a small set of functions which do the basic manipulation of those fields; you can build on those functions to create the final public API. If what you want is really an immutable dict, you're much better off just using a specialised data structure for that.
There is also setindex (without the ! at the end) implemented in the FixedSizeArrays.jl package, which does this in an efficient way.
I read about polymorphism in functions and saw this example:
fun len nil = 0
  | len rest = 1 + len (tl rest)
All the other examples dealt with a nil argument too.
I wanted to check the polymorphism concept on other types, like
fun func (a : int) : int = 1
| func (b : string) : int = 2 ;
and got the following error:
stdIn:1.6-2.33 Error: parameter or result constraints of clauses don't agree
[tycon mismatch]
this clause: string -> int
previous clauses: int -> int
in declaration:
func = (fn a : int => 1: int
| b : string => 2: int)
What is the mistake in the above function? Is it legal at all?
Subtype Polymorphism:
In programming languages like Java, C#, or C++ you have a set of subtyping rules that govern polymorphism. For instance, in object-oriented programming languages, if you have a type A that is a supertype of a type B, then wherever an A is expected you can pass a B, right?
For instance, if you have a type Mammal, and Dog and Cat were subtypes of Mammal, then wherever Mammal appears you could pass a Dog or a Cat.
You can achieve the same concept in SML using datatypes and constructors. For instance:
datatype mammal = Dog of string | Cat of string
Then if you have a function that receives a mammal, like:
fun walk(m: mammal) = ...
Then you could pass a Dog or a Cat, because they are constructors for mammals. For instance:
walk(Dog("Fido"));
walk(Cat("Zoe"));
So this is the way SML achieves something similar to what we know as subtype polymorphism in object-oriented languages.
Ad-hoc Polymorphism:
Coercions
The actual point of confusion could be the fact that languages like Java, C# and C++ typically have automatic coercions of types. For instance, in Java an int can be automatically coerced to a long, and a float to a double. As such, I could have a function that accepts doubles and I could pass integers. Some call these automatic coercions ad-hoc polymorphism.
This form of polymorphism does not exist in SML. In those cases you are forced to manually coerce or convert one type to another. For instance, given a function that accepts a real:
fun calc(r: real) = r
You cannot call it with an integer; to do so, you must convert the integer first:
calc(Real.fromInt(10));
So, as you can see, there is no ad-hoc polymorphism of this kind in SML. You must do castings/conversions/coercions manually.
Function Overloading
Another form of ad-hoc polymorphism is what we call method overloading in languages like Java, C# and C++. Again, there is no such thing in SML. You may define two different functions with different names, but not the same function (same name) receiving different parameters or parameter types.
This concept of function or method overloading must not be confused with what you use in your examples, which is simply pattern matching for functions. That is syntactic sugar for something like this:
fun len xs =
  if null xs then 0
  else 1 + len (tl xs)
Parametric Polymorphism:
Finally, SML offers parametric polymorphism, very similar to generics in Java and C#, and, as I understand it, somewhat similar to templates in C++.
So, for instance, you could have a type like
datatype 'a list = Empty | Cons of 'a * 'a list
In a type like this 'a represents any type. Therefore this is a polymorphic type. As such, I could use the same type to define a list of integers, or a list of strings:
val listOfString = Cons("Obi-wan", Empty);
Or a list of integers
val numbers = Cons(1, Empty);
Or a list of mammals:
val pets = Cons(Cat("Milo"), Cons(Dog("Bentley"), Empty));
This is the same thing you could do with SML lists, which also have parametric polymorphism:
You could define lists of many "different types":
val listOfString = "Yoda"::"Anakin"::"Luke"::[]
val listOfIntegers = 1::2::3::4::[]
val listOfMammals = Cat("Misingo")::Dog("Fido")::Cat("Dexter")::Dog("Tank")::[]
In the same sense, we could have parametric polymorphism in functions, like in the following example where we have an identity function:
fun id x = x
The type of x is 'a, which basically means you can substitute it for any type you want, like
id("hello");
id(35);
id(Dog("Diesel"));
id(Cat("Milo"));
So, as you can see, combining all these different forms of polymorphism you should be able to achieve the same things you do in other statically typed languages.
No, it's not legal. In SML, every function has a type. The type of the len function you gave as an example is
fn : 'a list -> int
That is, it takes a list of any type and returns an integer. The function you're trying to make takes an integer or a string, and returns an integer, and that's not legal in the SML type system. The usual workaround is to make a wrapper type:
datatype wrapper = I of int | S of string
fun func (I a) = 1
  | func (S a) = 2
That function has type
fn : wrapper -> int
Where wrapper can contain either an integer or a string.
The following type extension
module Dict =
  open System.Collections.Generic

  type Dictionary<'K, 'V> with
    member this.Difference(that: Dictionary<'K, 'T>) =
      let dict = Dictionary()
      for KeyValue(k, v) in this do
        if not (that.ContainsKey(k)) then
          dict.Add(k, v)
      dict
gives the error:
The signature and implementation are not compatible because the declaration of the type parameter 'TKey' requires a constraint of the form 'TKey : equality
But when I add the constraint it gives the error:
The declared type parameters for this type extension do not match the declared type parameters on the original type 'Dictionary<,>'
This is especially mysterious because the following type extension doesn't have the constraint and works.
type Dictionary<'K, 'V> with
  member this.TryGet(key) =
    match this.TryGetValue(key) with
    | true, v -> Some v
    | _ -> None
Now I'm having weird thoughts: is the constraint required only when certain members are accessed?
module Dict =
  open System.Collections.Generic

  type Dictionary<'K, 'V> with
    member this.Difference(that: Dictionary<'K, 'T>) =
      let dict = Dictionary(this.Comparer)
      for KeyValue(k, v) in this do
        if not (that.ContainsKey(k)) then
          dict.Add(k, v)
      dict
EDIT:
As per F# spec (14.11 Additional Constraints on CLI Methods)
Some specific CLI methods and types are treated specially by F#, because they are common in F# programming and cause extremely difficult-to-find bugs. For each use of the following constructs, the F# compiler imposes additional ad hoc constraints:
x.Equals(yobj) requires type ty : equality for the static type of x
x.GetHashCode() requires type ty : equality for the static type of x
new Dictionary<A,B>() requires A : equality, for any overload that does not take an IEqualityComparer<T>
As far as I can see, the following code does the trick:
module Dict =
  open System.Collections.Generic

  type Dictionary<'K, 'V> with
    member this.Difference(that: Dictionary<'K, 'V2>) =
      let diff =
        this
        |> Seq.filter (fun x -> not <| that.ContainsKey(x.Key))
        |> Seq.map (fun x -> x.Key, x.Value)
      System.Linq.Enumerable.ToDictionary(diff, fst, snd)
The problem is your use of the Add method. If you use this method of Dictionary<TKey, TValue> then F# will enforce that TKey has the equality constraint.
After playing around a bit I'm not sure that it's even possible to write this extension method. The F# type system appears to force the declaring type of the extension method to have no constraints beyond those on the original type (I get an error whenever I add the equality constraint). Additionally, the type listed in the individual extension methods cannot differ from the declared type. I've tried a number of ways and can't get this to function correctly.
The closest I've come is the non-extension method as follows
let Difference (this : Dictionary<'K, 'T>) (that: Dictionary<'K, 'T> when 'K : equality) =
  let dict = Dictionary()
  for KeyValue(k, v) in this do
    if not (that.ContainsKey(k)) then
      dict.Add(k, v)
  dict
Perhaps another F# ninja will be able to prove me wrong
(EDIT: CKoenig has a nice answer.)
Hm, I didn't immediately see a way to do this either.
Here's a non-type-safe solution that might provide some crazy inspiration for others.
open System.Collections.Generic

module Dict =
  type Dictionary<'K, 'V> with
    member this.Difference<'K2, 'T when 'K2 : equality>(that: Dictionary<'K2, 'T>) =
      let dict = Dictionary<'K2,'V>()
      for KeyValue(k, v) in this do
        if not (that.ContainsKey(k |> box |> unbox)) then
          dict.Add(k |> box |> unbox, v)
      dict
open Dict
let d1 = Dictionary()
d1.Add(1, "foo")
d1.Add(2, "bar")
let d2 = Dictionary()
d2.Add(1, "cheese")
let show (d:Dictionary<_,_>) =
  for (KeyValue(k,v)) in d do
    printfn "%A: %A" k v
d1.Difference(d2) |> show
let d3 = Dictionary()
d3.Add(1, 42)
d1.Difference(d3) |> show
let d4 = Dictionary()
d4.Add("uh-oh", 42)
d1.Difference(d4) |> show // blows up at runtime
Overall it seems like there may be no way to unify the types K and K2 without also forcing them to have the same equality constraint though...
(EDIT: seems like calling into .NET which is equality-constraint-agnostic is a good way to create a dictionary in the absence of the extra constraint.)
Does "Value Restriction" practically mean that there is no higher order functional programming?
I have a problem: each time I try to do a bit of HOP I get caught by a VR (value restriction) error. Example:
let simple (s:string)= fun rq->1
let oops= simple ""
type 'a SimpleType= F of (int ->'a-> 'a)
let get a = F(fun req -> id)
let oops2= get ""
and I would like to know whether it is a problem of a particular implementation of the VR, or a general problem that has no solution in a mutable, type-inferred language that doesn't include mutation in the type system.
Does “Value Restriction” mean that there is no higher order functional programming?
Absolutely not! The value restriction barely interferes with higher-order functional programming at all. What it does do is restrict some applications of polymorphic functions—not higher-order functions—at top level.
Let's look at your example.
Your problem is that oops and oops2 are both the identity function and have type forall 'a . 'a -> 'a. In other words, each is a polymorphic value. But the right-hand side is not a so-called "syntactic value"; it is a function application. (A function application is not allowed to return a polymorphic value, because if it were, you could construct a hacky function using mutable references and lists that would subvert the type system; that is, you could write a terminating function of type forall 'a 'b . 'a -> 'b.)
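To see what goes wrong without the restriction, here is the classic counterexample, sketched in OCaml-style syntax (hypothetical code: the whole point of the value restriction is that r is never given the polymorphic type assumed in the comments):

let r = ref []            (* an application; imagine it were generalized to 'a list ref *)
let () = r := ["oops"]    (* use r at type string list ref *)
let n = 1 + List.hd !r    (* use r at type int list ref: adds 1 to a string *)

With the value restriction, r instead gets a weak, non-generalized type, so the last two lines cannot both type-check.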
Luckily in almost all practical cases, the polymorphic value in question is a function, and you can define it by eta-expanding:
let oops x = simple "" x
This idiom looks like it has some run-time cost, but depending on the inliner and optimizer, that can be got rid of by the compiler—it's just the poor typechecker that is having trouble.
The oops2 example is more troublesome because you have to pack and unpack the value constructor:
let oops2 = F(fun x -> let F f = get "" in f x)
This is quite a bit more tedious, but the anonymous function fun x -> ... is a syntactic value, and F is a datatype constructor, and a constructor applied to a syntactic value is also a syntactic value, and Bob's your uncle. The packing and unpacking of F is all going to be compiled into the identity function, so oops2 is going to compile into exactly the same machine code as oops.
Things are even nastier when you want a run-time computation to return a polymorphic value like None or []. As hinted at by Nathan Sanders, you can run afoul of the value restriction with an expression as simple as rev []:
Standard ML of New Jersey v110.67 [built: Sun Oct 19 17:18:14 2008]
- val l = rev [];
stdIn:1.5-1.15 Warning: type vars not generalized because of
value restriction are instantiated to dummy types (X1,X2,...)
val l = [] : ?.X1 list
-
Nothing higher-order there! And yet the value restriction applies.
In practice the value restriction presents no barrier to the definition and use of higher-order functions; you just eta-expand.
I didn't know the details of the value restriction, so I searched and found this article. Here is the relevant part:
Obviously, we aren't going to write the expression rev [] in a program, so it doesn't particularly matter that it isn't polymorphic. But what if we create a function using a function call? With curried functions, we do this all the time:
- val revlists = map rev;
Here revlists should be polymorphic, but the value restriction messes us up:
- val revlists = map rev;
stdIn:32.1-32.23 Warning: type vars not generalized because of
value restriction are instantiated to dummy types (X1,X2,...)
val revlists = fn : ?.X1 list list -> ?.X1 list list
Fortunately, there is a simple trick that we can use to make revlists polymorphic. We can replace the definition of revlists with
- val revlists = (fn xs => map rev xs);
val revlists = fn : 'a list list -> 'a list list
and now everything works just fine, since (fn xs => map rev xs) is a syntactic value.
(Equivalently, we could have used the more common fun syntax:
- fun revlists xs = map rev xs;
val revlists = fn : 'a list list -> 'a list list
with the same result.) In the literature, the trick of replacing a function-valued expression e with (fn x => e x) is known as eta expansion. It has been found empirically that eta expansion usually suffices for dealing with the value restriction.
To summarise, it doesn't look like higher-order programming is restricted so much as point-free programming. This might explain some of the trouble I have when translating Haskell code to F#.
Edit: Specifically, here's how to fix your first example:
let simple (s:string)= fun rq->1
let oops= (fun x -> simple "" x) (* eta-expand oops *)
type 'a SimpleType= F of (int ->'a-> 'a)
let get a = F(fun req -> id)
let oops2= get ""
I haven't figured out the second one yet because the type constructor is getting in the way.
Here is the answer to this question in the context of F#.
To summarize, in F# passing a type argument to a generic (= polymorphic) function is a run-time operation, so it is actually type-safe to generalize (as in, you will not crash at runtime). The behaviour of a value generalized this way can be surprising, though.
For this particular example in F#, one can recover generalization with a type annotation and an explicit type parameter:
type 'a SimpleType= F of (int ->'a-> 'a)
let get a = F(fun req -> id)
let oops2<'T> : 'T SimpleType = get ""