I'm interested in generalizing some computational tools to use a Cayley Table, meaning a lookup table based multiplication operation.
I could create a minimal implementation as follows:
import qualified Data.Vector as V
import Data.ByteString (ByteString)
import Control.Exception (assert)

data CayleyTable = CayleyTable {
    ct_name :: ByteString,
    ct_products :: V.Vector (V.Vector Int)
} deriving (Read, Show)

instance Eq CayleyTable where
    (==) a b = ct_name a == ct_name b

data CTElement = CTElement {
    ct_cayleytable :: CayleyTable,
    ct_index :: !Int
}

instance Eq CTElement where
    (==) a b = assert (ct_cayleytable a == ct_cayleytable b) $
               ct_index a == ct_index b

instance Show CTElement where
    show = ("CTElement " ++) . show . ct_index

a **** b = assert (ct_cayleytable a == ct_cayleytable b) $
           CTElement (ct_cayleytable a) $
               (ct_products (ct_cayleytable a) V.! ct_index a) V.! ct_index b
There are, however, numerous problems with this approach, starting with the runtime type checking via ByteString comparisons, and including the fact that read cannot be made to work correctly. Any idea how I should do this correctly?
I could imagine creating a family of newtypes CTElement1, CTElement2, etc. for Int with a CTElement typeclass that provides the multiplication and verifies their type consistency, except when doing IO.
Ideally, there might be some trick for passing around only one copy of this ct_cayleytable pointer too, perhaps using an implicit parameter like ?cayleytable, but this doesn't play nicely with multiple incompatible Cayley tables and gets generally obnoxious.
Also, I've gathered that an index into a vector can be viewed as a comonad. Is there any nice comonad instance for vector or whatever that might help smooth out this sort of type checking, even if ultimately doing it at runtime?
The thing you need to realize is that Haskell's type checker only checks types. So your CayleyTable needs to be a class.
class CayleyGroup g where
    cayleyTable :: g -> CayleyTable
    ... -- any operations you cannot implement solely by knowing the Cayley table

data CayleyTable = CayleyTable {
    ...
} deriving (Read, Show)
If the CayleyTable isn't known at compile time, you have to use rank-2 types, since the compiler needs to enforce the invariant that the CayleyTable exists when your code uses it.
manipWithCayleyTable :: Integral i => CayleyTable -> i -> (forall g. CayleyGroup g => g -> g) -> Int
can be implemented, for example. It allows you to perform group operations using the CayleyTable. It works by combining i and the CayleyTable into a new type, which it passes to its third argument.
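A minimal sketch of how that might look, assuming the CayleyTable record from the question (with its ct_products field); the element type Elem, the extra class methods index, fromIndex and (<.>), and the Int return type are my own choices, not part of the answer above:

{-# LANGUAGE RankNTypes #-}
import qualified Data.Vector as V

class CayleyGroup g where
    cayleyTable :: g -> CayleyTable
    index       :: g -> Int
    fromIndex   :: g -> Int -> g        -- rebuild an element of the same group
    (<.>)       :: g -> g -> g          -- table-driven multiplication
    a <.> b = fromIndex a $
        (ct_products (cayleyTable a) V.! index a) V.! index b

-- The hidden instance: an index tagged with its table.
data Elem = Elem CayleyTable Int

instance CayleyGroup Elem where
    cayleyTable (Elem t _) = t
    index       (Elem _ i) = i
    fromIndex   (Elem t _) = Elem t

manipWithCayleyTable :: Integral i
                     => CayleyTable -> i
                     -> (forall g. CayleyGroup g => g -> g) -> Int
manipWithCayleyTable t i f = index (f (Elem t (fromIntegral i)))

Because Elem never escapes manipWithCayleyTable, the callback can only build and combine elements through the CayleyGroup methods, so every element it touches carries the table it was given.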
If I define a new struct as
mutable struct myStruct
data::AbstractMatrix
labels::Vector{String}
end
and I want to throw an error if the length of labels is not equal to the number of columns of data, I know that I can write an inner constructor (placed inside the struct definition) that enforces this condition, like
myStruct(data, labels) = length(labels) != size(data)[2] ? error("Labels incorrect length") : new(data,labels)
However, once the struct is initialized, the labels field can be set to the incorrect length:
m = myStruct(randn(2,2), ["a", "b"])
m.labels = ["a"]
Is there a way to throw an error if the labels field is ever set to length not equal to the number of columns in data?
You could use StaticArrays.jl to fix the matrix and vector's sizes to begin with:
using StaticArrays
mutable struct MatVec{R, C, RC, VT, MT}
data::MMatrix{R, C, MT, RC} # RC should be R*C
labels::MVector{C, VT}
end
but there's the downside of having to compile for every concrete type with a unique permutation of type parameters R,C,MT,VT. StaticArrays also does not scale as well as normal Arrays.
If you don't restrict dimensions in the type parameters (with all those downsides) and want to throw an error at runtime, there is good news and bad news.
The good news is you can control whatever mutation happens to your type. m.labels = v would call the method setproperty!(object::myStruct, name::Symbol, v), which you can define with all the safeguards you like.
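For instance, a minimal sketch of such a safeguard, assuming the myStruct definition from the question:

function Base.setproperty!(m::myStruct, name::Symbol, v)
    # validate before writing; read the current fields with getfield
    if name === :labels && length(v) != size(getfield(m, :data), 2)
        error("Labels incorrect length")
    end
    setfield!(m, name, convert(fieldtype(myStruct, name), v))
end

With this in place, m.labels = ["a"] from the question raises the error, while m.labels = ["x", "y"] still works.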
The bad news is that you can't control mutation of the objects the fields point to. push!(m.labels, "whoops") mutates the vector inside the push!(a::Vector{T}, item) method; the myStruct instance itself doesn't actually change, it still points to the same Vector. If you can't guarantee that you won't do something like x = m.labels; push!(x, "whoops"), then you really do need runtime checks, like iscorrect(m::myStruct) = length(m.labels) == size(m.data)[2]
A good option is to not access the fields of your struct directly. Instead, do it using a function. Eg:
mutable struct MyStruct
data::AbstractMatrix
labels::Vector{String}
end
function modify_labels(s::MyStruct, new_labels::Vector{String})
    # do all checks before modifying
    length(new_labels) == size(s.data, 2) || error("Labels incorrect length")
    s.labels = new_labels
    return s
end
You should check chapter 8 from "Hands-On Design Patterns and Best Practices with Julia: Proven solutions to common problems in software design for Julia 1.x"
Let's say there is a type
immutable Foo
x :: Int64
y :: Float64
end
and there is a variable foo = Foo(1,2.0). I want to construct a new variable bar using foo as a prototype with field y = 3.0 (or, alternatively non-destructively update foo producing a new Foo object). In ML languages (Haskell, OCaml, F#) and a few others (e.g. Clojure) there is an idiom that in pseudo-code would look like
bar = {foo with y = 3.0}
Is there something like this in Julia?
This is tricky. In Clojure this would work with a data structure, a dynamically typed immutable map, so we simply call the appropriate method to add/change a key. But when working with types we'll have to do some reflection to generate an appropriate new constructor for the type. Moreover, unlike Haskell or the various MLs, Julia isn't statically typed, so one does not simply look at an expression like {foo with y = 1} and work out what code should be generated to implement it.
Actually, we can build a Clojure-esque solution to this, since Julia provides enough reflection and dynamism to treat the type as a sort of immutable map. We can use fieldnames to get the list of "keys" in order (like [:x, :y]), and we can then use getfield(foo, :x) to get field values dynamically:
immutable Foo
x
y
z
end
x = Foo(1,2,3)
with_slow(x, p) =
typeof(x)(((f == p.first ? p.second : getfield(x, f)) for f in fieldnames(x))...)
with_slow(x, ps...) = reduce(with_slow, x, ps)
with_slow(x, :y => 4, :z => 6) == Foo(1,4,6)
However, there's a reason this is called with_slow. Because of the reflection, it's going to be nowhere near as fast as a handwritten function like withy(foo::Foo, y) = Foo(foo.x, y, foo.z). If Foo is parametrised (e.g. Foo{T} with y::T) then Julia will be able to infer that withy(foo, 1.) returns a Foo{Float64}, but it won't be able to infer the result type of with_slow at all. As we know, this kills performance.
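For concreteness, a quick sketch of that handwritten version with a fully parametrised Foo (same pre-1.0 syntax as above, in a fresh session so Foo can be redefined with type parameters):

immutable Foo{X,Y,Z}
    x::X
    y::Y
    z::Z
end

# handwritten update: the result type is inferred from the argument types
withy(foo::Foo, y) = Foo(foo.x, y, foo.z)

foo = Foo(1, 2.0, "three")
bar = withy(foo, 3.0)    # inferred as Foo{Int64,Float64,String}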
The only way to make this as fast as ML and co is to generate code effectively equivalent to the handwritten version. As it happens, we can pull off that version as well!
# Fields
type Field{K} end
Base.convert{K}(::Type{Symbol}, ::Field{K}) = K
Base.convert(::Type{Field}, s::Symbol) = Field{s}()
macro f_str(s)
:(Field{$(Expr(:quote, symbol(s)))}())
end
typealias FieldPair{F<:Field, T} Pair{F, T}
# Immutable `with`
for nargs = 1:5
args = [symbol("p$i") for i = 1:nargs]
@eval with(x, $([:($p::FieldPair) for p = args]...), p::FieldPair) =
with(with(x, $(args...)), p)
end
@generated function with{F, T}(x, p::Pair{Field{F}, T})
:($(x.name.primary)($([name == F ? :(p.second) : :(x.$name)
for name in fieldnames(x)]...)))
end
The first section is a hack to produce a symbol-like object, f"foo", whose value is known within the type system. The generated function is like a macro that takes types as opposed to expressions; because it has access to Foo and the field names it can generate essentially the hand-optimised version of this code. You can also check that Julia is able to properly infer the output type, if you parametrise Foo:
@code_typed with(x, f"y" => 4., f"z" => "hello") # => ...::Foo{Int,Float64,String}
(The for nargs line is essentially a manually-unrolled reduce which enables this.)
Finally, lest I be accused of giving slightly crazy advice, I want to warn that this isn't all that idiomatic in Julia. While I can't give very specific advice without knowing your use case, it's generally best to have types with a manageable (small) set of fields and a small set of functions which do the basic manipulation of those fields; you can build on those functions to create the final public API. If what you want is really an immutable dict, you're much better off just using a specialised data structure for that.
There is also setindex (without the ! at the end) implemented in the FixedSizeArrays.jl package, which does this in an efficient way.
I read about polymorphism in functions and saw this example
fun len nil = 0
| len rest = 1 + len (tl rest)
All the other examples dealt with a nil argument too.
I wanted to check the polymorphism concept on other types, like
fun func (a : int) : int = 1
| func (b : string) : int = 2 ;
and got the follow error
stdIn:1.6-2.33 Error: parameter or result constraints of clauses don't agree
[tycon mismatch]
this clause: string -> int
previous clauses: int -> int
in declaration:
func = (fn a : int => 1: int
| b : string => 2: int)
What is the mistake in the above function? Is it legal at all?
Subtype Polymorphism:
In programming languages like Java, C#, or C++ you have a set of subtyping rules that govern polymorphism. For instance, in object-oriented programming languages, if you have a type A that is a supertype of a type B, then wherever an A is expected you can pass a B, right?
For instance, if you have a type Mammal, and Dog and Cat were subtypes of Mammal, then wherever Mammal appears you could pass a Dog or a Cat.
You can achieve the same concept in SML using datatypes and constructors. For instance:
datatype mammal = Dog of string | Cat of string
Then if you have a function that receives a mammal, like:
fun walk(m: mammal) = ...
Then you could pass a Dog or a Cat, because they are constructors for mammals. For instance:
walk(Dog("Fido"));
walk(Cat("Zoe"));
So this is the way SML achieves something similar to what we know as subtype polymorphism in object-oriented languages.
Ad-hoc Polymorphism:
Coercions
The actual point of confusion could be the fact that languages like Java, C# and C++ typically have automatic coercions of types. For instance, in Java an int can be automatically coerced to a long, and a float to a double. As such, I could have a function that accepts doubles and I could pass integers. Some call these automatic coercions ad-hoc polymorphism.
Such a form of polymorphism does not exist in SML. In those cases you are forced to manually coerce or convert one type to another. For instance, given a function:
fun calc(r: real) = r
You cannot call it with an integer; to do so you must convert the integer first:
calc(Real.fromInt(10));
So, as you can see, there is no ad-hoc polymorphism of this kind in SML. You must do castings/conversions/coercions manually.
Function Overloading
Another form of ad-hoc polymorphism is what we call method overloading in languages like Java, C# and C++. Again, there is no such thing in SML. You may define two different functions with different names, but not the same function (same name) receiving different parameters or parameter types.
This concept of function or method overloading must not be confused with what you use in your examples, which is simply pattern matching for functions. That is syntactic sugar for something like this:
fun len xs =
if null xs then 0
else 1 + len(tl xs)
Parametric Polymorphism:
Finally, SML offers parametric polymorphism, very similar to what generics do in Java and C# and I understand that somewhat similar to templates in C++.
So, for instance, you could have a type like
datatype 'a list = Empty | Cons of 'a * 'a list
In a type like this 'a represents any type. Therefore this is a polymorphic type. As such, I could use the same type to define a list of integers, or a list of strings:
val listOfString = Cons("Obi-wan", Empty);
Or a list of integers
val numbers = Cons(1, Empty);
Or a list of mammals:
val pets = Cons(Cat("Milo"), Cons(Dog("Bentley"), Empty));
This is the same thing you could do with SML lists, which also have parametric polymorphism:
You could define lists of many "different types":
val listOfString = "Yoda"::"Anakin"::"Luke"::[]
val listOfIntegers = 1::2::3::4::[]
val listOfMammals = Cat("Misingo")::Dog("Fido")::Cat("Dexter")::Dog("Tank")::[]
In the same sense, we could have parametric polymorphism in functions, like in the following example where we have an identity function:
fun id x = x
The type of x is 'a, which basically means you can substitute it for any type you want, like
id("hello");
id(35);
id(Dog("Diesel"));
id(Cat("Milo"));
So, as you can see, combining all these different forms of polymorphism you should be able to achieve the same things you do in other statically typed languages.
No, it's not legal. In SML, every function has a type. The type of the len function you gave as an example is
fn : 'a list -> int
That is, it takes a list of any type and returns an integer. The function you're trying to make takes an integer or a string, and returns an integer, and that's not legal in the SML type system. The usual workaround is to make a wrapper type:
datatype wrapper = I of int | S of string
fun func (I a) = 1
| func (S a) = 2
That function has type
fn : wrapper -> int
Where wrapper can contain either an integer or a string.
I am learning Jason Hickey's Introduction to Objective Caml.
Here is an exercise I don't have any clue about: implementing a dictionary as a function.
First of all, what does it mean to implement a dictionary as a function? How can I imagine that?
Do we need an array or something like that? Apparently we can't use arrays in this exercise, because arrays haven't been introduced yet in Chapter 3. But how do I do it without some kind of storage?
So I don't know how to do it; I would appreciate some hints and guidance.
I think the point of this exercise is to get you to use closures. For example, consider the following pair of OCaml functions in a file fun-dict.ml:
let empty (_ : string) : int = 0
let add d k v = fun k' -> if k = k' then v else d k'
Then at the OCaml prompt you can do:
# #use "fun-dict.ml";;
val empty : string -> int = <fun>
val add : ('a -> 'b) -> 'a -> 'b -> 'a -> 'b = <fun>
# let d = add empty "foo" 10;;
val d : string -> int = <fun>
# d "bar";; (* Since our dictionary is a function we simply call with a
string to look up a value *)
- : int = 0 (* We never added "bar" so we get 0 *)
# d "foo";;
- : int = 10 (* We added "foo" -> 10 *)
In this example the dictionary is a function on a string key to an int value. The empty function is a dictionary that maps all keys to 0. The add function creates a closure which takes one argument, a key. Remember that our definition of a dictionary here is function from key to values so this closure is a dictionary. It checks to see if k' (the closure parameter) is = k where k is the key just added. If it is it returns the new value, otherwise it calls the old dictionary.
You effectively have a list of closures which are chained not by cons cells but by closing over the next dictionary (function) in the chain.
Extra exercise, how would you remove a key from this dictionary?
Edit: What is a closure?
A closure is a function which references variables (names) from the scope it was created in. So what does that mean?
Consider our add function. It returns a function
fun k' -> if k = k' then v else d k'
If you look only at that function, there are three names that aren't defined in it: d, k, and v. To figure out what they are we have to look at the enclosing scope, i.e. the scope of add, where we find
let add d k v = ...
So even after add has returned, the new function still references the arguments of add. A closure, then, is a function packaged together with the bindings from its enclosing scope that it needs in order to be meaningful.
In OCaml you can use an actual function to represent a dictionary. Non-FP languages usually don't support functions as first-class objects, so if you're used to them you might have trouble thinking that way at first.
A dictionary is a map, which is a function. Imagine you have a function d that takes a string and gives back a number. It gives back different numbers for different strings but always the same number for the same string. This is a dictionary. The string is the thing you're looking up, and the number you get back is the associated entry in the dictionary.
You don't need an array (or a list). Your add function can construct a function that does what's necessary without any (explicit) data structure. Note that the add function takes a dictionary (a function) and returns a dictionary (a new function).
To get started thinking about higher-order functions, here's an example. The function bump takes a function (f: int -> int) and an int (k: int). It returns a new function that returns a value that's k bigger than what f returns for the same input.
let bump f k = fun n -> k + f n
(The point is that bump, like add, takes a function and some data and returns a new function based on these values.)
I thought it might be worth adding that functions in OCaml are not just pieces of code (unlike in C, C++, Java, etc.). In those non-functional languages, functions don't have any state associated with them; it would be kind of ridiculous to talk about such a thing. But this is not the case with functions in functional languages; you should start to think of them as a kind of object, a weird kind of object, yes.
So how can we "make" these objects? Let's take Jeffrey's example:
let bump f k =
fun n ->
k + f n
Now what does bump actually do? It might help to think of bump as something like a constructor you may already be familiar with. What does it construct? It constructs a function object (very loosely speaking here). So what state does the resulting object have? It has two instance variables (sort of), which are f and k. These two instance variables are bound to the resulting function-object when you invoke bump f k. You can see that the returned function-object:
fun n ->
k + f n
utilizes these instance variables f and k in its body. Once this function-object is returned, you can only invoke it; there's no other way for you to access f or k (so this is encapsulation).
It's very uncommon to use the term function-object; they are usually just called functions, but you have to keep in mind that they can "enclose" state as well. These function-objects (also called closures) are not far removed from the "real" objects of object-oriented programming languages; a very interesting discussion can be found here.
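As a small illustration of that "enclosed state" idea (make_counter and next are my own names, not from the exercise):

let make_counter () =
  (* count is enclosed by the returned closure; nothing else can reach it *)
  let count = ref 0 in
  fun () -> incr count; !count

let next = make_counter ()
(* next () returns 1, then 2, then 3, ... *)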
I'm also struggling with this problem. Here's my solution and it works for the cases listed in the textbook...
An empty dictionary simply returns 0:
let empty (k:string) = 0
Find calls the dictionary's function on the key. This function is trivial:
let find (d: string -> int) k = d k
Add extends the function of the dictionary with another conditional branch. We return a new dictionary that takes a key k' and matches it against k (the key we need to add). If it matches, we return v (the corresponding value). If it doesn't match, we look the key up in the old (smaller) dictionary:
let add (d: string -> int) k v =
fun k' ->
if k' = k then
v
else
d k'
You could adapt add to get a remove function. I also added a condition to make sure we don't remove a non-existing key. This is just for practice; this implementation of a dictionary is bad anyway:
let remove (d: string -> int) k =
if find d k = 0 then
d
else
fun k' ->
if k' = k then
0
else
d k'
I'm not good with the terminology as I'm still learning functional programming. So, feel free to correct me.
I have made a functor for format-able sets, as follows:
module type POrderedType =
sig
type t
val compare : t -> t -> int
val format : Format.formatter -> t -> unit
end
module type SET =
sig
include Set.S
val format : Format.formatter -> t -> unit
end
module MakeSet (P : POrderedType) : SET with type elt = P.t
Implementation of this is straightforward:
module MakeSet (P : POrderedType) =
struct
include Set.Make(P)
let format ff s =
let rec format' ff = function
| [] -> ()
| [v] -> Format.fprintf ff "%a" format v
| v::tl -> Format.fprintf ff "%a,@ %a" format v format' tl in
Format.fprintf ff "@[<4>%a@]" format' (elements s)
end
I wanted to do something similar with maps. POrderedType is fine for keys, but I need a simpler type for values:
module type Printable =
sig
type t
val format : Format.formatter -> t -> unit
end
Then I wanted to do something similar to what I had done for sets, but I ran into the following problem. Map.S values have type +'a t. I can't figure out a way to include the Map.S definition while constraining the 'a to be Printable.t. What I want is something like the following (ignoring the fact that it is illegal):
module MakeMap (Pkey : POrderedType) (Pval : Printable) :
MAP with type key = Pkey.t and type 'a t = 'a t constraint 'a = Pval.t
Is there any way to do what I want without copying the entire signature of Map by hand?
I think the cleanest way to provide a printing function for polymorphic maps is to make the map printing function parametric over the value printing function. You can think of it this way:
functor-defined types are defined at the functor level, so providing functions for them is best done by adding new functor parameters (or enriching existing ones)
parametric types are bound (generalized) at the value level, so providing functions for them is best done by adding new parameters to the value
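Concretely, a sketch of that approach (reusing POrderedType from the question; the functor and function names here are just illustrative):

module MakeMap (P : POrderedType) =
struct
  include Map.Make (P)

  (* [format fmt_val ff m] prints any ['a t], given a printer for ['a]. *)
  let format fmt_val ff m =
    let format_binding ff (k, v) =
      Format.fprintf ff "%a ->@ %a" P.format k fmt_val v in
    let rec format' ff = function
      | [] -> ()
      | [kv] -> format_binding ff kv
      | kv :: tl -> Format.fprintf ff "%a,@ %a" format_binding kv format' tl in
    Format.fprintf ff "@[<4>%a@]" format' (bindings m)
end

Each call site supplies the value printer, so the same map type works for any value type.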
In OCaml, convenience tends to make people favor parametric polymorphism over functorization when possible. Functorization is sometimes necessary to enforce some type safety (here it's used to make sure that maps over different comparison functions have incompatible types), but otherwise people prefer polymorphism. So you're actually in a lucky situation here.
If you really want to have a functor producing monomorphic maps, well, I'm afraid you will have to copy the whole map interface and adapt it to the monomorphic case -- it's not much work.