isolate in SMLofNJ.Cont

I was reading about continuations in Standard ML (SMLofNJ.Cont). I understood what callcc and throw do, but could not understand isolate. The documentation says:
Discard all live data from the calling context (except what is reachable from f or x), then call f(x), then exit. This may use much less memory than something like f(x) before exit().
However this does not make any sense to me. I just wanted to know what this function does, with some examples.

MLton does a better job of explaining an implementation of isolate using callcc and throw:
val isolate: ('a -> unit) -> 'a t =
   fn (f: 'a -> unit) =>
   callcc
   (fn k1 =>
    let
       val x = callcc (fn k2 => throw (k1, k2))
       val _ = (f x ; Exit.topLevelSuffix ())
               handle exn => MLtonExn.topLevelHandler exn
    in
       raise Fail "MLton.Cont.isolate: return from (wrapped) func"
    end)
We use the standard nested callcc trick to return a continuation that is ready to receive an argument, execute the isolated function, and exit the program. [...]
The page continues to explain how to achieve the same effect with less space leaking.
MLton's CONT signature has a different documentation line than SML/NJ's CONT signature:
isolate f creates a continuation that evaluates f in an empty context.
This is a constant time operation, and yields a constant size stack.
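To get a concrete feel for what that means, here is a hedged sketch of how isolate might be used at the SML/NJ top level (the strings and the use of print are made up for illustration). Throwing to an isolated continuation runs the wrapped function in an empty context and then exits the program, instead of returning to the point of the throw:

open SMLofNJ.Cont

val k : string cont = isolate print
val () = print "before\n"
val () = throw k "goodbye\n"   (* prints "goodbye", then the program exits *)
val () = print "after\n"       (* never reached *)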

Haskell Data.Map lookup AND delete at the same time

I was recently using the Map type from Data.Map inside a State Monad, and so I wanted to write a function that looks up a value in the Map and also deletes it from the Map inside the State Monad.
My current implementation looks like this:
lookupDelete :: (Ord k) => k -> State (Map k v) (Maybe v)
lookupDelete k = do
  m <- get
  put (M.delete k m)
  return $ M.lookup k m
While this works, it feels quite inefficient. With mutable maps in imperative languages, it is not uncommon to find delete functions that also return the value that was deleted.
I couldn't find a function for this, so I would really appreciate it if someone knows of one (or can explain why there is none).
A simple implementation is in terms of alterF:
lookupDelete :: Ord k => k -> State (Map k v) (Maybe v)
lookupDelete = state . alterF (\x -> (x, Nothing))
The x in alterF's argument is the Maybe value stored at the key given to lookupDelete. This anonymous function returns a (Maybe v, Maybe v). (,) (Maybe v) is a functor, and basically it serves as a "context" through which we can save whatever data we want from x. In this case we just save the whole x. The Nothing in the right element specifies that we want deletion. Once fully applied, alterF then gives us (Maybe v, Map k v), where the context (left element) is whatever we saved in the anonymous function and the right element is the mutated map. Then we wrap this stateful operation in state.
alterF is quite powerful: lots of operations can be built out of it simply by choosing the correct "context" functor. E.g. insert and delete come from using Identity, and lookup comes from using Const (Maybe v). A specialized function for lookupDelete is not necessary when we have alterF. One way to understand why alterF is so powerful is to recognize its type:
flip alterF k :: Functor f => (Maybe a -> f (Maybe a)) -> Map k a -> f (Map k a)
Things with types in this pattern
SomeClass f => (a -> f b) -> s -> f t
are called "optics" (when SomeClass is Functor, they're called "lenses"), and they represent how to "find" and "mutate" and "collate" "fields" inside "structures", because they let us focus on part of a structure, modify it (with the function argument), and save some information through a context (by letting us choose f). See the lens package for other uses of this pattern. (As the docs for alterF note, it's basically at from lens.)
There is no function specifically for "delete and lookup". Instead you use a more general tool: updateLookupWithKey is "lookup and update", where update can be delete or modify.
updateLookupWithKey :: Ord k =>
  (k -> a -> Maybe a) -> k -> Map k a -> (Maybe a, Map k a)

lookupDelete k = do
  (ret, m) <- gets $ updateLookupWithKey (\_ _ -> Nothing) k
  put m
  pure ret
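Either definition behaves the same way when run. A hedged GHCi-style sketch (assuming Control.Monad.State and Data.Map qualified as M are in scope):

runState (lookupDelete 2) (M.fromList [(1, 'a'), (2, 'b')])
-- (Just 'b', fromList [(1,'a')])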

How to iterate a stream in Ocaml

I am trying to iterate through a stream in order to print the content.
type 'a stream = Nil | Cons of 'a * 'a stream thunk and 'a thunk = unit -> 'a
This is where my function is called
|> iter_stream ~f:(fun (f,c,l) -> printf "%s %s %s\n" f c l)
And this is the type
let rec iter_stream st ~f
(* val iter_stream : 'a stream -> ('a -> unit) -> unit *)
I can't seem to find any examples on how to implement it. The only idea I have is to think about it like a list which is obviously wrong since I get type errors.
let rec iter_stream st ~f =
  match st with
  | None -> ()
  | Some(x, st') -> f x; iter_stream st' ~f
Your stream is extremely similar to a list, except that you need to call a function to get the tail of the list.
Your proposed code has many flaws. The main two flaws that I see are:
You're using the constructors None and Some while a stream has constructors Nil and Cons.
You're not calling a function to get the tail of the stream. Note that in Cons (a, b), b is a "stream thunk", i.e., it's a function that you can call to get a stream.
(Perhaps these are the only two flaws :-)
I hope this helps.
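For reference, here is a minimal sketch with both of those fixes applied (the type is repeated from the question, and the little driver at the end is made up for illustration):

type 'a stream = Nil | Cons of 'a * 'a stream thunk
and 'a thunk = unit -> 'a

let rec iter_stream st ~f =
  match st with
  | Nil -> ()
  | Cons (x, tl) -> f x; iter_stream (tl ()) ~f

(* prints "1 2 3 " *)
let () =
  let s = Cons (1, fun () -> Cons (2, fun () -> Cons (3, fun () -> Nil))) in
  iter_stream s ~f:(Printf.printf "%d ")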

Understanding side effects with monadic traversal

I am trying to properly understand how side effects work when traversing a list in F# using monadic style, following Scott's guide here
I have an AsyncSeq of items, and a side-effecting function that can return a Result<'a,'b> (it is saving the items to disk).
I get the general idea - split the head and tail, apply the func to the head. If it returns Ok then recurse through the tail, doing the same thing. If an Error is returned at any point then short circuit and return it.
I also get why Scott's ultimate solution uses foldBack rather than fold - it keeps the output list in the same order as the input, since each processed item is prepended to the previous one.
I can also follow the logic:
The result from the list's last item (processed first, as we are using foldBack) will be passed as the accumulator to the next item.
If it is an Error and the next item is Ok, the next item is discarded.
If the next item is an Error, it replaces any previous results and becomes the accumulator.
That means by the time you have recursed over the entire list from right to left and ended up at the start, you either have an Ok of all of the results in the correct order or the most recent Error (which would have been the first to occur if we had gone left to right).
The thing that confuses me is that surely, since we are starting at the end of the list, all side effects of processing every item will take place, even if we only get back the last Error that was created?
This seems to be confirmed here as the print output starts with [5], then [4,5], then [3,4,5] etc.
The thing that confuses me is that this isn't what I see happening when I use AsyncSeq.traverseChoiceAsync from the FSharpx lib (which I wrapped to process Result instead of Choice). I see side effects happening from left to right, stopping on the first error, which is what I want to happen.
It also looks like Scott's non-tail recursive version (which doesn't use foldBack and just recurses over the list) goes from left to right? The same goes for the AsyncSeq version. That would explain why I see it short circuit on the first error but surely if it completes Ok then the output items would be reversed, which is why we normally use foldback?
I feel I am misunderstanding or misreading something obvious! Could someone please explain it to me? :)
Edit:
rmunn has given a really great comprehensive explanation of the AsyncSeq traversal below. The TL;DR is that:
Scott's initial implementation and the AsyncSeq traverse both do go from left to right, as I thought, and so only process items until they hit an error;
they keep their contents in order by prepending the head to the processed tail, rather than prepending each processed result to the previous one (which is what the built-in F# fold does);
foldBack would keep things in order but would indeed process every item (which could take forever with an async seq).
It's pretty simple: traverseChoiceAsync isn't using foldBack. Yes, with foldBack the last item would be processed first, so that by the time you get to the first item and discover that its result is Error you'd have triggered the side effects of every item. Which is, I think, precisely why whoever wrote traverseChoiceAsync in FSharpx chose not to use foldBack, because they wanted to ensure that side effects would be triggered in order, and stop at the first Error (or, in the case of the Choice version of the function, the first Choice2Of2 — but I'll pretend from this point on that that function was written to use the Result type.)
Let's look at the traverseChoiceAsync function in the code you linked to, and read through it step-by-step. I'll also rewrite it to use Result instead of Choice, because the two types are basically identical in function but with different names in the DU, and it'll be a little easier to tell what's going on if the DU cases are called Ok and Error instead of Choice1Of2 and Choice2Of2. Here's the original code:
let rec traverseChoiceAsync (f:'a -> Async<Choice<'b, 'e>>) (s:AsyncSeq<'a>) : Async<Choice<AsyncSeq<'b>, 'e>> = async {
    let! s = s
    match s with
    | Nil -> return Choice1Of2 (Nil |> async.Return)
    | Cons(a,tl) ->
        let! b = f a
        match b with
        | Choice1Of2 b ->
            return! traverseChoiceAsync f tl |> Async.map (Choice.mapl (fun tl -> Cons(b, tl) |> async.Return))
        | Choice2Of2 e ->
            return Choice2Of2 e }
And here's the original code rewritten to use Result. Note that it's a simple rename, and none of the logic needs to be changed:
let rec traverseResultAsync (f:'a -> Async<Result<'b, 'e>>) (s:AsyncSeq<'a>) : Async<Result<AsyncSeq<'b>, 'e>> = async {
    let! s = s
    match s with
    | Nil -> return Ok (Nil |> async.Return)
    | Cons(a,tl) ->
        let! b = f a
        match b with
        | Ok b ->
            return! traverseResultAsync f tl |> Async.map (Result.map (fun tl -> Cons(b, tl) |> async.Return))
        | Error e ->
            return Error e }
Now let's step through it. The whole function is wrapped inside an async { } block, so let! inside this function means "unwrap" in an async context (essentially, "await").
let! s = s
This takes the s parameter (of type AsyncSeq<'a>) and unwraps it, binding the result to a local name s that henceforth will shadow the original parameter. When you await the result of an AsyncSeq, what you get is the first element only, while the rest is still wrapped in an async that needs to be further awaited. You can see this by looking at the result of the match expression, or by looking at the definition of the AsyncSeq type:
type AsyncSeq<'T> = Async<AsyncSeqInner<'T>>
and AsyncSeqInner<'T> =
    | Nil
    | Cons of 'T * AsyncSeq<'T>
So when you do let! x = s when s is of type AsyncSeq<'T>, the value of x will either be Nil (when the sequence has run to its end) or it will be Cons(head, tail) where head is of type 'T and tail is of type AsyncSeq<'T>.
So after this let! s = s line, our local name s now refers to an AsyncSeqInner type, which contains the head item of the sequence (or Nil if the sequence was empty), and the rest of the sequence is still wrapped in an AsyncSeq so it has yet to be evaluated (and, crucially, its side effects have not yet happened).
match s with
| Nil -> return Ok (Nil |> async.Return)
There's a lot happening in this line, so it'll take a bit of unpacking, but the gist is that if the input sequence s had Nil as its head, i.e. had reached its end, then that's not an error, and we return an empty sequence.
Now to unpack. The outer return is inside the async { } block, so it takes the Result (whose value is Ok something) and turns it into an Async<Result<something>>. Remembering that the return type of the function is declared as Async<Result<AsyncSeq>>, the inner something is clearly an AsyncSeq type.

So what's going on with that Nil |> async.Return? Well, async isn't an F# keyword, it's the name of an instance of AsyncBuilder. Inside a computation expression foo { ... }, return x is translated into foo.Return(x). So calling async.Return x is just the same as writing async { return x }, except that it avoids nesting a computation expression inside another computation expression, which would be a little nasty to try and parse mentally (and I'm not 100% sure the F# compiler allows it syntactically). So Nil |> async.Return is async.Return Nil, which means it produces a value of Async<x> where x is the type of the value Nil.

And as we just saw, this Nil is a value of type AsyncSeqInner, so Nil |> async.Return produces an Async<AsyncSeqInner>. And another name for Async<AsyncSeqInner> is AsyncSeq. So this whole expression produces an Async<Result<AsyncSeq>> that has the meaning of "We're done here, there are no more items in the sequence, and there was no error".
Phew. Now for the next line:
| Cons(a,tl) ->
Simple: if the next item in the AsyncSeq named s was a Cons, we deconstruct it so that the actual item is now called a, and the tail (another AsyncSeq) is called tl.
let! b = f a
This calls f on the value we just got out of s, and then unwraps the Async part of f's return value, so that b is now a Result<'b, 'e>.
match b with
| Ok b ->
More shadowed names. Inside this branch of the match, b now names a value of type 'b rather than a Result<'b, 'e>.
return! traverseResultAsync f tl |> Async.map (Result.map (fun tl -> Cons(b, tl) |> async.Return))
Hoo boy. That's too much to tackle at once. Let's write this as if the |> operators were lined up on separate lines, and then we'll go through each step one at a time. (Note that I've wrapped an extra pair of parentheses around this, just to clarify that it's the final result of this whole expression that will be passed to the return! keyword).
return! (
    traverseResultAsync f tl
    |> Async.map (
        Result.map (
            fun tl -> Cons(b, tl) |> async.Return)))
I'm going to tackle this expression from the inside out. The inner line is:
fun tl -> Cons(b, tl) |> async.Return
The async.Return thing we've already seen. This is a function that takes a tail (we don't currently know, or care, what's inside that tail, except that by the necessity of the type signature of Cons it must be an AsyncSeq) and turns it into an AsyncSeq that is b followed by the tail. I.e., this is like b :: tl in a list: it sticks b onto the front of the AsyncSeq.
One step out from that innermost expression is:
Result.map
Remember that the function map can be thought of in two ways: one is "take a function and run it against whatever is "inside" this wrapper". The other is "take a function that operates on 'T and make it into a function that operates on Wrapper<'T>". (If you don't have both of those clear in your mind yet, https://sidburn.github.io/blog/2016/03/27/understanding-map is a pretty good article to help grok that concept). So what this is doing is taking a function of type AsyncSeq -> AsyncSeq and turning it into a function of type Result<AsyncSeq> -> Result<AsyncSeq>. Alternatively, you could think of it as taking a Result<tail> and calling fun tail -> ... against that tail result, then re-wrapping the result of that function in a new Result.

Important: Because this is using Result.map (Choice.mapl in the original), we know that if the tail is an Error value (or if the Choice was a Choice2Of2 in the original), the function will not be called. So if traverseResultAsync produces a result that starts with an Error value, it's going to produce an Async<Result<foo>> where the value of Result<foo> is an Error, and so the value of the tail will be discarded. Keep that in mind for later.
Okay, next step out.
Async.map
Here, we have a Result<AsyncSeq> -> Result<AsyncSeq> function produced by the inner expression, and this converts it to an Async<Result<AsyncSeq>> -> Async<Result<AsyncSeq>> function. We've just talked about this, so we don't need to go over how map works again. Just remember that the effect of this Async<Result<AsyncSeq>> -> Async<Result<AsyncSeq>> function that we've built up will be the following:
Await the outer async.
If the result is Error, return that Error.
If the result is Ok tail, produce an Ok (Cons (b, tail)).
Next line:
traverseResultAsync f tl
I probably should have started with this, because this will actually run first, and then its value will be passed into the Async<Result<AsyncSeq>> -> Async<Result<AsyncSeq>> function that we've just analysed.
So what this whole thing will do is to say "Okay, we took the first part of the AsyncSeq we were handed, and passed it to f, and f produced an Ok result with a value we're calling b. So now we need to process the rest of the sequence similarly, and then, if the rest of the sequence produces an Ok result, we'll stick b on the front of it and return an Ok sequence with contents b :: tail. BUT if the rest of the sequence produces an Error, we'll throw away the value of b and just return that Error unchanged."
return!
This just takes the result we just got (either an Error or an Ok (b :: tail), already wrapped in an Async) and returns it unchanged. But note that the call to traverseResultAsync is NOT tail-recursive, because its value had to be passed into the Async.map (...) expression first.
And now we still have one more bit of traverseResultAsync to look at. Remember when I said "Keep that in mind for later"? Well, that time has arrived.
| Error e ->
return Error e }
Here we're back in the match b with expression. If b was an Error result, then no further recursive calls are made, and the whole traverseResultAsync returns an Async<Result> where the Result value is Error. And if we were currently nested deep inside a recursion (i.e., we're in the return! traverseResultAsync ... expression), then our return value will be Error, which means the result of the "outer" call, as we've kept in mind, will also be Error, discarding any other Ok results that might have happened "before".
Conclusion
And so the effect of all of that is:
Step through the AsyncSeq, calling f on each item in turn.
The first time f returns Error, stop stepping through, throw away any previous Ok results, and return that Error as the result of the whole thing.
If f never returns Error and instead returns Ok b every time, return an Ok result that contains an AsyncSeq of all those b values, in their original order.
Why are they in their original order? Because the logic in the Ok case is:
If sequence was empty, return an empty sequence.
Split into head and tail.
Get value b from f head.
Process the tail.
Stick value b in front of the result of processing the tail.
So if we started with (conceptually) [a1; a2; a3], which actually looks like Cons (a1, Cons (a2, Cons (a3, Nil))) we'll end up with Cons (b1, Cons (b2, Cons (b3, Nil))) which translates to the conceptual sequence [b1; b2; b3].
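To tie that back to the original question, here is a hedged usage sketch of traverseResultAsync (saveItem, the input values, and AsyncSeq.ofSeq are assumptions made for illustration). The side effects run left to right and stop at the first Error:

let saveItem (n: int) = async {
    printfn "saving %d" n   // the side effect we care about
    return (if n < 3 then Ok n else Error (sprintf "item %d failed" n)) }

let result =
    AsyncSeq.ofSeq [ 1; 2; 3; 4; 5 ]
    |> traverseResultAsync saveItem
    |> Async.RunSynchronously
// Prints "saving 1", "saving 2", "saving 3" and nothing more:
// result is Error "item 3 failed", and items 4 and 5 are never touched.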
See rmunn's great answer above for the explanation. I just wanted to post a little helper for anyone that reads this in the future; it allows you to use the AsyncSeq traverse with Result instead of the old Choice type it was written with:
let traverseResultAsyncM (mapping : 'a -> Async<Result<'b,'c>>) source =
    let mapping' =
        mapping
        >> Async.map (function
            | Ok x -> Choice1Of2 x
            | Error e -> Choice2Of2 e)
    AsyncSeq.traverseChoiceAsync mapping' source
    |> Async.map (function
        | Choice1Of2 x -> Ok x
        | Choice2Of2 e -> Error e)
Also here is a version for non-async mappings:
let traverseResultM (mapping : 'a -> Result<'b,'c>) source =
    let mapping' x = async {
        return
            mapping x
            |> function
                | Ok x -> Choice1Of2 x
                | Error e -> Choice2Of2 e
    }
    AsyncSeq.traverseChoiceAsync mapping' source
    |> Async.map (function
        | Choice1Of2 x -> Ok x
        | Choice2Of2 e -> Error e)

The API design philosophy in OCaml

After learning OCaml for about half a year, I am still struggling with the functional programming and imperative programming bits.
It is not about using list or array, but about API design.
For example, if I am about to write a stack for a user, should I present it in a functional or an imperative way?
A stack should have a function called pop, which returns the last element to the user and removes it from the stack. So if I design my stack in a functional way, then pop should return a tuple (last_element, new_stack), right? But I think that is ugly.
At the same time, I feel the functional way is more natural in functional programming.
So, how should I handle this kind of design problem?
Edit
I saw stack's source code and they define the type like this:
type 'a t = { mutable c : 'a list }
OK, internally the standard lib uses a list, which is immutable, but they encapsulate it in a mutable record.
I understand this as: this way, for the user, it is always one stack, so there is no need to return a tuple to the client.
But still, that is not a functional way, right?
Mutable structures are sometimes more efficient, but they're not persistent, which is useful in various situations (mostly for backtracking a failed computation). If the immutable interface has no or little performance overhead over the mutable interface, you should absolutely prefer the immutable one.
Functionally (i.e. without mutability), you can either define it exactly like List by using head/tail rather than pop, or you can, as you suggest, let the API handle state change by returning a tuple. This is comparable to how state monads are built.
So either it is the responsibility of the parent scope to handle the stack's state (e.g. through recursion), in which case stacks are exactly like lists, or some of this responsibility is loaded to the API through tuples.
Here is a quick attempt (pretending to know O'Caml syntax):
module Stack =
struct
  exception EmptyStack

  type 'a stack = 'a list

  let empty _ = ((), [])
  let push x stack = ((), x :: stack)
  let pop = function
    | x :: stack -> (x, stack)
    | [] -> raise EmptyStack
end
One use case would then be:
let (_, st) = Stack.empty ()
let (_, st) = Stack.push 1 st
let (_, st) = Stack.push 2 st
let (_, st) = Stack.push 3 st
let (x, st) = Stack.pop st
Instead of handling the tuples explicitly, you may want to hide passing on st all the time and invent an operator that makes the following syntax possible:
let (x, st) = (Stack.empty >>= Stack.push 1 >>=
Stack.push 2 >>= Stack.push 3 >>= Stack.pop) []
If you can make this operator, you have re-invented the state monad. :)
(Because all of the functions above take a state as their curried last argument, they can be partially applied. To expand on this, so it is more apparent what is going on, but less readable, see the rewrite below.)
let (x, st) = (fun st -> Stack.empty st >>= fun st -> Stack.push 1 st
                     >>= fun st -> Stack.push 2 st
                     >>= fun st -> Stack.push 3 st
                     >>= fun st -> Stack.pop st) []
This is one idiomatic way to deal with state and immutable data structures, at least.
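For what it's worth, here is a minimal hedged sketch of one such operator. It is not a full state-monad bind: it just threads the state from one operation to the next and discards the intermediate results, which is enough to make the piped example above typecheck against the Stack module sketched earlier:

(* ( >>= ) sequences two state-passing operations of type 's -> 'a * 's:
   run the first, drop its result, feed the new state to the second. *)
let ( >>= ) m k = fun st ->
  let (_, st') = m st in
  k st'

let (x, st) =
  (Stack.empty >>= Stack.push 1 >>= Stack.push 2 >>= Stack.push 3 >>= Stack.pop) []
(* x = 3, st = [2; 1] *)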

Does "Value Restriction" practically mean that there is no higher order functional programming?

Does "Value Restriction" practically mean that there is no higher order functional programming?
I have a problem that each time I try to do a bit of HOP I get caught by a VR error. Example:
let simple (s:string)= fun rq->1
let oops= simple ""
type 'a SimpleType= F of (int ->'a-> 'a)
let get a = F(fun req -> id)
let oops2= get ""
and I would like to know whether it is a problem with a particular implementation of the VR, or a general problem that has no solution in a mutable type-inferred language that doesn't include mutation in the type system.
Does “Value Restriction” mean that there is no higher order functional programming?
Absolutely not! The value restriction barely interferes with higher-order functional programming at all. What it does do is restrict some applications of polymorphic functions—not higher-order functions—at top level.
Let's look at your example.
Your problem is that oops and oops2 are both the identity function and have type forall 'a . 'a -> 'a. In other words, each is a polymorphic value. But the right-hand side is not a so-called "syntactic value"; it is a function application. (A function application is not allowed to return a polymorphic value, because if it were, you could construct a hacky function using mutable references and lists that would subvert the type system; that is, you could write a terminating function of type forall 'a 'b . 'a -> 'b.)
Luckily in almost all practical cases, the polymorphic value in question is a function, and you can define it by eta-expanding:
let oops x = simple "" x
This idiom looks like it has some run-time cost, but depending on the inliner and optimizer, the compiler can get rid of it; it's just the poor typechecker that is having trouble.
The oops2 example is more troublesome because you have to pack and unpack the value constructor:
let oops2 = F(fun x -> let F f = get "" in f x)
This is quite a bit more tedious, but the anonymous function fun x -> ... is a syntactic value, and F is a datatype constructor, and a constructor applied to a syntactic value is also a syntactic value, and Bob's your uncle. The packing and unpacking of F is all going to be compiled into the identity function, so oops2 is going to compile into exactly the same machine code as oops.
Things are even nastier when you want a run-time computation to return a polymorphic value like None or []. As hinted at by Nathan Sanders, you can run afoul of the value restriction with an expression as simple as rev []:
Standard ML of New Jersey v110.67 [built: Sun Oct 19 17:18:14 2008]
- val l = rev [];
stdIn:1.5-1.15 Warning: type vars not generalized because of
value restriction are instantiated to dummy types (X1,X2,...)
val l = [] : ?.X1 list
-
Nothing higher-order there! And yet the value restriction applies.
In practice the value restriction presents no barrier to the definition and use of higher-order functions; you just eta-expand.
I didn't know the details of the value restriction, so I searched and found this article. Here is the relevant part:
Obviously, we aren't going to write the expression rev [] in a program, so it doesn't particularly matter that it isn't polymorphic. But what if we create a function using a function call? With curried functions, we do this all the time:
- val revlists = map rev;
Here revlists should be polymorphic, but the value restriction messes us up:
- val revlists = map rev;
stdIn:32.1-32.23 Warning: type vars not generalized because of
value restriction are instantiated to dummy types (X1,X2,...)
val revlists = fn : ?.X1 list list -> ?.X1 list list
Fortunately, there is a simple trick that we can use to make revlists polymorphic. We can replace the definition of revlists with
- val revlists = (fn xs => map rev xs);
val revlists = fn : 'a list list -> 'a list list
and now everything works just fine, since (fn xs => map rev xs) is a syntactic value.
(Equivalently, we could have used the more common fun syntax:
- fun revlists xs = map rev xs;
val revlists = fn : 'a list list -> 'a list list
with the same result.) In the literature, the trick of replacing a function-valued expression e with (fn x => e x) is known as eta expansion. It has been found empirically that eta expansion usually suffices for dealing with the value restriction.
To summarise, it doesn't look like higher-order programming is restricted so much as point-free programming. This might explain some of the trouble I have when translating Haskell code to F#.
Edit: Specifically, here's how to fix your first example:
let simple (s:string)= fun rq->1
let oops= (fun x -> simple "" x) (* eta-expand oops *)
type 'a SimpleType= F of (int ->'a-> 'a)
let get a = F(fun req -> id)
let oops2= get ""
I haven't figured out the second one yet because the type constructor is getting in the way.
Here is the answer to this question in the context of F#.
To summarize, in F# passing a type argument to a generic (= polymorphic) function is a run-time operation, so it is actually type-safe to generalize (as in, you will not crash at runtime). The behaviour of a value generalized in this way can be surprising, though.
For this particular example in F#, one can recover generalization with a type annotation and an explicit type parameter:
type 'a SimpleType= F of (int ->'a-> 'a)
let get a = F(fun req -> id)
let oops2<'T> : 'T SimpleType = get ""
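A hedged usage sketch (the bindings fInt and fStr are made up) to show that the explicitly generalized value really can be instantiated at different types:

let (F fInt) = oops2<int>
let (F fStr) = oops2<string>
printfn "%d %s" (fInt 0 42) (fStr 0 "hi")   // prints "42 hi"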
