Invoking a continuation vs a function call

Let us consider a simple example of a factorial function, written in a pseudo-Scheme CPS style (explicit enumeration and ordering of intermediate results is omitted, as it would be very noisy):
(def (fact n k) (if (eq? n 0) (k 1) (fact (- n 1) (\result (k (* n result))))))
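For readers who want to run the example, here is a Clojure rendering of that pseudocode (a sketch of my own; the name fact-cps and the use of fn/identity are not part of the original question):

(defn fact-cps [n k]
  (if (zero? n)
    (k 1)                                               ; "return": apply the continuation
    (fact-cps (dec n) (fn [result] (k (* n result))))))

(fact-cps 5 identity) ;=> 120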
Is invoking a continuation, as in (k 1) (i.e. "returning" a value), somehow technically different from a "normal" function call, as in the else branch? The one thing that comes to my mind is that in this case the continuation is the only call that does not take another continuation as an argument :)
Also, can you say that this computation resembles a DFS walk of a dynamically constructed tree, with the current computation being the currently explored node and the "other unexplored branches" being the call stack/continuation?

Well, you've answered your first question yourself: the technical difference is that there are no more continuations being passed around; this is the point where the tension built up so far gets released, i.e. where the accumulated computations are actually performed [by a sequence of beta reductions].
And of course from the point of view of the language's semantics, this is an ordinary function application.
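To make that concrete, here is how (fact 2 id) unfolds, spelled out step by step in the same pseudo-notation (my own trace, not part of the original exchange):

(fact 2 id)
→ (fact 1 (λ (r1) (id (* 2 r1))))
→ (fact 0 (λ (r0) ((λ (r1) (id (* 2 r1))) (* 1 r0))))
→ ((λ (r0) ((λ (r1) (id (* 2 r1))) (* 1 r0))) 1)
→ ((λ (r1) (id (* 2 r1))) (* 1 1))
→ (id (* 2 (* 1 1)))
→ (id (* 2 1))
→ (id 2)
→ 2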
As for the second question, I may be understanding you wrong, but I'd say that every computation is a leaf-ward tree walk; the tree, however, is not dynamically constructed, but rather is a static (most often infinite) object [uniquely] defined by the program. The call stack [even if it's trivial, because of tail calls] is what you have already walked through (from the root to the current node), while the continuation is your future path from the point of applying the continuation to something (i.e. while you're applying this fact function, the next node is fact again unless n is 0).
(fact _ id)
├─ (= _ 0)?  →  (id 1)  →  1
└─ otherwise →  (fact _ (λ (x) (id (* 2 x))))
   ├─ (= _ 0)?  →  (id (* 2 1))  →  (id 2)  →  2
   └─ otherwise →  (fact _ (λ (x) (id (* 2 (* 1 x)))))
      ├─ (= _ 0)?  →  (id (* 2 (* 1 1)))  →  (id (* 2 1))  →  (id 2)  →  2
      └─ otherwise →  ...
If you like thinking in these directions, you might like to read about process trees in e.g. Hatcliff's "An Introduction to Online and Offline Partial Evaluation" (http://repository.readscheme.org/ftp/papers/pe98-school/hatcliff-DIKU-PE-summerschool.pdf -- btw, the topic of partial evaluation is damn interesting), and perhaps you might like (at least the first 20 or so pages of) Scott's "Lattice of flow diagrams" (https://www.cs.ox.ac.uk/files/3223/PRG03.pdf -- actually, imho, this paper translates more naturally to applicative functional languages).
Hope that gives you some insights.

Related

Minimal `set cover` solution in Clojure

I've been trying ways to distill (a lot of) database index suggestions into a set of indexes that are applicable to most databases. To do that, it turns out I need to solve a pretty basic but NP-complete set theory problem: the minimum set cover problem.
This means: given a set of sets s, select a subset of s that covers a certain domain u; if u isn't given, take it to be the union of s. The optimal subset is the one reaching a certain minimum, usually the minimal number of sets, but it could also be a minimum in total weight if the sets are weighted.
(def s #{#{1 4 7} #{1 2} #{2 5 6} #{2 5} #{3} #{8 6}})
(def u (apply set/union s))
(set-cover s u)
=> (#{7 1 4} #{6 2 5} #{3} #{6 8})
I implemented a naive version of this using clojure.math.combinatorics, relying on it returning subsets in order of increasing size:
;; assumes (require '[clojure.set :as set]
;;                   '[clojure.math.combinatorics :as combo])
(defn set-cover
  ([s]
   (set-cover s (apply set/union s)))
  ([s u]
   (->> s
        (combo/subsets)
        (filter (fn [s] (= u (apply set/union s))))
        first)))
However, this is very slow for larger s, because of the NP-complete nature of the problem and the repeated unions (even optimized ones). For my use case, a version supporting weighted sets would also be preferable.
Looking into optimized versions, most trails ended up in thesis-land, which I'm regrettably not smart enough for. I found this small Python implementation on SO:
def setCover(setList, target=None):
    if not setList: return None
    if target is None: target = set.union(*setList)
    bestCover = []
    for i, values in enumerate(setList):
        remaining = target - values
        if remaining == target: continue
        if not remaining: return [values]
        subCover = setCover(setList[i+1:], remaining)
        if not subCover: continue
        if not bestCover or len(subCover) < len(bestCover) - 1:
            bestCover = [values] + subCover
    return bestCover
It ticks many boxes:
- works recursively
- compares partial results as an optimization
- seems suitable for different minimum definitions: count or weight
- has additional optimizations I can grok, which can be done outside of the basic algorithm:
  - sorting input sets on a high minimum score (size, weight)
  - identifying unique singleton sets in u not found in other sets
I have been trying to translate this into Clojure as a loop-recur function, but couldn't get the basic version of it to work, since there are niggling paradigm gaps between the two languages.
Does anyone have suggestions on how I could go about solving this problem in Clojure, either tips on how to convert the Python algorithm successfully, or pointers to other Clojure (or even Java) libraries I could use, and how?
Here's a Clojure version of the greedy set cover algorithm, i.e. at each iteration it selects the set that covers the most still-uncovered elements. Rather than use loop/recur to build the complete result, it lazily returns each result element using lazy-seq:
(defn greedy-set-cover
  ([xs]
   (greedy-set-cover xs (apply set/union xs)))
  ([xs us]
   (lazy-seq
     (when (seq us)
       (let [x   (apply max-key #(count (set/intersection us %)) xs)
             xs' (disj xs x)
             us' (set/difference us x)]
         (cons x (greedy-set-cover xs' us')))))))
(def s #{#{1 4 7} #{1 2} #{2 5 6} #{2 5} #{3} #{8 6}})
(greedy-set-cover s) ;; = (#{7 1 4} #{6 2 5} #{6 8} #{3})
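The greedy version above is fast but only approximates the minimum. For completeness, here is a sketch of a fairly direct Clojure translation of the recursive Python algorithm from the question (my own rendering; exact-set-cover is a made-up name, and the code is a starting point rather than battle-tested):

(require '[clojure.set :as set])

(defn exact-set-cover
  "Direct translation of the recursive Python setCover above.
   Works on a vector of sets so that \"the rest of the list\" stays cheap."
  ([sets]
   (when (seq sets)
     (exact-set-cover (vec sets) (apply set/union sets))))
  ([sets target]
   (loop [i 0, best nil]
     (if (= i (count sets))
       best
       (let [values    (nth sets i)
             remaining (set/difference target values)]
         (cond
           ;; covers nothing new: skip this set
           (= remaining target) (recur (inc i) best)
           ;; covers everything that is left: a one-set cover of the remainder
           (empty? remaining)   [values]
           :else
           (let [sub (exact-set-cover (subvec sets (inc i)) remaining)]
             (if (and sub (or (nil? best) (< (inc (count sub)) (count best))))
               (recur (inc i) (into [values] sub))
               (recur (inc i) best)))))))))

;; With the s and u from the question:
;; (count (exact-set-cover (vec s) u)) ;=> 4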

Learning Clojure: recursion for Hidden Markov Model

I'm learning Clojure and started by copying the functionality of a Python program that would create genomic sequences by following an (extremely simple) Hidden Markov model.
In the beginning I stuck to the serial style of programming I know and used the def keyword a lot, thus solving the problem with tons of side effects and kicking almost every concept of Clojure right in the butt (although it worked as intended).
Then I tried to convert it to a more functional style, using loop, recur, atom and so on. Now when I run it I get an ArityException, but I can't even tell from the error message which function throws it.
(defn create-model [name pA pC pG pT pSwitch]
  ; converts probabilities to cumulative probabilities and returns a char
  (with-meta
    (fn []
      (let [x (rand)]
        (if (<= x pA)
          \A
          (if (<= x (+ pA pC))
            \C
            (if (<= x (+ pA pC pG))
              \G
              \T)))))                          ; the function object
    {:p-switch pSwitch :name name}))           ; the metadata, used to change model

(defn create-genome [n]
  ; adds random chars from a model to a string and switches models randomly
  (let [models [(create-model "std" 0.25 0.25 0.25 0.25 0.02) ; standard model, equal probabilities
                (create-model "cpg" 0.1 0.4 0.4 0.1 0.04)]    ; cpg model
        islands (atom 0)  ; island counter
        m (atom 0)]       ; model index
    (loop [result (str)]
      (let [model (nth models @m)]
        (when (< (rand) (:p-switch (meta model)))  ; random says "switch model!"
          ; (swap! m #(mod (inc @m) 2))            ; swap model used in next iteration
          (swap! m #(mod (inc %) 2))               ; EDIT: correction
          (if (= @m 1)                             ; on switch to model 1, increase island counter
            ; (swap! islands #(inc @islands))      ; EDIT: my try, with error
            (swap! islands inc)))                  ; EDIT: correction
        (if (< (count result) n)                   ; repeat until result reaches length n
          (recur (format "%s%c" result (model)))
          result)))))
Running it works, but calling (create-genome 1000) leads to
ArityException Wrong number of args (1) passed to: user/create-genome/fn--772 clojure.lang.AFn.throwArity (AFn.java:429)
My questions:
(obviously) what am I doing wrong?
how exactly do I have to understand the error message?
Information I'd be glad to receive
how can the code be improved (in a way a clojure-newb can understand)? Also different paradigms - I'm grateful for suggestions.
Why do I need to put pound-signs # before the forms I use in changing the atoms' states? I saw this in an example, the function wouldn't evaluate without it, but I don't understand :)
Since you asked for ways to improve, here's one approach I often find myself reaching for: can I abstract this loop into a higher-order pattern?
In this case, your loop is picking characters at random - this can be modelled as calling a fn of no arguments that returns a character - and then accumulating them together until it has enough of them. This fits very naturally into repeatedly, which takes functions like that and makes lazy sequences of their results to whatever length you want.
Then, because you have the entire sequence of characters all together, you can join them into a string a little more efficiently than repeated formats - clojure.string/join should fit nicely, or you could apply str over it.
Here's my attempt at such a code shape - I tried to also make it fairly data-driven and that may have resulted in it being a bit arcane, so bear with me:
(defn make-generator
  "Takes a probability distribution, in the form of a map
   from values to the desired likelihood of that value appearing in the output.
   Normalizes the probabilities and returns a nullary producer fn with that distribution."
  [p-distribution]
  (let [sum-probs  (reduce + (vals p-distribution))
        normalized (reduce #(update-in %1 [%2] / sum-probs) p-distribution (keys p-distribution))]
    (fn [] (reduce
             #(if (< %1 (val %2)) (reduced (key %2)) (- %1 (val %2)))
             (rand)
             normalized))))
(defn markov-chain
  "Takes a series of states, returns a producer fn.
   Each call, the process changes to the next state in the series with probability :p-switch,
   and produces a value from the :producer of the current state."
  [states]
  (let [cur-state   (atom (first states))
        next-states (atom (cycle states))]
    (fn []
      (when (< (rand) (:p-switch @cur-state))
        (reset! cur-state (first @next-states))
        (swap! next-states rest))
      ((:producer @cur-state)))))
(def my-states
  [{:p-switch 0.02 :producer (make-generator {\A 1 \C 1 \G 1 \T 1}) :name "std"}
   {:p-switch 0.04 :producer (make-generator {\A 1 \C 4 \G 4 \T 1}) :name "cpg"}])

(defn create-genome [n]
  (->> my-states
       markov-chain
       (repeatedly n)
       clojure.string/join))
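A quick sanity check of this rewritten create-genome at the REPL (my own check; the exact string varies because it is random):

(count (create-genome 100)) ;=> 100
(set (create-genome 100))   ;   some subset of #{\A \C \G \T}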
To hopefully explain a little of the complexity:
The let in make-generator is just making sure the probabilities sum to 1.
make-generator makes heavy use of another higher-order looping pattern, namely reduce. This essentially takes a function of 2 values and threads a collection through it. (reduce + [4 5 2 9]) is like (((4 + 5) + 2) + 9). I chiefly use it to do a similar thing to your nested ifs in create-model, but without naming how many values are in the probability distribution.
markov-chain makes two atoms, cur-state to hold the current state and next-states, which holds an infinite sequence (from cycle) of the next states to switch to. This is to work like your m and models, but for arbitrary numbers of states.
I then use when to check whether the random state switch should occur and, if it does, perform the two side effects I need to keep the state atoms up to date. Then I just call the :producer of @cur-state with no arguments and return that.
Now obviously, you don't have to do this exactly this way, but looking for those patterns certainly does tend to help me.
If you want to go even more functional, you could also consider moving to a design where your generators take a state (with seeded random number generator) and return a value plus a new state. This "state monad" approach would make it possible to be fully declarative, which this design isn't.
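As a postscript, here is a minimal sketch of that "state in, value plus new state out" idea (every name here, split-rand, hmm-step, demo-states, pure-genome, is mine and not from the answer; the emit functions take the random number as an argument instead of calling rand, and determinism comes from threading a seed):

(import 'java.util.Random)

(defn split-rand
  "Given a long seed, returns [uniform-double next-seed]."
  [seed]
  (let [rng (Random. seed)]
    [(.nextDouble rng) (.nextLong rng)]))

(defn hmm-step
  "Pure step of the chain: {:seed .. :states ..} -> [emitted-char new-state].
   Each state is assumed to look like {:p-switch p :emit (fn [r] ...char...)}."
  [{:keys [seed states]}]
  (let [[r-switch seed']  (split-rand seed)
        states            (if (< r-switch (:p-switch (first states)))
                            (rest states)
                            states)
        [r-emit seed'']   (split-rand seed')]
    [((:emit (first states)) r-emit)
     {:seed seed'' :states states}]))

;; Two hypothetical states emitting A/C/G/T with different biases.
(def demo-states
  (cycle [{:p-switch 0.02 :emit (fn [r] (nth "ACGT" (int (* r 4))))}
          {:p-switch 0.04 :emit (fn [r] (nth "ACCGGT" (int (* r 6))))}]))

(defn pure-genome [n seed]
  (->> [nil {:seed seed :states demo-states}]
       (iterate (fn [[_ state]] (hmm-step state)))
       rest
       (map first)
       (take n)
       (apply str)))

;; (pure-genome 20 42) returns the same 20-character string every time for seed 42.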
Ok, it's a long shot, but it looks like your atom-updating functions:
#(mod (inc @m) 2)
and
#(inc @islands)
are of 0-arity, and they should be of arity at least 1.
This leads to the answer to your last question: the #(body) form is a shortcut for (fn [...] (body)). So it creates an anonymous function.
Now the trick is that if body contains % or %x, where x is a number, the position where it appears will be replaced with a reference to the created function's argument number x (or the first argument if it's just %).
In your case that body doesn't contain references to the function arguments, so #() creates an anonymous function that takes no arguments, which is not what swap! expects.
So swap tries to pass an argument to something that doesn't expect it and boom!, you get an ArityException.
What you really needed in those cases was:
(swap! m #(mod (inc %) 2)) ; will swap value of m to (mod (inc current-value-of-m) 2) internally
and
(swap! islands inc) ; will swap value of islands to (inc current-value-of-islands) internally
respectively
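To see the arity point at the REPL, here is a small demo of my own (not from the original post):

(def a (atom 0))

(swap! a #(inc @a)) ; throws ArityException: #(inc @a) takes no arguments,
                    ; but swap! always calls the fn with the atom's current value
(swap! a #(inc %))  ;=> 1   the anonymous fn now accepts that value
(swap! a inc)       ;=> 2   or simply pass inc itself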
Your mistake has to do with what you asked about the hashtag macro #.
#(+ %1 %2) is shorthand for (fn [x y] (+ x y)). It can be without arguments too: #(+ 1 1). That's how you are using it. The error you are getting is because swap! needs a function that accepts a parameter. What it does is pass the atom's current value to your function. If you don't care about its state, use reset!: (reset! an-atom (+ 1 1)). That will fix your error.
Correction:
I just took a second look at your code and realised that you are actually working with the atoms' current states. So what you want to do is this:
(swap! m #(mod (inc %) 2)) instead of (swap! m #(mod (inc @m) 2)).
As far as style goes, you are doing good. I write my functions differently every day of the week, so maybe I'm not one to give advice on that.

Summing all the multiples of three recursively in Clojure

Hi, I am a bit new to Clojure/Lisp programming, but I have used recursion before in C-like languages. I have written the following code to sum all numbers divisible by three between 1 and 100.
(defn is_div_by_3 [number]
  (if (= 0 (mod number 3))
    true false)
)

(defn sum_of_mult3 [step, sum]
  (if (= step 100)
    sum
  )
  (if (is_div_by_3 step)
    (sum_of_mult3 (+ step 1) (+ sum step))
  )
)
My thought was to end the recursion when step reaches 100, at which point the sum variable that I return would hold the total of all the multiples I need, but my REPL seems to return nil. What might be wrong here?
if is an expression, not a statement. The result of the if is always one of its branches. In fact, Clojure doesn't have statements at all, as stated here:
Clojure programs are composed of expressions. Every form not handled specially by a special form or macro is considered by the compiler to be an expression, which is evaluated to yield a value. There are no declarations or statements, although sometimes expressions may be evaluated for their side-effects and their values ignored.
There is a nice online (and free) book for beginners: http://www.braveclojure.com
Another thing: the parentheses in Lisps are not equivalent to curly braces in the C-family languages. For example, I would write your is_div_by_3 function as:
(defn div-by-3? [number]
  (zero? (mod number 3)))
I would also use a more idiomatic approach for the sum_of_mult3 function:
(defn sum-of-mult-3 [max]
  (->> (range 1 (inc max))
       (filter div-by-3?)
       (apply +)))
I think this code is much more expressive of its intention than the recursive version. The only tricky part is the ->> thread-last macro. Take a look at this answer for an explanation of the thread-last macro.
There are a few issues with this code.
1) Your first if in sum_of_mult3 is a no-op: nothing it returns can affect the execution of the function.
2) The second if in sum_of_mult3 has only one branch, a direct recursion when step is a multiple of 3. For most numbers that branch is not taken, and the missing else branch is an implicit nil, which your function is guaranteed to return regardless of input (even if the first arg provided is a multiple of three, the next recurred value will not be).
3) When possible, use recur instead of a self-call: self-calls consume the stack, while recur compiles into a simple loop that does not.
Finally, some style issues:
1) always put closing parens on the same line with the block they are closing. This makes Lisp style code much more readable, and if nothing else most of us also read Algol style code, and putting the parens in the right place reminds us which kind of language we are reading.
2) (if (= 0 (mod number 3)) true false) is the same as (= 0 (mod number 3)), which in turn is identical to (zero? (mod number 3))
3) use (inc x) instead of (+ x 1)
4) for more than two potential actions, use case, cond, or condp
(defn sum-of-mult3
  [step sum]
  (cond
    (= step 100)         sum
    (zero? (mod step 3)) (recur (inc step) (+ sum step))
    :else                (recur (inc step) sum)))
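For reference (my own check, not part of the answer), calling this with the starting values from the question gives:

(sum-of-mult3 1 0) ;=> 1683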
In addition to Rodrigo's answer, here's the first way I thought of solving the problem:
(defn sum-of-mult3 [n]
  (->> n
       range
       (take-nth 3)
       (apply +)))
This should be self-explanatory. Here's a more "mathematical" way without using sequences, taking into account that the sum of all numbers up to N inclusive is (N * (N + 1)) / 2.
(defn sum-of-mult3* [n]
  (let [x (quot (dec n) 3)]
    (* 3 x (inc x) 1/2)))
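As a quick cross-check (the value 1683 is my own computation, not from the answer), both versions agree on the question's range:

(sum-of-mult3 100)  ;=> 1683
(sum-of-mult3* 100) ;=> 1683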
Like Rodrigo said, recursion is not the right tool for this task.

What to return in a collection when using map

I read a lot of documentation about Clojure (and shall need to read it again) and read several Clojure questions here on SO to get a "feel" for the language. Besides a few tiny functions in elisp, I've never written in any Lisp language before. I wrote my first Project Euler solution in Clojure, and before going further I'd like to better understand something about map and reduce.
Using a lambda, I ended up with the following (to sum all multiples of either 3 or 5 or both between 1 and 1000 inclusive):
(reduce + (map #(if (or (= 0 (mod %1 3)) (= 0 (mod %1 5))) %1 0) (range 1 1000)))
I put it on one line because I wrote it on the REPL (and it gives the correct solution).
Without the lambda, I wrote this:
(defn val [x] (if (or (= 0 (mod x 3)) (= 0 (mod x 5))) x 0))
And then I compute the solution doing this:
(reduce + (map val (range 1 1000)))
In both cases, my question concerns what the map should return, before doing the reduce. After doing the map I noticed I ended up with a list looking like this: (0 0 3 0 5 6 ...).
I tried removing the '0' at the end of the val definition, but then I received a list made of (nil nil 3 nil 5 6 etc.). I don't know whether the nils are an issue or not. I figured that since I was going to sum while doing a fold-left anyway, the zeroes weren't really an issue.
But still: what's a sensible map to return? (0 0 3 0 5 6 ...) or (nil nil 3 nil 5 6...) or (3 5 6 ...) (how would I go about this last one?) or something else?
Should I "filter out" the zeroes / nils and if so how?
I know I'm asking a basic question but map/reduce is obviously something I'll be using a lot so any help is welcome.
It sounds like you already have an intuitive understanding of the need to separate the mapping concerns from the reducing. It's perfectly natural to have data produced by map that is not used by the reduce. In fact, using the fact that zero is the identity value for addition makes this even more elegant.
map's job is to produce the new data (in this case 3, 5, or "ignore").
reduce's job is to decide what to include and to produce the final result.
What you started with is idiomatic Clojure and there is no need to complicate it any more, so this next example is just to illustrate the point of having the reduce function decide what to include:
(reduce #(if-not (zero? %1) (+ %1 %2) %2) (map val (range 10)))
In this contrived example the reduce function ignores the zeros. In typical real-world code, if the idea is as simple as filtering out some value, people tend to just use the filter function:
(reduce + (filter #(not (zero? %)) (map val (range 10))))
you can also just start with filter and skip the map:
(reduce + (filter #(or (zero? (rem % 3)) (zero? (rem % 5))) (range 10)))
The watchword is clarity.
- Use filter, not map. Then you don't have to choose a null value that you later have to decide not to act on.
- Naming the filtering/mapping function can help. Do so with let or letfn, not defn, unless you have use for the function elsewhere.
Acting on this advice brings us to ...
(let [divides-by-3-or-5? (fn [n] (or (zero? (mod n 3)) (zero? (mod n 5))))]
(reduce + (filter divides-by-3-or-5? (range 1 1000))))
You may want to stop here for now.
This reads well, but the divides-by-3-or-5? function sticks in the throat. Change the factors and we need a completely new function. And that repeated phrase (zero? (mod n ...)) jars. So ...
We want a function that - given a list (or other collection) of possible factors - tells us whether any of them apply to a given number. In other words, we want
a function of a collection of numbers - the possible factors - ...
that returns a function of one number - the candidate - ...
that tells us whether the candidate is divisible by any of the possible factors.
One such function is
(fn [ns] (fn [n] (some (fn [x] (zero? (mod n x))) ns)))
... which we can employ thus
(let [divides-by-any? (fn [ns] (fn [n] (some (fn [x] (zero? (mod n x))) ns)))]
(reduce + (filter (divides-by-any? [3 5]) (range 1 1000))))
Notes
This "improvement" has made the program a little slower.
divides-by-any? might prove useful enough to be promoted to a
defn.
If the operation were critical, you could consider stripping out
redundant factors. For example [2 3 6] could be reduced to [6].
If the operation were really critical, and the factors were supplied
as constants, you could consider creating the filter function with a
macro that went back to using or.
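For what it is worth, here is one shape such a macro could take (a hypothetical sketch of the idea in the last note, with names of my own choosing):

(defmacro divides-by-any-of?
  "Expands, for a literal vector of factors, into the hand-written or-expression."
  [factors n-sym]
  `(or ~@(for [f factors]
           (list 'zero? (list 'mod n-sym f)))))

;; (macroexpand-1 '(divides-by-any-of? [3 5] n))
;; ;=> (clojure.core/or (zero? (mod n 3)) (zero? (mod n 5)))

(reduce + (filter (fn [n] (divides-by-any-of? [3 5] n)) (range 1 1000)))
;=> 233168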
This is a bit of a shaggy-dog story, but it recounts the thoughts prompted by the problem you refer to.
In your case I would use keep instead of map. It is similar to map except that it keeps only the non-nil values.
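A quick illustration of keep on the same problem (my own snippet; it inlines the divisibility test rather than reusing the val helper from the question):

(reduce + (keep #(when (or (zero? (mod % 3)) (zero? (mod % 5))) %)
                (range 1 1000)))
;=> 233168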

In which functional programming function would one grow a set of items?

Which of the three (if any (please provide an alternative)) would be used to add elements to a list of items?
Fold
Map
Filter
Also: how would items be added? (appended to the end / inserted after the working item / other)
A list in functional programming is usually defined as a recursive data structure that is either a special empty value, or is composed of a value (dubbed "head") and another list (dubbed "tail"). In Haskell:
-- A* = 1 + A x A*
-- there is a builtin list type:
data [a] = [] | (a : [a])
To add an element at the head, you can use "cons": the function that takes a head and a tail, and produces the corresponding list.
-- (:) is "cons" in Haskell
(:) :: a -> [a] -> [a]
x = [1,2,3] -- this is short for (1:(2:(3:[])))
y = 0 : x -- y = [0,1,2,3]
To add elements at the end, you need to recurse down the list to add it. You can do this easily with a fold.
consAtEnd :: a -> [a] -> [a]
consAtEnd x = foldr (:) [x]
-- this "rebuilds" the whole list with cons,
-- but uses [x] in the place of []
-- effectively adding to the end
To add elements in the middle, you need to use a similar strategy:
consAt :: Int -> a -> [a] -> [a]
consAt n x l = consAtEnd x (take n l) ++ drop n l
-- ++ is the concatenation operator: it joins two lists
-- into one.
-- take picks the first n elements of a list
-- drop picks all but the first n elements of a list
Notice that, except for insertions at the head, these operations traverse the whole list, which may become a performance issue.
"cons" is the low-level operation used in most functional programming languages to construct various data structure including lists. In lispy syntax it looks like this:
(cons 0 (cons 1 (cons 2 (cons 3 nil))))
Visually this is a linked list
0 -> 1 -> 2 -> 3 -> nil
Or perhaps more accurately:
cons -- cons -- cons -- cons -- nil
 |       |       |       |
 0       1       2       3
Of course you could construct various "tree"-like data structures with cons as well.
A tree-like structure might look something like this:
(cons (cons 1 2) (cons 3 4))
I.e. Visually:
      cons
     /    \
  cons    cons
  /  \    /  \
 1    2  3    4
However most functional programming languages will provide many "higher level" functions for manipulating lists.
For example, in Haskell there's
Append: (++) :: [a] -> [a] -> [a]
List comprehension: [foo c | c <- s]
Cons: (:) :: a -> [a] -> [a] (as Martinho already mentioned)
And many many more
Just to offer a concluding remark: you wouldn't often operate on individual elements of a list in the way you're probably thinking; that is an imperative mindset. You're more likely to copy the entire structure using a recursive function or something along those lines. The compiler/virtual machine is responsible for recognizing when the memory can be modified in place, updating pointers, and so on.
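To make the head-versus-end distinction concrete in a Lisp, here is a small Clojure illustration (mine, not taken from the answers above):

(def xs (list 1 2 3))

(cons 0 xs)          ;=> (0 1 2 3)   constant time: the new list shares xs as its tail
(concat xs (list 4)) ;=> (1 2 3 4)   adding at the end walks and rebuilds the list
(conj [1 2 3] 4)     ;=> [1 2 3 4]   vectors are the idiomatic choice for growing at the end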

Resources