I need to implement quicksort in SML for a homework assignment, and I'm lost. I wasn't previously familiar with how quicksort is implemented, so I read up on it, but every implementation I found was imperative. Those didn't look too difficult, but I had no idea how to implement quicksort functionally.
Wikipedia happens to have quicksort code in Standard ML (which is the language required for my assignment), but I don't understand how it's working.
Wikipedia code:
val filt = List.filter
fun quicksort << xs = let
    fun qs [] = []
      | qs [x] = [x]
      | qs (p::xs) = let
            val lessThanP = (fn x => << (x, p))
        in
            qs (filt lessThanP xs) @ p :: (qs (filt (not o lessThanP) xs))
        end
  in
    qs xs
  end
In particular, I don't understand this line: qs (filt lessThanP xs) @ p :: (qs (filt (not o lessThanP) xs)). filt will return a list of everything in xs less than p*, which is concatenated with p, which is cons-ed onto everything >= p.*
*assuming the << (x, p) function returns true when x < p. Of course it doesn't have to be that.
Actually, typing this out is helping me understand what's going on a bit. Anyway, I'm trying to compare that SML function to Wikipedia's quicksort pseudocode, which follows.
function quicksort(array, 'left', 'right')
    // If the list has 2 or more items
    if 'left' < 'right'
        // See "Choice of pivot" section below for possible choices
        choose any 'pivotIndex' such that 'left' ≤ 'pivotIndex' ≤ 'right'
        // Get lists of bigger and smaller items and final position of pivot
        'pivotNewIndex' := partition(array, 'left', 'right', 'pivotIndex')
        // Recursively sort elements smaller than the pivot
        quicksort(array, 'left', 'pivotNewIndex' - 1)
        // Recursively sort elements at least as big as the pivot
        quicksort(array, 'pivotNewIndex' + 1, 'right')
Where partition is defined as
// left is the index of the leftmost element of the array
// right is the index of the rightmost element of the array (inclusive)
// number of elements in subarray = right-left+1
function partition(array, 'left', 'right', 'pivotIndex')
    'pivotValue' := array['pivotIndex']
    swap array['pivotIndex'] and array['right'] // Move pivot to end
    'storeIndex' := 'left'
    for 'i' from 'left' to 'right' - 1 // left ≤ i < right
        if array['i'] < 'pivotValue'
            swap array['i'] and array['storeIndex']
            'storeIndex' := 'storeIndex' + 1
    swap array['storeIndex'] and array['right'] // Move pivot to its final place
    return 'storeIndex'
So, where exactly is the partitioning happening? Or am I thinking about SML's quicksort wrongly?
A purely functional implementation of quicksort works by structural recursion on the input list (IMO, this is worth mentioning). Moreover, as you can see, the two calls to "filt" partition the input list into two sublists (say A and B), which can then be treated individually. What is important here is that:
all elements of A are less than the pivot element ("p" in the code)
all elements of B are greater than or equal to the pivot element
An imperative implementation works in place, by swapping elements within the same array. In the pseudocode you've provided, the postcondition of the "partition" function is that you have two subarrays: one running from 'left' up to 'pivotNewIndex' - 1, and another running from 'pivotNewIndex' + 1 to 'right', with the pivot itself placed at 'pivotNewIndex'. What is important here is that the two subarrays can be seen as representations of the sublists A and B.
I think that by now you have an idea of where the partitioning step is happening (or, put differently, how the imperative and functional versions are related).
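If it helps to see that two-sublist split spelled out, here is a rough sketch of the same scheme in OCaml (my own translation, not the Wikipedia code), where List.partition produces A and B in a single pass:

(* Split the tail into elements below the pivot and the rest, sort each
   half recursively, and splice the pivot in between. *)
let rec quicksort lt xs =
  match xs with
  | [] | [_] -> xs
  | p :: rest ->
      let smaller, greater_or_equal = List.partition (fun x -> lt x p) rest in
      quicksort lt smaller @ (p :: quicksort lt greater_or_equal)

(* quicksort (<) [3; 1; 4; 1; 5] evaluates to [1; 1; 3; 4; 5] *)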
You said this:
filt will return a list of everything in xs less than p*, which is concatenated with p, which is cons-ed onto everything >= p.*
That's not quite accurate. filt will return a list of everything in xs less than p, but that new list isn't immediately concatenated with p. The new list is in fact passed to qs (recursively), and whatever qs returns is concatenated with p.
In the pseudocode version, the partitioning happens in-place in the array variable. That's why you see swap in the partition loop. Doing the partitioning in-place is much better for performance than making a copy.
Related
I encountered this competitive programming problem:
nums is a vector of integers (length n)
ops is a vector of strings containing + and - (length n-1)
It can be solved with the reduce operation in Kotlin like this:
val op_iter = ops.iterator();
nums.reduce { a, b ->
    when (op_iter.next()) {
        "+" -> a + b
        "-" -> a - b
        else -> throw Exception()
    }
}
reduce is described as:
Accumulates value starting with the first element and applying operation from left to right to current accumulator value and each element.
It looks like Rust vectors do not have a reduce method. How would you achieve this task?
Edit: since Rust version 1.51.0, this function exists and is called reduce.
Be aware of the similar function called fold. The difference is that reduce will produce None if the iterator is empty, while fold takes an initial accumulator and will produce that accumulator's value if the iterator is empty.
The outdated answer below is left to capture the history of the debate about how to name this function:
There is no reduce in Rust 1.48. In many cases you can simulate it with fold, but be aware that the semantics of these functions are different. If the iterator is empty, fold will return the initial value, whereas reduce returns None. If you want to multiply all the elements, for example, getting the result 1 for an empty set is not very logical.
Rust does have a fold_first function which is equivalent to Kotlin's reduce, but it is not stable yet. The main discussion is about what to name it. It is a safe bet to use it if you are OK with nightly Rust, because there is little chance the function will be removed; in the worst case, the name will change. If you need stable Rust, use fold if you are OK with an illogical result for empty sequences. If not, you'll have to implement it yourself, or find a crate such as reduce.
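The fold/reduce distinction itself is not Rust-specific; as a language-agnostic illustration, here is a small OCaml sketch with a hand-written reduce (not a standard-library function):

(* reduce has nothing to start from on an empty list, so it returns an option;
   fold_left simply returns its seed. *)
let reduce f = function
  | [] -> None
  | x :: xs -> Some (List.fold_left f x xs)

let () =
  assert (List.fold_left ( * ) 1 [] = 1);   (* fold of nothing: the seed *)
  assert (reduce ( * ) [] = None);          (* reduce of nothing: no answer *)
  assert (reduce ( * ) [2; 3; 4] = Some 24)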
Kotlin's reduce takes the first item of the iterator as the starting point, while Rust's fold and try_fold let you specify a custom starting point.
Here is an equivalent of the Kotlin code:
let mut it = nums.iter().cloned();
let start = it.next().unwrap();
it.zip(ops.iter()).try_fold(start, |a, (b, op)| match op {
    '+' => Ok(a + b),
    '-' => Ok(a - b),
    _ => Err(()),
})
Playground
Or since we're starting from a vector, which can be indexed:
nums[1..]
    .iter()
    .zip(ops.iter())
    .try_fold(nums[0], |a, (b, op)| match op {
        '+' => Ok(a + b),
        '-' => Ok(a - b),
        _ => Err(()),
    });
Playground
I'm studying the programming language Standard ML and I am wondering how I can iterate over a list with a check condition.
In other languages we have for loops like:
var input;
for (var i = 0; i < arr.length; i++) {
    if (arr[i] == input) {
        // you have arrived at your condition...
    } else {
        // other case
    }
}
For example, I want to iterate through a list and check if the input variable matches an existing element in the list:
i = 5
xs = [1,5,2,3,6] --> the element matches after one iteration.
fun check i nil = []
| check i (x::xs) = if i=x
then //dowork
else //iterate;
I've gone through a lot of documentation on how to implement this, without success.
It would be really helpful if someone could explain how I can use let val A in B end; inside or outside of if conditions for this kind of work.
how I can iterate over a list with a check condition
fun check i nil = []
| check i (x::xs) = if i=x
then //dowork
else //iterate;
I want to iterate through a list and check if the input variable matches an existing element in the list.
I would call this a predicate combinator. It already exists in the standard library and is called List.exists. But you can also make it yourself:
fun exists p [] = false
| exists p (x::xs) = p x orelse exists p xs
This is a simplification of the if-then-else you're attempting, which would look like:
fun exists p [] = false
| exists p (x::xs) = if p x then true else exists p xs
If-then-else isn't really necessary when the result type is a boolean, since orelse, andalso and not are short-circuiting (will not evaluate their second operand if the result can be determined with the first).
Using this List.exists function to check if a list contains a specific element, you have to construct a p that compares the list element with some given value, e.g.:
fun check y xs = List.exists (fn x => ...) xs
This may seem a bit more complicated than simply writing check recursively from scratch,
fun check y [] = false
| check y (x::xs) = ... orelse check y xs
but a solution using higher-order functions is preferred for several reasons.
One is that a seasoned reader will quickly see what you're doing when they spot List.exists: "Ah, you're scanning a list for an element that satisfies a predicate." Whereas if your function is explicitly recursive, the reader has to work through the entire recursion scheme: "OK, the function doesn't do anything funky, which I'd have known immediately if I'd seen e.g. List.exists."
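For comparison, here is what the filled-in membership check looks like in OCaml, whose standard library also offers List.mem for exactly this (the helper below is my own illustration, not part of the answer above):

(* Membership test built from List.exists, plus two quick checks. *)
let check y xs = List.exists (fun x -> x = y) xs

let () =
  assert (check 5 [1; 5; 2; 3; 6]);
  assert (not (check 7 [1; 5; 2; 3; 6]))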
I have a Vec<Vec<char>> and I want to find all the x,y positions of a specific character, let's say 'x'. I can use a double for loop with enumerate and manually build up the solution (and I would guess this is the sane thing to do), but is there a nice way to do it with nothing but iterators?
More or less I'm looking for ways to clean this up:
let locs: Vec<(usize, (usize, &char))> = grid.iter()
    .enumerate()
    .flat_map(|(ind, row)|
        iter::repeat(ind)
            .zip(row.iter()
                .enumerate()))
    .filter(|&(x, (y, ch))| *ch == 'x')
    .collect();
For one, is there a way to flatten the tuples?
Here's my attempt, which does flatten the tuples:
let locs: Vec<(usize, usize, char)> = grid.iter()
    .enumerate()
    .flat_map(|(y, row)| {
        row.iter()
            .enumerate()
            .map(move |(x, &c)| (x, y, c))
    })
    .filter(|&(_, _, c)| c == 'x')
    .collect();
println!("{:?}", locs)
Playground
My approach was to first flatten to (x,y,c) and then filter. I took the liberty of returning the actual chars rather than references.
The move closure was needed because otherwise the inner closure (which lives longer, inside the iterator, than the outer closure) had a reference to the outer y.
If I wanted to do this more often, I would write an Iterator implementation which let me do:
let locs: Vec<(usize, usize, char)> =
    iter2d(grid)
        .filter(|&(_, _, c)| c == 'x')
        .collect();
The implementation is left as an exercise for the reader. :-)
For a list, you can do pattern matching and iterate until the nth element, but for a tuple, how would you grab the nth element?
TL;DR: Stop trying to access the n-th element of a t-uple directly; use a record or an array instead, as they allow random access.
You can grab the n-th element by unpacking the t-uple with value deconstruction, either by a let construct, a match construct or a function definition:
let ivuple = (5, 2, 1, 1)

let squared_sum_let =
  let (a, b, c, d) = ivuple in
  a*a + b*b + c*c + d*d

let squared_sum_match =
  match ivuple with (a, b, c, d) -> a*a + b*b + c*c + d*d

let squared_sum_fun (a, b, c, d) =
  a*a + b*b + c*c + d*d
The match-construct has here no virtue over the let-construct, it is just included for the sake of completeness.
Do not use t-uples, Don¹
There are only a few cases where using t-uples to represent a type is the right thing to do. Most of the time, we pick a t-uple because we are too lazy to define a type, and we should interpret the problem of accessing the n-th field of a t-uple, or of iterating over the fields of a t-uple, as a serious signal that it is time to switch to a proper type.
There are two natural replacements to t-uples: records and arrays.
When to use records
We can see a record as a t-uple whose entries are labelled; as such, records are definitely the most natural replacement for t-uples if we want to access their fields directly.
type ivuple = {
  a: int;
  b: int;
  c: int;
  d: int;
}
We then access the field a of a value x of type ivuple directly by writing x.a. Note that records are easily copied with modifications, as in let y = { x with d = 0 }. There is no natural way to iterate over the fields of a record, mostly because a record does not need to be homogeneous.
When to use arrays
A large² homogeneous collection of values is adequately represented by an array, which allows direct access, iterating and folding. A possible inconvenience is that the size of an array is not part of its type, but for arrays of fixed size, this is easily circumvented by introducing a private type — or even an abstract type. I described an example of this technique in my answer to the question “OCaml compiler check for vector lengths”.
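As a rough sketch of that private-type trick (the module name Vec3 and its interface are my own invention, purely for illustration):

(* The signature exposes the type as `private float array`: callers can read
   it like an array, but can only build values through `make`, so every value
   is guaranteed to have length 3. *)
module Vec3 : sig
  type t = private float array
  val make : float -> float -> float -> t
  val get : t -> int -> float
end = struct
  type t = float array
  let make x y z = [| x; y; z |]
  let get = Array.get
end

(* Vec3.get (Vec3.make 1. 2. 3.) 1 evaluates to 2. *)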
Note on float boxing
Records containing only floats and arrays of floats store their contents unboxed, while floats stored in a t-uple are boxed. Switching from a t-uple of floats to a float record or float array should therefore not hurt our numeric computations, and may even help.
¹ See the TeXbook.
² Large starts near 4.
Since the length of OCaml tuples is part of the type and hence known (and fixed) at compile time, you get the n-th item by straightforward pattern matching on the tuple. For the same reason, the problem of extracting the n-th element of an "arbitrary-length tuple" cannot occur in practice - such a "tuple" cannot be expressed in OCaml's type system.
You might still not want to write out a pattern every time you need to project a tuple, and nothing prevents you from generating the functions get_1_1...get_i_j... that extract the i-th element from a j-tuple for any possible combination of i and j occurring in your code, e.g.
let get_1_1 (a) = a
let get_1_2 (a,_) = a
let get_2_2 (_,a) = a
let get_1_3 (a,_,_) = a
let get_2_3 (_,a,_) = a
...
Not necessarily pretty, but possible.
Note: Previously I had claimed that OCaml tuples can have at most length 255 and you can simply generate all possible tuple projections once and for all. As #Virgile pointed out in the comments, this is incorrect - tuples can be huge. This means that it is impractical to generate all possible tuple projection functions upfront, hence the restriction "occurring in your code" above.
It's not possible to write such a function in full generality in OCaml. One way to see this is to think about what type the function would have. There are two problems. First, each size of tuple is a different type. So you can't write a function that accesses elements of tuples of different sizes. The second problem is that different elements of a tuple can have different types. Lists don't have either of these problems, which is why you can have List.nth.
If you're willing to work with a fixed size tuple whose elements are all the same type, you can write a function as shown by #user2361830.
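Such a fixed-size, single-type accessor might look like the following sketch (the name nth_of_4 is mine; user2361830's original code is not reproduced here):

(* All four components share the type 'a, so an index-based lookup typechecks. *)
let nth_of_4 (a, b, c, d) = function
  | 0 -> a
  | 1 -> b
  | 2 -> c
  | 3 -> d
  | _ -> invalid_arg "nth_of_4"

(* nth_of_4 (10, 20, 30, 40) 2 evaluates to 30 *)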
Update
If you really have collections of values of the same type that you want to access by index, you should probably be using an array.
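For instance, with the same values as the ivuple above, an array gives you indexed access directly:

(* Arrays trade the per-position types of a t-uple for a uniform element type
   and O(1) indexed access. *)
let v = [| 5; 2; 1; 1 |]
let third = v.(2)   (* = 1 *)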
Here is a function which returns, as a string, the OCaml function you need to do that ;) Very helpful; I use it frequently.
let tup len n =
  if n >= 0 && n < len then
    let rec rep str nn = match nn < 1 with
      | true -> ""
      | _ -> str ^ (rep str (nn - 1)) in
    let txt1 = "let t" ^ (string_of_int len) ^ "_" ^ (string_of_int n)
               ^ " tup = match tup with |" ^ (rep "_," n) ^ "a"
    and txt2 = "," ^ (rep "_," (len - n - 2))
    and txt3 = "->a" in
    if n = len - 1 then
      print_string (txt1 ^ txt3)
    else
      print_string (txt1 ^ txt2 ^ "_" ^ txt3)
  else raise (Failure "Error") ;;
For example:
tup 8 6;;
return:
let t8_6 tup = match tup with |_,_,_,_,_,_,a,_->a
and of course:
val t8_6 : 'a * 'b * 'c * 'd * 'e * 'f * 'g * 'h -> 'g = <fun>
I found an answer on SO that explained how to write a randomly weighted drop system for a game. I would prefer to write this code in a more functional style, but I couldn't figure out how. I'll inline the pseudocode here:
R = (some random int);
T = 0;
for o in os
    T = T + o.weight;
    if T > R
        return o;
How could this be written in a style that's more functional? I am using CoffeeScript and underscore.js, but I'd prefer this answer to be language agnostic because I'm having trouble thinking about this in a functional way.
Here are two more functional versions in Clojure and JavaScript, but the ideas here should work in any language that supports closures. Basically, we use recursion instead of iteration to accomplish the same thing, and instead of breaking in the middle we just return a value and stop recursing.
Original pseudocode:
R = (some random int);
T = 0;
for o in os
    T = T + o.weight;
    if T > R
        return o;
Clojure version (objects are just treated as clojure maps):
(defn recursive-version
  [r objects]
  (loop [t 0
         others objects]
    (let [obj (first others)
          new_t (+ t (:weight obj))]
      (if (> new_t r)
        obj
        (recur new_t (rest others))))))
JavaScript version (using underscore for convenience).
Be careful, because this could blow out the stack.
This is conceptually the same as the clojure version.
var js_recursive_version = function(objects, r) {
    var main_helper = function(t, others) {
        var obj = _.first(others);
        var new_t = t + obj.weight;
        if (new_t > r) {
            return obj;
        } else {
            return main_helper(new_t, _.rest(others));
        }
    };
    return main_helper(0, objects);
};
You can implement this with a fold (aka Array#reduce, or Underscore's _.reduce):
An SSCCE:
items = [
  {item: 'foo', weight: 50}
  {item: 'bar', weight: 35}
  {item: 'baz', weight: 15}
]
r = Math.random() * 100
{item} = items.reduce (memo, {item, weight}) ->
  if memo.sum > r
    memo
  else
    {item, sum: memo.sum + weight}
, {sum: 0}
console.log 'r:', r, 'item:', item
You can run it many times at coffeescript.org and see that the results make sense :)
That being said, I find the fold a bit contrived, as you have to remember both the selected item and the accumulated weight between iterations, and it doesn't short-circuit when the item is found.
Maybe a compromise solution between pure FP and the tedium of reimplementing a find algorithm can be considered (using _.find):
total = 0
{item} = _.find items, ({weight}) ->
  total += weight
  total > r
Runnable example.
I find (no pun intended) this algorithm much more accessible than the first one (and it should perform better, as it doesn't create intermediate objects, and it does short-circuiting).
Update/side-note: the second algorithm is not "pure" because the function passed to _.find is not referentially transparent (it has the side effect of modifying the external total variable), but the algorithm as a whole is referentially transparent. If you were to encapsulate it in a findItem = (items, r) -> function, that function would be pure and would always return the same output for the same input. That's a very important thing, because it means that you can get the benefits of FP while using some non-FP constructs (for performance, readability, or whatever reason) under the hood :D
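To show the same encapsulation idea outside CoffeeScript, here is a small OCaml sketch (the name pick_item and the (name, weight) pair representation are my own) where the running total is threaded explicitly, so the whole function is pure:

(* Walk the list accumulating the weights; return the first item whose
   cumulative weight exceeds r, or None if the weights never reach r. *)
let pick_item items r =
  let rec go total = function
    | [] -> None
    | (name, weight) :: rest ->
        let total = total +. weight in
        if total > r then Some name else go total rest
  in
  go 0.0 items

(* pick_item [("foo", 50.); ("bar", 35.); ("baz", 15.)] 60. evaluates to Some "bar" *)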
I think the underlying task is randomly selecting 'events' (objects) from array os with a frequency defined by their respective weights. The approach is to map (i.e. search) a random number (with uniform distribution) onto the stairstep cumulative probability distribution function.
With positive weights, their cumulative sum increases from 0 up to the total (1 in the example below). The code you gave us simply searches starting from the 0 end. To maximize speed with repeated calls, precalculate the sums, and order the events so that the largest weights come first.
It really doesn't matter whether you search with iteration (looping) or recursion. Recursion is nice in a language that tries to be 'purely functional', but it doesn't help you understand the underlying mathematical problem, and it doesn't help you package the task into a clean function. The underscore functions are another way of packaging the iteration, but they don't change the basic functionality. Only any and all exit early when the target is found.
For a small os array this simple search is sufficient. But with a large array, a binary search will be faster. Looking in underscore I find that sortedIndex uses this strategy. From Lo-Dash (an underscore drop-in): "Uses a binary search to determine the smallest index at which the value should be inserted into array in order to maintain the sort order of the sorted array."
The basic use of sortedIndex is:
os = [{name:'one',weight:.7},
{name:'two',weight:.25},
{name:'three',weight:.05}]
t=0; cumweights = (t+=o.weight for o in os)
i = _.sortedIndex(cumweights, R)
os[i]
You can hide the cumulative sum calculation with a nested function like:
osEventGen = (os) ->
  t = 0; xw = (t += y.weight for y in os)
  return (R) ->
    i = _.sortedIndex(xw, R)
    return os[i]
osEvent = osEventGen(os)
osEvent(.3)
# { name: 'one', weight: 0.7 }
osEvent(.8)
# { name: 'two', weight: 0.25 }
osEvent(.99)
# { name: 'three', weight: 0.05 }
In CoffeeScript, Jed Clinger's recursive search could be written like this:
foo = (x, r, t=0) ->
  [y, x...] = x
  t += y
  return [y, t] if x.length == 0 or t > r
  return foo(x, r, t)
A loop version using the same basic idea is:
foo = (x, r) ->
  t = 0
  while x.length and t <= r
    [y, x...] = x  # the [first, rest] split
    t += y
  y
Tests on jsPerf (http://jsperf.com/sortedindex) suggest that sortedIndex is faster when os.length is around 1000, but slower than the simple loop when the length is more like 30.