Operation between two lists - ACL2

In Common Lisp there is map, which lets you do this kind of thing:
(map 'list (lambda (x y) (/ x y)) (list 2 4 6 8 10 12) (list 1 2 3 4 5 6))
returning (2 2 2 2 2 2)
However, now I am working in ACL2 and there is no such thing as map.
So, as far as I can tell, the only choice left is recursion to calculate what I want, unless there is another simpler and/or more efficient way of doing it.
... Which is exactly my question. Is there a better way of doing it than to create a recursive function called something like divide-two-lists? It just feels like something a Lisp-based language should do naturally, instead of making you create another function specifically for it, hence why I am asking.

You could pretty easily write your own map. From the GNU Emacs Lisp manual:
(defun mapcar* (function &rest args)
  "Apply FUNCTION to successive cars of all ARGS.
Return the list of results."
  ;; If no list is exhausted,
  (if (not (memq nil args))
      ;; apply function to cars.
      (cons (apply function (mapcar 'car args))
            (apply 'mapcar* function
                   ;; Recurse for rest of elements.
                   (mapcar 'cdr args)))))
(mapcar* 'cons '(a b c) '(1 2 3 4))
⇒ ((a . 1) (b . 2) (c . 3))
I'm unfamiliar with ACL2, so you might have to change some functions (e.g. memq), or deal differently with how apply or &rest arguments work, but this is the meat of the code.

ACL2 is based on first-order logic. In first-order logic, statements like
(define (P R A) (R A))
are not allowed, because R is being used as both a parameter and a function.
It is theoretically possible to get around this limitation by literally defining your own language within first-order logic that includes constructs for higher-order logic. Otherwise, you are correct: your best option is to define something like divide-two-lists every single time you want to use a map function.
That's tedious, but it is how ACL2 was meant to be used.

This isn't exactly an answer to your question, but it's related, so I mention it in case it helps someone else who is looking at your question.
Consider the book "std/util/defprojection", which provides a macro that lets you map a function across a list.

Related

How to understand clojure's lazy-seq

I'm trying to understand clojure's lazy-seq operator, and the concept of lazy evaluation in general. I know the basic idea behind the concept: Evaluation of an expression is delayed until the value is needed.
In general, this is achievable in two ways:
at compile time using macros or special forms;
at runtime using lambda functions
With lazy evaluation techniques, it is possible to construct infinite data structures that are evaluated as they are consumed. These infinite sequences utilize lambdas, closures and recursion. In Clojure, these infinite data structures are generated using the lazy-seq and cons forms.
I want to understand how lazy-seq does its magic. I know it is actually a macro. Consider the following example.
(defn rep [n]
  (lazy-seq (cons n (rep n))))
Here, the rep function returns a lazily-evaluated sequence of type LazySeq, which can now be transformed and consumed (and thus evaluated) using the sequence API. This API provides functions such as take, map, filter and reduce.
In the expanded form, we can see how a lambda is used to store the recipe for the cell without evaluating it immediately.
(defn rep [n]
  (new clojure.lang.LazySeq (fn* [] (cons n (rep n)))))
But how does the sequence API actually work with LazySeq?
What actually happens in the following expression?
(reduce + (take 3 (map inc (rep 5))))
How is the intermediate operation map applied to the sequence,
how does take limit the sequence, and
how does the terminal operation reduce evaluate the sequence?
Also, how do these functions work with either a Vector or a LazySeq?
Also, is it possible to generate nested infinite data structures: lists containing lists, containing lists, containing lists... going infinitely wide and deep, evaluated as consumed with the sequence API?
And last question, is there any practical difference between this
(defn rep [n]
  (lazy-seq (cons n (rep n))))
and this?
(defn rep [n]
  (cons n (lazy-seq (rep n))))
That's a lot of questions!
How does the seq API actually work with LazySeq?
If you take a look at the LazySeq class source code you will notice that it implements the ISeq interface, providing methods like first, more and next.
Functions like map, take and filter are built using lazy-seq (they produce lazy sequences) and first and rest (which in turn uses more), and that's how they can work with a lazy seq as their input collection - by using the first and more implementations of the LazySeq class.
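To make that concrete, here is a minimal sketch of a map built only from lazy-seq, first and rest - not Clojure's actual implementation (the real map is chunked and more elaborate, and the name lazy-map is made up), but the laziness works the same way:
;; simplified, hypothetical stand-in for clojure.core/map (single-collection case)
(defn lazy-map [f coll]
  (lazy-seq
    (when-let [s (seq coll)]
      (cons (f (first s)) (lazy-map f (rest s))))))

(take 3 (lazy-map inc (rep 5)))
;; => (6 6 6)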
What actually happens in the following expression?
(reduce + (take 3 (map inc (rep 5))))
The key is to look at how LazySeq.first works. It will invoke the wrapped function to obtain and memoize the result. In your case it will be the following code:
(cons n (rep n))
Thus it will be a cons cell with n as its value and another LazySeq instance (the result of the recursive call to rep) as its rest part. It becomes the realised value of this LazySeq object, and first returns the value of the cached cons cell.
When you call more on it, it will in the same way ensure that the value of the particular LazySeq object is realised (or reuse the memoized value) and call more on it (in this case more on the cons cell containing another LazySeq object).
Once you obtain another LazySeq instance from more, the story repeats when you call first on it.
map and take will create another lazy seq that calls first and more on the collection passed as their argument (just another lazy seq), so it is a similar story. The difference is only in how the values passed to cons are generated (e.g. applying f to a value obtained by calling first on the LazySeq being mapped over in map, instead of a raw value like n in your rep function).
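A similarly simplified take (again just a sketch with a made-up name, not the real code) shows how the remaining count is threaded through the recursion, so only as many elements as requested are ever realised:
;; simplified, hypothetical stand-in for clojure.core/take
(defn lazy-take [n coll]
  (lazy-seq
    (when (pos? n)
      (when-let [s (seq coll)]
        (cons (first s) (lazy-take (dec n) (rest s)))))))

(lazy-take 3 (rep 5))
;; => (5 5 5)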
With reduce it's a bit simpler, as it will use a loop with first and more to iterate over the input lazy seq and apply the reducing function to produce the final result.
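Roughly, such a reduction loop could be sketched like this (the real reduce has more arities and optimisations, so treat this only as a model):
;; naive model of an eager reduce built on first/next
(defn naive-reduce [f init coll]
  (loop [acc init
         s   (seq coll)]
    (if s
      (recur (f acc (first s)) (next s))
      acc)))

(naive-reduce + 0 (take 3 (map inc (rep 5))))
;; => 18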
As for what the actual implementations of map and take look like, I encourage you to check their source code - it's quite easy to follow.
How can the seq API work with different collection types (e.g. a lazy seq and a persistent vector)?
As mentioned above, map, take and other functions work in terms of first and rest (reminder - rest is implemented on top of more). Thus we need to explain how first and rest/more can work with different collection types: they check whether the collection implements ISeq (in which case it implements those functions directly), or they try to create a seq view of the collection and call its implementation of first and more.
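A quick illustration of that: first and rest behave the same whether you hand them a vector or a lazy seq, because both go through seq first:
(first [1 2 3])             ;; => 1
(rest  [1 2 3])             ;; => (2 3)  - a seq view, not a vector
(first (map inc [1 2 3]))   ;; => 2
(rest  (map inc [1 2 3]))   ;; => (3 4)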
Is it possible to generate nested infinite data structures?
It's definitely possible, but I am not sure exactly what data shape you would like to get. Do you mean getting a lazy seq which generates another sequence as its value (instead of a single value like n in your rep), like this:
(defn nested-cons [n]
  (lazy-seq (cons (repeat n n) (nested-cons (inc n)))))

(take 3 (nested-cons 1))
;; => ((1) (2 2) (3 3 3))
Or would you rather have something that returns a flat sequence, (1 2 2 3 3 3)?
For such cases you might use concat instead of cons, which lazily concatenates two or more sequences:
(defn nested-concat [n]
  (lazy-seq (concat (repeat n n) (nested-concat (inc n)))))

(take 6 (nested-concat 1))
;; => (1 2 2 3 3 3)
Is there any practical difference between this
(defn rep [n]
  (lazy-seq (cons n (rep n))))
and this?
(defn rep [n]
  (cons n (lazy-seq (rep n))))
In this particular case, not really. But in the case where a cons cell doesn't wrap a raw value but rather the result of a function call, the latter form is not fully lazy. For example:
(defn calculate-sth [n]
  (println "Calculating" n)
  n)

(defn rep1 [n]
  (lazy-seq (cons (calculate-sth n) (rep1 (inc n)))))

(defn rep2 [n]
  (cons (calculate-sth n) (lazy-seq (rep2 (inc n)))))
(take 0 (rep1 1))
;; => ()
(take 0 (rep2 1))
;; Prints: Calculating 1
;; => ()
Thus the latter form will evaluate its first element even if you might not need it.

Mapcan's behavior on atoms

I am pretty much of a newbie to Common Lisp and only use it for fun. But I assume I know the difference between mapcar and mapcan, as the documentation in the HyperSpec and other places is pretty clear.
But what happens if the function mapcan calls on the list elements evaluates to an atom instead of a list? As mapcan uses nconc to append lists, I had expected that there would be an error if there is no list.
But if I try
(mapcan (lambda (x) (+ 2 x)) '(1 2 3 4))
it evaluates to 6 in SBCL and CLISP. (There might not be a practical need for this example; I am just curious.) I see the point that returning a value might be nicer than a simple error, but I could not find anything about mapcan returning the last value if there are no lists to nconc.
Is there a reason for this behavior?
According to the documentation for mapcan,
(mapcan (lambda (x) (+ 2 x)) '(1 2 3 4))
should do the same as
(apply #'nconc (mapcar (lambda (x) (+ 2 x)) '(1 2 3 4)))
and that signals the error *** - NCONC: 5 is not a list in CLISP.
The HyperSpec only specifies what nconc should do when the arguments before the last one are proper lists or nil. Anything else is left undescribed, so what you are seeing is that SBCL and CLISP perhaps share an algorithm from a public-domain Lisp, or have implemented it so similarly that they give the same implementation-specific results.
You probably cannot assume other implementations will do the same, so you should make sure the function passed to mapcan always returns a fresh list or nil, which can be nconc-ed within the specification.

Move for a chess game

I have a move procedure that applies a legal move to a chess piece on the board by passing a pair:
(cons source dest) so (cons 1 2) takes a piece on position 1 from the board and moves it to position 2.
I'm trying to make a procedure that reverses a move it made before. I tried to do (move (reverse move)), which would pass in (cons 2 1), thereby moving the piece back.
Unfortunately, reverse doesn't work for pairs. I can't convert it to a list, because that would mean changing a lot of the code to accommodate the null at the end.
Can anyone think of anything?
I'm using MIT Scheme by the way.
You need to implement your own reverse-pair procedure for this; it can be as simple as this:
(define (reverse-pair p)
  (cons (cdr p) (car p)))
Or this, a bit fancier but less readable:
(define (reverse-pair p)
  `(,(cdr p) . ,(car p)))
Either way it works as intended:
(reverse-pair '(1 . 2))
=> '(2 . 1)
If the reverse function doesn't work on pairs, write your own reverse-pair function then. I don't remember the Scheme syntax for that, but I think you already have the tools for it, since you basically need to know how to read the two values from the pair (something you already do in your move function) and then how to build a new pair from that data.
I don't see why you think this would complicate things too much, either. As far as the code outside the new function goes, it would look just the same as the original version you proposed using the reverse function.
If you limit yourself to pairs of the form (a . b), then it's pretty easy to flip them. Something as simple as
(define reverse-pair
  (lambda (p)
    (cons (cdr p) (car p))))
will do it. From your context I gather that you're not going to be in the position where you have '(1 2 3 4 . 5) and need to reverse that, so you should be fine with the one above.

Scheme: Procedures that return another inner procedure

This is from the SICP book that I am sure many of you are familiar with. This is an early example in the book, but I feel it is an extremely important concept that I am just not able to get my head around yet. Here it is:
(define (cons x y)
  (define (dispatch m)
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "Argument not 0 or 1 - CONS" m))))
  dispatch)
(define (car z) (z 0))
(define (cdr z) (z 1))
So here I understand that car and cdr are being defined within the scope of cons, and I get that they map some argument z to 0 and 1 respectively (argument z being some cons). But say I call (cons 3 4)... how are the arguments 3 and 4 evaluated, when we immediately go into this inner procedure dispatch, which takes some argument m that we have not specified yet? And, maybe more importantly, what is the point of returning dispatch? I don't really get that part at all. Any help is appreciated, thanks!
This is one of the weirder (and possibly one of the more wonderful) examples of exploiting first-class functions in Scheme. Something similar is also in the Little Schemer, which is where I first saw it, and I remember scratching my head for days over it. Let me see if I can explain it in a way that makes sense, but I apologize if it's not clear.
I assume you understand the primitives cons, car, and cdr as they are implemented in Scheme already, but just to remind you: cons constructs a pair, car selects the first component of the pair and returns it, and cdr selects the second component and returns it. Here's a simple example of using these functions:
> (cons 1 2)
(1 . 2)
> (car (cons 1 2))
1
> (cdr (cons 1 2))
2
The version of cons, car, and cdr that you've pasted should behave exactly the same way. I'll try to show you how.
First of all, car and cdr are not defined within the scope of cons. In your snippet of code, all three (cons, car, and cdr) are defined at the top-level. The function dispatch is the only one that is defined inside cons.
The function cons takes two arguments and returns a function of one argument. What's important about this is that those two arguments are visible to the inner function dispatch, which is what is being returned. I'll get to that in a moment.
As I said in my reminder, cons constructs a pair. This version of cons should do the same thing, but instead it's returning a function! That's ok, we don't really care how the pair is implemented or laid out in memory, so long as we can get at the first and second components.
So with this new function-based pair, we need to be able to call car and pass the pair as an argument, and get the first component. In the definition of car, this argument is called z. If you were to execute the same REPL session I had above with these new cons, car, and cdr functions, the argument z in car will be bound to the function-based pair, which is what cons returns, which is dispatch. It's confusing, but just think it through carefully and you'll see.
Based on the implementation of car, it appears that it takes a function of one argument and applies it to the number 0. So it's applying dispatch to 0, and as you can see from the definition of dispatch, that's what we want. The cond inside there compares m with 0 and 1 and returns either x or y. In this case, it returns x, which is the first argument to cons - in other words, the first component of the pair! So car selects the first component, just as the normal primitive does in Scheme.
If you follow this same logic for cdr, you'll see that it behaves almost the same way, but returns the second argument to cons, y, which is the second component of the pair.
There are a couple of things that might help you understand this better. One is to go back to the description of the substitution model of evaluation in Chapter 1. If you carefully and meticulously follow that substitution model for some very simple example of using these functions, you'll see that they work.
Another way, which is less tedious, is to try playing with the dispatch function directly at the REPL. Below, the variable p is defined to refer to the dispatch function returned by cons.
> (define p (cons 1 2))
#<function> ;; what the REPL prints here will be implementation specific
> (p 0)
1
> (p 1)
2
The code in the question shows how to redefine the primitive procedure cons that creates a cons-cell (a pair of two elements: the car and the cdr), using only closures and message-dispatching.
The dispatch procedure acts as a selector for the arguments passed to cons: x and y. If the message 0 is received, then the first argument of cons is returned (the car of the cell). Likewise, if 1 is received, then the second argument of cons is returned (the cdr of the cell). Both arguments are stored inside the closure defined implicitly for the dispatch procedure, a closure that captures x and y and is returned as the product of invoking this procedural implementation of cons.
The next redefinitions of car and cdr build on this: car is implemented as a procedure that passes 0 to a closure as returned in the above definition, and cdr is implemented as a procedure that passes 1 to the closure, in each case ultimately returning the original value that was passed as x and y respectively.
The really nice part of this example is that it shows that the cons-cell, the most basic unit of data in a Lisp system, can be defined as a procedure, thereby blurring the distinction between data and procedure.
This is the "closure/object isomorphism", basically.
The outer function (cons) is a class constructor. It returns an object, which is a function of one argument, where the argument is equivalent to the name of a method. In this case, the methods are getters, so they evaluate to values. You could just as easily have stored more procedures in the object returned by the constructor.
In this case, numbers were chosen as method names, and sugary procedures were defined outside the object itself. You could have used symbols:
(define (cons x y)
  (lambda (method)
    (cond ((eq? method 'car) x)
          ((eq? method 'cdr) y)
          (else (error "unknown method")))))
In which case what you have more closely resembles OO:
# (define p (cons 1 2))
# (p 'car)
1
# (p 'cdr)
2

Accumulators, conj and recursion

I've solved 45 problems from 4clojure.com and I noticed a recurring problem in the way I try to solve some problems using recursion and accumulators.
I'll try to explain the best I can what I'm doing to end up with fugly solutions hoping that some Clojurers would "get" what I'm not getting.
For example, problem 34 asks you to write a function taking two integers as arguments and creating a range, without using range. Simply put, you do (... 1 7) and you get (1 2 3 4 5 6).
Now this question is not about solving this particular problem.
What if I want to solve this using recursion and an accumulator?
My thought process goes like this:
I need to write a function taking two arguments, I start with (fn [x y] )
I'll need to recurse and I'll need to keep track of a list, I'll use an accumulator, so I write a 2nd function inside the first one taking an additional argument:
(fn
  [x y]
  ((fn g [x y acc] ...)
   x
   y
   '()))
(apparently I can't properly format that Clojure code on SO!?)
Here I'm already not sure I'm doing it correctly: the first function must take exactly two integer arguments (not my call), and I'm not sure: if I want to use an accumulator, can I do so without creating a nested function?
Then I want to conj, but I cannot do:
(conj 0 1)
so I do weird things to make sure I've got a sequence first and I end up with this:
(fn
  [x y]
  ((fn g [x y acc] (if (= x y) y (conj (conj acc (g (inc x) y acc)) x)))
   x
   y
   '()))
But then this produce this:
(1 (2 (3 4)))
Instead of this:
(1 2 3 4)
So I end up doing an additional flatten and it works but it is totally ugly.
I'm beginning to understand a few things and I'm even starting, in some cases, to "think" in a more clojuresque way but I've got a problem writing the solution.
For example here I decided:
to use an accumulator
to recurse by incrementing x until it reaches y
But I end up with the monstrosity above.
There are a lot of way to solve this problem and, once again, it's not what I'm after.
What I'm after is how, after I decided to cons/conj, use an accumulator, and recurse, I can end up with this (not written by me):
#(loop [i %1
        acc nil]
   (if (<= %2 i)
     (reverse acc)
     (recur (inc i) (cons i acc))))
Instead of this:
((fn f [x y]
   (flatten
    ((fn g [x y acc]
       (if (= x y) acc (conj (conj acc (g (inc x) y acc)) x)))
     x
     y
     '())))
 1
 4)
I take it's a start to be able to solve a few problems but I'm a bit disappointed by the ugly solutions I tend to produce...
i think there are a couple of things to learn here.
first, a kind of general rule - recursive functions typically have a natural order, and adding an accumulator reverses that. you can see that because when a "normal" (without accumulator) recursive function runs, it does some work to calculate a value, then recurses to generate the tail of the list, finally ending with an empty list. in contrast, with an accumulator, you start with the empty list and add things to the front - it's growing in the other direction.
so typically, when you add an accumulator, you get a reversed order.
now often this doesn't matter. for example, if you're generating not a sequence but a value that is the repeated application of a commutative operator (like addition or multiplication), then you get the same answer either way.
but in your case, it is going to matter. you're going to get the list backwards:
(defn my-range-0 [lo hi] ; normal recursive solution
  (if (= lo hi)
    nil
    (cons lo (my-range-0 (inc lo) hi))))

(deftest test-my-range-0
  (is (= '(0 1 2) (my-range-0 0 3))))
(defn my-range-1 ; with an accumulator
  ([lo hi] (my-range-1 lo hi nil))
  ([lo hi acc]
   (if (= lo hi)
     acc
     (recur (inc lo) hi (cons lo acc)))))

(deftest test-my-range-1
  (is (= '(2 1 0) (my-range-1 0 3)))) ; oops! backwards!
and often the best you can do to fix this is just reverse that list at the end.
but here there's an alternative - we can actually do the work backwards. instead of incrementing the low limit you can decrement the high limit:
(defn my-range-2
  ([lo hi] (my-range-2 lo hi nil))
  ([lo hi acc]
   (if (= lo hi)
     acc
     (let [hi (dec hi)]
       (recur lo hi (cons hi acc))))))

(deftest test-my-range-2
  (is (= '(0 1 2) (my-range-2 0 3)))) ; back to the original order
[note - there's another way of reversing things below; i didn't structure my argument very well]
second, as you can see in my-range-1 and my-range-2, a nice way of writing a function with an accumulator is as a function with two different sets of arguments. that gives you a very clean (imho) implementation without the need for nested functions.
also you have some more general questions about sequences, conj and the like. here clojure is kind-of messy, but also useful. above i've been giving a very traditional view with cons based lists. but clojure encourages you to use other sequences. and unlike cons lists, vectors grow to the right, not the left. so another way to reverse that result is to use a vector:
(defn my-range-3 ; this looks like my-range-1
  ([lo hi] (my-range-3 lo hi []))
  ([lo hi acc]
   (if (= lo hi)
     acc
     (recur (inc lo) hi (conj acc lo)))))

(deftest test-my-range-3 ; except that it works right!
  (is (= [0 1 2] (my-range-3 0 3))))
here conj is adding to the right. i didn't use conj in my-range-1, so here it is re-written to be clearer:
(defn my-range-4 ; my-range-1 written using conj instead of cons
  ([lo hi] (my-range-4 lo hi nil))
  ([lo hi acc]
    (if (= lo hi)
      acc
      (recur (inc lo) hi (conj acc lo)))))
(deftest test-my-range-4
  (is (= '(2 1 0) (my-range-4 0 3))))
note that this code looks very similar to my-range-3 but the result is backwards because we're starting with an empty list, not an empty vector. in both cases, conj adds the new element in the "natural" position. for a vector that's to the right, but for a list it's to the left.
and it just occurred to me that you may not really understand what a list is. basically a cons creates a box containing two things (its arguments). the first is the contents and the second is the rest of the list. so the list (1 2 3) is basically (cons 1 (cons 2 (cons 3 nil))). in contrast, the vector [1 2 3] works more like an array (although i think it's implemented using a tree).
so conj is a bit confusing because the way it works depends on the first argument. for a list, it calls cons and so adds things to the left. but for a vector it extends the array(-like thing) to the right. also, note that conj takes an existing sequence as first arg, and thing to add as second, while cons is the reverse (thing to add comes first).
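a quick repl illustration of both points (not from the repo below, just to show the difference):
(conj '(2 3) 1)   ;; => (1 2 3)  - list: conj adds at the front
(conj [1 2] 3)    ;; => [1 2 3]  - vector: conj adds at the end
(cons 0 '(1 2))   ;; => (0 1 2)  - cons: element first, seq second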
all the above code available at https://github.com/andrewcooke/clojure-lab
update: i rewrote the tests so that the expected result is a quoted list in the cases where the code generates a list. = will compare lists and vectors and return true if the content is the same, but making it explicit shows more clearly what you're actually getting in each case. note that '(0 1 2) with a ' in front is just like (list 0 1 2) - the ' stops the list from being evaluated (without it, 0 would be treated as a command).
After reading all that, I'm still not sure why you'd need an accumulator.
((fn r [a b]
   (if (<= a b)
     (cons a (r (inc a) b))))
 2 4)
=> (2 3 4)
seems like a pretty intuitive recursive solution. the only thing I'd change in "real" code is to use lazy-seq so that you won't run out of stack for large ranges.
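A lazy version might look something like this (just a sketch along the same lines, using the same (<= a b) termination as above):
;; sketch: wrapping the recursion in lazy-seq so realising a large range
;; doesn't consume stack proportional to its length
(defn lazy-range [a b]
  (lazy-seq
    (when (<= a b)
      (cons a (lazy-range (inc a) b)))))

(take 3 (lazy-range 2 1000000))
;; => (2 3 4)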
how I got to that solution:
When you're thinking of using recursion, I find it helps to try and state the problem with the fewest possible terms you can think up, and try to hand off as much "work" to the recursion itself.
In particular, if you suspect you can drop one or more arguments/variables, that is usually the way to go - at least if you want the code to be easy to understand and debug; sometimes you end up compromising simplicity in favor of execution speed or reducing memory usage.
In this case, what I thought when I started writing was: "the first argument to the function is also the start element of the range, and the last argument is the last element". Recursive thinking is something you kind of have to train yourself to do, but a fairly obvious solution then is to say: a range [a, b] is a sequence starting with element a followed by a range of [a + 1, b]. So ranges can indeed be described recursively. The code I wrote is pretty much a direct implementation of that idea.
addendum:
I've found that when writing functional code, accumulators (and indexes) are best avoided. Some problems require them, but if you can find a way to get rid of them, you're usually better off if you do.
addendum 2:
Regarding recursive functions and lists/sequences, the most useful way to think when writing that kind of code is to state your problem in terms of "the first item (head) of a list" and "the rest of the list (tail)".
I cannot add to the already good answers you have received, but I will answer in general. As you go through the Clojure learning process, you may find that many, but not all, problems can be solved using Clojure built-ins like map, and by thinking of problems in terms of sequences. This doesn't mean you should not solve things recursively, but you will hear - and I believe it to be wise advice - that Clojure recursion is for solving very low-level problems you cannot solve another way.
I happen to do a lot of .csv file processing, and recently received a comment that nth creates dependencies. It does, and use of maps can allow me to get at elements for comparison by name and not position.
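For example (with made-up column names, just to illustrate the idea), zipmap turns a positional row into a map you can access by name:
;; hypothetical header and row
(def header [:name :age :city])
(def row    ["Alice" "42" "Oslo"])

(nth row 1)                 ;; => "42"  - positional, depends on column order
(:age (zipmap header row))  ;; => "42"  - named, survives column reordering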
I'm not going to throw out the code that uses nth with clojure-csv parsed data in two small applications already in production. But I'm going to think about things in a more sequency way the next time.
It is difficult to learn from books that talk about vectors and nth, loop .. recur and so on, and then realize learning Clojure grows you forward from there.
One of the things I have found that is good about learning Clojure, is the community is respectful and helpful. After all, they're helping someone whose first learning language was Fortran IV on a CDC Cyber with punch cards, and whose first commercial programming language was PL/I.
If I solved this using an accumulator I would do something like:
user=> (defn my-range [lb up c]
         (if (= lb up)
           c
           (recur (inc lb) up (conj c lb))))
#'user/my-range
then call it with
#(my-range % %2 [])
Of course, I'd use letfn or something to get around not having defn available.
So yes, you do need an inner function to use the accumulator approach.
My thought process is that once I'm done, the answer I want to return will be in the accumulator. (That contrasts with your solution, where you do a lot of work on finding the ending condition.) So I look for my ending condition and, if I've reached it, I return the accumulator. Otherwise I tack the next item onto the accumulator and recur for a smaller case. So there are only two things to figure out: what the end condition is, and what I want to put in the accumulator.
Using a vector helps a lot because conj will append to it and there's no need to use reverse.
I'm on 4clojure too, btw. I've been busy so I've fallen behind lately.
It looks like your question is more about "how to learn" than about a technical/code problem. You end up writing that kind of code because whatever way or source you used to learn programming in general, or Clojure in particular, has created a "neural highway" in your brain that makes you think about solutions in this particular way, so you end up writing code like this. Basically, whenever you face a problem (in this particular case recursion and/or accumulation) you follow that "neural highway" and always come up with that kind of code.
The solution for getting rid of this "neural highway" is to stop writing code for the moment, put the keyboard away, and start reading a lot of existing Clojure code (from existing solutions to 4clojure problems to open source projects on GitHub) and think about it deeply (even read a function two or three times to really let it settle in your brain). This way you will wear down the existing "neural highway" (which produces the code you write now) and build a new one that produces beautiful, idiomatic Clojure code. Also, try not to jump into typing code as soon as you see a problem; rather, give yourself some time to think clearly and deeply about the problem and its solutions.

Resources