Are there any Clojure Principles? - functional-programming

Are there any principles for Clojure?
a. Like the S.O.L.I.D. object-oriented design principles for OO languages like Java?
b. Or more heuristic ones, like "Tell, don't ask", "Favor composition over inheritance", "Talk to interfaces"?
Are there any design patterns (for flexible code)?
What are the functional-programming counterparts of object-oriented basics like encapsulation?
Do you know of any resources for these?

To your first question: no. †
Clojure is here to help you get things done correctly, quickly, and enjoyably. Everything after that is gravy.
And there's a lot of gravy. I don't presume to know the Clojure way, even if there is one, but here are some guidelines I've been told and have discovered while writing in Clojure:
First, get something working. Then you can examine, test, and optimize if necessary. There's a reason that the time macro is in the core language. Correctness before quickness, to be cute.
Abstract. If you are repeating yourself, you're probably not doing it right. Compose, curry and combine functions.
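For example, composition and partial application can replace a lot of copy-and-paste (a tiny, contrived sketch):
(def sum-of-squares
  (comp (partial reduce +)        ; ...then sum them
        (partial map #(* % %))))  ; square each element first...
(sum-of-squares [1 2 3]) ;=> 14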
Separate side-effects from your logic. e.g. if you want to format and save a string, format it in one function, then use another function to save it, however you need to.
A caveat to the previous point: don't go too crazy with this. Sometimes it's better to have a few anonymous functions than a bunch of one-line defns littering your code.
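A minimal sketch of that format/save split (save-report! and the file path are just stand-ins for whatever the real side effect is):
(defn format-report [user total]             ; pure: builds the string, touches nothing
  (format "%s owes %.2f" user (double total)))
(defn save-report! [path s]                  ; impure: does the I/O, knows nothing about formatting
  (spit path s))
(save-report! "report.txt" (format-report "Ada" 42))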
Test. Rich gave you a REPL for a reason; use the hell out of that REPL.
Don't clutter your namespace. We're clean in Clojure-land. Qualify your uses or use :only what you need. Make your code readable.
Know the core library. Not just clojure.core, but clojure.java.io, clojure.string, clojure.set and everything in between. If you think Clojure should have a function to do X, it probably does. You can use apropos (from, yes, another core library: clojure.repl).
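For example, from the REPL:
(require '[clojure.repl :refer [apropos doc]])
(apropos "index")   ; lists every loaded var whose name contains "index"
(doc keep-indexed)  ; prints the docstring of one of the hits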
Document your code. Docstrings are a beautiful thing. If you have a tendency to be verbose, the docstring is the place to let loose. But, know too that good code often "documents itself". If the code is self-explanatory, there's no need to be pedantic.
This is a functional language. When you can, use a function. Protocols, macros and records are all great: but when you can get away with it, use a function. You can compose, combine, map, reduce, iterate (the list goes on, and on, and on…) functions. That's really nice.
Above all, break the above rules if it makes sense. But be prepared to rethink and refactor. If you've kept your code modular enough, refactoring your code should be a matter of reorganization and recombination.
Some other tips: read other people's code. If you begin to read code, and become good at reading code, you'll become better at writing your own, and you're likely to learn new things, too: there's more than one way to do just about everything.
Finally, read though the Clojure Library Coding Standards to get an idea of what's expected in production Clojure code.
† At least, not yet.

Hard question.
Clojure is very flexible.
So there are best practices, but they aren't nearly as important as they are for Java.
Here are a few examples of advice, from the most general to the most specific.
There is advice for programming in general:
write a lot of tests, write something correct and nice, profile and optimize when needed
There is advice for functional programming:
write small functions, write pure functions, compose the small functions, factor your code through functions, try to use combinators when possible...
There is advice for Lisp:
use macros to factor out repetitive patterns, build your program bottom-up. (See Paul Graham's On Lisp for a better explanation than mine.)
There is also some advice specifically for Clojure:
follow the careful analysis of state and identity (see http://clojure.org/state for a very good explanation), try to use seqs and their functions when possible, write docstrings for your functions
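For example, the usual atom idiom keeps the state change in one place and the logic pure:
(def account (atom {:balance 100}))   ; one identity...
(swap! account update :balance + 25)  ; ...updated by applying a pure function to its current value
@account ;=> {:balance 125}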
A good source of further advice is the Clojure Library Coding Standards:
http://www.assembla.com/wiki/show/clojure/Clojure_Library_Coding_Standards
But all of this is just advice, and Clojure can be used by someone who does not want to follow it, because, as a Lisp, it is very flexible.
As far as design patterns are concerned, functional programmers rarely think in these terms, as most patterns were designed for OO languages and do not apply in a functional language.
Peter Norvig has interesting slides on design patterns and Lisp/Dylan:
http://norvig.com/design-patterns/
Hope that helps.

1a) I don't know of anything like that, but every book about FP will cover something similar.
1b)
"Favor Composition vs Inheritance" --> is already taken care of because you started with FP
"talk to abstractions" --> more general
"be lazy when you can"
"avoid state"
"Use PURE FUNCTIONS!!!"
....
2.) You can use some of the same design patterns; they are just much easier to implement. Some of them make less sense in FP, but normally FP folks don't make a big deal out of it.
(This is about the GoF patterns; they are the only ones I know.)
Look at the observer pattern, for example. In Clojure you can use the add-watch function, which makes the observer pattern obsolete.
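A rough sketch of what that looks like:
(def counter (atom 0))
(add-watch counter :logger               ; the "observer" is just a function
           (fn [_key _ref old new]
             (println "counter went from" old "to" new)))
(swap! counter inc) ; prints: counter went from 0 to 1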
3.) You can get encapsulation with namespaces (look at defn-), or you can hide your functions inside other functions. There are some examples in The Joy of Clojure. You can push it as far as you want.
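A small sketch of namespace-level encapsulation (the namespace and functions here are made up for the example):
(ns bank.core)
(defn- valid-amount? [amount]     ; private: only visible inside bank.core
  (pos? amount))
(defn deposit [account amount]    ; public: part of the namespace's API
  (when (valid-amount? amount)
    (update account :balance + amount)))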

The Don't Repeat Yourself (DRY) principle applies very well to Clojure. The language is very flexible and really promotes composing abstractions in ways that can reduce the amount of boilerplate code to very nearly zero.
Some examples of ways to remove duplicated code are:
Only use lazy-seq and cons when generating original data where no variant of map will do. If you find yourself writing (lazy-seq (cons (do-something (first data)) (call-myself (rest data)))), check to see if map or iterate will do what you want (see the sketch after this list).
Use small functions and compose them liberally. If a function needs to format some data, wrap it in XML, and send it over a network, write these as three small functions and then use (def send-formatted-xml (comp send xml format)). This way you can map the format function onto some data later, etc.
When the language absolutely cannot express your repeated pattern, use a macro. It's better to use a macro than to repeat yourself. And always remember: the first rule of Macro Club is do not use a macro.
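To make the first point above concrete (do-something is just a stand-in for the real per-element work):
(defn do-something [x] (* x x))
(defn process [data]                    ; the hand-rolled lazy recursion...
  (lazy-seq
    (when (seq data)
      (cons (do-something (first data))
            (process (rest data))))))
(process [1 2 3])          ;=> (1 4 9)
(map do-something [1 2 3]) ;=> (1 4 9), same result, no plumbing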

This video presents the SOLID principles, and how to apply them in Clojure.
It shows how these principles hold in the Functional world as much as in OOP, because we still need to solve the same underlying problems. Overall, it made me think Functional Programming is better suited for SOLID design.

There is a blog post mentioning "thinking in Clojure" here, and it gives some pointers to the book The Joy of Clojure and to some other books (and even links to some videos).
Now, I got the book The Joy of Clojure, read a bit of it, and it promises to teach me "The Way of Clojure". I hope it turns out to tell me what I'm looking for: some principles...
The book is a work in progress, but you can buy an "early access edition" from Manning here and get 40% off with the code "s140". See info here.

Related

functional programming and self-commenting code - is this really possible?

As I have a lot of spare time to spend at the moment, I read a few threads/comments on code comments and documentation here.
Like most people here, I too think that you should write your code so that it's easy to read and self-commenting as far as possible.
On the other hand I am a huge FP-fanboy - and yes if you write your code the right way it will be very readable in FP - or so I thought.
The problem is that tiny things make an awful lot of difference in the FP world. If your colleague doesn't fully understand FP, he might be able to "read" the indentation of the code but won't be able to modify or fully understand it. That holds for languages like Haskell, where a '.' or '$' makes a big difference, and also for languages like F# or even C# or VB.NET with lots of LINQ statements.
At first glance the problem might be that your peer just doesn't get the language and it's not the code's fault - on the other hand: who truly gets all of FP? Look at some papers concerning Haskell - the code is beautifully crafted and self-commenting, but just as in math you may have to chew on a line for several minutes before you get it.
Of course in those papers there will be a text-block trying to clarify just after the code ....
So IMHO you have to comment your FP-code as long as you work in a shop where not every colleague has a PhD in CS ;)
What do you think?
PS: first post here - really looked for answers concerning this questions but didn't find any - please be gentle if I just didn't look hard enough :)
Functional languages greatly favor the development of self-documenting code, because you can freely rearrange the order of functions, and easily abstract out any given part of the code, assigning it an explanatory name.
Abstract, abstract, abstract, is the keyword to master code complexity, and that's where the functional style shines. But there will be always things that cannot be expressed within the code itself.
One clear example is code for algorithms. It is unlikely that one can easily understand a complex algorithm just by looking at the implementation. Yes, functional languages make understanding simpler, because many gory details (trivial example: memory management) do not have to be coded explicitly, thus exposing the underlying logic more clearly.
However this is no substitute for an explanation in natural language, which conveys in an intuitive way how it works (and sometimes a picture is worth a thousand words). This is because our brain needs to observe difficult concepts from different points of view in order to understand them fully.
What to comment also depends on your audience. Beginners, average programmers or wizards? There is no one-fits-all solution.
E.g. you should explain the meaning of a "." (function composition) in Haskell if you are writing tutorial code, but certainly that would be a redundant explanation for anyone who has gone past chapter one/two of any Haskell book.
On the other hand some specific algorithm, like say red-black trees, could be a given for some programmers, and something very mysterious for others. In the second case you should add many comments to the code, or point to a document with further explanations.
Finally, one should notice that there is no consensus even among the masters. E.g. Dennis Ritchie is famous for being extremely parsimonious with comments, while Don Knuth is an advocate of "literate programming", where comments are as important as the code itself. A set of rules will never be a substitute for personal taste.

Real life examples of functional programming spirits applied in imperative languages?

Most people say that even though functional programming is less likely to land you a job, you can become a better imperative/OO programmer by learning it.
For me, it's mostly about writing "non-member, non-friend" functions that have no side effects. But I couldn't come up with more examples where functional programming can be effectively applied in imperative languages, because working around a language's lack of features is often too cumbersome.
So what are some more (specific) examples/techniques that you actually applied in non-functional languages that were inspired by functional programming?
Another one from my own experience:
This one is quite abstract, but due to the lack of "objects" in most FP languages, the culture there tends to favor rigorous data structure design. Usually, in OOP languages, because stuffing an extra variable into a class is too easy, things tend to get messy rather quickly. Though the same could be done using OCaml's and Haskell's record syntax, that kind of approach somehow feels out of place in FP.
Data Transformation
In my experience thinking on how to solve a problem functionally makes you think more about what data gets transformed to what - and not what state needs to be changed in order to keep the damn thing running...
Thinking of problems as transformations makes them appear different all by itself - which leads to different and most likely more elegant solutions.
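In Clojure terms, that mindset tends to look like a pipeline of transformations, and the same shape carries over to LINQ, streams, and friends (the orders data and keys here are made up):
(def orders [{:paid? true :total 10} {:paid? false :total 99} {:paid? true :total 5}])
(->> orders
     (filter :paid?)  ; which items count
     (map :total)     ; what we care about
     (reduce + 0))    ;=> 15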
Update: in C++ there is the <functional> header, and std::transform in <algorithm>.
Most Ruby Enumerable methods are inspired by higher-order functions from functional programming.
The new-ish JavaScript array functions, filter, map, every, some, reduce, and reduceRight, are functional-inspired.
Functional Java was already mentioned in the comments, but there is also some functional-ish stuff in Apache Commons Collections. See the org.apache.commons.collections.functors package.

What are some impressive examples of functional code?

I'm getting a bit tired of having to code explicitly for multicore if I want more speed, particularly when I'm just writing a one-off script. My dev box already has 8 cores and that number is going up a lot faster than the clock speed. Functional languages seem to offer a potential escape hatch, but I haven't put in the effort to master one of them yet.
I'd love to see some sample chunks of real-world code that are much better and/or more parallelizable than non-functional alternatives. I'm not picky about the language -- I'm more interested in the concepts.
Thanks!
How about MapReduce? It's incredibly parallelizable and even though it's not implemented in functional languages as far as the paper goes, it's inspired by Lisp's map and reduce.
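As a toy sketch of that shape in Clojure: swapping map for pmap parallelizes the "map" phase across cores with no other changes.
(require '[clojure.string :as str])
(defn count-words [lines]
  (->> lines
       (pmap #(frequencies (str/split % #"\s+")))  ; "map": per-line word counts, in parallel
       (reduce (partial merge-with +) {})))        ; "reduce": merge the per-line counts
(count-words ["to be or not to be" "to see or not to see"])
;=> {"to" 4, "be" 2, "or" 2, "not" 2, "see" 2} (key order may vary)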
This (long, but very good) video gives both an intro to F# and a compelling demo of how easy it is to parallelize code in the language:
http://channel9.msdn.com/pdc2008/TL11/
Your question is asking for material right at the state of the art. I think your best introduction to this field, with examples, is the book Implicit Parallel Programming in pH by Nikhil and Arvind.
LINQ is a nice example of functional programming in mainstream languages. Reified code and monads? In MY C#? :) Anyways, w.r.t. threading, there's mention of Parallel LINQ. By using immutability and higher order functions (and Expression, perhaps), libraries can parallelize things for us.
And another link to F# with async workflows. What's impressive is the ability to take sync code, and with a few small annotations turn it into async code. The code retains a lot of the imperative qualities you might be used to. You don't have to completely change things to take advantage of this; the compiler handles it all.
There's an extended example of a text indexer/searcher using mapreduce in Chapter 20 ("Programming Multi-core CPUs") of Programming Erlang. I don't know how impressive that is, but it looks like code mortals can write.
Purely Functional Data Structures (long PDF), by Chris Okasaki.
A teacher of mine used to joke that the greatest example of functional code is the code that is not written.

How do I get my brain moving in "lisp mode?"

My professor told us that we could choose a programming language for our next programming assignment. I've been meaning to try out a functional language, so I figured I'd try out clojure. The problem is that I understand the syntax and understand the basic concepts, but I'm having problems getting everything to "click" in my head. Does anyone have any advice? Or am I maybe picking the wrong language to start functional programming with?
It's a little like riding a bike, it just takes practice. Try solving some problems with it, maybe from Project Euler, and eventually it'll click.
Someone mentioned the book "The Little Schemer" and this is a pretty good read. Although it targets Scheme the actual problems will be worth working through.
Good luck!
Well, I encountered the same problem as you at the beginning, when I started doing OCaml, but the trick is that you must start thinking about what you want from the code and not how to do it!
For example, to calculate the squares of a list's elements, forget about the length of the list and such tricks, just think mathematically like this:
if the list is empty -> I am done
if not, then the list must have a head and tail -> you calculate the square of the head, then ask your function to do the same with the tail.
Just think about the general case and the base one, and that you are emitting data and not modifying it (unless you want to modify it ;) ).
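The same shape in Clojure, one line per case and no index bookkeeping:
(defn squares [xs]
  (if (empty? xs)
    '()                                ; the list is empty -> done
    (cons (* (first xs) (first xs))    ; square the head...
          (squares (rest xs)))))       ; ...and let the function handle the tail
(squares [1 2 3 4]) ;=> (1 4 9 16)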
Good luck!
You could check out The Little Schemer.
How about this: http://www.defmacro.org/ramblings/lisp.html
It is a very simple, step-by-step introduction to thinking in lisp from the point of view of a regular imperative programmer (Java, C#, etc.).
For educational purposes I would recommend PLT Scheme. It is a portable and powerful environment with very good examples and an even better documentation. It will help you to discover the thoughts behind functional programming step by step and in a very clean way. Choosing a little application to implement will help you learning the new language.
http://www.plt-scheme.org/
Additionally "Structure and Interpretation of Computer Programs" of H. Abelssn, G. Sussman, and J. Sussman is a very good book regarding Scheme (and programming).
Regards
mue
Take a look at 99 Lispy problems
Some thoughts on Lisps, not specific to Clojure (I'm not a Lisp expert, so I hope they're mostly correct and useful):
Coding in AST
I know little about compiler or interpreter theory, but every time I code in Lisp, it amazes me that it feels like directly building an AST.
That's part of what "code = data" means: coding in Lisp is a lot like filling data structures (nested lists) with AST nodes. Amazing, and it's easy to read too (with the right text editor).
A Programmable Programming Language
So code chunks are just nested lists, and list operations are part of the language. So you can very easily write Lisp code that generates Lisp code (see Lisp macros). This makes Lisp a programmable (in itself!) programming language.
This makes building a DSL or an interpreter in Lisp very easy (see also meta-circular evaluation).
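A tiny example of code that generates code (unless here is a made-up macro, not something in clojure.core):
(defmacro unless [test & body]
  `(if (not ~test) (do ~@body)))   ; a function from code (lists) to code (lists)
(macroexpand-1 '(unless (zero? x) (println "x is not zero")))
;=> (if (clojure.core/not (zero? x)) (do (println "x is not zero")))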
Never reboot anything
And in most Lisp systems, code (including documentation) can be introspected and hot swapped at run time.
Advanced OOP
Then, most Lisp systems have some sort of object system derived from CLOS, which is an advanced (compared to many OOP implementations) and configurable object system (see The Art of the Metaobject Protocol).
All these features were invented long ago, but I'm not sure they are available in many other programming languages (although most are catching up, e.g. with closures), so you have to "rediscover" and get used to them by practicing (see the books in other answers).
Just remember: it's all data!
Write some simple classic functions that Lisp is good at, like
reverse a list
tell if an atom is somewhere in an s-expression
write EQUAL to tell if 2 s-expressions are equal
write FRINGE to get the list of atoms at the fringe of an s-expression
write SUBST, then write SUBLIS
Symbolic differentiation
Algebraic simplification
write a simple EVAL and/or APPLY
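For instance, FRINGE from the list above is only a few lines in Clojure:
(defn fringe [x]
  (if (coll? x)
    (mapcat fringe x)   ; flatten every branch...
    [x]))               ; ...keeping the atoms
(fringe '(1 (2 (3 4)) 5)) ;=> (1 2 3 4 5)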
Understand that Lisp is good for these kinds of no-side-effect functional programs.
It is also useful for stateful side-effect (non-functional) programs, but those are more like "programs" than "functions".
Which is better for a given application depends on the application. In general, it should contain no less, and no more, state information than necessary.
Easy!
M-x lisp-mode
OK, OK, so you might not have Emacs for a brain. In all seriousness, what you need to do is to get really good at recursion. This can be quite a brain warp initially when trying to extend the concept of recursion beyond the canonical examples, but ultimately it will result in more fluid, lispy code.
Also, a lot of people get hung up on the parentheses, and I don't really know why - the syntax is very simple and consistent and can be mastered in minutes. For me, I came to Scheme after having learned C++ and Java, and I always thought that the difference between "functions" and "operators" was a false dichotomy, and it was refreshing to see that distinction eliminated.
As far as functional programming goes, as long as you can wrap your head around the fact that a function is a first-class value and can be passed both into and out of other functions you should be fine. The usefulness of this will become clear over time, but it's enough that you can write function-taking and function-returning functions.
Finally, I'm not sure what support Clojure has for macros, but they're considered an essential part of lisp. However, I wouldn't worry about learning them until you're deeply familiar with the above items - though macros are incredibly useful and versatile, they're also used less often than the other techniques I mentioned.
I'd start with a language that can be interpreted. I found Moscow ML to be fairly easy. It is a lightweight implementation of Standard ML.
My personal practice is to find a small project (something that might take 3-5 nights hacking away) and implement it. How about a blog filter tool? Maybe just a Towers of Hanoi or Linked List implementation (those are usually 1-night projects).
The way it usually works out is I implement it poorly the first time, throw away what I had, and it finally clicks a few hours in.
A HUGE help is taking a course in something like... um... LISP! :) The homework will force you to confront a lot of the concepts and it clicked for me long before the semester ended.
Good luck!!
Good luck. It took me until about halfway through the "Programming Languages" course in college before Scheme "clicked". Once that happened, though, everything just made sense, and I fell in love with functional programming.
Write a Lisp interpreter in Lisp.
If you haven't already, read up on what makes Lisp a unique language. If you don't do this first, you'll be trying to do the same things you could do in some other programming languages.
Then try to implement some small thing (try to make it useful to you or you might not have the motivation).
Lisp in a box is a great way to get your feet wet.
For me the important thing is to make sure you do everything in a 'lisp-y' way. Don't be tempted to think 'In Java I'd use a for loop here, how do I do for loops in Lisp?' but to go through enough examples and tutorials (as someone pointed out, SICP is perfect for this) that you can start to spot when code looks 'Lisp-y' and recognise common language paradigms.
I certainly know the feeling of looking at some code I've just written and intuitively knowing that it's correctly idiomatic for that language and platform/framework - that, I think, is when it 'clicks'.
Edit: And kudos for choosing a functional language, lesser students would have just done it in Java :)
Who said it is going to click? I'm always confused
But if you think about how much abstraction it is possible to hide away behind Lisp macros, your brain will explode.
:)
I'd check out Programming Clojure. It's a great book for non-lispers.
In addition to what other SO'ers have already suggested, here are my 2 cents:
Start learning the language and try out a few simple numerical/hobby problems in the language
IMPORTANT: Post the solution/code to StackOverflow, asking for people's opinion on whether that is really the Lispy way to do it.
Best of luck!

Concepts that surprised you when you read SICP?

SICP - "Structure and Interpretation of Computer Programs"
An explanation of the same would be nice.
Can someone explain metalinguistic abstraction?
SICP really drove home the point that it is possible to look at code and data as the same thing.
I understood this before when thinking about universal Turing machines (the input to a UTM is just a representation of a program) or the von Neumann architecture (where a single storage structure holds both code and data), but SICP made the idea much more clear. Scheme (Lisp) helped here, as the syntax for a program is exactly the same as the syntax for lists in general, namely S-expressions.
Once you have the "equivalence" of code and data, suddenly a lot of things become easy. For example, you can write programs that have different evaluation methods (lazy, nondeterministic, etc). Previously, I might have thought that this would require an extension to the programming language; in reality, I can just add it on to the language myself, thus allowing the core language to be minimal. As another example, you can similarly implement an object-oriented framework; again, this is something I might have naively thought would require modifying the language.
Incidentally, one thing I wish SICP had mentioned more: types. Type checking at compilation time is an amazing thing. The SICP implementation of object-oriented programming did not have this benefit.
I haven't read that book yet; I have only looked at the video courses, but they taught me a lot. Functions as first-class citizens was mind-blowing for me. Executing a "variable" was something very new to me. After watching those videos, the way I now see JavaScript and programming in general has greatly changed.
Oh, I think I've lied, the thing that really struck me was that + was a function.
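In Clojure/Scheme terms, + really is just a value you can pass around:
(apply + [1 2 3 4])         ;=> 10
(reduce + 0 (range 5))      ;=> 10
(map + [1 2 3] [10 20 30])  ;=> (11 22 33)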
I think the most surprising thing about SICP is to see how few primitives are actually required to make a Turing complete language--almost anything can be built from almost nothing.
Since we are discussing SICP, I'll put in my standard plug for the video lectures at http://groups.csail.mit.edu/mac/classes/6.001/abelson-sussman-lectures/, which are the best Introduction to Computer Science you could hope to get in 20 hours.
The one that I thought was really cool was streams with delayed evaluation. The one about generating primes was something I thought was really neat. Like a "PEZ" dispenser that magically dispenses the next prime in the sequence.
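In Clojure, that dispenser can be a plain lazy sequence (a naive trial-division sketch, not an efficient sieve):
(def primes
  (filter (fn [n] (not-any? #(zero? (mod n %)) (range 2 n)))
          (iterate inc 2)))   ; infinite, but only realized as you pull on it
(take 10 primes) ;=> (2 3 5 7 11 13 17 19 23 29)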
One example of "the data and the code are the same thing" from A. Rex's answer struck me in a very deep way.
When I was taught Lisp back in Russia, our teachers told us that the language was about lists: car, cdr, cons. What really amazed me was the fact that you don't need those functions at all - you can write your own, given closures. So, Lisp is not about lists after all! That was a big surprise.
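The classic demonstration, here in Clojure: pairs built from nothing but closures.
(defn my-cons [a b] (fn [pick] (pick a b)))
(defn my-car  [p]   (p (fn [a _] a)))
(defn my-cdr  [p]   (p (fn [_ b] b)))
(my-car (my-cons 1 2)) ;=> 1
(my-cdr (my-cons 1 2)) ;=> 2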
A concept I was completely unfamiliar with was the idea of coroutines, i.e. having two functions doing complementary work and having the program flow control alternate between them.
I was still in high school when I read SICP, and I had focused on the first and second chapters. For me at the time, I liked that you could express all those mathematical ideas in code, and have the computer do most of the dirty work.
When I was tutoring SICP, I was impressed by different aspects. For one, the conundrum that data and code are really the same thing, because code is executable data. The chapter on metalinguistic abstraction is mind-boggling to many and has many take-home messages. The first is that all the rules are arbitrary. This bothers some students, especially those who are physicists at heart. I think the beauty is not in the rules themselves, but in studying the consequences of the rules. A one-line change in code can mean the difference between lexical scoping and dynamic scoping.
Today, though SICP is still fun and insightful to many, I do understand that it's becoming dated. For one, it doesn't teach debugging skills and tools (I include type systems in there), which are essential for working in today's gigantic systems.
I was most surprised by how easy it is to implement languages - that one could write an interpreter for Scheme on a blackboard.
I came to understand recursion in a different sense after reading some of the chapters of SICP.
I am right now on Section "Sequences as Conventional Interfaces" and have found the concept of procedures as first class citizens quite fascinating. Also, the application of recursion is something I have never seen in any language.
Closures.
Coming from a primarily imperative background (Java, C#, etc. -- I only read SICP a year or so ago for the first time, and am re-reading it now), thinking in functional terms was a big revelation for me; it totally changed the way I think about my work today.
I read most of the book (without doing the exercises). What I learned is how to abstract the real world at a specific level, and how to implement a language.
Each chapter had ideas that surprised me:
The first two chapters show two ways of abstracting the real world: abstraction with procedures, and abstraction with data.
Chapter 3 introduces time in the real world. That results in states. We try assignment, which raises problems. Then we try streams.
Chapter 4 is about metalinguistic abstraction, in other words, we implement a new language by constructing an evaluator, which determines the meaning of expressions.
Since the evaluator in Chapter 4 is itself a Lisp program, it inherits the control structure of the underlying Lisp system. So in Chapter 5, we dive into the step-by-step operation of a real computer with the help of an abstract model, the register machine.
Thanks.
