I'm struggling with understanding the difference between functional and imperative programming. From reading https://www.sitepoint.com/what-is-functional-programming/ I see that there are a number of principles in functional programming that I use all the time in what I thought was imperative programming.
I've read that functional programs use pure functions, so does that mean every time I make or use a pure function I'm writing in the functional paradigm?
I've also passed functions around and used them as first-class objects; does that mean I was writing in the functional paradigm?
I pretty much use all of the functional paradigm principles in my code, but I never thought I was doing functional programming. Is the act of using any of these functional programming principles considered the functional programming paradigm?
Functional principles are just techniques and ideas. They are the bread and butter of the functional programming paradigm, which is what happens when you take these tools and lean on their individual advantages until they add up to systemic ones.
A pure function is just a function with no side effects. You've written a million of these. But now, if you write only pure functions, your app can be split across processing cores with no effort or risk.
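For instance (a minimal OCaml sketch; the function names are just illustrative), a pure function depends only on its arguments, while an impure one drags hidden state along:

```ocaml
(* Pure: the result depends only on the argument, so two calls with
   the same input always give the same answer, on any core, in any order. *)
let square x = x * x

(* Impure: it reads and mutates state outside its arguments, so the
   number and order of calls now matter. *)
let counter = ref 0
let next () =
  incr counter;
  !counter
```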
You've used constants before. But if you almost always use constants, then the things that are variable are the only things you have to think about when tracing code, and that is quite an advantage.
And you've chained functions before, but when you make everything pipe-able your entire language begins to feel like wiring up data flows, rather than giving the computer step-by-step instructions. This is much easier for humans to reason about and is less error-prone.
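Here's a rough sketch of that "wiring up data flows" feeling in OCaml, using the standard |> pipe operator (the data and the step names are invented for illustration):

```ocaml
(* Each stage is a pure transformation; the pipeline reads top to bottom
   as a description of the data flow, not as a sequence of mutations. *)
let report scores =
  scores
  |> List.filter (fun s -> s >= 0)        (* drop invalid entries *)
  |> List.map float_of_int                (* convert to floats    *)
  |> List.fold_left ( +. ) 0.0            (* sum them up          *)

let () = Printf.printf "%.1f\n" (report [3; -1; 4; 1; 5])
```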
The techniques always have their advantages. When they become baseline assumptions, those advantages multiply. That's the functional paradigm.
Moving my comments here for clarity:
Good question! In this article medium.com/#charlesbailey333/… it talks about how Rust has advantages over C++ because it incorporates functional programming ideas better. The evidence they give is that it supports Map, Reduce, and Filter. It almost seems like they're saying that those functions are "functional programming functions", but I don't think those functions are anything special. – Joshua Segal 16 mins ago
Okay great! This I can help with. So this author is struggling to use the actual term for what they're referencing. It's called "expressiveness". Basically it means: how close is the code I'm writing to the mental model of what I'm doing? For example, you want to give someone directions on how to get from A to B. Ideally, you do this by expressing it in turns and street names. However, if your language forces you to express this using the angle of the accelerator pedal and the angle of the steering wheel, it is much clunkier to do. C++ does it clunkily. Rust does it elegantly and expressively.
In general, the functional and declarative languages tend to be much better at "expressing" your ideas in code and visually. You have branching paths? Your code literally looks like a branching tree. You have a data flow with a transformer? Guess what, friend, that's just a function that transforms X to Y and some sort of pipe operator that takes care of looping and new info.
The thing is, expressiveness isn't a statistic or something you can optimize for. It's an emergent "feeling" when using the language. The paradigms are general principles that tend to be internally consistent that produce useful "feelings". FP feels like flowing pipes and transformations. OOP feels like gadgets and features that talk to each other. The different mental models have different uses. FP is better for data processing. OOP can be good for UI and stateful services. At the boundaries they can clash a little, which is where the clunk comes from in C++.
At this point anything that is "completely OOP" or "completely FP" is usually shit, to be frank, so it can be a little hard to see the identities of the two when they are so merged. If you do complete OOP you can't compose anything and you have to write a million connector classes. If you do complete FP you can't modify state or have side effects (like... uh... showing stuff on screen?). These are genres. What makes something a house beat? If something else uses a house beat is it automatically house music? Does anyone care about the categorization?
I am curious how functional languages compare (in general) to more "traditional" languages such as C# and Java for large programs. Does program flow become difficult to follow more quickly than if a non-functional language is used? Are there other issues or things to consider when writing a large software project using a functional language?
Thanks!
Functional programming aims to reduce the complexity of large systems, by isolating each operation from others. When you program without side-effects, you know that you can look at each function individually - yes, understanding that one function may well involve understanding other functions too, but at least you know it won't interfere with some other piece of system state elsewhere.
Of course this is assuming completely pure functional programming - which certainly isn't always the case. You can use more traditional languages in a functional way too, avoiding side-effects where possible. But the principle is an important one: avoiding side-effects leads to more maintainable, understandable and testable code.
Does program flow become difficult to follow more quickly than if a non-functional language is used?
"Program flow" is probably the wrong concept to analyze a large functional program. Control flow can become baroque because there are higher-order functions, but these are generally easy to understand because there is rarely any shared mutable state to worry about, so you can just think about arguments and results. Certainly my experience is that I find it much easier to follow an aggressively functional program than an aggressively object-oriented program where parts of the implementation are smeared out over many classes. And I find it easier to follow a program written with higher-order functions than with dynamic dispatch. I also observe that my students, who are more representative of programmers as a whole, have difficulties with both inheritance and dynamic dispatch. They do not have comparable difficulties with higher-order functions.
Are there other issues or things to consider when writing a large software project using a functional language?
The critical thing is a good module system. Here is some commentary.
The most powerful module system I know of is the unit system of PLT Scheme designed by Matthew Flatt and Matthias Felleisen. This very powerful system unfortunately lacks static types, which I find a great aid to programming.
The next most powerful system is the Standard ML module system. Unfortunately Standard ML, while very expressive, also permits a great many questionable constructs, so it is easy for an amateur to make a real mess. Also, many programmers find it difficult to use Standard ML modules effectively.
The Objective Caml module system is very similar, but there are some differences which tend to mitigate the worst excesses of Standard ML. The languages are actually very similar, but the styles and idioms of Objective Caml make it significantly less likely that beginners will write insane programs.
The least powerful/expressive module system for a functional language is the Haskell module system. This system has a grave defect: there are no explicit interfaces, so most of the cognitive benefit of having modules is lost. Another sad outcome is that while the Haskell module system gives users a hierarchical name space, use of this name space (import qualified, in case you're an insider) is often deprecated, and many Haskell programmers write code as if everything were in one big, flat namespace. This practice amounts to abandoning another of the big benefits of modules.
If I had to write a big system in a functional language and had to be sure that other people understood it, I'd probably pick Standard ML, and I'd establish very stringent programming conventions for use of the module system. (E.g., explicit signatures everywhere, opaque ascription with :>, and no use of open anywhere, ever.) For me the simplicity of the Standard ML core language (as compared with OCaml) and the more functional nature of the Standard ML Basis Library (as compared with OCaml) are more valuable than the superior aspects of the OCaml module system.
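To make the "explicit signatures everywhere" convention concrete, here is a small sketch in OCaml rather than Standard ML (the Counter example is invented); sealing a structure behind a signature with an abstract type plays roughly the role of SML's opaque ascription:

```ocaml
(* The signature is the explicit interface; clients see only this. *)
module type COUNTER = sig
  type t                      (* abstract: the representation is hidden *)
  val zero  : t
  val incr  : t -> t
  val value : t -> int
end

(* Ascribing the signature seals the module: outside it, t is opaque,
   so no client can depend on the fact that it is really an int. *)
module Counter : COUNTER = struct
  type t = int
  let zero = 0
  let incr n = n + 1
  let value n = n
end

let () = print_int (Counter.value (Counter.incr Counter.zero))   (* 1 *)
```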
I've worked on just one really big Haskell program, and while I found (and continue to find) working in Haskell very enjoyable, I really missed having explicit signatures.
Do functional languages cope well with complexity?
Some do. I've found ML modules and module types (both the Standard ML and Objective Caml flavors) invaluable tools for managing complexity, understanding complexity, and placing unbreachable firewalls between different parts of large programs. I have had less good experiences with Haskell.
Final note: these aren't really new issues. Decomposing systems into modules with separate interfaces checked by the compiler has been an issue in Ada, C, C++, CLU, Modula-3, and I'm sure many other languages. The main benefit of a system like Standard ML or Caml is that you get explicit signatures and modular type checking (something that the C++ community is currently struggling with around templates and concepts). I suspect that these issues are timeless and are going to be important for any large system, no matter the language of implementation.
I'd say the opposite. It is easier to reason about programs written in functional languages due to the lack of side-effects.
Usually it is not a matter of "functional" vs "procedural"; it is rather a matter of lazy evaluation.
Lazy evaluation is when you can handle values without actually computing them yet; rather, the value is attached to an expression which should yield the value if it is needed. The main example of a language with lazy evaluation is Haskell. Lazy evaluation allows the definition and processing of conceptually infinite data structures, so this is quite cool, but it also makes it somewhat more difficult for a human programmer to closely follow, in his mind, the sequence of things which will really happen on his computer.
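Haskell gives you this behaviour by default; as a small sketch of the same idea spelled out explicitly, OCaml's lazy keyword attaches a value to an unevaluated expression (the printout is just there to show when evaluation happens):

```ocaml
(* Nothing is computed here; we only build a suspension. *)
let answer = lazy (print_endline "computing..."; 6 * 7)

let () =
  print_endline "before forcing";
  (* Only now is the expression evaluated, and the result is cached. *)
  Printf.printf "%d\n" (Lazy.force answer);
  Printf.printf "%d\n" (Lazy.force answer)   (* no recomputation *)
```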
For mostly historical reasons, most languages with lazy evaluation are "functional". I mean that these languages have good syntactic support for constructions which are typically functional.
Without lazy evaluation, functional and procedural languages allow the expression of the same algorithms, with the same complexity and similar "readability". Functional languages tend to value "pure functions", i.e. functions which have no side effects. The order of evaluation for pure functions is irrelevant: in that sense, pure functions help the programmer know what happens by flagging the parts where knowing what happens in what order is not important. But that is an indirect benefit, and pure functions also appear in procedural languages.
From what I can say, here are the key advantages of functional languages for coping with complexity:
Functional programming hates side effects. You can really black-box the different layers, and you won't be afraid of parallel processing (an actor model like Erlang's is much easier to use than locks and threads).
Culturally, functional programmers are used to designing a DSL to express and solve a problem. Identifying the fundamental primitives of a problem is a radically different approach from rushing to the brand-new trendy framework.
Historically, this field has been led by very smart people: garbage collection, object orientation, metaprogramming... all of those concepts were first implemented on functional platforms.
There is plenty of literature.
But the downside of those languages is that they lack support and experience in the industry. Portability, performance and interoperability may be a real challenge, whereas on other platforms like Java all of this seems obvious. That said, a language based on the JVM like Scala could be a really nice fit to benefit from both sides.
Does program flow become difficult to follow more quickly than if a non-functional language is used?
This may be the case, in that functional style encourages the programmer to prefer thinking in terms of abstract, logical transformations, mapping inputs to outputs. Thinking in terms of "program flow" presumes a sequential, stateful mode of operation--and while a functional program may have sequential state "under the hood", it usually isn't structured around that.
The difference in perspective can be easily seen by comparing imperative vs. functional approaches to "process a collection of data". The former tends to use structured iteration, like a for or while loop, telling the program "do this sequence of tasks, then move to the next one and repeat, until done". The latter tends to use abstracted recursion, like a fold or map function, telling the program "here's a function to combine/transform elements--now use it". It isn't necessary to follow the recursive program flow through a function like map; because it's a stateless abstraction, it's sufficient to think in terms of what it means, not what it's doing.
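Since OCaml supports both styles, a small sketch can show the two mindsets side by side (summing a list; the names are illustrative):

```ocaml
(* Imperative: step-by-step instructions driving a mutable accumulator. *)
let sum_imperative xs =
  let total = ref 0 in
  List.iter (fun x -> total := !total + x) xs;
  !total

(* Functional: say how two values combine and let fold do the walking. *)
let sum_functional xs = List.fold_left ( + ) 0 xs
```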
It's perhaps somewhat telling that the functional approach has been slowly creeping into non-functional languages--consider foreach loops, Python's list comprehensions...
I thought the whole idea was to have only computation with NO state and NO side effects. Now if a Clojure app (or even worse, a reusable Clojure library) can use and create any Java object, how can I be sure I don't get side effects or state?
FP is a paradigm, a concept, but not necessarily a dogma. Clojure trusts the programmer to make thoughtful decisions about where he'll depart from FP. In exchange, Clojure offers the staggering cornucopia of code that is available in the form of Java libraries. This makes it relatively easy and painless to write a GUI app in Clojure, say, or a Web server or any of the things covered by Java library code.
Note that the Java "hole" is not the only escape hatch Clojure offers from FP: References and atoms hold state and Clojure offers functions to change it under controlled conditions. I think this pragmatic approach makes Clojure useful and will help make it popular.
You cannot be sure, apart from consulting documentation or using a Java decompiler(?). This ability certainly defies the idea of pure functional programming, but the real world is not a particularly pure place, and purely functional languages can't get much traction against it. Witness all the contortionism with monads in Haskell. Besides, mutable state is very powerful computationally — many algorithms become much faster and much more economical of memory when implemented with mutable state.
Clojure is not a pure functional programming language. What you said would stand in Haskell, but not in Clojure. Clojure encourages functional programming, but it doesn't force it. Clojure is built to help you program in a functional style, but also to allow you to actually get stuff done. Clojure makes sure that when you use state, you have to be explicit about it. If you want to be sure that you're programming purely functional, you have to make sure yourself. Clojure isn't pure, so it doesn't promise purity.
Because Clojure is meant for the real world it makes compromises, and therefore it isn't a pure functional language.
Haskell was made as a proof that it was even possible to make a pure functional programming language that could work in the real world, so if pureness is what you desire, your journey should take you there.
Referential transparency (which is a consequence of the lack of side effects) isn't the only motivation for functional programming. The concept of lazy evaluation is thought to be one of the central features of the functional style, since it allows you to modularly construct programs.
In other words functional programming is at least as much about generic programming as it is about providing static safety guarantees. I'm pretty sure you already knew this, but I thought it might be appropriate to articulate the idea.
Allowing side effects is a bit of a trade-off which you need to justify for yourself. Many applications do need to deal with quite a lot of stateful computation, some languages are just more strict about dealing with this than others.
Functional programming has been around for years and years in varying degrees of "purity", sort of waiting for a killer app. Clojure explicitly embraces a specific application of functional programming: it focuses on single-address-space parallel programming, and its FP paradigm really shines in this area. Much of the Java world is single threaded and hence does not need this tool.
So yes, you are absolutely correct: Clojure breaks the functional paradigm when it calls into Java, because it doesn't really need FP for these parts and because the rest of the world provides so very much good code that also does not need functional programming.
My professor told us that we could choose a programming language for our next programming assignment. I've been meaning to try out a functional language, so I figured I'd try out clojure. The problem is that I understand the syntax and understand the basic concepts, but I'm having problems getting everything to "click" in my head. Does anyone have any advice? Or am I maybe picking the wrong language to start functional programming with?
It's a little like riding a bike, it just takes practice. Try solving some problems with it, maybe Project Euler, and eventually it'll click.
Someone mentioned the book "The Little Schemer" and this is a pretty good read. Although it targets Scheme the actual problems will be worth working through.
Good luck!
Well, for me, I encountered the same problem as you do in the beginning when I started doing OCaml, but the trick is that you must start thinking about what you want from the code and not how to do it!
For example, to calculate the squares of a list's elements, forget about the length of the list and such tricks, just think mathematically like this:
if the list is empty -> I am done
if not, then the list must have a head and tail -> you calculate the square of the head, then ask your function to do the same with the tail.
Just think about the general case and the base one, and that you are emitting data and not modifying it (unless you want to modify it ;) ).
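In OCaml that recipe is almost a literal transcription of the two cases (a minimal sketch):

```ocaml
(* Base case: an empty list has no squares.
   General case: square the head, then recurse on the tail. *)
let rec squares = function
  | [] -> []
  | x :: rest -> (x * x) :: squares rest

(* squares [1; 2; 3] evaluates to [1; 4; 9]; the input list is untouched. *)
```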
Good luck!
You could check out The Little Schemer.
How about this: http://www.defmacro.org/ramblings/lisp.html
It is a very simple, step-by-step introduction to thinking in lisp from the point of view of a regular imperative programmer (Java, C#, etc.).
For educational purposes I would recommend PLT Scheme. It is a portable and powerful environment with very good examples and an even better documentation. It will help you to discover the thoughts behind functional programming step by step and in a very clean way. Choosing a little application to implement will help you learning the new language.
http://www.plt-scheme.org/
Additionally "Structure and Interpretation of Computer Programs" of H. Abelssn, G. Sussman, and J. Sussman is a very good book regarding Scheme (and programming).
Regards
mue
Take a look at 99 Lispy problems
Some thoughts on Lisps, not specific to Clojure (I'm not a Lisp expert, so I hope they're mostly correct and useful):
Coding in AST
I know little about compiler or interpreter theory, but every time I code in Lisp, it amazes me that it feels like directly building an AST.
That's part of what "code = data" means, coding in Lisp is a lot like filling data structures (nested lists) with AST nodes. Amazing, and it's easy to read too (with the right text editor).
A Programmable Programming Language
Code chunks are just nested lists, and list operations are part of the language, so you can very easily write Lisp code that generates Lisp code (see Lisp macros). This makes Lisp a programmable (in itself!) programming language.
This makes building a DSL or an interpreter in Lisp very easy (see also meta-circular evaluation).
Never reboot anything
And in most Lisp systems, code (including documentation) can be introspected and hot swapped at run time.
Advanced OOP
Then, most Lisp Systems have some sort of Object System derived from CLOS, which is an advanced (compared to many OOP implementations) and configurable Object System (see The Art of the Metaobject Protocol).
All these features were invented long ago, but I'm not sure they are available in many other programming languages (although most are catching up, e.g. with closures), so you have to "rediscover" them and get used to them by practicing (see the books in other answers).
Just remember: it's all data!
Write some simple classic functions that Lisp is good at, like
reverse a list
tell if an atom is somewhere in an s-expression
write EQUAL to tell if 2 s-expressions are equal
write FRINGE to get the list of atoms at the fringe of an s-expression
write SUBST, then write SUBLIS
Symbolic differentiation
Algebraic simplification
write a simple EVAL and/or APPLY
Understand that Lisp is good for these kinds of no-side-effect functional programs.
It is also useful for stateful side-effect (non-functional) programs, but those are more like "programs" than "functions".
Which is better for a given application depends on the application. In general, it should contain no less, and no more, state information than necessary.
Easy!
M-x lisp-mode
OK, OK, so you might not have Emacs for a brain. In all seriousness, what you need to do is to get really good at recursion. This can be quite a brain warp initially when trying to extend the concept of recursion beyond the canonical examples, but ultimately it will result in more fluid, lispy code.
Also, a lot of people get hung up on the parentheses, and I don't really know why - the syntax is very simple and consistent and can be mastered in minutes. For me, I came to Scheme after having learned C++ and Java, and I always thought that the difference between "functions" and "operators" was a false dichotomy, so it was refreshing to see that distinction eliminated.
As far as functional programming goes, as long as you can wrap your head around the fact that a function is a first-class value and can be passed both into and out of other functions you should be fine. The usefulness of this will become clear over time, but it's enough that you can write function-taking and function-returning functions.
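For example, in OCaml (names invented), a function that takes a function and a function that returns one are both one-liners:

```ocaml
(* Takes a function as an argument and applies it twice. *)
let twice f x = f (f x)

(* Returns a brand new function built from its argument. *)
let add n = fun m -> n + m

let () = print_int (twice (add 3) 10)   (* prints 16 *)
```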
Finally, I'm not sure what support Clojure has for macros, but they're considered an essential part of lisp. However, I wouldn't worry about learning them until you're deeply familiar with the above items - though macros are incredibly useful and versatile, they're also used less often than the other techniques I mentioned.
I'd start with a language that can be interpreted. I found Moscow ML to be fairly easy. It is a lightweight implementation of Standard ML.
My personal practice is to find a small project (something that might take 3-5 nights hacking away) and implement it. How about a blog filter tool? Maybe just a Towers of Hanoi or Linked List implementation (those are usually 1-night projects).
The way it usually works out is I implement it poorly the first time, throw away what I had, and it finally clicks a few hours in.
A HUGE help is taking a course in something like... um... LISP! :) The homework will force you to confront a lot of the concepts and it clicked for me long before the semester ended.
Good luck!!
Good luck. It took me until about halfway through the "Programming Languages" course in college before Scheme "clicked". Once that happened, though, everything just made sense, and I fell in love with functional programming.
Write a Lisp interpreter in Lisp.
If you haven't already, read up on what makes Lisp a unique language. If you don't do this first, you'll just be trying to do the same things you could do in some other programming language.
Then try to implement some small thing (try to make it useful to you or you might not have the motivation).
Lisp in a box is a great way to get your feet wet.
For me the important thing is to make sure you do everything in a 'lisp-y' way. Don't be tempted to think 'In Java I'd use a for loop here, how do I do for loops in Lisp?' but to go through enough examples and tutorials (as someone pointed out, SICP is perfect for this) that you can start to spot when code looks 'Lisp-y' and recognise common language paradigms.
I certainly know the feeling of looking at some code I've just written and intuitively knowing that it's correctly idiomatic for that language and platform/framework - that, I think, is when it 'clicks'.
Edit: And kudos for choosing a functional language, lesser students would have just done it in Java :)
Who said it is going to click? I'm always confused.
But if you think about how much abstraction it is possible to hide away behind Lisp macros, your brain will explode.
:)
I'd check out Programming Clojure. It's a great book for non-lispers.
In addition to what other SO'ers have already suggested, here are my 2 cents:
Start learning the language and try out a few simple numerical/hobby problems in the language
IMPORTANT: Post the solution/code to StackOverflow, asking for people's opinion if that is really the LISPy way to do it.
Best of luck!
What do you think the benefits of functional programming are? And how do they apply to programmers today?
What are the greatest differences between functional programming and OOP?
The style of functional programming is to describe what you want, rather than how to get it. ie: instead of creating a for-loop with an iterator variable and marching through an array doing something to each cell, you'd say the equivalent of "this label refers to a version of this array where this function has been done on all the elements."
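In OCaml, for instance, that sentence maps almost word for word onto a single expression (xs and the doubling function are just placeholders):

```ocaml
(* "doubled is the version of xs where doubling has been done to every
   element" - no loop counter, no in-place mutation. *)
let doubled xs = List.map (fun x -> x * 2) xs
```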
Functional programming moves more basic programming ideas into the compiler, ideas such as list comprehensions and caching.
The biggest benefit of Functional programming is brevity, because code can be more concise. A functional program doesn't create an iterator variable to be the center of a loop, so this and other kinds of overhead are eliminated from your code.
The other major benefit is concurrency, which is easier to do with functional programming because the compiler is taking care of most of the operations which used to require manually setting up state variables (like the iterator in a loop).
Some performance benefits can be seen in the context of a single processor as well, depending on the way the program is written, because most functional languages and extensions support lazy evaluation. In Haskell you can say "this label represents the list of all the even numbers". Such a list is infinitely long, but you can ask for the 100,000th element at any moment without having to know--at initialization time--just what the largest value is you're going to need. The value will be calculated only when you need it, and no further.
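Haskell gets this for free from default laziness; as a hedged sketch of the same idea, OCaml's standard Seq module can describe the infinite sequence of even numbers and only pay for the elements actually inspected:

```ocaml
(* An infinite, lazily produced sequence of integers starting at n. *)
let rec from n : int Seq.t = fun () -> Seq.Cons (n, from (n + 1))

let evens = Seq.filter (fun k -> k mod 2 = 0) (from 0)

(* Walk to index i, forcing only as many elements as needed. *)
let rec nth i (s : 'a Seq.t) =
  match s () with
  | Seq.Nil -> None
  | Seq.Cons (x, rest) -> if i = 0 then Some x else nth (i - 1) rest

(* nth 100_000 evens computes just enough of the sequence to answer. *)
```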
The biggest benefit is that it's not what you're used to. Pick a language like Scheme and learn to solve problems with it, and you'll become a better programmer in languages you already know. It's like learning a second human language. You assume that others are basically a variation on your own because you have nothing to compare it with. Exposure to others, particular ones that aren't related to what you already know, is instructive.
Why Functional Programming Matters
http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf
Abstract
As software becomes more and more complex, it is more and more important to structure it well. Well-structured software is easy to write and to debug, and provides a collection of modules that can be reused to reduce future programming costs.
In this paper we show that two features of functional languages in particular, higher-order functions and lazy evaluation, can contribute significantly to modularity. As examples, we manipulate lists and trees, program several numerical algorithms, and implement the alpha-beta heuristic (an algorithm from Artificial Intelligence used in game-playing programs). We conclude that since modularity is the key to successful programming, functional programming offers important advantages for software development.
A good starting point therefore would be to try to understand some things that are not possible in imperative languages but possible in functional languages.
If you're talking about computability, there is of course nothing that is possible in functional but not imperative programming (or vice versa).
The point of different programming paradigms isn't to make things possible that weren't possible before, it's to make things easy that were hard before.
Functional programming aims to let you more easily write programs that are concise, bug-free and parallelizable.
I think the most practical example of the need for functional programming is concurrency - functional programs are naturally thread-safe, and given the rise of multi-core hardware this is of utmost importance.
Functional programming also increases modularity - in imperative code you can often see methods/functions that are far too long, whereas in functional code you'll almost never see a function more than a couple of lines long. And since everything is decoupled, re-usability is much improved and unit testing is very, very easy.
It doesn't have to be one or the other: using a language like C# 3.0 allows you to mix the best elements of each. OO can be used for the large-scale structure at class level and above, functional style for the small-scale structure at method level.
Using the Functional style allows code to be written that declares its intent clearly, without being mixed up with control flow statements, etc. Because of the principles like side-effect free programming, it is much easier to reason about code, and check its correctness.
Once the program grows, the number of commands in our vocabulary becomes too high, making it very difficult to use. This is where object-oriented programming makes our life easier, because it allows us to organize our commands in a better way.
We can associate all commands that involve customer with some customer entity (a class), which makes the description a lot clearer. However, the program is still a sequence of commands specifying how it should proceed.
Functional programming provides a completely different way of extending the vocabulary. We are not limited to adding new primitive commands; we can also add new control structures - primitives that specify how we can put commands together to create a program. In imperative languages, we were able to compose commands in a sequence or using a limited number of built-in constructs such as loops, but if you look at typical programs, you'll still see many recurring structures: common ways of combining commands.
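As a tiny illustration in OCaml (the combinator is invented for the example), a "new control structure" is often nothing more than a higher-order function that captures one of those recurring ways of combining commands:

```ocaml
(* A home-made control structure: run an action n times. Nothing in the
   language core knows about it; it is just an ordinary function. *)
let rec repeat n action =
  if n > 0 then begin
    action ();
    repeat (n - 1) action
  end

let () = repeat 3 (fun () -> print_endline "hello")
```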
Do not think of functional programming in terms of a "need". Instead, think of it as another programming technique that will open up your mind just as OOP, templates, assembly language, etc may have completely changed your way of thinking when (if) you learned them. Ultimately, learning functional programming will make you a better programmer.
If you don't already know functional programming then learning it gives you more ways to solve problems.
FP is a simple generalization that promotes functions to first-class values, whereas OOP is for large-scale structuring of code. There is some overlap, however, where OOP design patterns can be represented directly and much more succinctly using first-class functions.
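For instance, the Strategy pattern largely collapses to "pass a function"; a hedged OCaml sketch with invented names:

```ocaml
(* Instead of a Strategy interface plus concrete classes, the varying
   behaviour is just a function parameter. *)
let checkout ~pricing items =
  List.fold_left (fun total item -> total +. pricing item) 0.0 items

let regular price = price
let sale price = price *. 0.8

(* Same "context", two strategies, no class hierarchy. *)
let () =
  Printf.printf "%.2f %.2f\n"
    (checkout ~pricing:regular [10.0; 20.0])
    (checkout ~pricing:sale    [10.0; 20.0])
```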
Many languages provide both FP and OOP, including OCaml, C# 3.0 and F#.
Cheers,
Jon Harrop.