A basic cons cell sticks together two arbitrary things and is the basic unit that allows the construction of linked lists and arbitrary data objects. Question: is there a reason to stick with this simplistic language design decision (for instance, in all Lisp families)?
Why not use fixed-length arrays for this purpose (or some nested stacks)? I can't foresee any problems with that, and there would be clear advantages: more "packed" memory, less pointer resolution, and fewer "dead-weight" cons cells to define the hierarchy of the data.
You have titled your question “Functional Programming: why pair as a basic constructed unit?”, but this title does not correctly reflect the fact that many important and well-known functional languages (e.g. Haskell, F#, Scala, SML, Clojure, etc.) have either algebraic data types or a different collection of data structures, in which the pair is just one of the different types of constructors, if it is available at all. The situation is similar for other multiparadigm languages that have support for functional programming, like C++, Java, Objective-C, Swift, etc.
In all these cases the pair, if present, is exactly as “basic” as an array, a record, a list, or any other type of data constructor.
What is left is the family of Lisp languages, notably Common Lisp and Scheme, which, besides having a rich set of data structures like those cited in the comment of Rainer Joswig, use the pair for an important task: as the basic data constructor to represent programs.
The fact that Lisp code is an s-expression (that is, a list of lists and atoms) has fundamental consequences, the most notable of all being the rise of macro systems, which allow programmers to easily create new syntax, or even new domain-specific languages.
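To make "code as data" concrete, here is a sketch in Haskell of what an s-expression is (the type and constructor names SExpr, Atom, and List are just illustrative):

-- An s-expression is nothing but a tree of lists and atoms.
data SExpr = Atom String | List [SExpr]

-- The Lisp program (+ 1 (* 2 3)) represented as a plain data value:
example :: SExpr
example = List [Atom "+", Atom "1", List [Atom "*", Atom "2", Atom "3"]]

Because a Lisp program is literally such a value, a macro is just an ordinary function from s-expressions to s-expressions.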
Renzo's answer about other structures in functional programming is spot on. Functional programming is about aligning programming with logic and mathematics, where expressions denote values and there's no such thing as a side effect. (Of course, in practice, we need side effects for I/O, etc.) Functional programming doesn't require the singly linked list as a fundamental construct.
Lists are one of the things that make Lisps lispy, though.
One of the reasons that pairs are so common in the Lisp family of languages may be that ordered pairs are very easy to implement in the lambda calculus that Lisps are inspired by. (I say "inspired by" rather than "based on" because after the syntax and use of lambda to denote anonymous functions, there's plenty of difference, and it's best not to assume that things about one apply to the other.) See the answer to Use of lambda for cons/car/cdr definition in SICP for a quick lesson in how cons, car, and cdr can be implemented using nothing but lexical closures.
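As a quick sketch of that construction, transcribed from the lambda calculus into Haskell (the linked answer gives the Scheme original):

-- A pair built from nothing but closures: cons returns a function that
-- remembers x and y; car and cdr extract them by passing in a selector.
cons :: a -> b -> (a -> b -> c) -> c
cons x y f = f x y

car :: ((a -> b -> a) -> a) -> a
car p = p (\x _ -> x)

cdr :: ((a -> b -> b) -> b) -> b
cdr p = p (\_ y -> y)

main :: IO ()
main = do
  print (car (cons 1 2))  -- 1
  print (cdr (cons 1 2))  -- 2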
Related
I am aware that declarative programming just passes the input and expects the output without stating the procedure for how it is done. Functional programming is a programming paradigm which takes an input and returns an output. When I checked higher-order functional programming, we pass a function to map/reduce, which does not reveal the procedure for how it is done. So are higher-order functional programming and declarative programming the same thing?
Short answer: No.
Wikipedia defines declarative programming as:
In computer science, declarative programming is a programming paradigm - a style of building the structure and elements of computer programs - that expresses the logic of a computation without describing its control flow.
Or to state it a bit boldly: "Say what you want, not how you want it."
This is thus in contrast with imperative programming languages, where a program is seen as a set of instructions that are executed one after another. The fact that map, etc. do not reveal the procedure does not make it declarative: one can use a lot of C libraries that are proprietary and do not allow you to inspect the source code. That, however, does not mean that these libraries are declarative.
The definition of functional programming on the other hand is:
In computer science, functional programming is a programming paradigm - a style of building the structure and elements of computer programs - that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements.
Based on these definitions, one could say that functional programming is a subset of declarative programming. In a practical sense, however, if we follow the strict definitions, no programming language nowadays is purely and unambiguously declarative or functional. One can however say that Haskell is more declarative than Java.
Declarative programming is usually considered "safer", since people tend to have trouble managing side effects; a lot of programming errors are the result of not taking all side effects into account. On the other hand it is hard to:
design a language that allows a programmer to describe what he wants without going into details on how to do it;
implement a compiler that will generate - based on such programs - an efficient implementation; and
deal with the fact that some problems have inherent side effects. For instance, if you work with a database, a network connection or a file system, then reading/writing a file is supposed to have side effects. One can of course decide not to make this part of the programming language (for instance, many constraint programming languages do not allow these types of actions, and are a "sub-language" in a larger system).
There have been several attempts to design such a language. The most popular are - in my opinion - logic programming, functional programming, and constraint programming. Each has its merits and problems. We can also observe this declarative approach in, for instance, databases (like SQL) and text/XML processing (with XSLT, XPath, regular expressions, ...), where one does not specify how a query is resolved, but simply specifies - through, for instance, the regular expression - what one is looking for.
Whether a programming language is declarative, however, is a bit of a fuzzy discussion. Although programming languages, modeling languages and libraries like Haskell, Prolog, Gecode, ... have definitely made programming more declarative, they are probably not declarative in the strictest sense. In the strictest sense, one should expect that regardless of how you write the logic, the compiler will always come up with the same result (although it might take a bit longer).
Say for instance we want to check whether a list is empty in Haskell. We can write this like:
is_empty1 :: [a] -> Bool
is_empty1 [] = True
is_empty1 (_:_) = False
We can however write it like this as well:
is_empty2 :: [a] -> Bool
is_empty2 l = length l == 0
Both should give the same result for the same queries. If we give them an infinite list, however, is_empty1 (repeat 0) will return False whereas is_empty2 (repeat 0) will loop forever. So that means we somehow still wrote some "control flow" into the program: we have defined - to some extent - how Haskell should evaluate this. Although lazy evaluation means that a programmer does not really specify what should be evaluated first, there are still specifications for how Haskell will evaluate this.
According to some people, this is the difference between programming and specifying. One of my professors once stated that according to him, the difference is that when you program something, you somehow have control over how it is evaluated, whereas when you specify something, you have no control. But again, this is only one of many definitions.
Not entirely. Functional programming emphasizes what to compute rather than how to compute it. However, there are patterns available in functional programming that are pretty much the control flow patterns you would commonly associate with imperative programming; take for example the following control flow:
let continue = ref true in
while !continue do
  ...
  if cond then continue := false
  else
    ...
done
Looks familiar, huh? Here you can see some imperative constructs, but this time around we are in more control.
What is a combinator?
Is it "a function or definition with no free variables" (as defined on SO)?
Or how about this: according to John Hughes in his well-known paper on Arrows, "a combinator is a function which builds program fragments from program fragments", which is advantageous because "... the programmer using combinators constructs much of the desired program automatically, rather than writing every detail by hand". He goes on to say that map and filter are two common examples of such combinators.
Some combinators which match the first definition:
S
K
Y
others from To Mock a Mockingbird (I may be wrong -- I haven't read this book)
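For concreteness, here is what two of those look like as ordinary Haskell functions. Note that neither body mentions any variable not bound by the definition itself, which is exactly the "no free variables" condition (a sketch; the lowercase names s and k are mine):

-- The K combinator: always return the first argument.
k :: a -> b -> a
k x _ = x

-- The S combinator: apply f to x and to (g x).
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- Famously, s k k behaves as the identity function:
-- s k k x = k x (k x) = x
identity :: a -> a
identity = s k k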
Some combinators which match the second definition:
map
filter
fold/reduce (presumably)
any of >>=, compose, fmap?
I'm not interested in the first definition -- those would not help me to write a real program (+1 if you convince me I'm wrong). Please help me understand the second definition. I think map, filter, and reduce are useful: they allow me to program at a higher level -- fewer mistakes, shorter and clearer code. Here are some of my specific questions about combinators:
What are more examples of combinators such as map, filter?
What combinators do programming languages often implement?
How can combinators help me design a better API?
How do I design effective combinators?
What are combinators similar to in a non-functional language (say, Java), or what do these languages use in place of combinators?
Update
Thanks to @C. A. McCann, I now have a somewhat better understanding of combinators. But one question is still a sticking point for me:
What is the difference between a functional program written with, and one written without, heavy use of combinators?
I suspect the answer is that the combinator-heavy version is shorter, clearer, more general, but I would appreciate a more in-depth discussion, if possible.
I'm also looking for more examples and explanations of complex combinators (i.e. more complex than fold) in common programming languages.
I'm not interested in the first definition -- those would not help me to write a real program (+1 if you convince me I'm wrong). Please help me understand the second definition. I think map, filter, and reduce are useful: they allow me to program at a higher level -- fewer mistakes, shorter and clearer code.
The two definitions are basically the same thing. The first is based on the formal definition and the examples you give are primitive combinators--the smallest building blocks possible. They can help you to write a real program insofar as, with them, you can build more sophisticated combinators. Think of combinators like S and K as the machine language of a hypothetical "combinatory computer". Actual computers don't work that way, of course, so in practice you'll usually have higher-level operations implemented behind the scenes in other ways, but the conceptual foundation is still a useful tool for understanding the meaning of those higher-level operations.
The second definition you give is more informal and about using more sophisticated combinators, in the form of higher-order functions that combine other functions in various ways. Note that if the basic building blocks are the primitive combinators above, everything built from them is a higher-order function and a combinator as well. In a language where other primitives exist, however, you have a distinction between things that are or are not functions, in which case a combinator is typically defined as a function that manipulates other functions in a general way, rather than operating on any non-function things directly.
What are more examples of combinators such as map, filter?
Far too many to list! Both of those transform a function that describes behavior on a single value into a function that describes behavior on an entire collection. You can also have functions that transform only other functions, such as composing them end-to-end, or splitting and recombining arguments. You can have combinators that turn single-step operations into recursive operations that produce or consume collections. Or all kinds of other things, really.
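A few small examples of that last kind, sketched in Haskell (these particular names are mine; the first two exist in the standard library as (.) and flip):

-- Compose two functions end-to-end.
compose :: (b -> c) -> (a -> b) -> a -> c
compose f g x = f (g x)

-- Swap the order in which a function takes its two arguments.
flipArgs :: (a -> b -> c) -> b -> a -> c
flipArgs f y x = f x y

-- "Split and recombine": feed one input to two functions at once.
both :: (a -> b) -> (a -> c) -> a -> (b, c)
both f g x = (f x, g x)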
What combinators do programming languages often implement?
That's going to vary quite a bit. There're relatively few completely generic combinators--mostly the primitive ones mentioned above--so in most cases combinators will have some awareness of any data structures being used (even if those data structures are built out of other combinators anyway), in which case there are typically a handful of "fully generic" combinators and then whatever various specialized forms someone decided to provide. There are a ridiculous number of cases where (suitably generalized versions of) map, fold, and unfold are enough to do almost everything you might want.
How can combinators help me design a better API?
Exactly as you said, by thinking in terms of high-level operations, and the way those interact, instead of low-level details.
Think about the popularity of "for each"-style loops over collections, which let you abstract over the details of enumerating a collection. These are just map/fold operations in most cases, and by making that a combinator (rather than built-in syntax) you can do things such as take two existing loops and directly combine them in multiple ways--nest one inside the other, do one after the other, and so on--by just applying a combinator, rather than juggling a whole bunch of code around.
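A small Haskell sketch of that point: once a "loop" is a value, nesting two loops or running one after the other is ordinary function combination (sumList and friends are illustrative names):

-- One "loop" over a collection, as a fold.
sumList :: [Int] -> Int
sumList = foldr (+) 0

-- Nest one loop inside the other: just compose.
sumNested :: [[Int]] -> Int
sumNested = sumList . map sumList

-- Do one loop after the other: just combine the results.
sumBoth :: [Int] -> [Int] -> Int
sumBoth xs ys = sumList xs + sumList ys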
How do I design effective combinators?
First, think about what operations make sense on whatever data your program uses. Then think about how those operations can be meaningfully combined in generic ways, as well as how operations can be broken down into smaller pieces that are connected back together. The main thing is to work with transformations and operations, not direct actions. When you have a function that just does some complicated bit of functionality in an opaque way and only spits out some sort of pre-digested result, there's not much you can do with that. Leave the final results to the code that uses the combinators--you want things that take you from point A to point B, not things that expect to be the beginning or end of a process.
What are combinators similar to in a non-functional language (say, Java), or what do these languages use in place of combinators?
Ahahahaha. Funny you should ask, because objects are really higher-order thingies in the first place--they have some data, but they also carry around a bunch of operations, and quite a lot of what constitutes good OOP design boils down to "objects should usually act like combinators, not data structures".
So probably the best answer here is that instead of combinator-like things, they use classes with lots of getter and setter methods or public fields, and logic that mostly consists of doing some opaque, predefined action.
In object-oriented programming, we might say the core concepts are:
encapsulation
inheritance
polymorphism
What would that be in functional programming?
There's no community consensus on what the essential concepts in functional programming are. In Why Functional Programming Matters (PDF), John Hughes argues that they are higher-order functions and lazy evaluation. In Wearing the Hair Shirt: A Retrospective on Haskell, Simon Peyton Jones says the real essential is not laziness but purity. Richard Bird would agree. But there's a whole crowd of Scheme and ML programmers who are perfectly happy to write programs with side effects.
As someone who has practiced and taught functional programming for twenty years, I can give you a few ideas that are widely believed to be at the core of functional programming:
Nested, first-class functions with proper lexical scoping are at the core. This means you can create an anonymous function at run time, whose free variables may be parameters or local variables of an enclosing function, and you get a value you can return, put into data structures, and so on (see the sketch after this list). (This is the most important form of higher-order functions, but some higher-order functions, like qsort!, can be written in C, which is not a functional language.)
Means of composing functions with other functions to solve problems. Nobody does this better than John Hughes.
Many functional programmers believe purity (freedom from effects, including mutation, I/O, and exceptions) is at the core of functional programming. Many functional programmers do not.
Polymorphism, whether it is enforced by the compiler or not, is a core value of functional programmers. Confusingly, C++ programmers call this concept "generic programming." When polymorphism is enforced by the compiler it is generally a variant of Hindley-Milner, but the more powerful System F is also a powerful basis for functional languages. And with languages like Scheme, Erlang, and Lua, you can do functional programming without a static type system.
Finally, a large majority of functional programmers believe in the value of inductively defined data types, sometimes called "recursive types". In languages with static type systems these are generally known as "algebraic data types", but you will find inductively defined data types even in material written for beginning Scheme programmers. Inductively defined types usually ship with a language feature called pattern matching, which supports a very general form of case analysis. Often the compiler can tell you if you have forgotten a case. I wouldn't want to program without this language feature (a luxury once sampled becomes a necessity).
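Here is a small sketch touching several of these ideas at once: an inductively defined type, compiler-checked pattern matching, and a first-class function whose free variable is captured in a closure (the Shape type is just an example of mine):

data Shape = Circle Double | Rect Double Double

-- Pattern matching: the compiler can warn if a case is missing.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- A function created at run time whose free variable k is captured
-- from the enclosing scope, returned as an ordinary value.
scaleBy :: Double -> (Shape -> Shape)
scaleBy k = go
  where
    go (Circle r) = Circle (k * r)
    go (Rect w h) = Rect (k * w) (k * h)

main :: IO ()
main = print (map (area . scaleBy 2) [Circle 1, Rect 2 3])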
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. Functional programming has its roots in the lambda calculus, a formal system developed in the 1930s to investigate function definition, function application, and recursion. Many functional programming languages can be viewed as embellishments to the lambda calculus. - Wikipedia
In a nutshell,
Lambda Calculus
Higher Order Functions
Immutability
No side-effects
Not directly an answer to your question, but I'd like to point out that "object-oriented" and functional programming aren't necessarily at odds. The "core concepts" you cite have more general counterparts which apply just as well to functional programming.
Encapsulation, more generally, is modularisation. All purely functional languages that I know of support modular programming. You might say that those languages implement encapsulation better than the typical "OO" variety, since side-effects break encapsulation, and pure functions have no side-effects.
Inheritance, more generally, is logical implication, which is what a function represents. The canonical subclass -> superclass relation is a kind of implicit function. In functional languages, this is expressed with type classes or implicits (I consider implicits to be the more general of these two).
Polymorphism in the "OO" school is achieved by means of subtyping (inheritance). There is a more general kind of polymorphism known as parametric polymorphism (a.k.a. generics), which you will find to be supported by pure-functional programming languages. Additionally, some support "higher kinds", or higher-order generics (a.k.a. type constructor polymorphism).
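To illustrate the difference in Haskell (a sketch; the Describable class is invented): parametric polymorphism gives one definition for every element type, while a type class plays roughly the role subtyping plays in OO polymorphism.

-- Parametric polymorphism: one definition, any types a and b.
swapPair :: (a, b) -> (b, a)
swapPair (x, y) = (y, x)

-- A type class: an interface that existing types can be made to satisfy.
class Describable a where
  describe :: a -> String

instance Describable Bool where
  describe b = if b then "yes" else "no"

instance Describable Int where
  describe = show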
What I'm trying to say is that your "core concepts of OO" aren't specific to OO in any way. I, for one, would argue that there aren't any core concepts of OO, in fact.
Let me repeat the answer I gave at one discussion in the Bangalore Functional Programming group:
A functional program consists only of functions. Functions compute values from their inputs. We can contrast this with imperative programming, where as the program executes, the values of mutable locations change. In other words, in C or Java, a variable called X refers to a location whose value changes. But in functional programming, X is the name of a value (not a location). Anywhere that X is in scope, it has the same value (i.e., it is referentially transparent). In FP, functions are also values. They can be passed as arguments to other functions. This is known as higher-order functional programming. Higher-order functions let us model an amazing variety of patterns. For instance, look at the map function in Lisp. It represents a pattern where the programmer needs to do 'something' to every element of a list. That 'something' is encoded as a function and passed as an argument to map.
As we saw, the most notable feature of FP is its side-effect freeness. If a function does something more than computing a value from its input, then it is causing a side effect. Such functions are not allowed in pure FP. It is easy to test side-effect-free functions. There is no global state to set up before running the test and there is no global state to check after running the test. Each function can be tested independently, just by providing its input and examining the return value. This makes it easy to write automated tests. Another advantage of side-effect freeness is that it gives you better control over parallelism.
Many FP languages treat recursion and iteration correctly. They do this by supporting something called tail recursion. What tail recursion is: if a function calls itself, and it is the last thing it does, the current stack frame is removed right away. In other words, if a function calls itself tail-recursively 1000 times, it does not grow the stack 1000 deep. This makes special looping constructs unnecessary in these languages.
Lambda Calculus is the most boiled-down version of an FP language. Higher-level FP languages like Haskell get compiled to Lambda Calculus. It has only three syntactic constructs, but it is still expressive enough to represent any abstraction or algorithm.
My opinion is that FP should be viewed as a meta-paradigm. We can write programs in any style, including OOP, using the simple functional abstractions provided by the Lambda Calculus.
Thanks,
-- Vijay
Original discussion link: http://groups.google.co.in/group/bangalore-fp/browse_thread/thread/4c2cfa7985d7eab3
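The tail-recursion point in the quoted answer is worth seeing concretely. A minimal Haskell sketch: the recursive call is the last thing go does, so the "loop" runs in constant stack space (the seq keeps the accumulator from piling up lazy thunks):

sumTo :: Int -> Int
sumTo n = go 0 1
  where
    -- go calls itself in tail position: no stack frame is kept around.
    go acc i
      | i > n     = acc
      | otherwise = acc `seq` go (acc + i) (i + 1)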
Abstraction, the process of making a function by parameterizing over some part of an expression.
Application, the process of evaluating a function by replacing its parameters with specific values.
At some level, that's all there is to it.
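Both fit on a line each in Haskell (a trivial sketch):

double :: Int -> Int
double = \x -> 2 * x   -- abstraction: parameterize over x in 2 * x

result :: Int
result = double 21     -- application: replace x with the value 21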
Though the question is old, I thought of sharing my view as a reference.
The core concept in FP is the FUNCTION.
FP gives you a KISS (Keep It Simple Sxxxxx) programming paradigm (once you get the FP ideas, you will literally start hating the OO paradigm).
Here is my simple comparison of FP with OO design patterns. It's my perspective on FP, so please correct me if there is any discrepancy from the actual.
There are many methods for representing the structure of a program (like UML class diagrams, etc.). I am interested in whether there is a convention that describes programs in a strict, mathematical way. I am especially interested in the use of mathematical notation for this purpose.
An example: classes are represented as sets (fields, properties) and functions (operating on the elements of the sets). A parent class's fields are a subset of the child class's. Functions are described in pseudocode which has to look like this and that...
I know that Z Notation has been used to some extent in the formal verification of software, such as the Tokeneer project.
Z Notation
Z Reference Manual
http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/0201558025
Yes, there is, Floyd-Hoare Logic.
There are a lot of ways, but I think most of them are inconvenient for expressing structure, since structure is often not expressible in standard mathematical concepts. The main exception is of course functional programming languages. Think about folds (catamorphisms), groups, algebras, etc.
For imperative programming I know of the existence of Z, which uses (pure and extended) lambda calculus, set theory and (first-order) predicate logic. However, I don't think it's very convenient. The only upside of using mathematics to express structure is the fact that you can prove things about it. But if you want to do that, take a look at JML, Spec#, or Eiffel.
Depends on what you're trying to accomplish, but going down this road with specific languages can get you into trouble.
For example, see the circle-ellipse discussion on C++ FAQ Lite.
This book applies the deductive method to programming by affiliating programs with the abstract mathematical theories that enable them to work. [...]
I believe that Elements of Programming by Alexander Stepanov and Paul McJones is pretty close to what you are looking for.
Concepts
A concept is a description of requirements on one or more types stated in terms of the existence and properties of procedures, type attributes, and type functions defined on the types.
Z, which has already been mentioned, is pretty much what you describe. There are some variants of it for object-oriented modelling, but I think you can get quite far with "standard Z's" schemas if you wish to model classes.
There's also Alloy, which is newer and inspired by Z. Its notation is perhaps a bit closer to object-orientation. It is also analysable, i.e. you can check the models you create whether they fulfill certain conditions, but it cannot prove that properties hold, just attempt to refute within a finite scope.
The article Dependable Software by Design is a nice introduction to Alloy and its ilk, along with a table of available similar tools.
You are looking for functional programming. There are several functional programming languages, and they are all based on a fundamental mathematical theory called the Lambda calculus. Programs written in a functional programming language such as LISP are a mathematical representation of themselves. ;-)
There is a mathematical language which actually describes a program, or rather its operations. You take the initial state and then transform this state until you reach the desired target state. The transformations yield the program code which must be executed.
See the Wikipedia article about Hoare logic.
The basic idea is that for every function (no matter if you put it into a class or into an old-style function), you have a pre- and a post-condition. For example, the precondition can be that you have an array which has >= 0 elements. The post-condition is that every element[i] must be <= element[j] for every i <= j.
The usual description would be "the function sorts the array". But the mathematical terms allow you to transform the input (which must match the precondition) into the output (which must match the postcondition).
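Written as a Hoare triple, that sorting contract might look like this (a sketch, with A the array and n its length):

\{\, n \ge 0 \,\} \quad \mathit{sort}(A) \quad \{\, \forall i\, \forall j.\ 0 \le i \le j < n \Rightarrow A[i] \le A[j] \,\}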
It's a bit unwieldy to use, especially for more complex programs but some of the examples are pretty impressive. Often, you get really compact code as the result which looks quite complex but works at first try.
I'd like to suggest Algebra of Programming. It's a calculational approach to programs, using Relational Algebra, and Galois Connections.
If you have further interest in this topic, you can find an amazing paper here by Shin-Cheng Mu and José Nuno Oliveira (slides).
Using Relational Algebra and First-Order Logic, also has a nice synergy with Alloy, Functional Programming, and Design by Contract (easily applied to Object-Oriented Programming).
I can enumerate many features of functional programming, but when my friend asked me, "Could you define functional programming for me?", I couldn't.
I would say that the defining point of pure functional programming is that all computation is done in functions with no side effects. That is, functions take inputs and return values, but do not change any hidden state. In this paradigm, functions more closely model their mathematical cousins.
This was nailed down for me when I started playing with Erlang, a language with a write-once stack. However, it should be clarified that there is a difference between a programming paradigm and a programming language. Languages that are generally referred to as functional provide a number of features that encourage or enforce the functional paradigm (e.g., Erlang with its write-once stack, higher-order functions, closures, etc.). However, the functional programming paradigm can be applied in many languages (with varying degrees of pain).
A lot of the definitions so far have emphasized purity, but there are many languages that are considered functional that are not at all pure (e.g., ML, Scheme). I think the key properties that make a language "functional" are:
Higher-order functions. Functions are a built-in datatype no different from integers and booleans. Anonymous functions are easy to create and idiomatic (e.g., lambdas).
Everything is an expression. In imperative languages, a distinction is made between statements, which mutate state and affect control flow, and expressions, which yield values. In functional languages (even impure functional languages), expression evaluation is the fundamental unit of execution.
Given these two properties, you naturally get the behavior we think of as functional (e.g., expressing computations in terms of folds and maps). Eliminating mutable state is a way to make things even more functional.
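A short Haskell sketch of both properties (the function names are mine):

-- Computation as folds and maps rather than loops.
sumOfSquares :: [Int] -> Int
sumOfSquares = foldr (+) 0 . map (^ 2)

-- Everything is an expression: even if/then/else yields a value.
parity :: Int -> String
parity n = if even n then "even" else "odd"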
From wikipedia:
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the imperative programming style that emphasizes changes in state.
Using a functional approach gives the following benefits:
Concurrent programming is much easier in functional languages.
Pure functions can never cause side effects - this makes unit testing much easier.
Hot Code Deployment in production environments is much easier.
Functional languages can be reasoned about mathematically.
Lazy evaluation provides potential for performance optimizations.
More expressive - closures, pattern matching, advanced type systems etc. allow programmers to 'say what they mean' more readily.
Brevity - for some classes of program a functional solution is significantly more concise.
There is a great article with more detail here.
Being able to enumerate the features is more useful than trying to define the term itself, as people will use the term "functional programming" in a variety of contexts with many shades of meaning across a continuum, whereas the individual features have individually crisper definitions that are more universally agreed upon.
Below are the features that come to mind. Most people use the term "functional programming" to refer to some subset of those features (the most common/important ones being "purity" and "higher-order functions").
FP features:
Purity (a.k.a. immutability, eschewing side-effects, referential transparency)
Higher-order functions (e.g. pass a function as a parameter, return it as a result, define anonymous function on the fly as a lambda expression)
Laziness (a.k.a. non-strict evaluation, most useful/usable when coupled with purity)
Algebraic data types and pattern matching
Closures
Currying / partial application
Parametric polymorphism (a.k.a. generics)
Recursion (more prominent as a result of purity)
Programming with expressions rather than statements (again, from purity)
...
The more features from the above list you are using, the more likely someone will label what you are doing "functional programming" (and the first two features--purity and higher-order functions--are probably worth the most extra bonus points towards your "FP score").
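As a tiny illustration of the currying / partial-application item above (a sketch):

add :: Int -> Int -> Int
add x y = x + y

-- Partial application: supplying only the first argument yields
-- a new function awaiting the second.
increment :: Int -> Int
increment = add 1

main :: IO ()
main = print (map increment [1, 2, 3])  -- [2,3,4]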
I have to add that functional programming tends to also abstract control structures of your program as well as the domain - e.g., you no longer do a 'for loop' on some list of things, but you 'map' it with some function to produce the output.
I think functional programming is a state of mind, as well as the definition given above.
There are two separate definitions:
The older definition (first-class functions) has been given by Chris Conway.
The newer definition (avoiding side effects like mutation) has been given by John Stauffer. This is more generally known as purely functional programming.
This is a source of much confusion...
It's like drawing a picture by using vectors instead of bitmaps - tell the painter how to change the picture instead of what the picture looks like at each step.
It's application of functions as opposed to changing the state.
I think John Stauffer mostly has the definition. I would also add that you need to be able to pass functions around. Essentially you need higher-order functions, meaning you can pass functions around easily (although passing blocks is good enough).
For example a very popular functional call is map. It is basically equivalent to
list is some list of items
OutList is some empty list
foreach item in list:
    OutList.append(function(item))
return OutList
so that code is expressed as map(function, list). The revolutionary concept is that function is itself a value you can pass around. JavaScript is a great example of a language with higher-order functions. Basically, functions can be treated like a variable and passed into functions or returned from functions. C++ and C have function pointers which can be used similarly. .NET delegates can also be used similarly.
then you can think of all sorts of cool abstractions...
Do you have a function AddItemsInList, MultiplyItemsInList, etc..?
Each function takes (List) and returns a single result
You could create (note, many languages do not allow you to pass + around as a function but it seems the clearest way to express the concept)....
AggregateItemsInList(List, combinefunction, StepFunction)
Increment functions work on indexes; it would be better to make them work on the list using list operations like next (and, for incTwo, next next if it exists)....
function incNormal(x) {
return x + 1
}
function incTwo(x) {
return x + 2
}
AggregateItemsInList(List, +, incNormal)
Want to do every other item?
AggegateItemsInList(List, +, incTwo)
Want to multiply?
AggregateItemsInList(List, *, incNormal)
Want to add exam scores together?
function AddScores (studenta, studentb) {
return studenta.score + studentb.score
}
AggregateItemsInList(ListOfStudents, AddScores, incNormal)
Higher-order functions are a very powerful abstraction. Instead of having to write custom methods for numbers, strings, students, etc., you have one aggregate method that loops through a list of anything, and you just have to create the addition operation for each data type.
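Here is a sketch of the AggregateItemsInList idea in Haskell, where passing + or * around needs no special support (aggregate and everyOther are my names for the aggregate method and the "every other item" stepping):

-- One aggregate method that loops through a list of anything.
aggregate :: (b -> a -> b) -> b -> [a] -> b
aggregate combine z = foldl combine z

-- Stepping by two, expressed as a list operation instead of indexes.
everyOther :: [a] -> [a]
everyOther (x:_:rest) = x : everyOther rest
everyOther xs         = xs

main :: IO ()
main = do
  print (aggregate (+) 0 [1 .. 10])               -- add items: 55
  print (aggregate (*) 1 [1 .. 5])                -- multiply items: 120
  print (aggregate (+) 0 (everyOther [1 .. 10]))  -- every other item: 25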