What is the difference between functions and data?

In functional programming, we tend to distinguish between data and functions, but what is the difference?
If I consider a constant, I could think of it as a function, which just returns the same value:
(def x 5)
So what is the distinction between data and a function? I fail to see the difference.

Data
Data is a value (with a specific type).
For example, 5 is a value of type Integer, and "abc" is a value of type String. A composite value such as [5 "abc"] has the type Vector.
Two data values of the same type can always be compared for equality.
Data is never executed. That is, the thread of control (aka program counter or PC) never enters the data structure.
Function (aka "code")
A function's only type is "code".
Two functions are never equal, even if they are duplicates of each other.
A function produces a value (with a specific type) when it is executed (possibly with arguments).
Execution means the thread of control enters the code data structure. The code and data values encountered there have complete control over any side-effects that occur, as well as the return value.
Both compiled and interpreted code produce the same results; the differences between them are implementation details that trade off complexity against speed.
Eval
The (eval ...) form accepts data as input and returns a function as output. The returned function can be executed (i.e. invoked) so that the thread of control enters it.
For clarity, the above elides details such as the reader, etc.
Macros are best viewed as a compiler extension embedded within the code, and do not affect the data vs code distinction.
Postscript
It occurred to me that the original question has not been fully answered. Consider the following:
; A Clojure Var pointing to the value 5
(def five 5)
; A Clojure Var pointing to a function that always returns the value 5
(def ->five (fn [& args] 5))
and then use these 2 Vars:
five => 5
(->five) => 5
The parentheses make all the difference.
See also:
Brave Clojure
LispCast

In languages with the property of homoiconicity, code is data and data is code.
This code/data duality blurs the distinction between the two.
(I think your question is really about the difference between a lambda and data, given that a lambda itself is also just a data structure that has to be executed.)
In homoiconic languages, data can become a lambda (if it contains the instructions for one) and vice versa.
So perhaps the distinction lies only in their type (function vs. any other data structure or primitive type).
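The point generalizes beyond Lisps. As a sketch, Julia exposes a similar duality through quoting: a quoted expression is ordinary data (a tree you can inspect like any other value) until it is handed to eval, at which point it becomes code:
ex = :(1 + 2)    # an expression is plain data: a tree of symbols and literals
typeof(ex)       # Expr
ex.args          # [:+, 1, 2] -- inspectable like any other value
eval(ex)         # 3 -- only now does the thread of control enter it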

Related

Should I return Multiple Values with caution?

In Practical Common Lisp, Peter Seibel writes:
The mechanism by which multiple values are returned is implementation dependent just like the mechanism for passing arguments into functions is. Almost all language constructs that return the value of some subform will "pass through" multiple values, returning all the values returned by the subform. Thus, a function that returns the result of calling VALUES or VALUES-LIST will itself return multiple values--and so will another function whose result comes from calling the first function. And so on.
The phrase "implementation dependent" does worry me.
My understanding is that the following code might return just the primary value:
> (defun f ()
    (values 'a 'b))
> (defun g ()
    (f))
> (g) ; ==> a ? or a b ?
If so, does it mean that I should use this feature sparingly?
Any help is appreciated.
It's implementation-dependent in the sense that how multiple values are returned at the CPU level may vary from implementation to implementation. However, the semantics are well-specified at the language level and you generally do not need to be concerned about the low-level implementation.
See section 2.5, "Function result protocol", of The Movitz development platform for an example of how one implementation handles multiple return values:
The CPU’s carry flag (i.e. the CF bit in the eflags register) is used to signal whether anything other than precisely one value is being returned. Whenever CF is set, ecx holds the number of values returned. When CF is cleared, a single value in eax is implied. A function’s primary value is always returned in eax. That is, even when zero values are returned, eax is loaded with nil.
It's this kind of low-level detail that may vary from implementation to implementation.
One thing to be aware of: there is a limit on the number of values that can be returned in a specific Common Lisp implementation.
The variable MULTIPLE-VALUES-LIMIT holds the implementation- and machine-specific maximum number of values that can be returned. The standard says it should not be smaller than 20. SBCL has a very large value on my computer, while LispWorks has only 51, ECL has 64 and CLISP has 128.
But I can't remember ever seeing Lisp code that wants to return more than five values.

Parameters of function in Julia

Does anyone know why Julia chose a function design in which the parameters given as inputs cannot be modified? If we want to modify them anyway, we are forced through a very artificial process: representing the data in the form of a ridiculous single-element array.
Ada, which had the same kind of limitation, abandoned it in its 2012 redesign, to the great satisfaction of its users. A small keyword (like Ada's out) could easily indicate that modifications to a parameter should remain visible after the call.
From my experience in Julia it is useful to understand the difference between a value and a binding.
Values
Each value in Julia has a concrete type and a location in memory. A value can be mutable or immutable. In particular, when you define your own composite type, you decide whether objects of this type should be mutable (mutable struct) or immutable (struct).
Julia's built-in types follow the same split: some are mutable (e.g. arrays) and others are immutable (e.g. numbers, strings). There are design trade-offs between the two. From my perspective, two major benefits of immutable values are:
if a compiler works with immutable values it can perform many optimizations to speed up code;
a user can be sure that passing an immutable to a function will not change it, and such encapsulation can simplify code analysis.
If you want to wrap an immutable value in a mutable wrapper, the standard way to do it is to use Ref, like this:
julia> x = Ref(1)
Base.RefValue{Int64}(1)
julia> x[]
1
julia> x[] = 10
10
julia> x
Base.RefValue{Int64}(10)
julia> x[]
10
You can pass such values to a function and modify them inside. Of course Ref introduces a different type, so the method implementation has to be a bit different, as in the sketch below.
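A minimal sketch of what such a method can look like; increment! is a hypothetical helper, not a Base function:
increment!(r::Ref{Int}) = (r[] += 1; r)   # mutates the caller's value through the Ref wrapper

x = Ref(1)
increment!(x)
x[]   # 2 -- the caller observes the change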
Variables
A variable is a name bound to a value. In general, except for some special cases like:
rebinding a variable from module A in module B;
redefining some constants, e.g. trying to reassign a function name with a non-function value;
rebinding a variable that has a specified type of allowed values with a value that cannot be converted to this type;
you can rebind a variable to point to any value you wish. Rebinding is performed most of the time using = or some special constructs (like in for, let or catch statements).
Now, getting to the point: a function is passed a value, not a binding. You can modify the binding of a function parameter (in other words, you can rebind the value that the parameter points to), but this parameter is a fresh variable whose scope lies inside the function.
If, for instance, we wanted a call like:
x = 10
f(x)
to change the binding of the variable x, that is impossible, because f does not even know of the existence of x. It only gets passed its value, as the sketch below shows. In particular, as I noted above, adding such functionality would break the rule that module A cannot rebind variables from module B, since f might be defined in a different module than the one where x is defined.
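A short sketch of that point (rebind is a made-up name for illustration): rebinding the parameter inside the function leaves the caller's variable untouched:
function rebind(y)
    y = 0          # rebinds the local name y only
    return y
end

x = 10
rebind(x)          # 0
x                  # still 10 -- the caller's binding is unaffected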
What to do
Actually, from my experience, it is easy enough to work without this feature:
What I typically do is simply return a value from a function and assign it to a variable. In Julia this is very easy because of tuple-unpacking syntax, e.g. x, y, z = f(x, y, z), where f can be defined e.g. as f(x, y, z) = 2x, 3y, 4z;
You can use macros, which get expanded before code execution and thus can rebind a variable in the caller, e.g. macro plusone(x) return esc(:($x = $x + 1)) end; now writing y = 100; @plusone(y) will change the binding of y (see the runnable sketch after this list);
Finally, you can use Ref as discussed above (or any other mutable wrapper, as you noted in your question).
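Here is the macro route from the list above expanded into a runnable sketch; esc is what lets the expansion rebind the caller's variable rather than a hygienic copy:
macro plusone(x)
    return esc(:($x = $x + 1))   # esc escapes hygiene so the caller's variable is rebound
end

y = 100
@plusone(y)
y   # 101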
"Does anyone know the reasons why Julia chose a design of functions where the parameters given as inputs cannot be modified?" asked by Schemer
Your question is wrong because it assumes the wrong thing: that parameters are variables.
When you pass things to a function, those things are often values and not variables.
for example:
function double(x::Int64)
    2 * x
end
Now what happens when you call it using
double(4)
What would be the point of the function modifying its parameter x? It is pointless. Furthermore, the function has no idea how it is called.
Furthermore, Julia is built for speed.
A function that modifies its parameters is hard to optimise because it causes side effects. A side effect is when a procedure/function changes objects outside of its scope.
If a function does not modify the variables passed to it, then you can be safe knowing that:
the variable will not have its value changed
the result of the function can be optimised to a constant
not calling the function will not break the program's behaviour
Those three factors are what make FUNCTIONAL languages fast and NON-FUNCTIONAL languages slow.
Furthermore, when you move into parallel or multi-threaded programming, you absolutely DO NOT WANT a variable having its value changed without you (the programmer) knowing about it.
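Julia's own naming convention makes this explicit: a function that mutates one of its arguments is conventionally named with a trailing !, so mutation never happens without the programmer seeing it at the call site. A small sketch (add_one and add_one! are made-up names):
add_one(v)  = v .+ 1            # pure: returns a new array, v is untouched
add_one!(v) = (v .+= 1; v)      # mutates the caller's array in place

a = [1, 2, 3]
add_one(a);  a   # still [1, 2, 3]
add_one!(a); a   # now [2, 3, 4]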
"How would you implement with your proposed macro, the function F(x) which returns a boolean value and modifies c by c:= c + 1. F can be used in the following piece of Ada code : c:= 0; While F(c) Loop ... End Loop;" asked by Schemer
I would write
function F(x)
    boolean_result = perform_some_logic()
    return (boolean_result, x + 1)
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end
"Unfortunately no, because, and I should have said that, c has to take again the value 0 when F return the value False (c increases as long the Loop lives and return to 0 when it dies). " said Schemer
Then I would write
function F(x)
    boolean_result = perform_some_logic()
    if boolean_result
        return (true, x + 1)
    else
        return (false, 0)
    end
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end

Puzzling results for Julia typeof

I am puzzled by the following results of typeof in the Julia 1.0.0 REPL:
# This makes sense.
julia> typeof(10)
Int64
# This surprised me.
julia> typeof(function)
ERROR: syntax: unexpected ")"
# No answer at all for return example and no error either.
julia> typeof(return)
# In the next two examples the REPL returns the input code.
julia> typeof(in)
typeof(in)
julia> typeof(typeof)
typeof(typeof)
# The "for" word returns an error like the "function" word.
julia> typeof(for)
ERROR: syntax: unexpected ")"
The Julia 1.0.0 documentation says for typeof
"Get the concrete type of x."
The typeof(function) example is the one that really surprised me. I expected a function to be a first-class object in Julia and have a type. I guess I need to understand types in Julia.
Any suggestions?
Edit
Per some comment questions below, here is an example based on a small function:
julia> function test() return "test"; end
test (generic function with 1 method)
julia> test()
"test"
julia> typeof(test)
typeof(test)
Based on this example, I would have expected typeof(test) to return generic function, not typeof(test).
To be clear, I am not a hardcore user of the Julia internals. What follows is an answer designed to be (hopefully) an intuitive explanation of what functions are in Julia for the non-hardcore user. I do think this (very good) question could also benefit from a more technical answer provided by one of the more core developers of the language. Also, this answer is longer than I'd like, but I've used multiple examples to try and make things as intuitive as possible.
As has been pointed out in the comments, function itself is a reserved keyword, not an actual function per se, and so is orthogonal to the actual question. This answer is intended to address your edit to the question.
Since Julia v0.6+, Function is an abstract supertype, much in the same way that Number is an abstract supertype. All functions, e.g. mean, user-defined functions, and anonymous functions, are subtypes of Function, in the same way that Float64 and Int are subtypes of Number.
This structure is deliberate and has several advantages.
Firstly, for reasons I don't fully understand, structuring functions in this way was the key to allowing anonymous functions in Julia to run just as fast as in-built functions from Base. See here and here as starting points if you want to learn more about this.
Secondly, because each function is its own subtype, you can now dispatch on specific functions. For example:
f1(f::T, x) where {T<:typeof(mean)} = f(x)
and:
f1(f::T, x) where {T<:typeof(sum)} = f(x) + 1
are different dispatch methods for the function f1
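Here is a runnable version of those two methods; note that in Julia 1.0 mean lives in the Statistics standard library rather than Base, hence the extra import:
using Statistics   # mean moved here in Julia 1.0

f1(f::T, x) where {T<:typeof(mean)} = f(x)
f1(f::T, x) where {T<:typeof(sum)}  = f(x) + 1

f1(mean, [1, 2, 3])   # 2.0
f1(sum,  [1, 2, 3])   # 7  (6 + 1, via the second method)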
So, given all this, why does e.g. typeof(sum) return typeof(sum), especially given that typeof(Float64) returns DataType? The issue here is that, roughly speaking, from a syntactic perspective sum needs to serve two purposes simultaneously. It needs to be a value, like e.g. 1.0, albeit one that is used to call the sum function on some input. But it also needs to be a type name, like Float64.
Obviously, it can't do both at the same time. So sum on its own behaves like a value. You can write f = sum ; f(randn(5)) to see how it behaves like a value. But we also need some way of representing the type of sum that will work not just for sum, but for any user-defined function, and any anonymous function. The developers decided to go with the (arguably) simplest option and have the type of sum print literally as typeof(sum), hence the behaviour you observe. Similarly if I write f1(x) = x ; typeof(f1), that will also return typeof(f1).
Anonymous functions are a bit more tricky, since they are not named as such. What should we do for typeof(x -> x^2)? What actually happens is that when you build an anonymous function, it is stored as a temporary global variable in the module Main, and given a number that serves as its type for lookup purposes. So if you write f = (x -> x^2), you'll get something back like #3 (generic function with 1 method), and typeof(f) will return something like getfield(Main, Symbol("##3#4")), where you can see that Symbol("##3#4") is the temporary type of this anonymous function stored in Main. (a side effect of this is that if you write code that keeps arbitrarily generating the same anonymous function over and over you will eventually overflow memory, since they are all actually being stored as separate global variables of their own type - however, this does not prevent you from doing something like this for n = 1:largenumber ; findall(y -> y > 1.0, x) ; end inside a function, since in this case the anonymous function is only compiled once at compile-time).
Relating all of this back to the Function supertype, you'll note that typeof(sum) <: Function returns true, showing that the type of sum, aka typeof(sum) is indeed a subtype of Function. And note also that typeof(typeof(sum)) returns DataType, in much the same way that typeof(typeof(1.0)) returns DataType, which shows how sum actually behaves like a value.
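Those claims are easy to verify at the REPL:
julia> typeof(sum) <: Function
true
julia> typeof(typeof(sum))
DataType
julia> typeof(typeof(1.0))
DataType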
Now, given everything I've said, all the examples in your question now make sense. typeof(function) and typeof(for) return errors as they should, since function and for are reserved syntax. typeof(typeof) and typeof(in) correctly return (respectively) typeof(typeof), and typeof(in), since typeof and in are both functions. Note of course that typeof(typeof(typeof)) returns DataType.

Hidden remote function call of a local binding in OCaml

Given the following example for generating a lazy list number sequence:
type 'a lazy_list = Node of 'a * (unit -> 'a lazy_list);;
let make =
  let rec gen i =
    Node (i, fun () -> gen (i + 1))
  in
  gen 0
;;
I asked myself the following questions when trying to understand how the example works (obviously I could not answer myself and therefore I am asking here)
When calling let Node(_, f) = make and then f(), why does the call of gen 1 inside f() succeed although gen is a local binding only existing in make?
Shouldn't the created Node be completely unaware of the existence of gen? (Obviously not since it works.)
How is a construction like this being handled by the compiler?
First of all, the questions you are asking have nothing to do with the concept of laziness, so we can disregard that particular issue to simplify the discussion.
As Jeffrey noted in the comment to your question, the answer is simple - it is a closure.
But let me extend it a little bit. Functional programming languages, as well as many other modern languages including Python and C++, allow you to define a function in the scope of another function and to refer to the variables available in the scope of the enclosing function. These variables are called captured variables, and the functional object created together with the captured values is called a closure.
From the compiler's perspective, the implementation is rather simple (to understand). A closure is a normal value that contains the code to be executed, as well as pointers to the extra values that were captured from the outer scope. Since OCaml is a garbage-collected language, the captured values are preserved as long as they are referenced from a live object. In C++ the story is much more complicated, as C++ doesn't have a GC, but that is a completely different story.
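The same capture mechanism can be sketched outside OCaml as well. Here is a rough Julia transliteration of the example (LazyList and make are illustrative names): the thunk stored in each node captures both gen and i from the enclosing scope, which is exactly why the later call succeeds:
struct LazyList
    head::Int
    tail::Function          # a zero-argument thunk: () -> LazyList
end

function make()
    gen(i) = LazyList(i, () -> gen(i + 1))   # the thunk captures gen and i
    return gen(0)
end

node = make()
node.head            # 0
node.tail().head     # 1 -- gen was captured, so the call succeeds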
Shouldn't the created Node be completely unaware of the existence of gen? (Obviously not since it works.)
The created Node is an object that has two pointers: a pointer to the initial object i, and a pointer to the anonymous function fun () -> gen (i + 1). The anonymous function itself has a pointer to the same initial object i. In our particular case i is an integer, so instead of being referenced through a pointer the value of i is represented inline, but these are details that are irrelevant to the question.

Reflecting on a Type parameter

I am trying to create a function
import Language.Reflection
foo : Type -> TT
I tried it by using the reflect tactic:
foo = proof
  {
    intro t
    reflect t
  }
but this reflects on the variable t itself:
*SOQuestion> foo
\t => P Bound (UN "t") (TType (UVar 41)) : Type -> TT
Reflection in Idris is a purely syntactic, compile-time only feature. To predict how it will work, you need to know about how Idris converts your program to its core language. Importantly, you won't be able to get ahold of reflected terms at runtime and reconstruct them like you would with Lisp. Here's how your program is compiled:
Internally, Idris creates a hole that will expect something of type Type -> TT.
It runs the proof script for foo in this state. We start with no assumptions and a goal of type Type -> TT. That is, there's a term being constructed which looks like ?rhs : Type => TT . rhs. The ?foo : ty => body syntax shows that there's a hole called foo whose eventual value will be available inside of body.
The step intro t creates a function whose argument is t : Type - this means that we now have a term like ?foo_body : TT . \t : Type => foo_body.
The reflect t step then fills the current hole by taking the term on its right-hand side and converting it to a TT. That term is in fact just a reference to the argument of the function, so you get the variable t. reflect, like all other proof script steps, only has access to the information that is available directly at compile time. Thus, the result of filling in foo_body with the reflection of the term t is P Bound (UN "t") (TType (UVar (-1))).
If you could do what you are wanting here, it would have major consequences both for understanding Idris code and for running it efficiently.
The loss in understanding would come from the inability to use parametricity to reason about the behavior of functions based on their types. All functions would effectively become potentially ad-hoc polymorphic, because they could (say) run differently on lists of strings than on lists of ints.
The loss in performance would come from representing enough type information to do the reflection. After Idris code is compiled, there is no type information left in it (unlike in a system such as the JVM or .NET or a dynamically typed system such as Python, where types have a runtime representation that code can access). In Idris, types can be very large, because they can contain arbitrary programs - this means that far more information would have to be maintained, and computation occurring at the type level would also have to be preserved and repeated at runtime.
If you're wanting to reflect on the structure of a type for further proof automation at compile time, take a look at the applyTactic tactic. Its argument should be a function that takes a reflected context and goal and gives back a new reflected tactic script. An example can be seen in the Data.Vect source.
So I suppose the summary is that Idris can't do what you want, and it probably never will be able to, but you might be able to make progress another way.
