Terminology: Partial application where the unbound argument is a function? - functional-programming

... partial application (or partial function application) refers to the process of fixing a
number of arguments to a function, producing another function of smaller arity.
I would like to find out if there is a specific name for the following: (pseudo-code!)
// Given functions:
def f(a, b) := ...
def g(a, b) := ...
def h(a, b) := ...
// And a construct of the following:
def cc(F, A, B) := F(A, B) // cc calls its argument F with A and B as parameters
// Then doing Partial Application for cc:
def call_1(F) := cc(F, 42, "answer")
def call_2(F) := cc(F, 7, "lucky")
// And then calling different matching functions this way:
do call_1(f)
do call_1(g)
do call_2(g)
do call_2(h)
Is there a name for this in functional programming? Or is it just partial application where the unbound parameter just happens to be a function?

Actually, there's more to things like your call_N functions, beyond just partial application. Two things of note:
When you apply call_1 or call_2 to an argument, they can be immediately discarded; everything you do with them will be a tail call.
You could write similar functions that don't just apply the argument, but hold onto it for a while; this essentially lets the functions grab hold of their evaluation context, and gives you techniques for implementing complicated flow control via "jumping back" to previous contexts.
If you take the above two points and run with the concept, you'll eventually end up with continuation-passing style.
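For a concrete illustration of that progression, here is a minimal sketch in Julia: cc and call_1 mirror the question's pseudo-code, and add_cps is a hypothetical continuation-passing-style function I've added for illustration, not something from the original post.
# Partial application where the remaining argument is a function.
cc(F, A, B) = F(A, B)
call_1(F) = cc(F, 42, "answer")

call_1((n, s) -> "$n: $s")                 # evaluates to "42: answer"

# Continuation-passing style: hand the result to a continuation k instead of returning it.
add_cps(x, y, k) = k(x + y)
add_cps(1, 2, s -> println("sum = ", s))   # prints "sum = 3"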

Related

Multiple dispatch in julia with the same variable type

Usually, multiple dispatch in Julia is straightforward if one of the parameters in a function changes data type, for example Float64 vs Complex{Float64}. How can I implement multiple dispatch if the parameter is an integer, and I want two functions, one for even and another for odd values?
You may be able to solve this with a @generated function: https://docs.julialang.org/en/v1/manual/metaprogramming/#Generated-functions-1
But the simplest solution is to use an ordinary branch in your code:
function foo(x::MyType{N}) where {N}
    if isodd(N)
        return _oddfoo(x)
    else
        return _evenfoo(x)
    end
end
This may seem like a defeat for the type system, but if N is known at compile time, the compiler will actually select only the correct branch, and you will get static dispatch to the correct function, without loss of performance.
This is idiomatic, and as far as I know the recommended solution in most cases.
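To make that snippet runnable end to end, here is a hedged sketch: MyType is a hypothetical wrapper carrying N as a type parameter, and _oddfoo/_evenfoo are placeholder implementations; none of these definitions come from the original answer.
struct MyType{N} end

_oddfoo(x) = "odd"
_evenfoo(x) = "even"

function foo(x::MyType{N}) where {N}
    isodd(N) ? _oddfoo(x) : _evenfoo(x)
end

foo(MyType{3}())   # "odd"
foo(MyType{4}())   # "even"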
I expect that with type-based dispatch you are ultimately still making the call after a check on odd versus even, so the most economical code, without a run-time penalty, is to have the caller check the argument and call the proper function.
If you nevertheless have to be type based, for some reason unrelated to run-time efficiency, here is an example of such:
abstract type HasParity end

struct Odd <: HasParity
    i::Int64
    Odd(i::Integer) = new(isodd(i) ? i : error("not odd"))
end

struct Even <: HasParity
    i::Int64
    Even(i::Integer) = new(iseven(i) ? i : error("not even"))
end

parity(i) = iseven(i) ? Even(i) : Odd(i)

foo(i::Odd) = println("$i is odd.")
foo(i::Even) = println("$i is even.")

for n in 1:4
    k::HasParity = parity(n)
    foo(k)
end
So here's another option which I think is cleaner and more multiple-dispatch oriented (given by a coworker). Let's say N is the natural number to be checked, and I want two functions that do different things depending on whether N is even or odd. Thus
boolN = rem(N,2) == 0
(...)
function f1(::Val{true}, ...)
    (...)
end
function f1(::Val{false}, ...)
    (...)
end
and to call the function just do
f1(Val(boolN))
As @logankilpatrick pointed out, the dispatch system is type based. What you are dispatching on, though, is a well-established pattern known as a trait.
Essentially your code looks like
myfunc(num) = iseven(num) ? _even_func(num) : _odd_func(num)
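A minimal runnable version of the Val-based pattern might look like this; f1 and boolN are the names from the snippet above, while the method bodies are placeholders I made up for illustration.
f1(::Val{true}, N)  = "$N is even"
f1(::Val{false}, N) = "$N is odd"

N = 7
boolN = rem(N, 2) == 0
f1(Val(boolN), N)   # "7 is odd"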

What does the "Base" keyword mean in Julia?

I saw this example in the Julia language documentation. It uses something called Base. What is this Base?
immutable Squares
    count::Int
end
Base.start(::Squares) = 1
Base.next(S::Squares, state) = (state*state, state+1)
Base.done(S::Squares, s) = s > S.count;
Base.eltype(::Type{Squares}) = Int # Note that this is defined for the type
Base.length(S::Squares) = S.count;
Base is a module which defines many of the functions, types and macros used in the Julia language. You can view the files for everything it contains here or call whos(Base) to print a list.
In fact, these functions and types (which include things like sum and Int) are so fundamental to the language that they are included in Julia's top-level scope by default.
This means that we can just use sum instead of Base.sum every time we want to use that particular function. Both names refer to the same thing:
julia> sum === Base.sum
true

julia> @which sum  # show where the name is defined
Base
So why, you might ask, is it necessary to write things like Base.start instead of simply start?
The point is that start is just a name. We are free to rebind names in the top-level scope to anything we like. For instance start = 0 will rebind the name 'start' to the integer 0 (so that it no longer refers to Base.start).
Concentrating now on the specific example in the docs, if we simply wrote start(::Squares) = 1, then we find that we have created a new function with 1 method:
julia> start
start (generic function with 1 method)
But Julia's iterator interface (invoked using the for loop) requires us to add the new method to Base.start! We haven't done this and so we get an error if we try to iterate:
julia> for i in Squares(7)
           println(i)
       end
ERROR: MethodError: no method matching start(::Squares)
By updating the Base.start function instead by writing Base.start(::Squares) = 1, the iterator interface can use the method for the Squares type and iteration will work as we expect (as long as Base.done and Base.next are also extended for this type).
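Equivalently, instead of qualifying each definition with Base., you can import the names once and then extend them bare. A hedged sketch, assuming the Squares type above and the older (pre-1.0) iteration interface this question uses:
# Importing the names lets us add methods without the Base. prefix.
import Base: start, next, done

start(::Squares) = 1
next(S::Squares, state) = (state*state, state+1)
done(S::Squares, s) = s > S.count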
I'll grant that for something so fundamental, the explanation is buried a bit far down in the documentation, but http://docs.julialang.org/en/release-0.4/manual/modules/#standard-modules describes this:
There are three important standard modules: Main, Core, and Base.
Base is the standard library (the contents of base/). All modules
implicitly contain using Base, since this is needed in the vast
majority of cases.

How to keep elements in list through out the program in SML?

Suppose I have to update a list with each call to a function, such that the previous elements of the list are preserved.
Here is my attempt:
local
    val all_list = [];
in
    fun insert (x:int) : string = int2string (list_len (all_list @ [x]))
end;
The problem is that each time I call insert, I get the output "1", which indicates that the list is reset to [] again.
However, I was expecting the output "1" for the first call to insert, "2" for the second call, etc.
I am not able to come up with a workaround. How should it be done?
You need to use side-effects.
Most of the time functional programmers prefer to use pure functions, which don't have side effects. Your implementation is a pure function, so it will always return the same value for the same input (and in your case it returns the same value for any input).
You can deal with that by using a reference.
A crash course on references in Standard ML:
use ref to create a new reference, ref has type 'a -> 'a ref, so it packs an arbitrary value into a reference cell, which you can modify later;
! is for unpacking a reference: (!) : 'a ref -> 'a, in most imperative languages this operation is implicit, but not in SML or OCaml;
(:=) : 'a ref * 'a -> unit is an infix operator used for modifying references, here is how you increment the contents of an integer reference: r := !r + 1.
The above gives us the following code (I prepend xs onto the list, instead of appending them):
local
    val rxs = ref []
in
    fun insert (x:int) : string =
        (rxs := x :: !rxs;
         Int.toString (length (!rxs)))
end
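A small usage sketch, assuming the definition above has been evaluated at the top level; each call now observes the updated reference, so the reported length grows:
val s1 = insert 1;  (* "1" *)
val s2 = insert 2;  (* "2" *)
val s3 = insert 3;  (* "3" *)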
Values are immutable in SML. insert is defined in a context in which the value of all_list is [] and nothing about your code changes that value.
all_list @ [x]
doesn't mutate the value all_list -- it returns a brand new list, one which your code promptly discards (after taking its length).
Using reference types (one of SML's impure features) it is possible to do what you seem to be trying to do, but the resulting code wouldn't be idiomatic SML. It would break referential transparency (the desirable feature of functional programming languages where a function called with identical inputs yields identical outputs).

julia introspection - get name of variable passed to function

In Julia, is there any way to get the name of a variable passed to a function?
x = 10
function myfunc(a)
    # do something here
end
assert(myfunc(x) == "x")
Do I need to use macros or is there a native method that provides introspection?
You can grab the variable name with a macro:
julia> macro mymacro(arg)
           string(arg)
       end

julia> @mymacro(x)
"x"

julia> @assert(@mymacro(x) == "x")
but as others have said, I'm not sure why you'd need that.
Macros operate on the AST (code tree) during compile time, and the x is passed into the macro as the Symbol :x. You can turn a Symbol into a string and vice versa. Macros replace code with code, so the @mymacro(x) is simply pulled out and replaced with string(:x).
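If you want to see that replacement directly, macroexpand can show the code a macro call turns into; on the older Julia versions this answer targets it took a bare quoted expression (on current versions you would use @macroexpand instead):
julia> macroexpand(:(@mymacro x))   # show what the macro call expands to
"x"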
Ok, contradicting myself: technically this is possible in a very hacky way, under one (fairly limiting) condition: the function must have only one method signature. The idea is very similar to the answers to such questions for Python. Before the demo, I must emphasize that these are internal compiler details and are subject to change. Briefly:
julia> function foo(x)
           bt = backtrace()
           fobj = eval(current_module(), symbol(Profile.lookup(bt[3]).func))
           Base.arg_decl_parts(fobj.env.defs)[2][1][1]
       end
foo (generic function with 1 method)
julia> foo(1)
"x"
Let me re-emphasize that this is a bad idea, and should not be used for anything! (well, except for backtrace display). This is basically "stupid compiler tricks", but I'm showing it because it can be kind of educational to play with these objects, and the explanation does lead to a more useful answer to the clarifying comment by @ejang.
Explanation:
bt = backtrace() generates a ... backtrace ... from the current position. bt is an array of pointers, where each pointer is the address of a frame in the current call stack.
Profile.lookup(bt[3]) returns a LineInfo object with the function name (and several other details about each frame). Note that bt[1] and bt[2] are in the backtrace-generation function itself, so we need to go further up the stack to get the caller.
Profile.lookup(...).func returns the function name (the symbol :foo)
eval(current_module(), Profile.lookup(...)) returns the function object associated with the name :foo in the current_module(). If we modify the definition of function foo to return fobj, then note the equivalence to the foo object in the REPL:
julia> function foo(x)
           bt = backtrace()
           fobj = eval(current_module(), symbol(Profile.lookup(bt[3]).func))
       end
foo (generic function with 1 method)
julia> foo(1) == foo
true
fobj.env.defs returns the first Method entry from the MethodTable for foo/fobj
Base.arg_decl_parts is a helper function (defined in methodshow.jl) that extracts argument information from a given Method.
the rest of the indexing drills down to the name of the argument.
Regarding the restriction that the function have only one method signature, the reason is that multiple signatures will all be listed (see defs.next) in the MethodTable. As far as I know there is no currently exposed interface to get the specific method associated with a given frame address. (as an exercise for the advanced reader: one way to do this would be to modify the address lookup functionality in jl_getFunctionInfo to also return the mangled function name, which could then be re-associated with the specific method invocation; however, I don't think we currently store a reverse mapping from mangled name -> Method).
Note also that (1) backtraces are slow (2) there is no notion of "function-local" eval in Julia, so even if one has the variable name, I believe it would be impossible to actually access the variable (and the compiler may completely elide local variables, unused or otherwise, put them in a register, etc.)
As for the IDE-style introspection use mentioned in the comments: foo.env.defs as shown above is one place to start for "object introspection". From the debugging side, Gallium.jl can inspect DWARF local variable info in a given frame. Finally, JuliaParser.jl is a pure-Julia implementation of the Julia parser that is actively used in several IDEs to introspect code blocks at a high level.
Another method is to use the function's vinfo. Here is an example:
function test(argx::Int64)
    vinfo = code_lowered(test, (Int64,))
    string(vinfo[1].args[1][1])
end
test (generic function with 1 method)
julia> test(10)
"argx"
The above depends on knowing the signature of the function, but this is a non-issue if it is coded within the function itself (otherwise some macro magic could be needed).

How to call two functions and use their results as arguments for each other?

I have code:
let join a ~with':b ~by:key =
  let rec a' = link a ~to':b' ~by:key
  and b' = link b ~to':a' ~by:(Key.opposite key) in
  a'
and the compilation result for it is:
Error: This kind of expression is not allowed as right-hand side of `let rec'
I can rewrite it to:
let join a ~with':b ~by:key =
  let rec a'() = link a ~to':(b'()) ~by:key
  and b'() = link b ~to':(a'()) ~by:(Key.opposite key) in
  a'()
It is a compilable variant, but the implemented function is infinitely recursive, and that is not what I need.
My questions: Why is the first implementation invalid? How can I call two functions and use their results as arguments for each other?
My compiler version = 4.01.0
The answer to your first question is given in Section 7.3 of the OCaml manual. Here's what it says:
Informally, the class of accepted definitions consists of those definitions where the defined names occur only inside function bodies or as argument to a data constructor.
Your names appear as function arguments, which isn't supported.
I suspect the reason is that you can't assign a semantics otherwise. It seems to me the infinite computation that you see is impossible to avoid in general.
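To address the second question in a hedged way (this is not from the answer above): a common workaround is to break the cycle with a mutable field and tie the knot after both values exist. A minimal sketch with a hypothetical node type, standing in for whatever link actually builds:
(* A record with a mutable back-link; None until the knot is tied. *)
type node = { name : string; mutable linked : node option }

(* Build both values first, then connect them to each other. *)
let tie a b =
  a.linked <- Some b;
  b.linked <- Some a

let () =
  let a = { name = "a"; linked = None } in
  let b = { name = "b"; linked = None } in
  tie a b;
  (match a.linked with
   | Some n -> print_endline n.name   (* prints "b" *)
   | None -> ())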
