I have a pure function that takes 18 arguments, processes them, and returns an answer.
Inside this function I call many other pure functions, and those functions call further pure functions, nested as deep as six levels.
This style of composition is cumbersome to test, because the top-level functions, in addition to their own logic, have to gather parameters for the inner functions.
# Minimal conceptual example
main_function(a, b, c, d, e) = begin
    x = pure_function_1(a, b, d)
    y = pure_function_2(a, c, e, x)
    z = pure_function_3(b, c, y, x)
    answer = pure_function_4(x, y, z)
    return answer
end
# real example
calculate_time_dependant_losses(
    Ap,
    u,
    Ac,
    e,
    Ic,
    Ep,
    Ecm_t,
    fck,
    RH,
    T,
    cementClass::Char,
    ρ_1000,
    σ_p_start,
    f_pk,
    t0,
    ts,
    t_start,
    t_end,
) = begin
    μ = σ_p_start / f_pk
    fcm = fck + 8
    Fr = σ_p_start * Ap
    _σ_pb = σ_pb(Fr, Ac, e, Ic)
    _ϵ_cs_t_start_t_end = ϵ_cs_ti_tj(ts, t_start, t_end, Ac, u, fck, RH, cementClass)
    _ϕ_t0_t_start_t_end = ϕ_t0_ti_tj(RH, fcm, Ac, u, T, cementClass, t0, t_start, t_end)
    _Δσ_pr_t_start_t_end =
        Δσ_pr(σ_p_start, ρ_1000, t_end, μ) - Δσ_pr(σ_p_start, ρ_1000, t_start, μ)
    denominator =
        1 +
        (1 + 0.8 * _ϕ_t0_t_start_t_end) * (1 + (Ac * e^2) / Ic) * ((Ep * Ap) / (Ecm_t * Ac))
    shrinkageLoss = (_ϵ_cs_t_start_t_end * Ep) / denominator
    relaxationLoss = (0.8 * _Δσ_pr_t_start_t_end) / denominator
    creepLoss = (Ep * _ϕ_t0_t_start_t_end * _σ_pb) / Ecm_t / denominator
    return shrinkageLoss + relaxationLoss + creepLoss
end
I see examples of functional composition (dot chaining, pipe operator, etc.) with single-argument functions.
Is it practical to compose the above function using functional programming? If yes, how?
The standard and simple way is to recast your example so that it can be written as
# Minimal conceptual example, re-cast
main_function(a, b, c, d, e) = begin
    x = pure_function_1'(a, b, d)()
    y = pure_function_2'(a, c, e)(x)
    z = pure_function_3'(b, c)(y)    # I presume you meant `y` here
    answer = pure_function_4(z)      # and here, `z`
    return answer
end
Meaning, we use functions that return functions of one argument. Now these functions can be easily composed, using e.g. a forward-composition operator (f >>> g)(x) = g(f(x)):
# Minimal conceptual example, re-cast, composed
main_function(a, b, c, d, e) = begin
    composed_calculation =
        pure_function_1'(a, b, d) >>>
        pure_function_2'(a, c, e) >>>
        pure_function_3'(b, c) >>>
        pure_function_4
    answer = composed_calculation()
    return answer
end
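For concreteness, here is roughly what this re-cast could look like in actual Julia. It is only a sketch: the pure_function_* bodies below are dummy stand-ins, the pseudocode's ' is written as the Unicode prime ′ (a bare ' is the adjoint operator in Julia), and Julia's >>> (normally the bit-shift operator) is given an extra method for composing functions; none of these definitions come from the question.
# Forward composition: (f >>> g)(x) == g(f(x)).
# >>> is normally bit shift, so we add a method for functions.
Base.:(>>>)(f::Function, g::Function) = x -> g(f(x))

# Dummy stand-ins for the real inner functions
pure_function_1(a, b, d)    = a + b + d
pure_function_2(a, c, e, x) = a * c + e + x
pure_function_3(b, c, y)    = b - c + y
pure_function_4(z)          = 2z

# "Primed" versions: capture some arguments now, take the piped value later
pure_function_1′(a, b, d) = _ -> pure_function_1(a, b, d)
pure_function_2′(a, c, e) = x -> pure_function_2(a, c, e, x)
pure_function_3′(b, c)    = y -> pure_function_3(b, c, y)

main_function(a, b, c, d, e) = begin
    composed_calculation =
        pure_function_1′(a, b, d) >>>
        pure_function_2′(a, c, e) >>>
        pure_function_3′(b, c) >>>
        pure_function_4
    return composed_calculation(nothing)   # the first stage ignores its input
end

main_function(1, 2, 3, 4, 5)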
If you really need the various x, y, and z at different points during the composed computation, you can pass them around in a compound, record-like data structure. We can avoid the coupling of this argument handling if we have extensible records:
# Minimal conceptual example, re-cast, composed, args packaged
main_function(a, b, c, d, e) = begin
    composed_calculation =
        pure_function_1'(a, b, d) >>> put('x') >>>
        get('x') >>> pure_function_2'(a, c, e) >>> put('y') >>>
        get({'x';'y'}) >>> pure_function_3'(b, c) >>> put('z') >>>
        get({'x';'y';'z'}) >>> pure_function_4
    answer = composed_calculation(empty_initial_state)
    return value(answer)
end
The passed-around "state" would consist of two fields: a value and an extensible record. The functions would accept this state, use the value as their additional input, and leave the record unchanged. get would take the specified field out of the record and put it into the "value" field of the state. put would store the current value in the extensible record under the given field name:
put(field_name) = ( {value:v ; record:r} =>
{v ; put_record_field( r, field_name, v)} )
get(field_name) = ( {value:v ; record:r} =>
{get_record_field( r, field_name) ; r} )
pure_function_2'(a, c, e) = ( {value:v ; record:r} =>
{pure_function_2(a, c, e, v); r} )
value(r) = get_record_field( r, value)
empty_initial_state = { novalue ; empty_record }
All in pseudocode.
Augmented function application, and hence composition, is one way of thinking about "what monads are". Passing around the pairing of a produced/expected argument and a state is known as the State monad. The coder focuses on dealing with the values while treating the state as if it were "hidden" "under wraps", as we do here through the get/put facilities. Under this illusion/abstraction, we do get to "simply" compose our functions.
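To make the get/put idea concrete, here is a minimal executable sketch in Julia (the language of the question). All names here are illustrative, not a library API: the "state" is a NamedTuple holding a current value plus a Dict that plays the role of the extensible record, get/put from the pseudocode become get_field/put_field to avoid clashing with Base.get, and the pure_function_* bodies are dummy stand-ins.
# The threaded state: a current value plus an extensible record.
empty_initial_state = (value = nothing, record = Dict{Symbol,Any}())

# Lift an ordinary function so it transforms the value and leaves the record alone.
lift(f) = s -> (value = f(s.value), record = s.record)

# Store the current value in the record under the given name (building a new Dict).
put_field(name) = s -> (value = s.value, record = merge(s.record, Dict(name => s.value)))

# Fetch one or more named fields from the record into the value slot.
get_field(names...) = s -> begin
    vals = map(n -> s.record[n], names)
    (value = length(vals) == 1 ? vals[1] : vals, record = s.record)
end

# Dummy stand-ins for the real inner functions
pure_function_1(a, b, d)    = a + b + d
pure_function_2(a, c, e, x) = a * c + e + x
pure_function_3(b, c, y, x) = b - c + y * x
pure_function_4(x, y, z)    = x + y + z

main_function(a, b, c, d, e) =
    (empty_initial_state                             |>
     lift(_ -> pure_function_1(a, b, d))             |> put_field(:x) |>
     lift(x -> pure_function_2(a, c, e, x))          |> put_field(:y) |>
     get_field(:x, :y)                               |>
     lift(xy -> pure_function_3(b, c, xy[2], xy[1])) |> put_field(:z) |>
     get_field(:x, :y, :z)                           |>
     lift(xyz -> pure_function_4(xyz...))).value

main_function(1, 2, 3, 4, 5)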
I can make a small start at the end (the snippet below is in Haskell-style notation):
sum $ map (/ denominator)
[ _ϵ_cs_t_start_t_end * Ep
, 0.8 * _Δσ_pr_t_start_t_end
, (Ep * _ϕ_t0_t_start_t_end * _σ_pb) / Ecm_t
]
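In Julia that last step could be written like this; it is meant as a drop-in for the final lines of calculate_time_dependant_losses, after the intermediate values have been computed, so it is not standalone:
return sum(x -> x / denominator,
           [_ϵ_cs_t_start_t_end * Ep,
            0.8 * _Δσ_pr_t_start_t_end,
            (Ep * _ϕ_t0_t_start_t_end * _σ_pb) / Ecm_t])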
As mentioned in the comments (repeatedly), the function composition operator does indeed work with functions of multiple arguments. Cite: https://docs.julialang.org/en/v1/base/base/#Base.:%E2%88%98
help?> ∘
"∘" can be typed by \circ<tab>
search: ∘
f ∘ g
Compose functions: i.e. (f ∘ g)(args...; kwargs...) means f(g(args...; kwargs...)). The ∘ symbol
can be entered in the Julia REPL (and most editors, appropriately configured) by typing
\circ<tab>.
Function composition also works in prefix form: ∘(f, g) is the same as f ∘ g. The prefix form
supports composition of multiple functions: ∘(f, g, h) = f ∘ g ∘ h and splatting ∘(fs...) for
composing an iterable collection of functions.
The challenge is chaining the operations together, because each function can only pass a single value (such as a tuple) on to the next function in the composed chain. The solution could be making sure your chained functions 'splat' the input tuples into the next function.
Example:
# splat to turn max into a tuple-accepting function
julia> f = (x->max(x...)) ∘ minmax;
julia> f(3,5)
5
Using this will in no way help make your function cleaner, though; in fact, it will probably make a horrible mess.
Your problems do not at all seem to me to be related to how you call, chain or compose your functions, but are entirely due to not organizing the inputs in reasonable types with clean interfaces.
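To illustrate that last point, one possible direction is to bundle related inputs into small structs so the top-level function takes a handful of values instead of eighteen. This is a sketch only; the grouping and the names Concrete, Tendon and TimeWindow are purely illustrative, nothing in the question prescribes them.
# Hypothetical grouping of the 18 parameters into a few small structs.
struct Concrete
    Ac; Ic; e; u                 # section properties
    fck; RH; cementClass::Char   # material and environment
end

struct Tendon
    Ap; Ep; σ_p_start; f_pk; ρ_1000
end

struct TimeWindow
    t0; ts; t_start; t_end; T
end

function calculate_time_dependant_losses(c::Concrete, p::Tendon, t::TimeWindow, Ecm_t)
    μ   = p.σ_p_start / p.f_pk
    fcm = c.fck + 8
    Fr  = p.σ_p_start * p.Ap
    # ... and so on, exactly as in the original body, but reading fields
    # from c, p and t instead of passing 18 separate arguments around.
end
Inner helpers can then take whole structs instead of long positional argument lists, which also makes them easier to call and test in isolation.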
Edit: Here's a custom composition operator that splats arguments, to avoid the tuple-output issue, though I don't see how it can help with picking the right arguments; it just passes everything on:
⊕(f, g) = (args...) -> f(g(args...)...)
⊕(f, g, h...) = ⊕(f, ⊕(g, h...))
Example:
julia> myrev(x...) = reverse(x);
julia> (myrev ⊕ minmax)(5,7)
(7, 5)
julia> (minmax ⊕ myrev ⊕ minmax)(5,7)
(5, 7)
Related
I'm a beginner in functional programming but I'm familiar with imperative programming. I'm having trouble translating a piece of C++ code involving updating two objects at the same time (the context is an n-body simulation).
It's roughly like this in C++:
for (Particle &i : particles) {
    for (Particle &j : particles) {
        collide(i, j);  // function that mutates particles i and j
    }
}
I'm translating this to OCaml, with immutable objects and immutable lists. The difficult part is that I need to replace two objects at the same time. So far I have this:
List.map (fun i ->
    List.map (fun j ->
        let (new_i, new_j) = collide(i, j) in (* function that returns new particles i, j *)
        (* how do I update particles with the new i, j? *)
    ) particles
) particles
How do I replace both objects in the List at the same time?
The functional equivalent of the imperative code is just as simple as,
let nbody f xs =
List.map (fun x -> List.fold_left f x xs) xs
It is a bit more generic, as I abstracted the collide function and made it a parameter. The function f takes two bodies and returns the state of the first body as affected by the second body. For example, we can implement the following symbolic collide function,
let symbolic x y = "f(" ^ x ^ "," ^ y ^ ")"
so that we can see the result and associativity of the collide function application,
# nbody symbolic [
"x"; "y"; "z"
];;
- : string list =
["f(f(f(x,x),y),z)"; "f(f(f(y,x),y),z)"; "f(f(f(z,x),y),z)"]
So, the first element of the output list is the result of collision of x with x itself, then with y, then with z. The second element is the result of collision of y with x, and y, and z. And so on.
Obviously a body shall not collide with itself, but this could easily be fixed either by modifying the collide function or by filtering the input list to List.fold_left and removing the element currently being computed. This is left as an exercise.
List.map returns a new list. The function you supply to List.map may transform the elements from one type to another or just apply some operation on the same type.
For example, let's assume you start with a list of integer tuples
let int_tuples = [(1, 3); (4, 3); (8, 2)];;
and let's assume that your update function takes an integer tuple and doubles the integers:
let update (i, j) = (i * 2, j * 2) (* update maybe your collide function *)
If you now do:
let new_int_tuples = List.map update int_tuples
You'll get
(* [(2, 6); (8, 6); (16, 4)] *)
Hope this helps
How do I evaluate the function in only one of its variables? That is, I hope to obtain another function after evaluating the original function in one variable. I have the following piece of code.
deff ('[F] = fun (x, y)', 'F = x ^ 2-3 * y ^ 2 + x * y ^ 3');
fun (4, y)
I hope to get 16 - 3y^2 + 4y^3
If what you want to do is to write x = f(4,y), and later just do x(2) to get 36, that is called partial application:
Intuitively, partial function application says "if you fix the first arguments of the function, you get a function of the remaining arguments".
This is a very useful feature, and very common in functional programming languages, such as Haskell, but even JS and Python are now able to do it. It is also possible to do this in MATLAB and GNU/Octave using anonymous functions (see this answer). In Scilab, however, this feature is not available.
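For comparison, here is what partial application looks like in a language that supports it directly; this is a sketch in Julia (the language of the first question above), not Scilab:
# The same function as in the question
fun(x, y) = x^2 - 3*y^2 + x*y^3

# "Fix" the first argument at 4: the result is a new function of y alone
g = y -> fun(4, y)

g(2)    # evaluates fun(4, 2) == 36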
Workaround
Nonetheless, Scilab itself uses a workaround to carry a function along with its arguments without fully evaluating it. You see this being used in ode(), fsolve(), optim(), and others:
Create a list containing the function and the arguments for partial evaluation: list(f,arg1,arg2,...,argn)
Use another function to evaluate such a list together with the last argument: evalPartList(list(...),last_arg)
The implementation of evalPartList() can be something like this:
function y = evalPartList(fList,last_arg)
//fList: list in which the first element is a function
//last_arg: last argument to be applied to the function
func = fList(1); //extract function from the list
y = func(fList(2:$),last_arg); //each element of the list, from second
//to last, becomes an argument
endfunction
You can test it on Scilab's console:
--> deff ('[F] = fun (x, y)', 'F = x ^ 2-3 * y ^ 2 + x * y ^ 3');
--> x = list(fun,4)
x =
x(1)
[F]= x(1)(x,y)
x(2)
4.
--> evalPartList(x,2)
ans =
36.
This is a very simple implementation of evalPartList(), and you have to be careful not to pass too many or too few arguments.
In the way you're asking, you can't.
What you're looking for is called symbolic (or formal) computational mathematics, because you don't pass actual numerical values to functions.
Scilab is numerical software, so it can't do such a thing. But there is a toolbox, scimax (installation guide), that relies on the free symbolic software wxMaxima.
BUT
An ugly, stupid, but still sort of working solution is to take advantage of strings:
function F = fun (x, y) // Here we define a function that may return a constant or string depending on the input
fmt = '%10.3E'
if (type(x)==type('')) & (type(y)==type(0)) // x is string so is F
ys = msprintf(fmt,y)
F = x+'^2 - 3*'+ys+'^2 + '+x+'*'+ys+'^3'
end
if (type(y)==type('')) & (type(x)==type(0)) // y is string so is F
xs = msprintf(fmt,x)
F = xs+'^2 - 3*'+y+'^2 + '+xs+'*'+y+'^3'
end
if (type(y)==type('')) & (type(x)==type('')) // x&y are strings so is F
F = x+'^2 - 3*'+y+'^2 + '+x+'*'+y+'^3'
end
if (type(y)==type(0)) & (type(x)==type(0)) // x&y are constant so is F
F = x^2 - 3*y^2 + x*y^3
end
endfunction
// Then we can use this 'symbolic' function
deff('F2 = fun2(y)',' F2 = '+fun(4,'y'))
F2=fun2(2) // does compute fun(4,2)
disp(F2)
I cannot solve a problem in Scilab because it gets stuck because of round-off errors. I get the message
!--error 9999
Error: Round-off error detected, the requested tolerance (or default) cannot be achieved. Try using bigger tolerances.
at line 2 of function scalpol called by :
at line 7 of function gram_schmidt_pol called by :
gram_schmidt_pol(a,-1/2,-1/2)
It's a Gram-Schmidt process in which the scalar product is the integral, between -1 and 1, of the product of two functions and a weight.
gram_schmidt_pol is the process specially designed for polynomials, and scalpol is the scalar product described for polynomials.
The a and b are parameters for the weight, which is (1+x)^a*(1-x)^b
The input is a matrix representing a set of vectors. It works well with the matrix [[1;2;3],[4;5;6],[7;8;9]], but it fails with the above error message on the matrix eye(2,2); in addition to this, I need to do it on eye(9,9)!
I have looked for a "tolerance setting" in the menus; there is one in General->Preferences->Xcos->Simulation, but I believe this is not what I want. I have tried low settings (high tolerance) in it and it hasn't changed anything.
So how can I solve this round-off problem?
Feel free to tell me if my message lacks clarity.
Thank you.
Edit: Code of the functions:
// function that evaluate a polynomial (vector of coefficients) in x
function [y] = pol(p, x)
y = 0
for i=1:length(p)
y = y + p(i)*x^(i-1)
end
endfunction
// weight function evaluated in x, parametrized by a and b
// (poids = weight in french)
function [y] = poids(x, a, b)
y = (1-x)^a*(1+x)^b
endfunction
// scalpol compute scalar product between polynomial p1 and p2
// using integrate, the weight and the pol functions.
function [s] = scalpol(p1, p2, a, b)
s = integrate('poids(x,a, b)*pol(p1,x)*pol(p2,x)', 'x', -1, 1)
endfunction
// norm associated to scalpol
function [y] = normscalpol(f, a, b)
y = sqrt(scalpol(f, f, a, b))
endfunction
// finally the Gram-Schmidt process on a family of polynomials
// represented by a matrix
function [o] = gram_schmidt_pol(m, a, b)
[n,p] = size(m)
o(1:n) = m(1:n,1)/(normscalpol(m(1:n,1), a, b))
for k = 2:p
s =0
for i = 1:(k-1)
s = s + (scalpol(o(1:n,i), m(1:n,k), a, b) / scalpol(o(1:n,i),o(1:n,i), a, b) .* o(1:n,i))
end
o(1:n,k) = m(1:n,k) - s
o(1:n,k) = o(1:n,k) ./ normscalpol(o(1:n,k), a, b)
end
endfunction
By default, Scilab's integrate routine tries to achieve absolute error at most 1e-8 and relative error at most 1e-14. This is reasonable, but its treatment of relative error does not take into account the issues that occur when the exact value is zero. (See How to calculate relative error when true value is zero?). For this reason, even the simple
integrate('x', 'x', -1, 1)
throws an error (in Scilab 5.5.1).
And this is what happens in the process of running your program: some integrals are zero. There are two solutions:
(A) Give up on the relative error bound, by specifying it as 1:
integrate('...', 'x', -1, 1, 1e-8, 1)
(B) Add some constant to the function being integrated, then subtract from the result:
integrate('100 + ... ', 'x', -1, 1) - 200
(The latter should work in most cases, though if the integral happens to be exactly -200, you'll have the same problem again)
The above works for gram_schmidt_pol(eye(2,2), -1/2, -1/2) but for larger, say, gram_schmidt_pol(eye(9,9), -1/2, -1/2), it throws the error "The integral is probably divergent, or slowly convergent".
It appears that the adaptive integration routine can't handle the functions of the kind you have. A fallback is to use the simple inttrap instead, which just applies the trapezoidal rule. Since at x=-1 and 1 the function poids is undefined, the endpoints have to be excluded.
function [s] = scalpol(p1, p2, a, b)
t = -0.9995:0.001:0.9995
y = poids(t,a, b).*pol(p1,t).*pol(p2,t)
s = inttrap(t,y)
endfunction
In order for this to work, other related functions must be vectorized (* and ^ changed to .* and .^ where necessary):
function [y] = pol(p, x)
y = 0
for i=1:length(p)
y = y + p(i)*x.^(i-1)
end
endfunction
function [y] = poids(x, a, b)
y = (1-x).^a.*(1+x).^b
endfunction
The computation is guaranteed to complete, though the precision may be a bit lower: you are going to get some numbers like 3D-16 which are actually zeros.
For example, consider the following code:
fun swap (pr : int*bool) =
(#2 pr, #1 pr)
fun div_mod (x : int, y : int) =
(x div y, x mod y)
The above code takes a pair (tuple) as an argument in the first function, swap, and takes two integers as arguments in the function div_mod. So my doubt is: how does ML know that I am calling it with a pair (tuple) and not calling it with two arguments?
Please help me, I am a beginner in ML programming.
Thank you :)
In terms of the types themselves, both functions take one argument, which is a pair.
These two definitions are equivalent to yours:
fun swap (i: int, b: bool) = (b, i)
fun div_mod (xy: int * int) = ((#1 xy) div (#2 xy), (#1 xy) mod (#2 xy))
The only difference is whether you do pattern matching against the elements of the tuple or not.
There's a slight difference in whether you would say that a function takes one or two arguments, though.
If the pair is just "incidental" – used for grouping like in these functions – you often say that the function takes two arguments.
If the pair represents some kind of abstraction like, say, a rational number, you would probably say that it takes one argument.
I suggest you check it in your preferred SML:
str#s132-intel:~> poly
Poly/ML 5.5.2 Release
>fun swap (pr : int*bool) = (#2 pr, #1 pr);;
val swap = fn: int * bool -> bool * int
>
> fun div_mod (x : int, y : int) = (x div y, x mod y);;
val div_mod = fn: int * int -> int * int
The first takes type int * bool, the second takes int * int, both are pairs.
In contrast, multiple arguments which are not tuples:
> fun maketriple a b c = (a, b, c);;
val maketriple = fn: 'a -> 'b -> 'c -> 'a * 'b * 'c
And how does SML tell the types of arguments apart? Tuples are written inside parens and separated by commas.
Is it possible to write recursive anonymous functions in SML? I know I could just use the fun syntax, but I'm curious.
I have written, as an example of what I want:
val fact =
fn n => case n of
0 => 1
| x => x * fact (n - 1)
An anonymous function isn't really anonymous anymore when you bind it to a
variable. And since val rec is just the derived form of fun with no
difference other than appearance, you could just as well have written it using
the fun syntax. Also you can do pattern matching in fn expressions as well
as in case, as cases are derived from fn.
So in all its simplicity you could have written your function as
val rec fact = fn 0 => 1
| x => x * fact (x - 1)
but this is exactly the same as the more readable (in my opinion) version below
fun fact 0 = 1
| fact x = x * fact (x - 1)
As far as I can tell, there is only one reason to write your code using the longer val rec form, and that is because you can more easily annotate your code with comments and explicit types. For example, if you have seen Haskell code before and like the way Haskell programmers type-annotate their functions, you could write it something like this
val rec fact : int -> int =
fn 0 => 1
| x => x * fact (x - 1)
As templatetypedef mentioned, it is possible to do it using a fixed-point
combinator. Such a combinator might look like
fun Y f =
    let
        exception BlackHole
        val r = ref (fn _ => raise BlackHole)
        fun a x = !r x
        fun ta f = (r := f; f)
    in
        ta (f a)
    end
And you could then calculate fact 5 with the code below, which uses anonymous functions to express the factorial function and then binds the result of the computation to res.
val res =
    Y (fn fact =>
          fn 0 => 1
           | n => n * fact (n - 1)
      )
      5
The fixed-point code and example computation are courtesy of Morten Brøns-Pedersen.
Updated response to George Kangas' answer:
In languages I know, a recursive function will always get bound to a
name. The convenient and conventional way is provided by keywords like
"define", or "let", or "letrec",...
Trivially true by definition. If the function (recursive or not) wasn't bound to a name it would be anonymous.
The unconventional, more anonymous looking, way is by lambda binding.
I don't see what is unconventional about anonymous functions; they are used all the time in SML, in fact in any functional language. They are even starting to show up in more and more imperative languages as well.
Jesper Reenberg's answer shows lambda binding; the "anonymous"
function gets bound to the names "f" and "fact" by lambdas (called
"fn" in SML).
The anonymous function is in fact anonymous (not "anonymous" -- no quotes), and yes, of course it will get bound in the scope of whatever function it is passed to as an argument. In any other case the language would be totally useless. The exact same thing happens when calling map (fn x => x) [.....]; in this case the anonymous identity function is still in fact anonymous.
The "normal" definition of an anonymous function (at least according to wikipedia), saying that it must not be bound to an identifier, is a bit weak and ought to include the implicit statement "in the current environment".
This is in fact true for my example, as seen by running it in mlton with the -show-basis argument on a file containing only the fun Y ... and the val res .. definitions:
val Y: (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b
val res: int32
From this it is seen that none of the anonymous functions are bound in the environment.
A shorter "lambdanonymous" alternative, which requires OCaml launched
by "ocaml -rectypes":
(fun f n -> f f n)
(fun f n -> if n = 0 then 1 else n * (f f (n - 1))
7;; Which produces 7! = 5040.
It seems that you have completely misunderstood the idea of the original question:
Is it possible to write recursive anonymous functions in SML?
And the simple answer is yes. The complex answer is (among others?) an example of this done using a fixed-point combinator, not a "lambdanonymous" (whatever that is supposed to mean) example done in another language using features not even remotely possible in SML.
All you have to do is put rec after val, as in
val rec fact =
fn n => case n of
0 => 1
| x => x * fact (n - 1)
Wikipedia describes this near the top of the first section.
let fun fact 0 = 1
      | fact x = x * fact (x - 1)
in
    fact
end
This is a recursive anonymous function. The name 'fact' is only used internally.
Some languages (such as Coq) use 'fix' as the primitive for recursive functions, while some languages (such as SML) use recursive-let as the primitive. These two primitives can encode each other:
fix f => e
:= let rec f = e in f end
let rec f = e ... in ... end
:= let f = fix f => e ... in ... end
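To see the fix encoding in executable form, here is a minimal sketch; it is written in Julia (the language of the first question above) rather than SML or Coq, and fix and fact are just illustrative names:
# fix is itself written with ordinary named recursion -- the
# "recursive-let encodes fix" direction of the equivalence above.
fix(f) = (args...) -> f(fix(f), args...)

# A recursive function defined without naming itself: the anonymous
# function refers to "itself" only through its first parameter.
fact = fix((self, n) -> n == 0 ? 1 : n * self(n - 1))

fact(5)   # 120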
In languages I know, a recursive function will always get bound to a name. The convenient and conventional way is provided by keywords like "define", or "let", or "letrec",...
The unconventional, more anonymous looking, way is by lambda binding. Jesper Reenberg's answer shows lambda binding; the "anonymous" function gets bound to the names "f" and "fact" by lambdas (called "fn" in SML).
A shorter "lambdanonymous" alternative, which requires OCaml launched by "ocaml -rectypes":
(fun f n -> f f n)
(fun f n -> if n = 0 then 1 else n * (f f (n - 1)))
7;;
Which produces 7! = 5040.