I have a pure function that takes 18 arguments, processes them, and returns an answer.
Inside this function I call many other pure functions, and those functions call other pure functions within them, as deep as 6 levels.
This style of composition is cumbersome to test, because the top-level functions, in addition to their own logic, have to gather parameters for the inner functions.
# Minimal conceptual example
main_function(a, b, c, d, e) = begin
x = pure_function_1(a, b, d)
y = pure_function_2(a, c, e, x)
z = pure_function_3(b, c, y, x)
answer = pure_function_4(x, y, z)
return answer
end
# real example
calculate_time_dependant_losses(
Ap,
u,
Ac,
e,
Ic,
Ep,
Ecm_t,
fck,
RH,
T,
cementClass::Char,
ρ_1000,
σ_p_start,
f_pk,
t0,
ts,
t_start,
t_end,
) = begin
μ = σ_p_start / f_pk
fcm = fck + 8
Fr = σ_p_start * Ap
_σ_pb = σ_pb(Fr, Ac, e, Ic)
_ϵ_cs_t_start_t_end = ϵ_cs_ti_tj(ts, t_start, t_end, Ac, u, fck, RH, cementClass)
_ϕ_t0_t_start_t_end = ϕ_t0_ti_tj(RH, fcm, Ac, u, T, cementClass, t0, t_start, t_end)
_Δσ_pr_t_start_t_end =
Δσ_pr(σ_p_start, ρ_1000, t_end, μ) - Δσ_pr(σ_p_start, ρ_1000, t_start, μ)
denominator =
1 +
(1 + 0.8 * _ϕ_t0_t_start_t_end) * (1 + (Ac * e^2) / Ic) * ((Ep * Ap) / (Ecm_t * Ac))
shrinkageLoss = (_ϵ_cs_t_start_t_end * Ep) / denominator
relaxationLoss = (0.8 * _Δσ_pr_t_start_t_end) / denominator
creepLoss = (Ep * _ϕ_t0_t_start_t_end * _σ_pb) / Ecm_t / denominator
return shrinkageLoss + relaxationLoss + creepLoss
end
I see examples of functional composition (dot chaining, the pipe operator, etc.) with single-argument functions.
Is it practical to compose the above function using functional programming? If yes, how?
The standard and simple way is to recast your example so that it can be written as
# Minimal conceptual example, re-cast
main_function(a, b, c, d, e) = begin
x = pure_function_1'(a, b, d)()
y = pure_function_2'(a, c, e)(x)
z = pure_function_3'(b, c)(y)  # I presume you meant `y` here
answer = pure_function_4(z)    # and here, z
return answer
end
Meaning, we use functions that return functions of one argument. These functions can then be easily composed, using e.g. a forward-composition operator (f >>> g)(x) = g(f(x)):
# Minimal conceptual example, re-cast, composed
main_function(a, b, c, d, e) = begin
composed_calculation =
pure_function_1'(a, b, d) >>>
pure_function_2'(a, c, e) >>>
pure_function_3'(b, c) >>>
pure_function_4
answer = composed_calculation()
return answer
end
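Here is a minimal sketch of how this might look in actual Julia (the primed helper bodies are hypothetical stand-ins; note that >>> is normally Julia's bit-shift operator, and we add a method for functions):
# Forward composition: (f >>> g)(args...) == g(f(args...))
Base.:(>>>)(f::Function, g::Function) = (args...) -> g(f(args...))
# Curried stand-ins: fix the extra parameters, return a one-argument closure
pf1(a, b, d) = () -> a + b + d      # plays the role of pure_function_1'
pf2(a, c, e) = x -> a * c + e + x   # plays the role of pure_function_2'
pf3(b, c) = y -> b - c + y          # plays the role of pure_function_3'
pf4(z) = 2z                         # plays the role of pure_function_4
main_function(a, b, c, d, e) =
    (pf1(a, b, d) >>> pf2(a, c, e) >>> pf3(b, c) >>> pf4)()
main_function(1, 2, 3, 4, 5)        # 28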
If you really need the various x, y, and z at differing points in time during the composed computation, you can pass them around in a compound, record-like data structure. We can avoid the coupling of this argument handling if we have extensible records:
# Minimal conceptual example, re-cast, composed, args packaged
main_function(a, b, c, d, e) = begin
composed_calculation =
pure_function_1'(a, b, d) >>> put('x') >>>
get('x') >>> pure_function_2'(a, c, e) >>> put('y') >>>
get('x') >>> pure_function_3'(b, c) >>> put('z') >>>
get({'x';'y';'z'}) >>> pure_function_4
answer = composed_calculation(empty_initial_state)
return value(answer)
end
The passed-around "state" would consist of two fields: a value and an extensible record. The functions would accept this state, use the value as their additional input, and leave the record unchanged. get would take the specified field out of the record and put it into the "value" field of the state. put would update the extensible record in the state:
put(field_name) = ( {value:v ; record:r} =>
{v ; put_record_field( r, field_name, v)} )
get(field_name) = ( {value:v ; record:r} =>
{get_record_field( r, field_name) ; r} )
pure_function_2'(a, c, e) = ( {value:v ; record:r} =>
{pure_function_2(a, c, e, v); r} )
value(r) = get_record_field( r, value)
empty_initial_state = { novalue ; empty_record }
All in pseudocode.
Augmented function application, and hence composition, is one way of thinking about "what monads are". Passing around the pairing of a produced/expected argument and a state is known as the State monad. The coder focuses on dealing with the values while treating the state as if "hidden" "under wraps", as we do here through the get/put etc. facilities. Under this illusion/abstraction, we do get to "simply" compose our functions.
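To make this concrete, here is a hedged Julia sketch of the state-passing machinery (all names hypothetical; a NamedTuple stands in for the extensible record):
# The threaded "state" is a (value, record) pair
put_field(name) = st -> (value = st.value,
    record = merge(st.record, NamedTuple{(name,)}((st.value,))))
get_field(name) = st -> (value = st.record[name], record = st.record)
# Lift an ordinary value -> value function; it leaves the record untouched
lift(f) = st -> (value = f(st.value), record = st.record)
# Forward-compose a whole chain of state transformers
chain(fs...) = st -> foldl((s, f) -> f(s), fs; init = st)
empty_initial_state = (value = nothing, record = NamedTuple())
# Compute a value, stash it as :x, reuse it later "under wraps"
pipeline = chain(lift(_ -> 10), put_field(:x),
                 get_field(:x), lift(x -> x + 1), put_field(:y))
pipeline(empty_initial_state).record    # (x = 10, y = 11)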
I can make a small start at the end (in Haskell):
sum $ map (/ denominator)
[ _ϵ_cs_t_start_t_end * Ep
, 0.8 * _Δσ_pr_t_start_t_end
, (Ep * _ϕ_t0_t_start_t_end * _σ_pb) / Ecm_t
]
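Since the question is about Julia, a sketch of the same refactor there, using the variables already computed in the original function body:
# Sum the three losses, dividing each by the common denominator
sum(loss / denominator for loss in
    (_ϵ_cs_t_start_t_end * Ep,
     0.8 * _Δσ_pr_t_start_t_end,
     (Ep * _ϕ_t0_t_start_t_end * _σ_pb) / Ecm_t))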
As mentioned in the comments (repeatedly), the function composition operator does indeed accept multiple-argument functions. Cite: https://docs.julialang.org/en/v1/base/base/#Base.:%E2%88%98
help?> ∘
"∘" can be typed by \circ<tab>
search: ∘
f ∘ g
Compose functions: i.e. (f ∘ g)(args...; kwargs...) means f(g(args...; kwargs...)). The ∘ symbol
can be entered in the Julia REPL (and most editors, appropriately configured) by typing
\circ<tab>.
Function composition also works in prefix form: ∘(f, g) is the same as f ∘ g. The prefix form
supports composition of multiple functions: ∘(f, g, h) = f ∘ g ∘ h and splatting ∘(fs...) for
composing an iterable collection of functions.
The challenge is chaining the operations together, because a composed function can only pass a single value (such as a tuple) on to the next function in the chain. The solution could be to make sure your chained functions 'splat' the input tuples into the next function.
Example:
# splat to turn max into a tuple-accepting function
julia> f = (x->max(x...)) ∘ minmax;
julia> f(3,5)
5
Using this will in no way help make your function cleaner, though; in fact, it will probably make a horrible mess.
Your problems do not at all seem to me to be related to how you call, chain or compose your functions, but are entirely due to not organizing the inputs in reasonable types with clean interfaces.
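For instance, a hedged sketch of that reorganization (the type names and groupings below are hypothetical, loosely based on the real example in the question):
# Group related inputs into small types with clean interfaces
struct Concrete
    Ac; Ic; u; e; fck; RH; cementClass::Char
end
struct Tendon
    Ap; Ep; σ_p_start; f_pk; ρ_1000
end
# The 18-argument signature collapses to a handful of meaningful ones
function calculate_time_dependant_losses(c::Concrete, p::Tendon,
                                         Ecm_t, T, t0, ts, t_start, t_end)
    μ = p.σ_p_start / p.f_pk
    fcm = c.fck + 8
    # ... same body as before, reading fields from c and p ...
end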
Edit: Here's a custom composition operator that splats arguments, to avoid the tuple output issue. I don't see how it can help with picking the right arguments, though; it just passes everything on:
⊕(f, g) = (args...) -> f(g(args...)...)
⊕(f, g, h...) = ⊕(f, ⊕(g, h...))
Example:
julia> myrev(x...) = reverse(x);
julia> (myrev ⊕ minmax)(5,7)
(7, 5)
julia> (minmax ⊕ myrev ⊕ minmax)(5,7)
(5, 7)
In Julia I've defined a type, and I need to write some functions that work with the fields of that type. Some of the functions contain complicated formulas, and it gets messy to use the field-access dot notation all over the place, so I end up putting the field values into local variables to improve readability. It works fine, but is there some clever way to avoid having to type out all the a = foo.a lines, or to have Julia parse a as foo.a, etc.?
struct Foo
a::Real
b::Real
c::Real
end
# this gets hard to read
function bar(foo::Foo)
foo.a + foo.b + foo.c + foo.a*foo.b - foo.b*foo.c
end
# this is better
function bar(foo::Foo)
a = foo.a
b = foo.b
c = foo.c
a + b + c + a*b - b*c
end
# this would be great
function bar(foo::Foo)
something clever
a + b + c + a*b - b*c
end
Because Julia generally encourages the use of generalized interfaces to interact with fields rather than accessing the fields directly, a fairly natural way of accomplishing this would be unpacking via iteration. In Julia, objects can be "unpacked" into multiple variables by iteration:
julia> x, y = [1, 2, 3]
3-element Array{Int64,1}:
1
2
3
julia> x
1
julia> y
2
We can implement such an iteration protocol for a custom object, like Foo. In v0.7, this would look like:
Base.iterate(foo::Foo, state = 1) = state > 3 ? nothing : (getfield(foo, state), state + 1)
Note that 3 is hardcoded (based on the number of fields in Foo) and could be replaced with fieldcount(Foo). Now, you can simply "unpack" an instance of Foo as follows:
julia> a, b, c = Foo(1.0, 2.0, 3)
Foo(1.0, 2.0, 3)
julia> a
1.0
julia> b
2.0
julia> c
3
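As noted, the hardcoded 3 can be avoided; the fieldcount-based version would read:
Base.iterate(foo::Foo, state = 1) =
    state > fieldcount(Foo) ? nothing : (getfield(foo, state), state + 1)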
This could be the "something clever" at the beginning of your function. Additionally, as of v0.7, you can unpack the fields in the function argument itself:
function bar((a, b, c)::Foo)
a + b + c + a*b - b*c
end
Although this does require that you mention the field names again, it comes with two potential advantages:
In the case that your struct is refactored and the fields are renamed, all code accessing the fields will remain intact (as long as the field order doesn't change, or the iterate implementation is updated to reflect the new object internals).
Longer field names can be abbreviated. (e.g., rather than using the full apples field name, you can opt for a.)
If it's important that the field names not be repeated, you could define a macro to generate the required variables (a = foo.a; b = foo.b; c = foo.c); however, this would likely be more confusing for the readers of your code and lack the advantages listed above.
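For completeness, here is a hedged sketch of such a macro (the name @unpackall is hypothetical):
# Expands `@unpackall Foo foo` into `a = getfield(foo, :a); b = ...; c = ...`
# for every field of Foo. Pass a plain variable, since obj is spliced once per field.
macro unpackall(T, obj)
    names = fieldnames(Core.eval(__module__, T))
    assigns = [:($(esc(n)) = getfield($(esc(obj)), $(QuoteNode(n)))) for n in names]
    return Expr(:block, assigns...)
end
function bar(foo::Foo)
    @unpackall Foo foo
    a + b + c + a*b - b*c
end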
As of Julia 1.6, the macros in this package look relevant: https://github.com/mauro3/UnPack.jl.
The syntax would look like:
function bar(foo::Foo)
# something clever!
@unpack a, b, c = foo
a + b + c + a*b - b*c
end
In Julia 1.7, it looks like this feature will be added with the syntax
function bar(foo::Foo)
# something clever!
(; a, b, c) = foo
a + b + c + a*b - b*c
end
Here is the merged pull request: https://github.com/JuliaLang/julia/pull/39285
I cannot solve a problem in Scilab because it gets stuck due to round-off errors. I get the message:
!--error 9999
Error: Round-off error detected, the requested tolerance (or default) cannot be achieved. Try using bigger tolerances.
at line 2 of function scalpol called by :
at line 7 of function gram_schmidt_pol called by :
gram_schmidt_pol(a,-1/2,-1/2)
It's a Gram-Schmidt process, with the scalar product defined as the integral, between -1 and 1, of the product of two functions and a weight.
gram_schmidt_pol is the process specialized for polynomials, and scalpol is the scalar product defined for polynomials.
a and b are parameters for the weight, which is (1+x)^a*(1-x)^b.
The input is a matrix representing a set of vectors. It works well with the matrix [[1;2;3],[4;5;6],[7;8;9]], but it fails with the above error message on the matrix eye(2,2); on top of that, I need to run it on eye(9,9)!
I have looked for a "tolerance setting" in the menus; there is one in General->Preferences->Xcos->Simulation, but I believe it is not for what I want. I have tried loose settings (high tolerances) there and it hasn't changed anything.
So how can I solve this round-off problem?
Feel free to tell me if my message lacks clarity.
Thank you.
Edit: here is the code of the functions:
// function that evaluates a polynomial (vector of coefficients) at x
function [y] = pol(p, x)
y = 0
for i=1:length(p)
y = y + p(i)*x^(i-1)
end
endfunction
// weight function evaluated at x, parametrized by a and b
// (poids = weight in French)
function [y] = poids(x, a, b)
y = (1-x)^a*(1+x)^b
endfunction
// scalpol computes the scalar product of polynomials p1 and p2,
// using integrate, the weight, and the pol functions.
function [s] = scalpol(p1, p2, a, b)
s = integrate('poids(x,a, b)*pol(p1,x)*pol(p2,x)', 'x', -1, 1)
endfunction
// norm associated to scalpol
function [y] = normscalpol(f, a, b)
y = sqrt(scalpol(f, f, a, b))
endfunction
// finally, the Gram-Schmidt process on a family of polynomials
// represented by a matrix
function [o] = gram_schmidt_pol(m, a, b)
[n,p] = size(m)
o(1:n) = m(1:n,1)/(normscalpol(m(1:n,1), a, b))
for k = 2:p
s =0
for i = 1:(k-1)
s = s + (scalpol(o(1:n,i), m(1:n,k), a, b) / scalpol(o(1:n,i),o(1:n,i), a, b) .* o(1:n,i))
end
o(1:n,k) = m(1:n,k) - s
o(1:n,k) = o(1:n,k) ./ normscalpol(o(1:n,k), a, b)
end
endfunction
By default, Scilab's integrate routine tries to achieve absolute error at most 1e-8 and relative error at most 1e-14. This is reasonable, but its treatment of relative error does not take into account the issues that occur when the exact value is zero. (See How to calculate relative error when true value is zero?). For this reason, even the simple
integrate('x', 'x', -1, 1)
throws an error (in Scilab 5.5.1).
And this is what happens in the process of running your program: some integrals are zero. There are two solutions:
(A) Give up on the relative error bound, by specifying it as 1:
integrate('...', 'x', -1, 1, 1e-8, 1)
(B) Add some constant to the function being integrated, then subtract from the result:
integrate('100 + ... ', 'x', -1, 1) - 200
(The latter should work in most cases, though if the integral happens to be exactly -200, you'll have the same problem again)
The above works for gram_schmidt_pol(eye(2,2), -1/2, -1/2) but for larger, say, gram_schmidt_pol(eye(9,9), -1/2, -1/2), it throws the error "The integral is probably divergent, or slowly convergent".
It appears that the adaptive integration routine can't handle the functions of the kind you have. A fallback is to use the simple inttrap instead, which just applies the trapezoidal rule. Since at x=-1 and 1 the function poids is undefined, the endpoints have to be excluded.
function [s] = scalpol(p1, p2, a, b)
t = -0.9995:0.001:0.9995
y = poids(t,a, b).*pol(p1,t).*pol(p2,t)
s = inttrap(t,y)
endfunction
In order for this to work, other related functions must be vectorized (* and ^ changed to .* and .^ where necessary):
function [y] = pol(p, x)
y = 0
for i=1:length(p)
y = y + p(i)*x.^(i-1)
end
endfunction
function [y] = poids(x, a, b)
y = (1-x).^a.*(1+x).^b
endfunction
This is guaranteed to work, though the precision may be a bit lower: you are going to get some numbers like 3D-16 (Scilab's notation for 3e-16) which are actually zeros.
I'm new to Prolog and I have some problems understanding how the recursion works.
The thing I want to do is to create a list of numbers (to later draw a graph).
So I have this code:
nbClassTest(0, _).
nbClassTest(X, L) :-
numberTestClass(A,X),
append([A], L, L),
X is X - 1,
nbClassTest(X, L).
But it keeps giving me 'false' as an answer, and I don't understand why it doesn't fill the list. It should end when X reaches 0, right?
numberTestClass(A, X) gives me a number (in the variable A) for a given X, as if it were a function.
You should build the list without appending, because append/3 is rather inefficient.
This code could do:
nbClassTest(0, []).
nbClassTest(X, [A|R]) :-
numberTestClass(A, X),
X is X - 1,
nbClassTest(X, R).
or, if your system has between/3, you can use an 'all solutions' idiom:
nbClassTest(X, L) :-
findall(A, (between(1, X, N), numberTestClass(A, N)), R),
reverse(R, L).
The problem is that you use the same variable for the old and the new list. Right now, your first call to append/3 creates a list of infinite length, consisting of elements equal to the value of A.
?-append([42],L,L).
L = [42|L].
?- append([42],L,L), [A,B,C,D|E]=L.
L = [42|L],
A = B, B = C, C = D, D = 42,
E = [42|L].
Then, if the next A is not the same as the previous A, it will fail.
?- append([42],L,L), append([41],L,L).
false.
There is still one more issue with the code: your base case has an uninstantiated variable. You might want that, but I believe you actually want an empty list:
nbClassTest(0, []).
nbClassTest(X, L) :-
numberTestClass(A,X),
append([A], L, NL),
X is X - 1,
nbClassTest(X, NL).
Lastly, append/3 is rather inefficient, so you might want to avoid it and build the list the other way around (or use difference lists).
It fails because you use append/3 in the wrong way. Try:
nbClassTest(0, _).
nbClassTest(X, L) :-
numberTestClass(A,X),
append([A], L, Nl),
X is X - 1,
nbClassTest(X, Nl).
append/3 concatenates two lists, so there is no (finite) list that is still the same list after an element has been added to it.
So I have a simple example of what I want to do:
restart;
assume(can, real);
f := {g = x+can*x*y, t = x+x*y};
assign(f[1]); g;
can := 2;
plot3d(g, x = 0 .. 100, y = 0 .. 100);
while this works:
restart;
f := {g = x+can*x*y, t = x+x*y};
assign(f[1]);
can := 2;
plot3d(g, x = 0 .. 100, y = 0 .. 100);
But those assumptions are really important in my real-life case (for some optimizations with complex numbers), so I can't just leave can unassumed.
Why does it plot nothing for me, and how do I make it plot?
The expression (or procedure) to be plotted must evaluate to a numeric, floating-point quantity. And so, for your expression g, the name can must have a specific numeric value at the time any plot of g is generated.
But you can produce a sequence of 3D plots, for various values of can, and display them. You can display them all at once, overlaid. Or you can display them in an animated sequence. And you can color or shade them each differently, to give a visual cue that can is changing and different for each.
restart;
f := {g = x+can*x*y, t = x+x*y};
eval(g,f);
N:=50:
Pseq := seq(
plot3d(eval(g,f),
x=0..10,y=0..10,
color=RGB(0.5,0.5,can/(2*N)),
transparency=0.5*(can/(N+1))),
can=1 .. N):
plots:-display(Pseq, axes=box);
plots:-display([Pseq],insequence=true,axes=box);
By the way, you don't have to assign to g just for the sake of using the equation for g that appears inside f. Doing that assignment (using assign, say, like you did) makes it more awkward for you subsequently to create other equations in terms of the pure name g unless you first unassign the name g. Some people find it easier to not make the assignment to g at all for such tasks, and to simply use eval as I've done above.
Now on to your deeper problem. You create an expression containing a local, assumed name, and then later on you want to use the same expression but with the global, unassumed version of that name. You can create the version of the expression that contains the global, unassumed name instead of the local, assumed one by performing a substitution.
restart;
assume(can, real);
f := {g = x+can*x*y, t = x+x*y};
{g = x + can~ x y, t = x + x y}
assign(f[1]);
g;
x + can~ x y
can := 2:
g;
x + can~ x y
# This fails, because g contains the local name can~
plot3d(g, x=0..100, y=0..100);
# A procedure to make the desired substitution
revert:=proc(nm::name)
local len, snm;
snm:=convert(nm,string);
len:=length(snm);
if snm[-1]="~" then
return parse(snm[1..-2]);
else return nm;
end if;
end proc:
# This is the version of the expression, but with global name can
subsindets(g,`local`,revert);
x + can x y
# This should work
plot3d(subsindets(g,`local`,revert),
x=0..100,y=0..100);