Reassign function and avoid recursive definition in Julia

I need to operate on a sequence of functions
h_k(x) = (I + f_k(⋅))^k g(x), where (I + f) h denotes the function x ↦ h(x) + f(h(x)),
for each k=1,...,N.
A basic example (N=2, f_k=f) is the following:
f(x) = x^2
g(x) = x
h1(x) = g(x) + f(g(x))
h2(x) = g(x) + f(g(x)) + f(g(x) + f(g(x)))
println(h1(1)) # returns 2
println(h2(1)) # returns 6
I need to write this in a loop and it would be best to redefine g(x) at each iteration. Unfortunately, I do not know how to do this in Julia without conflicting with the syntax for a recursive definition of g(x). Indeed,
f(x) = x^2
g(x) = x
for i=1:2
    global g(x) = g(x) + f(g(x))
    println(g(1))
end
results in a StackOverflowError.
In Julia, what is the proper way to redefine g(x), using its previous definition?
P.S. For those who would suggest that this problem could be solved with recursion: I want to use a for loop because of how the functions f_k(x) (in the above, each f_k = f) are computed in the real problem that this derives from.

I am not sure if it is best, but a natural approach is to use anonymous functions here like this:
let
    f(x) = x^2
    g = x -> x
    for i=1:2
        l = g
        g = x -> l(x) + f(l(x))
        println(g(1))
    end
end
or like this
f(x) = x^2
g = x -> x
for i=1:4
    l = g
    global g = x -> l(x) + f(l(x))
    println(g(1))
end
(I prefer the former option, using let, as it avoids global variables.)
The key point is that l is a loop-local variable that gets a fresh binding at each iteration, so each new closure captures the previous g, while g itself is bound outside the loop and survives across iterations.
You might also check out this section of the Julia manual.
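If, as in the real problem, each iteration applies a different f_k, the same closure pattern works with a vector of functions. Here is a minimal sketch; the two anonymous functions are just hypothetical stand-ins for the real f_k:
let
    fs = [x -> x^2, x -> x^3]        # hypothetical stand-ins for f_1, f_2
    g = x -> x
    for f in fs
        l = g                        # fresh binding capturing the current g
        g = x -> l(x) + f(l(x))
        println(g(1))                # prints 2, then 10
    end
end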

Related

taking function as an input in SageMath

How do you make a function that takes a function as input?
What I want to do is something like:
f(x) = log(x)
g(f, x) = x^2 * f(x)
g(f, 2)
# Symbolic expression of x^2 * log(x)
I think I am looking for the way to create higher order function.
Would using a lambda function for g work for you?
Here is a way to do that:
sage: f(x) = log(x)
sage: g = lambda u, v: v**2 * u(v)
sage: g(f, 2)
4*log(2)
We can also write a Python function that takes two symbolic functions as input and use it to build a new symbolic function.
Consider the following code.
g(x) = x^2
f(x) = log(x)
def foo(f, g, x):
    return g(x) * f(x)

z(x) = foo(g, f, x)
Take a look at it.
sage: z
which yields the symbolic function
x |--> x^2*log(x)

Julia: Why ridge regression not working (optim)

I am trying to implement ridge-regression from scratch in Julia but something is going wrong.
# Imports
using CSV
using DataFrames
using LinearAlgebra: norm, I
using Optim: optimize, LBFGS, minimizer
# Read Data
out = CSV.read(download("https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"), DataFrame, header=0)
# Separate features and response
y = Vector(out[:, end])
X = Matrix(out[:, 1:(end-1)])
λ = 0.1
# Functions
loss(beta) = norm(y - X * beta)^2 + λ*norm(beta)^2
function grad!(G, beta)
    G = -2*transpose(X) * (y - X * beta) + 2*λ*beta
end

function hessian!(H, beta)
    H = X'X + λ*I
end
# Optimization
start = randn(13)
out = optimize(loss, grad!, hessian!, start, LBFGS())
However, the result of this is terrible: we essentially get back start, because the optimizer never moves. Of course, I know I could simply use (X'X + λ*I) \ X'y or IterativeSolvers.lsmr(X, y), but I would like to implement this myself.
The problem is with the implementation of the grad! and hessian! functions: you should use dot assignment (.=) to update the contents of the G and H arrays in place:
G .= -2*transpose(X) * (y - X * beta) + 2*λ*beta
H .= X'X + λ*I
Without the dot you merely rebind the function's local variable to a new array; the array that was actually passed in (which the optimizer keeps using) remains unchanged (presumably still all zeros, which is why you got back the start vector).
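For completeness, here is a self-contained sketch of the corrected version. It uses a small synthetic problem in place of the housing data (an assumption, just to keep the sketch runnable); note that LBFGS itself only uses the gradient, so hessian! is optional here:
using LinearAlgebra: norm, I
using Optim: optimize, LBFGS, minimizer

# Synthetic stand-in for the housing data (assumption)
X = randn(100, 13)
y = X * ones(13) .+ 0.1 .* randn(100)
λ = 0.1

loss(beta) = norm(y - X * beta)^2 + λ*norm(beta)^2

function grad!(G, beta)
    G .= -2*transpose(X) * (y - X * beta) + 2*λ*beta  # mutate G in place, don't rebind it
end

function hessian!(H, beta)
    H .= 2*(X'X + λ*I)  # exact Hessian of the loss above is 2*(X'X + λ*I)
end

res = optimize(loss, grad!, hessian!, randn(13), LBFGS())
println(norm(minimizer(res) - (X'X + λ*I) \ (X'y)))  # should be close to zero (closed-form ridge solution)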

How to dereference GlobalRef?

A variable p exists; it was created by a foreign package. I think it is a reference to W1, which I created in the global scope.
typeof(p) # output: GlobalRef
p # output: :(Main.W1)
p.name # output: :W1
p.mod # output: Main
How can I retrieve W1, which is the value behind p?
In other words, is there a function f for which W1 === f(p)?
Some context for the interested: I'm trying to optimize a Neural Network and loss (together represented by the function loss) using vanilla Zygote:
for s in 1:100
    l = 0.
    gs = gradient(Zygote.Params(optimizable_params)) do
        l = loss(X[s, :], y[s])
    end
    push!(losses, l)
    for (p, g) in pairs(gs.grads)
        p += η * g  # this is where the p comes from
    end
end
It seems like this particular use case is a mistake. However, in general you can get the object being referred to like this:
julia> module X
           x = 5
       end
Main.X

julia> g = GlobalRef(X, :x)
:(Main.X.x)

julia> getfield(g.mod, g.name)
5
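Applied to the question's setup (a global W1 in Main), a minimal check might look like this; the value of W1 here is just a placeholder:
W1 = rand(2, 2)                  # placeholder value; the question's W1 could be anything
p = GlobalRef(Main, :W1)
getfield(p.mod, p.name) === W1   # true: this is the f(p) the question asks for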

How to work with the result of SymPy's Wild

I have the following code:
f=tan(x)*x**2
q=Wild('q')
s=f.match(tan(q))
# s is {q_: x}
How do I work with the result of the Wild match? How do I access the matched values without treating s like an array (e.g. s[0] or s{0})?
Wild can be used when you have an expression that is the result of some complicated calculation, but you know it has to be of the form tan(something) times something else. Then s[q] will be the SymPy expression for the "something", and s[p] for the "something else". This can be used to investigate both p and q, or to work further with a simplified version of f, substituting p and q with new variables, especially if p and q are complex expressions involving multiple variables.
Many more use cases are possible.
Here is an example:
from sympy import *
from sympy.abc import x, y, z
p = Wild('p')
q = Wild('q')
f = tan(x) * x**2
s = f.match(p*tan(q))
print(f'f is the tangent of "{s[q]}" multiplied by "{s[p]}"')
g = f.xreplace({s[q]: y, s[p]:z})
print(f'f rewritten in simplified form as a function of y and z: "{g}"')
h = s[p] * s[q]
print(f'a new function h, combining parts of f: "{h}"')
Output:
f is the tangent of "x" multiplied by "x**2"
f rewritten in simplified form as a function of y and z: "z*tan(y)"
a new function h, combining parts of f: "x**3"
If you're interested in all arguments from tan that appear in f written as a product, you might try:
from sympy import *
from sympy.abc import x
f = tan(x+2)*tan(x*x+1)*7*(x+1)*tan(1/x)
if f.func == Mul:
    all_tan_args = [a.args[0] for a in f.args if a.func == tan]
    # note: the [0] is needed because .args gives a tuple of arguments, and
    # in the case of tan you'd want the first (there is only one)
elif f.func == tan:
    all_tan_args = [f.args[0]]
else:
    all_tan_args = []
prod = 1
for a in all_tan_args:
    prod *= a
print(f'All the tangent arguments are: {all_tan_args}')
print(f'Their product is: {prod}')
Output:
All the tangent arguments are: [1/x, x**2 + 1, x + 2]
Their product is: (x + 2)*(x**2 + 1)/x
Note that neither method would work for f = tan(x)**2. For that, you'd need to write another match and decide whether you'd want to take the same power of the arguments.

How do I display a math function in Julia?

I'm new to Julia and I'm trying to learn how to do calculus with it. If I compute the gradient of a function with ForwardDiff, as in the code below, how can I then display the resulting function?
I know that if I plug in some values it gives me the gradient at that point, but I just want to see the function itself (the gradient of f1).
julia> gradf1(x1, x2) = ForwardDiff.gradient(z -> f1(z[1], z[2]), [x1, x2])
gradf1 (generic function with 1 method)
To elaborate on Felipe Lema's comment, here are some examples using SymPy.jl for various tasks:
@vars x y z
f(x,y,z) = x^2 * y * z
VF(x,y,z) = [x*y, y*z, z*x]

diff(f(x,y,z), x)  # ∂f/∂x
diff.(f(x,y,z), [x,y,z])  # ∇f, gradient
diff.(VF(x,y,z), [x,y,z]) |> sum  # ∇⋅VF, divergence

J = VF(x,y,z).jacobian([x,y,z])
sum(diag(J))  # ∇⋅VF, divergence
Mx, Nx, Px, My, Ny, Py, Mz, Nz, Pz = J
[Py-Nz, Mz-Px, Nx-My]  # ∇×VF, curl
The divergence and gradient are also part of SymPy, but not exposed. Their use is more general, but cumbersome for this task. For example, this finds the curl:
import PyCall
PyCall.pyimport_conda("sympy.physics.vector", "sympy")
RF = sympy.physics.vector.ReferenceFrame("R")
v1 = get(RF,0)*get(RF,1)*RF.x + get(RF,1)*get(RF,2)*RF.y + get(RF,2)*get(RF,0)*RF.z
sympy.physics.vector.curl(v1, RF)
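For the question's immediate goal (seeing the gradient of f1 as a function), a short sketch along the same lines; the f1 here is made up, since the question does not show it:
using SymPy

@vars x1 x2
f1(x1, x2) = x1^2 + 3x1*x2            # placeholder; substitute the real f1
gradf1 = diff.(f1(x1, x2), [x1, x2])  # symbolic gradient, displayed as [2*x1 + 3*x2, 3*x1]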
