How to correctly pass a value to an included function? - Julia

I have 2 Julia files, alpha.jl and beta.jl.
In alpha.jl, there are 2 functions:
der, which returns a derivative using Zygote, and
derPlt, which plots the function as well as the derivative:
function der(f, x)
    y = f'(x)
    return y
end

function derPlt(der, z)
    plot(f, aspect_ratio=:equal, label="f(x)")
    g(f, x₀) = (x -> f(x₀) + f'(x₀)*(x - x₀))
    plot!(g(f, x), label="dy", color="magenta")
    xlims!(-z, z)
    ylims!(-z, z)
end
Everything comes out fine when I call these 2 functions in beta.jl, after including the file:
include("alpha.jl")
f(x)=-x^2+2
x = -1.3
derPlt(der(f, x), 6)
However, if I directly enter a value for the function, the plotted derivative line doesn't update; i.e., if I enter 2.0 instead of passing in some variable named x,
derPlt(der(f, 2.0), 6)
no change is reflected in the plot. I'm new to Julia and am trying to understand and fix this.

I think it's because in your derPlt function, you call
plot!(g(f,x),...)
on x instead of the z argument. The problem is that you define x = -1.3 globally, and that value is used inside derPlt regardless of what z argument you feed it.
Maybe replace that line with
plot!(g(f,z),...)
and you should be fine.
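For completeness, here is a sketch of the original function with only that one line changed (my illustration; it still relies on the global f and the unused der argument discussed in the next answer):

function derPlt(der, z)
    plot(f, aspect_ratio=:equal, label="f(x)")
    g(f, x₀) = (x -> f(x₀) + f'(x₀)*(x - x₀))
    plot!(g(f, z), label="dy", color="magenta")   # tangent taken at z instead of the global x
    xlims!(-z, z)
    ylims!(-z, z)
end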

Seeing as this is a follow-up to a question I answered previously, I thought I'd respond: Benoit is broadly speaking correct, you are running into a scoping issue here, but a few more changes are in order.
Note that your function signature is derPlt(der, z), but then:
You never actually use the der argument in your function body; and
You construct your tangent line as g(f,x₀) = (x -> f(x₀) + f'(x₀)*(x-x₀)) - note that there's no z in there, only an x
Now where does that x come from? In the absence of any x argument being passed to your function, Julia will look for it in the global scope (outside your function). Here, you define x = -1.3, and when you then call derPlt, that's the x that will be used to construct your tangent, irrespective of the z argument you're passing.
Here's a cleaned up and working version:
using Plots, Zygote
function derPlt(f, z)
    plot(f, label="f(x)", aspect_ratio = :equal,
         xlims = (-5,5), ylims = (-5,5))
    g(f, x₀) = (z -> f(x₀) + f'(x₀)*(z - x₀))
    plot!(i -> g(f, z)(i), label="dy", color="magenta")
end
f(x)=-x^2+2
derPlt(f, -1.5)
I would encourage you to read the relevant manual section on Scope of Variables to ensure you get an understanding of what's happening in your code - good luck!
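As a further illustration (my own sketch, not part of the answer above), you can also keep the tangent point and the plotting window as separate arguments, so that nothing depends on a global variable:

using Plots, Zygote

# Sketch only: x₀ is the point of tangency, z is the half-width of the axes.
function derPlt(f, x₀, z)
    plot(f, -z, z, label="f(x)", aspect_ratio=:equal, ylims=(-z, z))
    tangent = x -> f(x₀) + f'(x₀) * (x - x₀)   # tangent line at x₀ via Zygote
    plot!(tangent, -z, z, label="dy", color="magenta")
end

f(x) = -x^2 + 2
derPlt(f, -1.3, 6)
derPlt(f, 2.0, 6)   # the tangent now moves when the point changes

Calling it with 2.0 instead of -1.3 now moves the tangent, which is exactly the behaviour the question was after.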

Related

Understanding ForwardDiff issues

I got an apparently quite common Julia error when trying to use AD with ForwardDiff. The error messages vary a bit (sometimes they mention the function name, sometimes Float64):
MethodError: no method matching logL_multinom(::Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(logL_multinom), Real}, Real, 7}})
My goal: transform a probability vector to be unbounded (θ -> y), do some stuff (namely HMC sampling), and transform back to the simplex space whenever the unnormalized posterior (logL_multinom()) is evaluated. DA should be used to overcome problems for later, more complex models than this.
Unfortunately, neither the Julia documentation nor the solutions from other questions helped me figure this particular problem out. In particular, it seems to work when I do the first transformation (y -> z) outside of the function, but that transformation is a 1-to-1 mapping via the logistic function and should not cause any harm to differentiation.
Here is an MWE:
using LinearAlgebra
using ForwardDiff
using Base
function logL_multinom(y)
    # transform to constrained
    K = length(y) + 1
    k = collect(1:(K-1))
    # inverse logit:
    z = 1 ./ (1 .+ exp.(-y .- log.(K .- k)))   # if this is outside, it works
    θ = zeros(eltype(y), K); x_cumsum = zeros(eltype(y), K-1)
    typeof(θ)
    for i in k
        x_cumsum[i] = 1 - sum(θ)
        θ[i] = x_cumsum[i] * z[i]
    end
    θ[K] = x_cumsum[K-1] - θ[K-1]
    # log_dens_correction = sum( log(z*(1-z)*x_cumsum) )
    dot(colSums, log.(θ))
end
colSums = [835, 52, 1634, 3469, 3053, 2507, 2279, 1115]
y0 = [-0.8904013824298864, -0.8196709647741431, -0.2676845405543302, 0.31688184351556026, -0.870860684394019,0.15187821053559714,0.39888119498547964]
logL_multinom(y0)
∇L = y -> ForwardDiff.gradient(logL_multinom,y)
∇L(y0)
Thanks a lot; further reading/explanations for the problem are especially appreciated, since I'll be working with this more often :D
Edit: I tried to convert the input and any intermediate variable into Real / arrays of these, but nothing helped so far.
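A general note, added here because the edit above mentions converting things to Real: one common cause of exactly this kind of MethodError is a method signature (or container element type) that is too restrictive for ForwardDiff's Dual numbers. Julia's parametric types are invariant, so a Vector of Duals is neither a Vector{Float64} nor a Vector{Real}; an annotation like y::Vector{Real} therefore rejects the vector that ForwardDiff.gradient passes in. A minimal sketch of the failure mode and the usual fix:

using ForwardDiff

f_strict(y::Vector{Float64}) = sum(y .^ 2)        # rejects Dual-valued vectors
f_loose(y::AbstractVector{<:Real}) = sum(y .^ 2)  # accepts Float64 and Dual alike

y0 = [1.0, 2.0]
# ForwardDiff.gradient(f_strict, y0)   # MethodError: no method matching f_strict(::Vector{ForwardDiff.Dual{...}})
ForwardDiff.gradient(f_loose, y0)      # [2.0, 4.0]

If that is not the issue in this particular MWE, inspecting methods(logL_multinom) to see which signatures actually exist is a cheap next step.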

SageMath - Precomposing a vector-valued function

Given a function from R into R^n, I'd like to define a new function by precomposition, for example as follows
alpha(x) = [e^x,e^(-x)]
beta(x) = alpha(-x+2)
However attempting to do so in this way throws an error
"unable to convert (e^(-x + 2), e^(x - 2)) to a symbolic expression"
Now the similar but simpler version of the code
alpha(x) = e^x
beta(x) = alpha(-x+2)
works perfectly, so the issue arises from the fact that alpha is multivalued.
The following variant of the original code does exactly what I want
alpha(x) = [e^x,e^(-x)]
beta(x) = [alpha[0](-x+2),alpha[1](-x+2)]
but requires me to assume the length of alpha, which is undesirable. And the obvious solution to that problem
alpha(x) = [e^x,e^(-x)]
for i in range(0,len(alpha)):
    beta[i](x) = alpha[i](x)
or any variant thereof throws the error "can't assign to function call".
My question is as follows:
Is there any way to do this precomposition, in particular without assuming the length of alpha? I control how the functions alpha and beta are defined, so if there's another way of defining them (for example using lambda notation or something like that) that lets me do this, that's acceptable too. But note that I would like to do some equivalent of the following at some point in my code:
... + beta.derivative(x).dot_product( ...
Defined as in the question, alpha is not a symbolic function returning vectors, but a vector of callable functions. Below we describe two other ways of defining alpha and beta: either define alpha as a vector over the symbolic ring and define beta by substitution, or define alpha and beta as Python functions.
Original approach in the question:
sage: alpha(x) = [e^x, e^-x]
sage: alpha
x |--> (e^x, e^(-x))
sage: alpha.parent()
Vector space of dimension 2 over Callable function ring with argument x
Using a vector over the symbolic ring
sage: alpha = vector([e^x, e^-x])
sage: alpha
(e^x, e^(-x))
sage: alpha.parent()
Vector space of dimension 2 over Symbolic Ring
sage: beta = alpha.subs({x: -x + 2})
sage: beta
(e^(-x + 2), e^(x - 2))
Using Python functions
sage: def alpha(x):
....: return vector([e^x, e^-x])
....:
sage: def beta(x):
....: return alpha(-x + 2)
....:
sage: beta(x)
(e^(-x + 2), e^(x - 2))
Some related resources.
Query:
Ask Sage query: vector function
Questions:
Ask Sage question 8066: Composite function
Ask Sage question 24943: Define vector valued function of a vector of symbolic variables?
Ask Sage question 43550: vector constants and vector functions
Ask Sage question 9375: functions with vector inputs
Ask Sage question 23758: Defining a function of vector variables
Ask Sage question 36127: Plotting 2d vector fields – how to delay function evaluation
Ask Sage question 10704: How to create a vector function (mapping)?
Ask Sage question 8924: Basic vector functions in Sage
Ask Sage question 30025: How can I define a function with quaternion argument, and other non-vector input
Ask Sage question 47710: Vector valued function: unable to convert to symbolic expression
Ask Sage question 36159: apply functions iteratively (modified re-post)
Tickets:
Sage Trac ticket 11507: make f(x,y,z)=vector make a vector-valued function
Sage Trac ticket 11180: Allow vector input to functions taking vectors
Sage Trac ticket 28640: Manifolds: Vector Valued Forms

Use LsqFit for multi-variate output?

I wanted to fit a geometric mapping parameter with some input/output (x,y) points. The model is very simple:
xp = x .+ k.*x.*(x.^2+y.^2)
yp = y .+ k.*y.*(x.^2+y.^2)
k is the only parameter, (x,y) is an input point and (xp,yp) is an output point.
I formulated the input/output data array as:
x = [x for x=-2.:2. for y=-2.:2.]
y = [y for x=-2.:2. for y=-2.:2.]
in_data = [x y]
out_data = [xp yp]
However I'm confused about how to turn this into the LsqFit model, I tried:
k0=[0.]
@. model(x,p) = [x[:,1]+p[1]*x[:,1]*(x[:,1]^2+x[:,2]^2) x[:,2]+p[1]*x[:,2]*(x[:,1]^2+x[:,2]^2)]
ret = curve_fit(model, in_data, out_data, k0)
but got an error:
DimensionMismatch("dimensions must match: a has dims (Base.OneTo(25),
Base.OneTo(2)), must have singleton at dim 2")
So the question is: is it possible to use LsqFit for multi-variate output? (even though this particular problem can be solved analytically)
OK, I just figured out the correct way to do this. The vector output variable needs to be stacked to form a 1D array, so the only change needed is:
out_data = [xp; yp]
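For context, here is a fuller sketch of the stacked formulation (my reconstruction, not the original poster's exact code; the true k value below is made up for the demonstration). Both the data and the model output are flattened into one 1D vector, so curve_fit sees a single-output problem:

using LsqFit

x = [x for x in -2.0:2.0 for y in -2.0:2.0]
y = [y for x in -2.0:2.0 for y in -2.0:2.0]

k_true = 0.1                                # assumed value, for illustration only
xp = x .+ k_true .* x .* (x.^2 .+ y.^2)
yp = y .+ k_true .* y .* (x.^2 .+ y.^2)

in_data  = [x y]        # 25×2 matrix of input points
out_data = [xp; yp]     # stacked 50-element output vector

# The model must return a vector with the same stacked shape as out_data.
function model(xy, p)
    x, y = xy[:, 1], xy[:, 2]
    r2 = x.^2 .+ y.^2
    return [x .+ p[1] .* x .* r2; y .+ p[1] .* y .* r2]
end

fit = curve_fit(model, in_data, out_data, [0.0])
fit.param   # should recover k ≈ 0.1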

1D integration with multivariable function input

To demonstrate, let's start with a simple multi-variable function f(x,y) = xy^2.
I am trying to find a command that would allow me to numerically integrate f(2, y) = 2y^2 from y = 0 to y = 2. (i.e. the original function is multi-variable, but only one variable remains when actually doing the integration)
I needed to define the function that way, as I need to obtain results for different values of x (probably going to involve a for-loop, but that is another story).
I have tried to walk through Cubature's user guide but did not find anything useful; maybe I have missed it.
Can anyone help?
In such a case it is simplest to use an anonymous function wrapper:
using QuadGK
f(x,y) = x*y^2
intf(x) = quadgk(y -> f(x, y), 0, 2)
If the anonymous function were longer, you could write:
intf(x) = quadgk(0, 2) do y
    f(x, y)
end
This is exactly equivalent to the version above, but the do syntax allows you to write a longer body for the anonymous function.
Now you can write e.g.:
julia> intf(1)
(2.6666666666666665, 4.440892098500626e-16)
julia> intf(2)
(5.333333333333333, 8.881784197001252e-16)
julia> intf(3)
(8.0, 0.0)
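To address the follow-up the asker mentions (evaluating the integral at several values of x), here is a small additional sketch; quadgk returns a tuple of (integral, error estimate), so first keeps just the value:

using QuadGK

f(x, y) = x * y^2
intf(x) = first(quadgk(y -> f(x, y), 0, 2))

results = [intf(x) for x in 1:5]   # [2.666…, 5.333…, 8.0, 10.666…, 13.333…]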

Plot of function, DomainError. Exponentiation yielding a complex result requires a complex argument

Background
I read here that Newton's method fails on the function x^(1/3) when its initial step is 1. I am trying to test it in a Julia Jupyter notebook.
I want to print a plot of the function x^(1/3).
Then I want to run this code:
using Roots, ForwardDiff

f = x -> x^(1/3)
D(f) = x -> ForwardDiff.derivative(f, float(x))
x = find_zero((f, D(f)), 1, Roots.Newton(), verbose=true)
Problem:
How do I plot the function x^(1/3) in a range, e.g. (-1, 1)?
I tried
f = x -> x^(1/3)
plot(f, -1, 1)
and got the DomainError from the title. I changed the code to
f = x -> (x + 0im)^(1/3)
plot(f, -1, 1)
but that still doesn't give what I want. I want my plot to look like the plot of x^(1/3) on Google; however, I cannot get more than half of it.
That's because x^(1/3) does not always return a real-valued result, let alone the real cube root of x. For negative numbers, exponentiation with some powers (like 1/3 or 1.254, and presumably all non-integers) would return a Complex. Because of Julia's type-stability requirements, this operation applied to a negative Real instead raises a DomainError. This behavior is also noted in the Frequently Asked Questions section of the Julia manual.
julia> (-1)^(1/3)
ERROR: DomainError with -1.0:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
julia> Complex(-1)^(1/3)
0.5 + 0.8660254037844386im
Note that the behavior of returning a complex number for exponentiation of negative values is not really different from, say, MATLAB's behavior:
>> (-1)^(1/3)
ans =
   0.5000 + 0.8660i
What you want, however, is to plot the real cube root.
You can go with
plot(x -> x < 0 ? -(-x)^(1//3) : x^(1//3), -1, 1)
to enforce real cube root or use the built-in cbrt function for that instead.
plot(cbrt, -1, 1)
It also has an alias ∛.
plot(∛, -1, 1)
f(x) is an odd function, so you can just use [0, 1] as the input range; the plot on [-1, 0] is then deduced by symmetry, as follows.
The code is below:
import numpy as np
import matplotlib.pyplot as plt
# Function f
f = lambda x: x**(1/3)
fig, ax = plt.subplots()
x1 = np.linspace(0, 1, num = 100)
x2 = np.linspace(-1, 0, num = 100)
ax.plot(x1, f(x1))          # positive branch
ax.plot(x2, -f(x1[::-1]))   # negative branch, mirrored using the fact that f is odd
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.show()
That Google plot makes no sense to me. For x > 0 it's ok, but for negative values of x the correct result is complex, and the Google plot appears to be showing the negative of the absolute value, which is strange.
Below you can see the output from Matlab, which is less fussy about types than Julia. As you can see it does not agree with your plot.
From the plot you can see that positive x values give a real-valued answer, while negative x give a complex-valued answer. The reason Julia errors for negative inputs is that the language is very concerned with type stability. Having the output type of a function depend on the input value would cause a type instability, which harms performance. This is less of a concern for Matlab or Python, etc.
If you want a plot similar the above in Julia, you can define your function like this:
f = x -> sign(x) * abs(complex(x)^(1/3))
Edit: Actually, a better and faster version is
f = x -> sign(x) * abs(x)^(1/3)
Yeah, it looks awkward, but that's because you want a really strange plot, which imho makes no sense for the function x^(1/3).
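As a quick sanity check (my addition, assuming Plots is loaded), the sign/abs definition agrees with the built-in cbrt for negative inputs:

f = x -> sign(x) * abs(x)^(1/3)
f(-1.0)          # -1.0
cbrt(-1.0)       # -1.0
plot(f, -1, 1)   # same curve as plot(cbrt, -1, 1)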
