Plot of a function: DomainError: Exponentiation yielding a complex result requires a complex argument

Background
I read here that Newton's method fails on the function x^(1/3) when its initial guess is 1. I am trying to test this in a Julia Jupyter notebook.
I want to plot the function x^(1/3),
and then run this code:
using Roots, ForwardDiff
f = x -> x^(1/3)
D(f) = x -> ForwardDiff.derivative(f, float(x))
x = find_zero((f, D(f)), 1, Roots.Newton(), verbose=true)
Problem:
How do I plot the function x^(1/3) over a range, e.g. (-1, 1)?
I tried
using Plots
f = x -> x^(1/3)
plot(f, -1, 1)
I got the DomainError quoted in the title.
I changed the code to
f = x->(x+0im)^(1/3)
plot(f,-1,1)
I got output that still was not what I wanted.
I want my plot to look like the plot of x^(1/3) on Google, but I cannot get more than half of it.

That's because x^(1/3) does not always return a real result, nor the real cube root of x. For negative numbers, exponentiation with non-integer powers (like 1/3 or 1.254) would return a Complex. Because of Julia's type-stability requirements, this operation applied to a negative Real instead throws a DomainError. This behavior is also noted in the Frequently Asked Questions section of the Julia manual.
julia> (-1)^(1/3)
ERROR: DomainError with -1.0:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
julia> Complex(-1)^(1/3)
0.5 + 0.8660254037844386im
Note that returning a complex number for exponentiation of negative values is not really different from, say, MATLAB's behavior:
>> (-1)^(1/3)
ans =
0.5000 + 0.8660i
What you want, however, is to plot the real cube root.
You can go with
plot(x -> x < 0 ? -(-x)^(1//3) : x^(1//3), -1, 1)
to enforce the real cube root, or use the built-in cbrt function instead:
plot(cbrt, -1, 1)
It also has an alias ∛.
plot(∛, -1, 1)
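With the real cube root in hand, the Newton experiment from the question can also be run. The following is a minimal sketch, assuming the Plots, Roots and ForwardDiff packages are installed; Newton's method started at 1 diverges on the cube root (the iterates satisfy x_{n+1} = -2*x_n), so find_zero is expected to report a convergence failure rather than a root.
using Plots, Roots, ForwardDiff
plot(cbrt, -1, 1)  # real cube root over [-1, 1]
f = cbrt
D(f) = x -> ForwardDiff.derivative(f, float(x))
try
    find_zero((f, D(f)), 1, Roots.Newton(), verbose=true)
catch err
    @show err  # Newton diverges here, so a convergence failure is expected
end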

f(x) = x^(1/3) is an odd function, so you only need to evaluate it on [0, 1]; the plot on [-1, 0] follows by symmetry.
The code is below:
import numpy as np
import matplotlib.pyplot as plt
# Function f
f = lambda x: x**(1/3)
fig, ax = plt.subplots()
x1 = np.linspace(0, 1, num = 100)
x2 = np.linspace(-1, 0, num = 100)
ax.plot(x1, f(x1))
ax.plot(x2, -f(x1[::-1]))  # mirror the positive branch: f(-x) = -f(x)
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.show()
Plot

That Google plot makes no sense to me. For x > 0 it's ok, but for negative values of x the correct result is complex, and the Google plot appears to be showing the negative of the absolute value, which is strange.
Below you can see the output from Matlab, which is less fussy about types than Julia. As you can see, it does not agree with your plot.
From the plot you can see that positive x values give a real-valued answer, while negative x values give a complex-valued answer. The reason Julia errors for negative inputs is that the language is very concerned with type stability: having the output type of a function depend on the input value would cause a type instability, which harms performance. This is less of a concern for Matlab or Python, etc.
If you want a plot similar the above in Julia, you can define your function like this:
f = x -> sign(x) * abs(complex(x)^(1/3))
Edit: Actually, a better and faster version is
f = x -> sign(x) * abs(x)^(1/3)
Yeah, it looks awkward, but that's because you want a really strange plot, which imho makes no sense for the function x^(1/3).
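For completeness, the corresponding plot call would then be (a sketch, assuming the Plots package is loaded):
using Plots
f = x -> sign(x) * abs(x)^(1/3)  # as defined above
plot(f, -1, 1)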

Related

Understanding ForwardDiff issues

I get an apparently quite common Julia error when trying to use AD with ForwardDiff. The error messages vary a bit (sometimes mentioning the function name, sometimes Float64):
MethodError: no method matching logL_multinom(::Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(logL_multinom), Real}, Real, 7}})
My goal: transform a probability vector to be unbounded (θ -> y), do some stuff (namely HMC sampling), and transform back to the simplex space whenever the unnormalized posterior (logL_multinom()) is evaluated. AD should be used to overcome problems for later, more complex models than this.
Unfortunately, neither the Julia documentation nor the solutions from other questions helped me figure out this particular problem. Notably, it seems to work when I do the first transformation (y -> z) outside of the function, even though that transformation is a one-to-one mapping via the logistic function and should not cause any harm to differentiation.
Here is an MWE:
using LinearAlgebra
using ForwardDiff
using Base
function logL_multinom(y)
    # transform to constrained
    K = length(y) + 1
    k = collect(1:(K-1))
    # inverse logit:
    z = 1 ./ (1 .+ exp.(-y .- log.(K .- k))) # if this is outside, it works
    θ = zeros(eltype(y), K); x_cumsum = zeros(eltype(y), K-1)
    typeof(θ)
    for i in k
        x_cumsum[i] = 1 - sum(θ)
        θ[i] = x_cumsum[i] * z[i]
    end
    θ[K] = x_cumsum[K-1] - θ[K-1]
    #log_dens_correction = sum( log(z*(1-z)*x_cumsum) )
    dot(colSums, log.(θ))
end
colSums = [835, 52, 1634, 3469, 3053, 2507, 2279, 1115]
y0 = [-0.8904013824298864, -0.8196709647741431, -0.2676845405543302, 0.31688184351556026, -0.870860684394019,0.15187821053559714,0.39888119498547964]
logL_multinom(y0)
∇L = y -> ForwardDiff.gradient(logL_multinom,y)
∇L(y0)
Thanks a lot; further readings/explanations of the problem are especially appreciated, since I'll be working with this more often :D
Edit: I tried to convert the input and any intermediate variable into Real / arrays of these, but nothing helped so far.
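For context, ForwardDiff.gradient works by pushing Dual numbers through the function, so any intermediate buffer must be typed by the eltype of the input; the zeros(eltype(y), K) line above already follows that pattern. A minimal, self-contained sketch of that usage (the softmax_dot function here is only an illustrative example, not a fix for the MWE above):
using ForwardDiff, LinearAlgebra
function softmax_dot(w, x)
    z = zeros(eltype(x), length(x))  # buffer that can also hold ForwardDiff.Dual numbers
    z .= exp.(x)
    z ./= sum(z)
    return dot(w, z)
end
w = [1.0, 2.0, 3.0]
x0 = [0.1, 0.2, 0.3]
ForwardDiff.gradient(x -> softmax_dot(w, x), x0)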

How do I write a piecewise Differential Equation in Julia?

I am new to Julia and I would like to solve this system:
where k1 and k2 are constant parameters. However, I = 0 when y < 0 and I = K*y otherwise, where K is a constant value.
I followed the ODE tutorial. The question is: how do I solve this piecewise differential equation in DifferentialEquations.jl?
Answered on the OP's cross post on Julia Discourse; copied here for completeness.
Here is a (mildly) interesting example $x''+x'+x=\pm p_1$ where the sign of $p_1$ changes when a switching manifold is encountered at $x=p_2$. To make things more interesting, consider hysteresis in the switching manifold such that $p_2\mapsto -p_2$ whenever the switching manifold is crossed.
The code is relatively straightforward; the StaticArrays/SVector/MVector can be ignored, they are only for speed.
using OrdinaryDiffEq
using StaticArrays
f(x, p, t) = SVector(x[2], -x[2]-x[1]+p[1]) # x'' + x' + x = ±p₁
h(u, t, integrator) = u[1]-integrator.p[2] # switching surface x = ±p₂;
g(integrator) = (integrator.p .= -integrator.p) # impact map (p₁, p₂) = -(p₁, p₂)
prob = ODEProblem(f, # RHS
SVector(0.0, 1.0), # initial value
(0.0, 100.0), # time interval
MVector(1.0, 1.0)) # parameters
cb = ContinuousCallback(h, g)
sol = solve(prob, Vern6(), callback=cb, dtmax=0.1)
Then plot sol[2,:] against sol[1,:] to see the phase plane - a nice non-smooth limit cycle in this case.
Note that if you try to use interpolation of the resulting solution (i.e., sol(t)) you need to be very careful around the points that have a discontinuous derivative as the interpolant goes a little awry. That's why I've used dtmax=0.1 to get a smoother solution output in this case. (I'm probably not using the most appropriate integrator either but it's the one that I was using in a previous piece of code that I copied-and-pasted.)
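For reference, a minimal sketch of that phase-plane plot (assuming the Plots package is installed):
using Plots
plot(sol[1, :], sol[2, :], xlabel = "x", ylabel = "x'", legend = false)  # x' against x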

How to draw graph of Gauss function?

The Gauss function has an infinite number of jump discontinuities, at x = 1/n for positive integers n.
I want to draw a diagram of the Gauss function.
Using the Maxima CAS I can draw it with a simple command:
f(x):= 1/x - floor(1/x); plot2d(f(x),[x,0,1]);
but the result is not good (near x = 0 it should look like here).
Also Maxima claims:
plot2d: expression evaluates to non-numeric value somewhere in plotting range.
I can define a piecewise function (with jump discontinuities at x = 1/n, for positive integers n), so I tried:
define( g(x), for i:2 thru 20 step 1 do if (x=i) then x else (1/x) - floor(1/x));
but it doesn't work.
I could also use Chebyshev polynomials to approximate the function (as in A Graduate Introduction to Numerical Methods from the Viewpoint of Backward Error Analysis by Robert Corless and Nicolas Fillion).
How do I do this properly?
For plot2d you can set the adapt_depth and nticks parameters. The default values are 5 and 29, respectively. set_plot_option() (i.e. with no argument) returns the current list of option values. If you increase adapt_depth and/or nticks, then plot2d will use more points for plotting. Perhaps that makes the figure look good enough.
Another way is to use the draw2d function (in the draw package) and explicitly tell it to plot each segment. We know that there are discontinuities at 1/k, for k = 1, 2, 3, .... We have to decide how many segments to plot. Let's say 20.
(%i6) load (draw) $
(%i7) f(x):= 1/x - floor(1/x) $
(%i8) makelist (explicit (f, x, 1/(k + 1), 1/k), k, 1, 20);
(%o8) [explicit(f,x,1/2,1),explicit(f,x,1/3,1/2),
explicit(f,x,1/4,1/3),explicit(f,x,1/5,1/4),
explicit(f,x,1/6,1/5),explicit(f,x,1/7,1/6),
explicit(f,x,1/8,1/7),explicit(f,x,1/9,1/8),
explicit(f,x,1/10,1/9),explicit(f,x,1/11,1/10),
explicit(f,x,1/12,1/11),explicit(f,x,1/13,1/12),
explicit(f,x,1/14,1/13),explicit(f,x,1/15,1/14),
explicit(f,x,1/16,1/15),explicit(f,x,1/17,1/16),
explicit(f,x,1/18,1/17),explicit(f,x,1/19,1/18),
explicit(f,x,1/20,1/19),explicit(f,x,1/21,1/20)]
(%i9) apply (draw2d, %);
I have made a list of segments with their endpoints; the full code is here.
Edit: the output is smaller if shorter lists are used for nearly straight segments, e.g.
if (n>20) then iMax:10 else iMax:250
in the GivePart function.
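The same segment-by-segment idea can also be sketched in Julia with the Plots package (shown only for comparison with the Julia question at the top of this page; the 20-segment choice mirrors the Maxima code above):
using Plots
f(x) = 1/x - floor(1/x)
plt = plot(legend = false)
for k in 1:20
    # sample strictly inside (1/(k+1), 1/k) so the jump at each endpoint
    # is not bridged by a spurious vertical line
    xs = range(1/(k + 1), 1/k; length = 52)[2:end-1]
    plot!(plt, xs, f.(xs), color = :blue)
end
plt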

Differentiating a scalar with respect to matrix

I have a scalar function which is obtained by iterative calculations. I wish to differentiate it (find the directional derivative) with respect to a matrix, elementwise. How should I employ the finite-difference approximation in this case? Do diff or gradient help here? Note that I only want numerical derivatives.
The typical code that I would work on is:
n=4;
for i=1:n
for x(i)=-2:0.04:4;
for y(i)=-2:0.04:4;
A(:,:,i)=[sin(x(i)), cos(y(i));2sin(x(i)),sin(x(i)+y(i)).^2];
B(:,:,i)=[sin(x(i)), cos(x(i));3sin(y(i)),cos(x(i))];
R(:,:,i)=horzcat(A(:,:,i),B(:,:,i));
L(i)=det(B(:,:,i)'*A(:,:,i)B)(:,:,i));
%how to find gradient of L with respect to x(i), y(i)
grad_L=tr((diff(L)/diff(R)')*(gradient(R))
endfor;
endfor;
endfor;
I know that the last part, for grad_L, would give a syntax error saying the dimensions don't match. How do I proceed to solve this? Note that the gradient (directional derivative) of a scalar function f of a matrix variable X is given by $\nabla f = \operatorname{tr}\left(\frac{\partial f}{\partial x_{ij}}\,\dot{X}\right)$, where $x_{ij}$ denotes the elements of the matrix and $\dot{X}$ denotes the gradient of the matrix X.
Both your code and explanation are very confusing. You're using an iteration of n = 4, but you don't do anything with your inputs or outputs, and you overwrite everything. So I will ignore the n aspect for now since you don't seem to be making any use of it. Furthermore you have many syntactical mistakes which look more like maths or pseudocode, rather than any attempt to write valid Matlab / Octave.
But, essentially, you seem to be asking: "I have a function which, for each (x, y) coordinate on a 2D grid, calculates a scalar output L(x, y)", where the calculation leading to L involves multiplying two matrices and then taking the determinant. Here's how to produce such an array L:
X = -2 : 0.04 : 4;
Y = -2 : 0.04 : 4;
X_indices = 1 : length(X);
Y_indices = 1 : length(Y);
for Ind_x = X_indices
for Ind_y = Y_indices
x = X(Ind_x); y = Y(Ind_y);
A = [sin(x), cos(y); 2 * sin(x), sin(x+y)^2];
B = [sin(x), cos(x); 3 * sin(y), cos(x) ];
L(Ind_x, Ind_y) = det (B.' * A * B);
end
end
You then want to obtain the gradient of L, which, of course, is a vector-valued output. Ignoring the maths you mentioned for a second: if you're basically trying to use the gradient function correctly, you just apply it directly to L, pass the grid vectors X and Y so that it knows the spacings between the elements of L, and collect its output into two arrays, so that you capture both the x and y vector components of the gradient:
[gLx, gLy] = gradient(L, X, Y);

R plot implicit function outer command

I would like to plot an implicit function of x and y: 1 - 0.125 * y ^ 2 - x ^ 2 = 0.005
I know it can be plotted as a contour plot, but I have trouble with the "outer" command in the following:
x<-seq(0.4,1.01,length=1000)
y<-seq(0,3,length=1000)
z<-outer(x,y,FUN="1-0.125*y^2-x^2=0.005")
contour(x,y,z,levels=0,drawpoints=FALSE)
I read the FAQ (7.17) regarding the "outer" command and the need to vectorize the function but am still in a quandry.
I think you're a little confused about the meaning of 'function'.
All the operations (+,-,^) in your function are vectorized so that all works just fine.
x <- seq(0.4,1.01,length=1000)
y <- seq(0,3,length=1000)
z <- outer(x,y,function(x,y) 1-0.125*y^2-x^2-0.005)
contour(x,y,z,levels=0,drawlabels=FALSE)
Or if you want a minor shortcut:
library(emdbook)
curve3d(1-0.125*y^2-x^2-0.005,
xlim=c(0.4,1.01),
ylim=c(0,3),
n=c(100,100),
sys3d="contour",drawlabels=FALSE,levels=0)
This actually is slower because it uses a for loop internally rather than outer(), so I set it to 100x100 rather than 1000x1000 (which is overkill for this example anyway), but it will work on more complex examples that can't be vectorized easily ...
Plotting an implicit equation as a contour plot is overkill; you are essentially throwing away 99.99% of the computations you did.
A better way is to find, for each given x, the value of y that makes the equation 0. Here is the code using uniroot from base R.
R code using uniroot from base R
x = seq(0, 0.995, length = 100) # no real root above x = 0.995
root <- function(a) {
uniroot(function(x,y) 1 - 0.125 * y^2 - x^2 - 0.005, c(0, 3), x = a )$root #only care about the root
}
y <- sapply(x, root)
plot(x,y, type = "l")
OK, the c(0, 3) in the uniroot call is the range of y values in which the root lies, so for every given x value, uniroot will look for a root y between 0 and 3.
R code using fsolve from pracma package
library("pracma")
x <- seq(0,0.995, length=100)
fun <- function(y) c(1 - 0.125 * y^2 - x^2 - 0.005)
y_sol <- fsolve(fun, rep(1, length(x)))$x
plot(x,y_sol, type="l")
fsolve takes the function whose root is sought and guess values of y for every given value of x. Here we are saying that the y values lie near 1; we give it a vector of 1's as the guess.
uniroot wants a bounded range in which to look for the root; fsolve requires a guess of where the root might be.
These are faster ways to plot implicit equations. You can then use any graphics package, like ggplot2 or rbokeh, to do the plotting.
I haven't done any benchmarks, so I cannot tell which method is faster, though for such a simple implicit function it won't matter.
