SingularException(2) when invoking multivariate Newton's method in Julia

I have implemented the multivariate Newton's method in Julia.
using LinearAlgebra  # provides norm

function newton(f::Function, J::Function, x)
    h = Inf64
    tolerance = 1e-10  # note: 10^(-10) with an Int base raises a DomainError in Julia
    while norm(h) > tolerance
        h = J(x) \ f(x)  # Newton step: solve J(x) * h = f(x)
        x = x - h
    end
    return x
end
On the one hand, when I try to solve the below system of equations, LoadError: SingularException(2) is thrown.
f(θ) = [cos(θ[1]) + cos(θ[2]) - 1.3,
        sin(θ[1]) + sin(θ[2]) - 1.3]
J(θ) = [-sin(θ[1]) -sin(θ[2]);
         cos(θ[1])  cos(θ[2])]
θ = [pi/3, pi/7]
newton(f, J, θ)
On the other hand, when I try to solve this other system
f(x) = [(93-x[1])^2 + (63-x[2])^2 - 55.1^2,
        (6-x[1])^2 + (16-x[2])^2 - 46.2^2]
J(x) = [-2*(93-x[1]) -2*(63-x[2]);
        -2*(6-x[1])  -2*(16-x[2])]
x = [35, 50]
newton(f, J, x)
no errors are thrown and the correct solution is returned.
Furthermore, if I first solve the second system and then try to solve the first system, which normally throws SingularException(2), I now encounter no errors but get back a totally inaccurate solution to the first system.
What exactly is going wrong when I try to solve the first system and how can I resolve the error?
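One way to probe what is going wrong (a diagnostic sketch under the definitions above, not a fix): for this particular system, det J(θ) = sin(θ[2])*cos(θ[1]) - sin(θ[1])*cos(θ[2]) = sin(θ[2] - θ[1]), so the Jacobian is exactly singular whenever θ[1] = θ[2] (mod π), and the system is symmetric under swapping θ[1] and θ[2]. A capped, logging variant of the newton function (newton_debug is a hypothetical name introduced here for illustration) can show whether the iterates drift toward such a point:
using LinearAlgebra

# Illustrative variant of `newton`: caps the iteration count and logs det(J(x))
# at each step so a degenerating Jacobian becomes visible before it throws.
function newton_debug(f, J, x; tol=1e-10, maxiter=50)
    h = fill(Inf64, length(x))
    for iter in 1:maxiter
        norm(h) > tol || break
        println("iter $iter: x = $x, det(J(x)) = $(det(J(x)))")
        h = J(x) \ f(x)
        x = x - h
    end
    return x
end

newton_debug(f, J, [pi/3, pi/7])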

Related

Stochastic differential equation sensitivity analysis with specified noise

I am trying to calculate the gradient of a functional of a stochastic differential equation (SDE) solution given a specific realization of the noise. I can successfully calculate these gradients if I leave the noise unspecified, as shown in DiffEqFlux.jl: Using Other Differential Equations. I can also successfully obtain the solution to my SDE for a specific noise realization, as shown in DifferentialEquations.jl: NoiseWrapper Example. When I try to put the two together, though, the code returns an error.
Here is a minimal working example adapted from the two separate examples referenced above:
using StochasticDiffEq, DiffEqBase, DiffEqNoiseProcess, DiffEqSensitivity, Zygote

function lotka_volterra(du, u, p, t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α*x - β*x*y
    du[2] = dy = -δ*y + γ*x*y
end

function lotka_volterra_noise(du, u, p, t)
    du[1] = 0.1u[1]
    du[2] = 0.1u[2]
end

dt = 1//2^(4)
u0 = [1.0, 1.0]
p = [2.2, 1.0, 2.0, 0.4]
prob1 = SDEProblem(lotka_volterra, lotka_volterra_noise, u0, (0.0, 10.0), p)
sol1 = solve(prob1, EM(), dt=dt, save_noise=true)

W2 = NoiseWrapper(sol1.W)
prob2 = SDEProblem(lotka_volterra, lotka_volterra_noise, u0, (0.0, 10.0), p, noise=W2)
sol2 = solve(prob2, EM(), dt=dt)

function predict_sde1(p)
    Array(concrete_solve(remake(prob1, p=p), EM(), dt=dt,
                         sensealg=ForwardDiffSensitivity(), saveat=0.1))
end
loss_sde1(p) = sum(abs2, x - 1 for x in predict_sde1(p))
loss_sde1(p)

# This gradient is successfully calculated
Zygote.gradient(loss_sde1, p)

function predict_sde2(p)
    W2 = NoiseWrapper(sol1.W)
    Array(concrete_solve(remake(prob2, p=p, noise=W2), EM(), dt=dt,
                         sensealg=ForwardDiffSensitivity(), saveat=0.1))
end
loss_sde2(p) = sum(abs2, x - 1 for x in predict_sde2(p))

# This loss is successfully calculated
loss_sde2(p)

# This gradient calculation raises an error
Zygote.gradient(loss_sde2, p)
The error I get at the end of running this code is
TypeError: in setfield!, expected Float64, got ForwardDiff.Dual{Nothing,Float64,4}
Stacktrace:
[1] setproperty! at ./Base.jl:21 [inlined]
...
followed by an interminable conclusion to the stacktrace (I can post it if you think it would be helpful, but since it's longer than the rest of this question I'd rather not clutter things up off the bat).
Is calculating gradients for SDE problems with specified noise realizations not currently supported, or am I just not making the appropriate function calls? I could easily believe the latter, since it was a bit of a struggle just to get to the point where the working parts of the above code worked, but I couldn't find any clue as to what I had incorrectly supplied after stepping through this code with the Juno debugger.
As a StackOverflow solution, you can use ForwardDiffSensitivity(convert_tspan=false) to work around this. Working code:
using StochasticDiffEq, DiffEqBase, DiffEqNoiseProcess, DiffEqSensitivity, Zygote

function lotka_volterra(du, u, p, t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α*x - β*x*y
    du[2] = dy = -δ*y + γ*x*y
end

function lotka_volterra_noise(du, u, p, t)
    du[1] = 0.1u[1]
    du[2] = 0.1u[2]
end

dt = 1//2^(4)
u0 = [1.0, 1.0]
p = [2.2, 1.0, 2.0, 0.4]
prob1 = SDEProblem(lotka_volterra, lotka_volterra_noise, u0, (0.0, 10.0), p)
sol1 = solve(prob1, EM(), dt=dt, save_noise=true)

W2 = NoiseWrapper(sol1.W)
prob2 = SDEProblem(lotka_volterra, lotka_volterra_noise, u0, (0.0, 10.0), p, noise=W2)
sol2 = solve(prob2, EM(), dt=dt)

function predict_sde1(p)
    Array(concrete_solve(remake(prob1, p=p), EM(), dt=dt,
                         sensealg=ForwardDiffSensitivity(convert_tspan=false), saveat=0.1))
end
loss_sde1(p) = sum(abs2, x - 1 for x in predict_sde1(p))
loss_sde1(p)

# This gradient is successfully calculated
Zygote.gradient(loss_sde1, p)

function predict_sde2(p)
    Array(concrete_solve(prob2, EM(), prob2.u0, p, dt=dt,
                         sensealg=ForwardDiffSensitivity(convert_tspan=false), saveat=0.1))
end
loss_sde2(p) = sum(abs2, x - 1 for x in predict_sde2(p))

# This loss is successfully calculated
loss_sde2(p)

# With convert_tspan=false this gradient calculation now succeeds as well
Zygote.gradient(loss_sde2, p)
As a developer... this isn't a nice solution and our default should be better here. I'll work on this. You can track the development here https://github.com/JuliaDiffEq/DiffEqSensitivity.jl/issues/204. It'll probably get solved in an hour or so.
Edit: The fix is released and your original code works.

Complex PDE (Ginzburg Landau) in Julia with Pseudo-Spectral method

I want to teach myself about solving PDEs with Julia, and I am currently trying to solve the complex Ginzburg-Landau equation (CGLE) with a pseudospectral method. However, I am struggling with it and am running out of ideas for what to try.
The CGLE reads:

    ∂A/∂t = A + (1 + iα) ∂²A/∂x² - (1 + iβ) |A|² A

With the Fourier transform F and its inverse F⁻¹, I can transform it into the spectral form for û = F[A]:

    ∂û/∂t = (1 - k²(1 + iα)) û - (1 + iβ) F[ |F⁻¹û|² F⁻¹û ]
This is, for example, also given in this old script I found (https://www.uni-muenster.de/Physik.TP/archive/fileadmin/lehre/NumMethoden/SoSe2009/Skript/script.pdf). From the same source I know that α = 1, β = 2 and initial conditions with small noise of order 0.01 around 0 should result in plane waves as solutions. That's what I want to test first.
Following the very nice tutorial from Chris Rackauckas (https://youtu.be/okGybBmihOE), I tried to use ApproxFun and DifferentialEquations in the following way to solve this problem:
EDIT: I corrected two mistakes from the original post, a missing dot and minus sign, but the code is still not giving the correct results
EDIT2: Figured out that I computed the wavenumber k completely wrong
using ApproxFun
using DifferentialEquations
using PyPlot  # needed for the plotting below

F = Fourier()
n = 512
L = 100
T = ApproxFun.plan_transform(F, n)
Ti = ApproxFun.plan_itransform(F, n)
x = collect(range(-L/2, stop=L/2, length=n))
k = points(F, n)
alpha = 1im
beta = 2im
u0 = 0.01*(rand(ComplexF64, n) .- 0.5)
Fu0 = T*u0

function cgle!(du, u, p, t)
    a, b, k, T, Ti = p
    invu = Ti*u
    du .= (1.0 .- k.^2*(1.0 .+ a)).*u .- T*( (1.0 .+ b) .* (abs.(invu)).^2 .* invu )
end

pars = alpha, beta, k, T, Ti
prob = ODEProblem(cgle!, Fu0, (0., 50.), pars)
u = solve(prob, Rodas5(autodiff=false))

# plotting on an equidistant time stepping
t = collect(range(0, stop=50, length=1000))
sol = zeros(eltype(u), (n, length(t)))
for it in eachindex(t)
    sol[:, it] = Ti*u(t[it])
end

IM = PyPlot.imshow(real.(sol))
cb = PyPlot.colorbar(IM, orientation="horizontal")
gcf()
(edited) I tried different solvers, as also recommended in the video; some apparently won't work for complex numbers, some do, but when I run this code it does not give the expected results. The solution remains very small in value and does not develop into the plane waves that should actually be the result. I also tested other initial conditions that should result in chaos, but those produce the same very small solutions as well. I alternatively used an explicit Laplace operator with ApproxFun, but the results are the same. My problem here is that I am neither really an expert in PDEs mathematically nor in their numerical treatment; so far I have mainly worked with ODEs.
EDIT2: This now seems to work, more or less. I am still wondering about some things, though:
How can I compute this on a specified domain like [-L/2, L/2]? I am seriously confused about how this works with ApproxFun. As far as I can see, the wavenumbers k should be (2pi/L)*[-N/2+1 ; N/2-1], but I am not sure how to set this up with ApproxFun (see the sketch after this list for how it looks with plain FFTW).
https://codeinthehole.com/tutorial/coherent.html shows the different dynamic regimes / phase portrait of the equation. While I can reproduce some of them, some don't seem to work, like the spatio-temporal intermittency.
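For the first question, here is a sketch of how the wavenumbers can be built with plain FFTW. This assumes the fftfreq helper from AbstractFFTs (re-exported by FFTW); it reproduces the (2*pi/L)-scaled ordering used in the FFTW code below, up to the handling of the Nyquist mode (which that code zeroes in k and restores in k2):
using FFTW

n, L = 512, 100
# fftfreq(n, fs) returns [0, 1, ..., n/2-1, -n/2, ..., -1] .* (fs/n);
# with fs = n/L and a factor of 2π this gives angular wavenumbers in
# FFTW's native output ordering.
k = 2π .* fftfreq(n, n/L)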
EDIT 3: I solved these issues by using FFTW directly instead of ApproxFun. In case somebody knows how to do this with ApproxFun, I would still be interested, though. Below is the code with FFTW (it is also a bit more optimized for performance):
begin
    using FFTW
    using DifferentialEquations
    using PyPlot
end

begin
    n = 512
    L = 200
    n2 = Int(n/2)
    alpha = 2im
    beta = 1im
    x = range(-L/2, stop=L/2, length=n)
    u0 = 0.01*(rand(ComplexF64, n) .- 0.5)
    k = [0:n/2-1; 0; -n/2+1:-1] .* (2*pi/L)
    k2 = k.*k
    k2[n2 + 1] = (n2*(2*pi/L))^2
    T = plan_fft(u0)
    Ti = plan_ifft(T*u0)
    LinOp = (1.0 .- k2.*(1.0 .+ alpha))
    Fu0 = T*u0
end

function cgle!(du, u, p, t)
    LinOp, b, T, Ti = p
    invu = Ti*u
    du .= LinOp.*u .- T*( (1.0 .+ b) .* (abs.(invu)).^2 .* invu )
end

pars = LinOp, beta, T, Ti
prob = ODEProblem(cgle!, Fu0, (0., 100.), pars)
@time u = solve(prob)

t = collect(range(0, stop=50, length=1000))
sol = zeros(eltype(u), (n, length(t)))
for it in eachindex(t)
    sol[:, it] = Ti*u(t[it])
end

IM = PyPlot.imshow(abs.(sol))
cb = PyPlot.colorbar(IM, orientation="horizontal")
gcf()
EDIT 4: Rodas turned out to be an extremely slow solver for this case; just using the default works out nicely for me.
Any help is appreciated.
du = (1. .- k.^2*(1. .+(im*a))).*u + T*( (1. .+(im*b)) .* abs.(invu).^2 .* invu)
Notice that = here is rebinding the name du to a newly allocated array, not updating the array's contents. Use .= instead:
du .= (1. .- k.^2*(1. .+(im*a))).*u + T*( (1. .+(im*b)) .* abs.(invu).^2 .* invu)
Otherwise your derivative is just 0.
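A minimal illustration of the difference, independent of DifferentialEquations:
a = [1.0, 2.0]
b = a          # b and a now refer to the same array
b = b .+ 1     # = rebinds b to a freshly allocated array; a is untouched
println(a)     # [1.0, 2.0]

c = [1.0, 2.0]
c .= c .+ 1    # .= writes into the existing array in place
println(c)     # [2.0, 3.0]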

How to use NLopt in Julia with equality_constraint

I'm struggling to amend the Julia-specific tutorial on NLopt to meet my needs and would be grateful if someone could explain what I'm doing wrong or failing to understand.
I wish to:
Minimise the value of some objective function myfunc(x); where
x must lie in the unit hypercube (just 2 dimensions in the example below); and
the sum of the elements of x must be one.
Below I make myfunc very simple - the square of the distance from x to [2.0, 0.0] so that the obvious correct solution to the problem is x = [1.0,0.0] for which myfunc(x) = 1.0. I have also added println statements so that I can see what the solver is doing.
using NLopt

testNLopt = function()
    origin = [2.0, 0.0]
    n = length(origin)

    # Returns square of the distance between x and "origin", and amends grad in-place
    myfunc = function(x::Vector{Float64}, grad::Vector{Float64})
        if length(grad) > 0
            grad = 2 .* (x .- origin)
        end
        xOut = sum((x .- origin).^2)
        println("myfunc: x = $x; myfunc(x) = $xOut; ∂myfunc/∂x = $grad")
        return(xOut)
    end

    # Constrain the sums of the x's to be 1...
    sumconstraint = function(x::Vector{Float64}, grad::Vector{Float64})
        if length(grad) > 0
            grad = ones(length(x))
        end
        xOut = sum(x) - 1
        println("sumconstraint: x = $x; constraint = $xOut; ∂constraint/∂x = $grad")
        return(xOut)
    end

    opt = Opt(:LD_SLSQP, n)
    lower_bounds!(opt, zeros(n))
    upper_bounds!(opt, ones(n))
    equality_constraint!(opt, sumconstraint, 0)
    #xtol_rel!(opt, 1e-4)
    xtol_abs!(opt, 1e-8)
    min_objective!(opt, myfunc)
    maxeval!(opt, 20)  # to ensure the code always terminates; remove this line when the code works correctly?
    optimize(opt, ones(n)./n)
end
I have read this similar question and the documentation here and here, but still can't figure out what's wrong. Worryingly, each time I run testNLopt I see different behaviour, as in this screenshot, including occasions when the solver uselessly evaluates myfunc([NaN,NaN]) many times.
You aren't actually writing to the grad parameter in-place, despite what your comments say;
grad = 2 .* (x .- origin)
just rebinds the local variable and does not touch the array contents -- and I guess that's why you see these df/dx = [NaN, NaN] everywhere. The simplest way to fix that would be with broadcasting assignment (note the dot):
grad .= 2 .* (x .- origin)
and so on. You can read about that behaviour here and here.
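For concreteness, here is a sketch of the two callbacks from the question with the in-place assignments applied (same names and logic as above, defined inside testNLopt so that origin is in scope; only the = has changed to .=):
myfunc = function(x::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        grad .= 2 .* (x .- origin)  # writes into the array NLopt passed in
    end
    return sum((x .- origin).^2)
end

sumconstraint = function(x::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        grad .= ones(length(x))     # gradient of sum(x) - 1 is a vector of ones
    end
    return sum(x) - 1
end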

Julia program on ephemerides shows inadequate answers

While solving a differential equation on satellite motion, I encountered this error:
dt <= dtmin. Aborting. If you would like to force continuation with dt=dtmin, set force_dtmin=true
Here is my code:
using JPLEphemeris
using Dates

spk = SPK("some-path/de430.bsp")
jd = Dates.datetime2julian(DateTime(some-date))   # start date of the calculations, yyyy/mm/dd hh/mm/ss
jd2 = Dates.datetime2julian(DateTime(some-date))  # end date of the calculations, yyyy/mm/dd hh/mm/ss
println(jd)
println(jd2)

st_bar_sun = state(spk, 0, 10, jd)
st_bar_moon_earth = state(spk, 0, 3, jd)
st_bar_me_earth = state(spk, 3, 399, jd)
st_bar_me_moon = state(spk, 3, 301, jd)

moon_cord = st_bar_me_moon - st_bar_me_earth
a = st_bar_moon_earth + st_bar_me_earth
sun_cord = st_bar_sun - a

sputnik_cord = [8.0, 8.0, 8.0, 8.0, 8.0, 8.0]  # placeholder state: x, y, z, vx, vy, vz
println(sputnik_cord)
moon_sputnik = sputnik_cord - moon_cord
sun_sputnik = sputnik_cord - sun_cord

Req = 6378137
J2 = 1.08262668E-3
GMe = 398600.4418E+9
GMm = 4.903E+12
GMs = 1.32712440018E+20

function f(dy, y, p, t)
    re2 = (y[1]^2 + y[2]^2 + y[3]^2)
    re3 = re2^(3/2)
    rs3 = ((y[1]-sun_cord[1])^2 + (y[2]-sun_cord[2])^2 + (y[3]-sun_cord[3])^2)^(3/2)
    rm3 = ((y[1]-moon_cord[1])^2 + (y[2]-moon_cord[2])^2 + (y[3]-moon_cord[3])^2)^(3/2)
    w = 1 + 1.5J2*(Req*Req/re2)*(1 - 5y[3]*y[3]/re2)
    w2 = 1 + 1.5J2*(Req*Req/re2)*(3 - 5y[3]*y[3]/re2)
    dy[1] = y[4]
    dy[2] = y[5]
    dy[3] = y[6]
    dy[4] = -GMe*y[1]*w/re3
    dy[5] = -GMe*y[2]*w/re3
    dy[6] = -GMe*y[3]*w2/re3
end

function f2(dy, y, p, t)
    re2 = (y[1]^2 + y[2]^2 + y[3]^2)
    re3 = re2^(3/2)
    rs3 = ((y[1]-sun_cord[1])^2 + (y[2]-sun_cord[2])^2 + (y[3]-sun_cord[3])^2)^(3/2)
    rm3 = ((y[1]-moon_cord[1])^2 + (y[2]-moon_cord[2])^2 + (y[3]-moon_cord[3])^2)^(3/2)
    w = 1 + 1.5J2*(Req*Req/re2)*(1 - 5y[3]*y[3]/re2)
    w2 = 1 + 1.5J2*(Req*Req/re2)*(3 - 5y[3]*y[3]/re2)
    dy[1] = y[4]
    dy[2] = y[5]
    dy[3] = y[6]
    dy[4] = -GMe*y[1]*w/re3 - GMm*y[1]/rm3 - GMs*y[1]/rs3
    dy[5] = -GMe*y[2]*w/re3 - GMm*y[2]/rm3 - GMs*y[2]/rs3
    dy[6] = -GMe*y[3]*w2/re3 - GMm*y[3]/rm3 - GMs*y[3]/rs3
end

y0 = sputnik_cord
jd = jd*86400   # convert Julian days to seconds
jd2 = jd2*86400

using DifferentialEquations
prob = ODEProblem(f, y0, (jd, jd2))
sol = solve(prob, DP5(), abstol=1e-13, reltol=1e-13)
prob2 = ODEProblem(f2, y0, (jd, jd2))
sol2 = solve(prob2, DP5(), abstol=1e-13, reltol=1e-13)

println("Without SUN and MOON")
println(sol[end])
for i = 1:6
    println(sputnik_cord[i] - sol[end][i])
end
println("With SUN and MOON")
println(sol2[end])
What (other than the values themselves) can be the reason for this? It worked well before I added the terms with sun_cord and moon_cord to the definitions of dy[4], dy[5], dy[6] in function f2 (so I assume the function f works correctly).
There are two reasons this could be happening. For one, you could see this error because the model is unstable due to implementation issues. If you accidentally put something in wrong, the solution may be diverging to infinity, and as it diverges the time steps shorten until the solver exits with this error.
Another thing that can happen is that your model might be stiff. This can happen if you have large time scale differences between different components. In that case, DP5(), an explicit Runge-Kutta method, is not an appropriate algorithm for this problem. Instead, you will want to look at something for stiff equations. I would recommend giving Rosenbrock23() a try: it's not the fastest but it's super stable and if the problem is integrable it'll handle it.
That's a very good way to diagnose these issues: try other integrators. Try Rosenbrock23(), CVODE_BDF(), radau(), dopri5(), Vern9(), etc. If none of these are working, then you will have just tested your algorithm with a mixture of the most well-tested algorithms (some of them Julia implementations, but others are just wrappers to standard classic C++ and Fortran methods) and this suggests that the issue is in your model formulation and not a peculiarity of a specific solver on this problem. Since I cannot run your example (you should make your example runnable, i.e. no extra files required, if you want me to test things out), I cannot be sure that your model implementation is correct and this would be a good way to find out.
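For example, swapping the integrator is a one-line change (a sketch: Rosenbrock23 ships with DifferentialEquations, while CVODE_BDF additionally requires the Sundials package):
sol2 = solve(prob2, Rosenbrock23())  # stiff-capable Rosenbrock method
# using Sundials
# sol2 = solve(prob2, CVODE_BDF())   # classic BDF method, also for stiff problems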
My guess is that the model you have written down is not a good implementation because of floating point issues.
GMe = 398600.4418E+9
GMm = 4.903E+12
GMs = 1.32712440018E+20
these values carry only about 16 significant digits, so at a magnitude of 1e20 the spacing between adjacent representable numbers is on the order of 1e4:
> eps(1.32712440018E+20)
16384.0
> 1.32712440018E+20 + 16383
1.3271244001800002e20
> 1.32712440018E+20 + 16380
1.3271244001800002e20
> 1.32712440018E+20 + 16000
1.3271244001800002e20
Notice the lack of precision below the machine epsilon for this value. Well, you're asking for
sol = solve(prob,DP5(),abstol=1e-13,reltol=1e-13)
precision to 1e-13 when it's difficult to be precise even to 1e5 given the size of your constants. You need to adjust your units or utilize BigFloat numbers if you want this kind of precision on this problem. So what's likely going on is that the differential equation solvers are realizing that they are not hitting 1e-13 precision, shrinking the stepsize, and repeating this indefinitely (because they can never hit 1e-13 precision due to the size of the floating point numbers) until the stepsize is too small and it exits. If you change the units so that the constants are more reasonable in size, then you can fix this problem.
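As a hedged illustration of the units point (not the poster's actual model): expressing lengths in kilometers instead of meters shrinks GMs by a factor of 10^9, and the absolute spacing between adjacent Float64 values shrinks with it:
GMs_m  = 1.32712440018e20   # Sun's GM in m^3/s^2
GMs_km = GMs_m / 1e9        # the same constant in km^3/s^2

eps(GMs_m)    # ≈ 16384.0, the absolute Float64 spacing at this magnitude
eps(GMs_km)   # ≈ 1.5e-5, nine orders of magnitude finer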

Scilab round-off error

I cannot solve a problem in Scilab because it gets stuck due to round-off errors. I get the message
!--error 9999
Error: Round-off error detected, the requested tolerance (or default) cannot be achieved. Try using bigger tolerances.
at line 2 of function scalpol called by :
at line 7 of function gram_schmidt_pol called by :
gram_schmidt_pol(a,-1/2,-1/2)
It's a Gram-Schmidt process in which the scalar product is the integral, between -1 and 1, of the product of two functions and a weight.
gram_schmidt_pol is the process specialized for polynomials, and scalpol is the scalar product for polynomials.
The a and b are parameters for the weight, which is (1+x)^a*(1-x)^b.
The input is a matrix representing a set of vectors. It works well with the matrix [[1;2;3],[4;5;6],[7;8;9]], but it fails with the above error message on the matrix eye(2,2); on top of this, I need it to work on eye(9,9)!
I have looked for a "tolerance setting" in the menus; there is one in General->Preferences->Xcos->Simulation, but I believe this is not what I want. I have tried loose settings (high tolerance) there and it hasn't changed anything.
So how can I solve this round-off problem?
Feel free to tell me if my message lacks clarity.
Thank you.
Edit: code of the functions:
// function that evaluates a polynomial (vector of coefficients) at x
function [y] = pol(p, x)
    y = 0
    for i=1:length(p)
        y = y + p(i)*x^(i-1)
    end
endfunction

// weight function evaluated at x, parametrized by a and b
// (poids = weight in French)
function [y] = poids(x, a, b)
    y = (1-x)^a*(1+x)^b
endfunction

// scalpol computes the scalar product between polynomials p1 and p2
// using integrate, the weight and the pol functions.
function [s] = scalpol(p1, p2, a, b)
    s = integrate('poids(x,a,b)*pol(p1,x)*pol(p2,x)', 'x', -1, 1)
endfunction

// norm associated to scalpol
function [y] = normscalpol(f, a, b)
    y = sqrt(scalpol(f, f, a, b))
endfunction

// finally the Gram-Schmidt process on a family of polynomials
// represented by a matrix
function [o] = gram_schmidt_pol(m, a, b)
    [n,p] = size(m)
    o(1:n) = m(1:n,1)/(normscalpol(m(1:n,1), a, b))
    for k = 2:p
        s = 0
        for i = 1:(k-1)
            s = s + (scalpol(o(1:n,i), m(1:n,k), a, b) / scalpol(o(1:n,i), o(1:n,i), a, b) .* o(1:n,i))
        end
        o(1:n,k) = m(1:n,k) - s
        o(1:n,k) = o(1:n,k) ./ normscalpol(o(1:n,k), a, b)
    end
endfunction
By default, Scilab's integrate routine tries to achieve absolute error at most 1e-8 and relative error at most 1e-14. This is reasonable, but its treatment of relative error does not take into account the issues that occur when the exact value is zero. (See How to calculate relative error when true value is zero?). For this reason, even the simple
integrate('x', 'x', -1, 1)
throws an error (in Scilab 5.5.1).
And this is what happens in the process of running your program: some integrals are zero. There are two solutions:
(A) Give up on the relative error bound, by specifying it as 1:
integrate('...', 'x', -1, 1, 1e-8, 1)
(B) Add some constant to the function being integrated, then subtract from the result:
integrate('100 + ... ', 'x', -1, 1) - 200
(The latter should work in most cases, though if the integral happens to be exactly -200, you'll have the same problem again)
The above works for gram_schmidt_pol(eye(2,2), -1/2, -1/2) but for larger, say, gram_schmidt_pol(eye(9,9), -1/2, -1/2), it throws the error "The integral is probably divergent, or slowly convergent".
It appears that the adaptive integration routine can't handle the functions of the kind you have. A fallback is to use the simple inttrap instead, which just applies the trapezoidal rule. Since at x=-1 and 1 the function poids is undefined, the endpoints have to be excluded.
function [s] = scalpol(p1, p2, a, b)
    t = -0.9995:0.001:0.9995
    y = poids(t,a,b).*pol(p1,t).*pol(p2,t)
    s = inttrap(t,y)
endfunction
In order for this to work, other related functions must be vectorized (* and ^ changed to .* and .^ where necessary):
function [y] = pol(p, x)
    y = 0
    for i=1:length(p)
        y = y + p(i)*x.^(i-1)
    end
endfunction

function [y] = poids(x, a, b)
    y = (1-x).^a.*(1+x).^b
endfunction
The result is guaranteed to work, though the precision may be a bit lower: you are going to get some numbers like 3D-16 (Scilab notation for 3e-16) which are actually zeros.
