Julia: using the CurveFit package for nonlinear fitting

Why does the code below not work?
xa = [0 0.200000000000000 0.400000000000000 1.00000000000000 1.60000000000000 1.80000000000000 2.00000000000000 2.60000000000000 2.80000000000000 3.00000000000000 3.80000000000000 4.80000000000000 5.00000000000000 5.20000000000000 6.00000000000000 6.20000000000000 7.40000000000000 7.60000000000000 7.80000000000000 8.60000000000000 8.80000000000000 9.00000000000000 9.20000000000000 9.40000000000000 10.0000000000000 10.6000000000000 10.8000000000000 11.2000000000000 11.6000000000000 11.8000000000000 12.2000000000000 12.4000000000000];
ya = [-0.183440428023042 -0.131101157495126 0.0268875670852843 0.300000000120000 0.579048247883555 0.852605831272159 0.935180993484717 1.13328608090532 1.26893326843583 1.10202945535186 1.09201137189664 1.14279083803453 0.811302535321072 0.909735376251797 0.417067545528244 0.460107770989798 -0.516307074859654 -0.333994077331822 -0.504124744955962 -0.945794320817293 -0.915934553082780 -0.975458595671737 -1.09943707404275 -1.11254211607374 -1.29739980589100 -1.23440439602665 -0.953807504156356 -1.12240274852172 -0.609284630192522 -0.592560286759450 -0.402521296049042 -0.510090363150962];
x0 = vec(xa)
y0 = vec(ya)
fun(x,a) = a[1].*sin(a[2].*x - a[3])
a0 = [1,2,3]
eps = 0.000001
maxiter=200
coefs, converged, iter = CurveFit.nonlinear_fit(x0 , fun , a0 , eps, maxiter )
y0b = fit(x0)
Winston.plot(x0, y0, "ob", x0, y0b, "r-", linewidth=3)
Error: LoadError: MethodError: convert has no method matching convert(::Type{Float64}, ::Array{Float64,1})
This may have arisen from a call to the constructor Float64(...), since type constructors fall back to convert methods.
Closest candidates are:
  call{T}(::Type{T}, ::Any)
  convert(::Type{Float64}, !Matched::Int8)
  convert(::Type{Float64}, !Matched::Int16)
while loading In[269], in expression starting on line 8
 in nonlinear_fit at /home/jmarcellopereira/.julia/v0.4/CurveFit/src/nonlinfit.jl:75

The fun function has to return a residual value r of type Float64, evaluated for each data point at each iteration, as follows:
r = y - fun(x, coefs)
so your function y = a1*sin(x*a2 - a3) will be defined as:
fun(x,a) = x[2] - a[1]*sin(a[2]*x[1] - a[3])
Where:
x[2] is a value from the 'y' vector
x[1] is a value from the 'x' vector
a[...] is the set of parameters
The fun function has to return a single Float64, so the operators can't be the 'dot' broadcasting versions (.*).
When calling the nonlinear_fit function, the first argument must be an N×2 array whose first column holds the N values of x and whose second column holds the N values of y, so you must concatenate the two vectors x and y into a two-column array:
xy = [x y]
and finally, call the function:
coefs, converged, iter = CurveFit.nonlinear_fit(xy , fun , a0 , eps, maxiter )
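Putting the pieces together, a minimal corrected version of the original script might look like this (a sketch following the rules above, with x0 and y0 as defined in the question):
using CurveFit
xy = [x0 y0]                                    # N×2 array: column 1 = x, column 2 = y
fun(x, a) = x[2] - a[1]*sin(a[2]*x[1] - a[3])   # residual for one row, a single Float64
a0 = [1.0, 2.0, 3.0]                            # initial guess
coefs, converged, iter = CurveFit.nonlinear_fit(xy, fun, a0, 1e-6, 200)
y0b = [coefs[1]*sin(coefs[2]*x - coefs[3]) for x in x0]   # fitted values for plotting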
In answer to your comment that the returned coefficients are not correct:
y = a1*sin(x*a2 - a3) is a harmonic function, so the coefficients returned by the call depend heavily on the parameter a0 (the "initial guess for each fitting parameter") you pass as the third argument (here with maxiter = 200_000):
a0=[1.5, 1.5, 1.0]
coefficients: [0.2616335317043578, 1.1471991302529982,0.7048665905560775]
a0=[100.,100.,100.]
coefficients: [-0.4077952060368059, 90.52328921205392, 96.75331155303707]
a0=[1.2, 0.5, 0.5]
coefficients: [1.192007321713507, 0.49426296880933257, 0.19863645732313934]
I think the results you're getting are harmonics, as the graph shows:
Where:
blue line: f1(xx) = 0.2616335317043578*sin(xx*1.1471991302529982 - 0.7048665905560775)
yellow line: f2(xx) = 1.192007321713507*sin(xx*0.49426296880933257 - 0.19863645732313934)
pink line: f3(xx) = -0.4077952060368059*sin(xx*90.52328921205392 - 96.75331155303707)
blue dots: your initial data.
The graph was generated with Gadfly:
plot(layer(x=x0, y=y0, Geom.point), layer([f1, f2, f3], 0.0, 15.0, Geom.line))
tested with Julia Version 0.4.3

From the docs:
we are trying to fit the relationship fun(x, a) = 0
So, if you want to find the elements of a such that for each (xi, yi) in [x0 y0], a[1]*sin(a[2]*xi - a[3]) == yi, the right way is:
fun(xy,a) = a[1].*sin(a[2].*xy[1] - a[3])-xy[2];
xy=hcat(x0,y0);
coefs,converged,iter = CurveFit.nonlinear_fit(xy,fun,a0,eps,maxiter);

I found the LsqFit package a bit simpler to use: just define the model first and "fit" it with your data:
using DataFrames, Plots, LsqFit
xa = [0 0.200000000000000 0.400000000000000 1.00000000000000 1.60000000000000 1.80000000000000 2.00000000000000 2.60000000000000 2.80000000000000 3.00000000000000 3.80000000000000 4.80000000000000 5.00000000000000 5.20000000000000 6.00000000000000 6.20000000000000 7.40000000000000 7.60000000000000 7.80000000000000 8.60000000000000 8.80000000000000 9.00000000000000 9.20000000000000 9.40000000000000 10.0000000000000 10.6000000000000 10.8000000000000 11.2000000000000 11.6000000000000 11.8000000000000 12.2000000000000 12.4000000000000];
ya = [-0.183440428023042 -0.131101157495126 0.0268875670852843 0.300000000120000 0.579048247883555 0.852605831272159 0.935180993484717 1.13328608090532 1.26893326843583 1.10202945535186 1.09201137189664 1.14279083803453 0.811302535321072 0.909735376251797 0.417067545528244 0.460107770989798 -0.516307074859654 -0.333994077331822 -0.504124744955962 -0.945794320817293 -0.915934553082780 -0.975458595671737 -1.09943707404275 -1.11254211607374 -1.29739980589100 -1.23440439602665 -0.953807504156356 -1.12240274852172 -0.609284630192522 -0.592560286759450 -0.402521296049042 -0.510090363150962];
x0 = vec(xa)
y0 = vec(ya)
xbase = collect(linspace(minimum(x0), maximum(x0), 100))  # linspace predates Julia 1.0; use range(..., length=100) there
p0 = [1.2, 0.5, 0.5] # initial value of parameters
fun(x0, p) = p[1] .* sin.(p[2] .* x0 .- p[3]) # definition of the model
fit = curve_fit(fun,x0,y0,p0) # actual fitting job
yFit = [fit.param[1] * sin(fit.param[2] * x - fit.param[3]) for x in xbase] # building the fitted values
# Plotting..
scatter(x0, y0, label="obs")
plot!(xbase, yFit, label="fitted")
Note that using LsqFit does not solve the problem of dependence on initial conditions highlighted by Gomiero.
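One pragmatic way to reduce that sensitivity is to try several starting points and keep the fit with the smallest residual norm. A minimal sketch (the candidate guesses are made up for illustration; LsqFit stores the residuals in the resid field of the fit result):
candidates = [[1.2, 0.5, 0.5], [1.0, 0.4, 0.0], [2.0, 1.0, 1.0]]  # hypothetical starting points
fits = [curve_fit(fun, x0, y0, p0) for p0 in candidates]
best = fits[argmin([sum(abs2, f.resid) for f in fits])]           # smallest sum of squared residuals
println(best.param)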

Related

Puzzling result from boundary condition code in Julia BVP solver

I am trying to solve a boundary value problem in Julia, following the example found here, using the BoundaryValueDiffEq package. In the boundary condition function, the example requires a for loop to update each index individually, à la
function bc1!(residual, u, p, t)
    for i in 1:n
        residual[i] = u[end][i] - 10
    end
end
I would like to use the following code, which should be more efficient:
function bc1!(residual, u, p, t)
    residual = u[end] .- 10
end
Though the resulting value of residual is the same for both versions of the code, the solver gives the correct result in the first case and an incorrect result in the second case.
All I can think of is that there is some difference between updating residual
index by index and assigning a new vector to it, even if the result is identical in value and in type. Why is this the case, and is it possible to make the code more efficient while preserving the correct result?
Here is the full code in case it helps.
using BoundaryValueDiffEq, Plots
n = 3
f(t) = .1
F(t) = .1*t
function du!(du, u, p, t)
    fn(i) = 1/(u[i] - t)
    for i in 1:n
        du[i] = 1/(n-1)*F(u[i])/f(u[i])*((2-n)/(u[i]-t) + sum(map(fn, vcat(1:i-1, i+1:n))))
    end
end
function bc1!(residual, u, p, t)
    # residual = u[end] .- 10
    for i in 1:n
        residual[i] = u[end][i] - 10
    end
end
# exact solution
xvals = LinRange(0,20/3,200)
yvals = 1.5*xvals
# solving BVP
tspan = (0.0,20/3)
bvp1 = BVProblem(du!, bc1!, 10*ones(Int8,n), tspan)
sol1 = solve(bvp1, GeneralMIRK4(), dt=.2)
# plotting computed solution vs actual solution
plot(sol1,vars=(0,1))
plot!(xvals,yvals,label="Exact solution")
You rebound the name residual to a new array instead of mutating the array it pointed to. You need to use .= to update it in place.
function bc1!(residual, u, p, t)
    residual .= u[end] .- 10
end
or, safer:
function bc1!(residual, u, p, t)
    @. residual = u[end] - 10
end
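The difference is easy to see outside the solver. A toy sketch (not from the original post):
a = [1, 2, 3]
rebind!(v) = (v = [0, 0, 0]; nothing)   # rebinds the local name; the caller's array is untouched
mutate!(v) = (v .= 0; nothing)          # writes into the caller's array in place
rebind!(a); println(a)                  # [1, 2, 3]
mutate!(a); println(a)                  # [0, 0, 0]
The solver holds a reference to the array it passed in as residual, so only the in-place version is visible to it.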

Julia JuMP making sure nonlinear objective function has correct function signatures so that autodifferentiate works properly?

So I wrote a minimal example to show what I'm trying to do. Basically I want to solve an optimization problem with multiple variables. When I try to do this in JuMP, I have issues with my function obj not being able to take a ForwardDiff object.
I looked at "Restricting function signatures while using ForwardDiff in Julia", which seemed to point at the function signature. I did this in my obj function, and for insurance did it in my sub-function as well, but I still get the error
LoadError: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(my_fun)},Float64},Float64,2})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
Float64(::T) where T<:Number at boot.jl:715
Float64(::Int8) at float.jl:60
This still does not work. I feel like I have the bulk of the code correct; there's just some weird type thing going on that I have to clear up so autodifferentiation works...
Any suggestions?
using JuMP
using Ipopt
using LinearAlgebra
function obj(x::Array{<:Real,1})
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end
function mat_fun(var::T) where {T<:Real}
    eye = Matrix{Float64}(I, 2, 2)
    eye[2,2] = var
    return eye
end
m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0 <= x[1:2] <= 2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)
# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
Use instead
function obj(x::Vector{T}) where {T}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{T}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end
function mat_fun(var::T) where {T}
    eye = Matrix{T}(I, 2, 2)
    eye[2,2] = var
    return eye
end
Essentially, anywhere you see Float64, replace it by the type in the incoming argument.
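The underlying mechanics: JuMP's autodiff evaluates my_fun with a Vector of ForwardDiff.Dual numbers, and storing a Dual into a hard-coded Matrix{Float64} forces convert(Float64, ::Dual), which is exactly the MethodError above. A minimal reproduction outside JuMP (a sketch, assuming the ForwardDiff package is installed):
using ForwardDiff, LinearAlgebra
function g(x)
    eye = Matrix{Float64}(I, 2, 2)   # hard-coded element type
    eye[2,2] = x[1]                  # fails when x[1] is a Dual number
    return sum(eye)
end
ForwardDiff.gradient(g, [1.0])       # MethodError: no method matching Float64(::ForwardDiff.Dual...)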
I found the problem:
In my mat_fun, the type of the return had to be Real in order for it to propagate through. Before it was Float64, which was not consistent with the fact that, I guess, all types have to be Real for the autodifferentiation. Even though a Float64 is clearly a Real, it looks like the inheritance isn't preserved, i.e. you have to make sure everything that is returned and input is typed Real.
using JuMP
using Ipopt
using LinearAlgebra
function obj(x::AbstractVector{T}) where {T<:Real}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    # println(obj_val)
    return obj_val
end
function mat_fun(var::T) where {T<:Real}
    eye = zeros(Real, (2,2))
    eye[2,2] = var
    return eye
end
m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0 <= x[1:2] <= 2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)
# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
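As a quick sanity check (not part of the original post), the fixed functions can now be differentiated directly, since Dual values flow through the abstractly typed zeros(Real, (2,2)) and through the promoting subtraction in obj:
using ForwardDiff
ForwardDiff.gradient(obj, [0.5, 1.5])   # returns a 2-element gradient instead of erroring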

Error when calculating RHS of ode "no method matching Float64(::Num)"

I have some code that uses a function to calculate some changes in concentration, but I get an error of:
ERROR: LoadError: MethodError: no method matching Float64(::Num)
Closest candidates are:
(::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
(::Type{T})(::T) where T<:Number at boot.jl:760
(::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at char.jl:50
I have attached a MWE below.
The code initializes some parameters, uses them to calculate the additional parameters Ke and kb, then passes everything into my function oderhs(c,Ke,kb,aw,aw²,aw³,ρζ,ρζ²,ρζ³,γ,γ²), which should return dc, the solution vector I require.
using DifferentialEquations, ModelingToolkit   # @parameters and Num come from ModelingToolkit
@parameters t c0[1:4] Ke[1:2] kb[1:2] aw aw² aw³ ρ ζ ρζ ρζ² γ γ² T
# Calculate parameters
ρ = 0.592
ζ = 1.0
ρζ = ρ*ζ
ρζ² = ρζ*ρζ
ρζ³ = ρζ*ρζ²
aw = 0.995
aw² = aw*aw
aw³ = aw*aw²
γ = 1.08
γ² = γ*γ
T = 590.0
# calculate equilibrium constants
Ke[01] = (1.0E-06)*10.0^(-4.098 + (-3245.2/T) + (2.2362E+05/(T^2)) + (-3.9984E+07/(T^3)) + (log10(ρ) * (13.957 + (-1262.3/T) + (8.5641E+05/(T^2)))) )
Ke[02] = 10^(28.6059+0.012078*T+(1573.21/T)-13.2258*log10(T))
# calculate backward rate constants
kb[01] = Ke[01]*ρζ²/γ²
kb[02] = Ke[02]*γ/ρζ
# set initial concentrations
c0 = [0.09897, 0.01186, 2.94e-5, 4.17e-8]
function oderhs(c, Ke, kb, aw, aw², aw³, ρζ, ρζ², ρζ³, γ, γ²)
    # rename c entries to their corresponding species
    H₃BO₃ = c[1]; H₄BO₄⁻ = c[2]; OH⁻ = c[3]; H⁺ = c[4];
    # rename Ke entries to their corresponding reactions
    Ke_iw1 = Ke[1]; Ke_ba1 = Ke[2];
    # rename kb entries to their corresponding reactions
    kb_iw1 = kb[1]; kb_ba1 = kb[2];
    # determine the rate of each reaction
    r_iw1 = kb_iw1*(H⁺*OH⁻ - Ke_iw1*ρζ²*aw/γ²)
    r_ba1 = kb_ba1*(H₄BO₄⁻ - H₃BO₃*OH⁻*Ke_ba1*γ/ρζ)
    dc = zeros(eltype(c), 4)
    # calculate the change in species concentration
    dc[1] = r_ba1
    dc[2] = r_ba1
    dc[3] = r_iw1 + r_ba1
    dc[4] = r_iw1
    return dc
end
dc = oderhs(c0,Ke,kb,aw,aw²,aw³,ρζ,ρζ²,ρζ³,γ,γ²)
zeros(eltype(c), 4) creates an Array of Float64, which isn't what you want, because you're trying to create a symbolic version of the ODE equations (right? otherwise this doesn't make sense). Thus you want this to be like zeros(Num, 4), so that the return is the symbolic equations, and then you'd generate the actual code for DifferentialEquations.jl from the ModelingToolkit.jl ODESystem.
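A minimal sketch of two ways to apply that fix inside oderhs (option 2 is an alternative that sidesteps the element type question entirely):
# Option 1: allocate with the symbolic element type
dc = zeros(Num, 4)
# Option 2: skip the mutation and build the vector directly from the expressions
dc = [r_ba1, r_ba1, r_iw1 + r_ba1, r_iw1]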

Is it possible to convert an Array{Num,1} to Array{Float64,1} in Julia?

I have the following function that uses symbolics in Julia. Everything works fine until the moment of plotting
using Distributions
using Plots
using Symbolics
using SymbolicUtils
function BinomialMeasure(iter::Int64, p::Float64, current_level = nothing)
    @variables m0 m1
    if current_level == nothing
        current_level = [1]
    end
    next_level = []
    for item in current_level
        append!(next_level, m0*item)
        append!(next_level, m1*item)
    end
    if iter != 0
        current_level = next_level
        return BinomialMeasure(iter - 1, p, current_level)
    else
        return [substitute(i, Dict([m0 => p, m1 => 1 - p])) for i in next_level]
    end
end
y = BinomialMeasure(10, 0.4)
x = [( i + 1 ) / length(y) for i = 1:length(y) ]
append!(x, 0)
append!(y,0)
plot(x,y)
Then it returns the following:
MethodError: no method matching AbstractFloat(::Num)
Closest candidates are:
AbstractFloat(::Real, !Matched::RoundingMode) where T<:AbstractFloat at rounding.jl:200
AbstractFloat(::T) where T<:Number at boot.jl:716
AbstractFloat(!Matched::Bool) at float.jl:258
y is an Array{Num,1} and x is an Array{Float64,1}.
I tried map(float, y), convert(float, y) and float(y), but I think it's not possible to convert a Num to a Float64, or at least I don't know how to do it.
You can access the field val directly, without going through string and parse:
y_val = [i.val for i in y]
This will of course perform far better than parsing a string.
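Recent versions of Symbolics also provide Symbolics.value, which unwraps a Num; broadcasting it gives the same result (a sketch, assuming a Symbolics version that exports it):
y_val = Symbolics.value.(y)   # unwrap each Num to its underlying number
y_val = Float64.(y_val)       # optionally force a concrete Float64 vector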

Scilab round-off error

I cannot solve a problem in Scilab because it gets stuck on round-off errors. I get the message
!--error 9999
Error: Round-off error detected, the requested tolerance (or default) cannot be achieved. Try using bigger tolerances.
at line 2 of function scalpol called by :
at line 7 of function gram_schmidt_pol called by :
gram_schmidt_pol(a,-1/2,-1/2)
It's a Gram-Schmidt process where the scalar product is the integral, between -1 and 1, of the product of two functions and a weight.
gram_schmidt_pol is the process specialized for polynomials, and scalpol is the scalar product described for polynomials.
The a and b are parameters for the weight, which is (1+x)^a*(1-x)^b.
The input is a matrix representing a set of vectors. It works well with the matrix [[1;2;3],[4;5;6],[7;8;9]], but it fails with the above error message on eye(2,2), and on top of that I need it to work on eye(9,9)!
I have looked for a "tolerance setting" in the menus; there is one in General->Preferences->Xcos->Simulation, but I believe that is not what I want, and trying low settings (high tolerance) there hasn't changed anything.
So how can I solve this round-off problem?
Feel free to tell me if my message lacks clarity.
Thank you.
Edit: Code of the functions :
// function that evaluates a polynomial (vector of coefficients) at x
function [y] = pol(p, x)
    y = 0
    for i=1:length(p)
        y = y + p(i)*x^(i-1)
    end
endfunction
// weight function evaluated at x, parametrized by a and b
// (poids = weight in French)
function [y] = poids(x, a, b)
    y = (1-x)^a*(1+x)^b
endfunction
// scalpol computes the scalar product of polynomials p1 and p2
// using integrate, the weight and the pol functions.
function [s] = scalpol(p1, p2, a, b)
    s = integrate('poids(x,a,b)*pol(p1,x)*pol(p2,x)', 'x', -1, 1)
endfunction
// norm associated with scalpol
function [y] = normscalpol(f, a, b)
    y = sqrt(scalpol(f, f, a, b))
endfunction
// finally the Gram-Schmidt process on a family of polynomials
// represented by a matrix
function [o] = gram_schmidt_pol(m, a, b)
    [n,p] = size(m)
    o(1:n) = m(1:n,1)/(normscalpol(m(1:n,1), a, b))
    for k = 2:p
        s = 0
        for i = 1:(k-1)
            s = s + (scalpol(o(1:n,i), m(1:n,k), a, b) / scalpol(o(1:n,i), o(1:n,i), a, b) .* o(1:n,i))
        end
        o(1:n,k) = m(1:n,k) - s
        o(1:n,k) = o(1:n,k) ./ normscalpol(o(1:n,k), a, b)
    end
endfunction
By default, Scilab's integrate routine tries to achieve absolute error at most 1e-8 and relative error at most 1e-14. This is reasonable, but its treatment of relative error does not take into account the issues that occur when the exact value is zero. (See How to calculate relative error when true value is zero?). For this reason, even the simple
integrate('x', 'x', -1, 1)
throws an error (in Scilab 5.5.1).
And this is what happens in the process of running your program: some integrals are zero. There are two solutions:
(A) Give up on the relative error bound, by specifying it as 1:
integrate('...', 'x', -1, 1, 1e-8, 1)
(B) Add some constant to the function being integrated, then subtract from the result:
integrate('100 + ... ', 'x', -1, 1) - 200
(The latter should work in most cases, though if the integral happens to be exactly -200, you'll have the same problem again)
The above works for gram_schmidt_pol(eye(2,2), -1/2, -1/2) but for larger, say, gram_schmidt_pol(eye(9,9), -1/2, -1/2), it throws the error "The integral is probably divergent, or slowly convergent".
It appears that the adaptive integration routine can't handle the functions of the kind you have. A fallback is to use the simple inttrap instead, which just applies the trapezoidal rule. Since at x=-1 and 1 the function poids is undefined, the endpoints have to be excluded.
function [s] = scalpol(p1, p2, a, b)
    t = -0.9995:0.001:0.9995
    y = poids(t,a,b).*pol(p1,t).*pol(p2,t)
    s = inttrap(t,y)
endfunction
In order for this to work, other related functions must be vectorized (* and ^ changed to .* and .^ where necessary):
function [y] = pol(p, x)
    y = 0
    for i=1:length(p)
        y = y + p(i)*x.^(i-1)
    end
endfunction
function [y] = poids(x, a, b)
    y = (1-x).^a.*(1+x).^b
endfunction
The result is guaranteed to work, though the precision may be a bit lower: you are going to get some numbers like 3D-16 (Scilab notation for 3e-16) which are actually zeros.
