Scilab round-off error

I cannot solve a problem in Scilab because it gets stuck on round-off errors. I get the message
!--error 9999
Error: Round-off error detected, the requested tolerance (or default) cannot be achieved. Try using bigger tolerances.
at line 2 of function scalpol called by :
at line 7 of function gram_schmidt_pol called by :
gram_schmidt_pol(a,-1/2,-1/2)
It's a Gram-Schmidt process whose scalar product is the integral, from -1 to 1, of the product of two functions and a weight.
gram_schmidt_pol is the process specialized for polynomials, and scalpol is the scalar product for polynomials.
The a and b are parameters for the weight, which is (1+x)^a*(1-x)^b.
The input is a matrix representing a set of vectors. It works well with the matrix [[1;2;3],[4;5;6],[7;8;9]], but it fails with the above error message on the matrix eye(2,2), and on top of that I need to run it on eye(9,9)!
I have looked for a "tolerance setting" in the menus; there is one under General->Preferences->Xcos->Simulation, but I believe that is not what I want. I have tried loose settings (high tolerances) there and it hasn't changed anything.
So how can I solve this round-off problem?
Feel free to tell me if my message lacks clarity.
Thank you.
Edit: Code of the functions :
// function that evaluates a polynomial (vector of coefficients) at x
function [y] = pol(p, x)
    y = 0
    for i = 1:length(p)
        y = y + p(i)*x^(i-1)
    end
endfunction

// weight function evaluated at x, parametrized by a and b
// (poids = weight in French)
function [y] = poids(x, a, b)
    y = (1-x)^a*(1+x)^b
endfunction

// scalpol computes the scalar product between polynomials p1 and p2
// using integrate, the weight, and the pol function.
function [s] = scalpol(p1, p2, a, b)
    s = integrate('poids(x,a,b)*pol(p1,x)*pol(p2,x)', 'x', -1, 1)
endfunction

// norm associated to scalpol
function [y] = normscalpol(f, a, b)
    y = sqrt(scalpol(f, f, a, b))
endfunction

// finally, the Gram-Schmidt process on a family of polynomials
// represented by a matrix
function [o] = gram_schmidt_pol(m, a, b)
    [n, p] = size(m)
    o(1:n) = m(1:n,1)/(normscalpol(m(1:n,1), a, b))
    for k = 2:p
        s = 0
        for i = 1:(k-1)
            s = s + (scalpol(o(1:n,i), m(1:n,k), a, b) / scalpol(o(1:n,i), o(1:n,i), a, b) .* o(1:n,i))
        end
        o(1:n,k) = m(1:n,k) - s
        o(1:n,k) = o(1:n,k) ./ normscalpol(o(1:n,k), a, b)
    end
endfunction

By default, Scilab's integrate routine tries to achieve absolute error at most 1e-8 and relative error at most 1e-14. This is reasonable, but its treatment of relative error does not take into account the issues that occur when the exact value is zero. (See How to calculate relative error when true value is zero?). For this reason, even the simple
integrate('x', 'x', -1, 1)
throws an error (in Scilab 5.5.1).
And this is what happens in the process of running your program: some integrals are zero. There are two solutions:
(A) Give up on the relative error bound, by specifying it as 1:
integrate('...', 'x', -1, 1, 1e-8, 1)
(B) Add some constant to the function being integrated, then subtract from the result:
integrate('100 + ... ', 'x', -1, 1) - 200
(The latter should work in most cases, though if the integral happens to be exactly -200, you'll have the same problem again.)
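For instance, option (A) applied to your scalpol would be (a minimal sketch; the two extra arguments are integrate's absolute and relative error bounds):
function [s] = scalpol(p1, p2, a, b)
    // keep the default absolute bound, give up on the relative one
    s = integrate('poids(x,a,b)*pol(p1,x)*pol(p2,x)', 'x', -1, 1, 1e-8, 1)
endfunction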
The above works for gram_schmidt_pol(eye(2,2), -1/2, -1/2), but for larger matrices, say gram_schmidt_pol(eye(9,9), -1/2, -1/2), it throws the error "The integral is probably divergent, or slowly convergent".
It appears that the adaptive integration routine can't handle functions of the kind you have. A fallback is to use the simple inttrap instead, which just applies the trapezoidal rule. Since the function poids is undefined at x = -1 and x = 1, the endpoints have to be excluded.
function [s] = scalpol(p1, p2, a, b)
    t = -0.9995:0.001:0.9995
    y = poids(t, a, b).*pol(p1, t).*pol(p2, t)
    s = inttrap(t, y)
endfunction
In order for this to work, other related functions must be vectorized (* and ^ changed to .* and .^ where necessary):
function [y] = pol(p, x)
    y = 0
    for i = 1:length(p)
        y = y + p(i)*x.^(i-1)
    end
endfunction

function [y] = poids(x, a, b)
    y = (1-x).^a.*(1+x).^b
endfunction
This is guaranteed to run, though the precision may be a bit lower: you are going to get some numbers like 3D-16 which are actually zeros.
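If those stray near-zero values bother you, one option (my suggestion, not part of the approach above) is Scilab's clean function, which rounds small entries to zero, by default those below 1e-10:
clean(gram_schmidt_pol(eye(9,9), -1/2, -1/2)) // 3D-16 residues become exact zeros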

Related

Unable to plot solution of ODE in Maxima

Good time of the day!
Here is the code:
eq:'diff(x,t)=(exp(cos(t))-1)*x;
ode2(eq,x,t);
sol:ic1(%,t=1,x=-1);
/*---------------------*/
plot2d(
rhs(sol),
[t,-4*%pi, 4*%pi],
[y,-5,5],
[xtics,-4*%pi, 1*%pi, 4*%pi],
[ytics, false],
/*[yx_ratio , 0.6], */
[legend,"Solution."],
[xlabel, "t"], [ylabel, "x(t)"],
[style, [lines,1]],
[color, blue]
);
and here is the error:
integrate: variable must not be a number; found: -12.56637061435917
What went wrong?
Thanks.
Here's a way to plot the solution sol which was found by ode2 and ic1 as you showed. First replace the integrate nouns with calls to quad_qags, a numerical quadrature function. I'll introduce a made-up variable name (a so-called gensym) to avoid confusion with the variable t.
(%i59) subst (nounify (integrate) =
         lambda ([e, xx],
           block ([u: gensym(string(xx))],
             quad_qags (subst (xx = u, e), u, -4*%pi, xx)[1])),
       rhs(sol));
(%o59) -%e^((-t)-quad_qags(%e^cos(t88373),t88373,-4*%pi,t,
                   epsrel = 1.0E-8,epsabs = 0.0,limit = 200)[1]
            +quad_qags(%e^cos(t88336),t88336,-4*%pi,t,
                   epsrel = 1.0E-8,epsabs = 0.0,limit = 200)[1]+1)
Now I'll define a function foo1 with that result. I'll make a list of numerical values to see if it works right.
(%i60) foo1(t) := ''%;
(%o60) foo1(t):=-%e^((-t)-quad_qags(%e^cos(t88373),t88373,-4*%pi,t,
                            epsrel = 1.0E-8,epsabs = 0.0,limit = 200)[1]
                     +quad_qags(%e^cos(t88336),t88336,-4*%pi,t,
                            epsrel = 1.0E-8,epsabs = 0.0,limit = 200)[1]+1)
(%i61) foo1(0.5);
(%o61) -1.648721270700128
(%i62) makelist (foo1(t), t, makelist (k, k, -10, 10));
(%o62) [-59874.14171519782,-22026.46579480672,
-8103.083927575384,-2980.957987041728,
-1096.633158428459,-403.4287934927351,
-148.4131591025766,-54.59815003314424,
-20.08553692318767,-7.38905609893065,-2.71828182845904,
-1.0,-0.3678794411714423,-0.1353352832366127,
-0.04978706836786394,-0.01831563888873418,
-0.006737946999085467,-0.002478752176666358,
-9.118819655545163E-4,-3.354626279025119E-4,
-1.234098040866796E-4]
Does %o62 look right to you? I'll assume it is okay. Next I'll define a function foo which calls foo1 defined before when the argument is a number, otherwise it just returns 0. This is a workaround for a bug in plot2d, which incorrectly determines that foo1 is not a function of t alone. Usually that workaround isn't needed, but it is needed in this case.
(%i63) foo(t) := if numberp(t) then foo1(t) else 0;
(%o63) foo(t):=if numberp(t) then foo1(t) else 0
Okay, now the function foo can be plotted!
(%i64) plot2d (foo, [t, -4*%pi, 4*%pi], [y, -5, 5]);
plot2d: some values were clipped.
(%o64) false
That takes about 30 seconds to plot -- calling quad_qags is relatively expensive.
It looks like ode2 does not know how to completely solve the problem, so the result contains an unevaluated integral:
(%i6) display2d: false $
(%i7) eq:'diff(x,t)=(exp(cos(t))-1)*x;
(%o7) 'diff(x,t,1) = (%e^cos(t)-1)*x
(%i8) ode2(eq,x,t);
(%o8) x = %c*%e^('integrate(%e^cos(t),t)-t)
(%i9) sol:ic1(%,t=1,x=-1);
(%o9) x = -%e^((-%at('integrate(%e^cos(t),t),t = 1))
+'integrate(%e^cos(t),t)-t+1)
I tried it with contrib_ode also:
(%i12) load (contrib_ode);
(%o12) "/Users/dodier/tmp/maxima-code/share/contrib/diffequations/contrib_ode.mac"
(%i13) contrib_ode (eq, x, t);
(%o13) [x = %c*%e^('integrate(%e^cos(t),t)-t)]
So contrib_ode did not solve it completely either.
However the solution returned by ode2 (same for contrib_ode) appears to be a valid solution. I'll post a separate answer describing how to evaluate it numerically for plotting.

SingularException(2) when invoking multivariate Newton's method in Julia

I have implemented the multivariate Newton's method in Julia.
using LinearAlgebra  # for norm

function newton(f::Function, J::Function, x)
    h = Inf64
    tolerance = 10^(-10)
    while norm(h) > tolerance
        h = J(x)\f(x)
        x = x - h
    end
    return x
end
On the one hand, when I try to solve the below system of equations, LoadError: SingularException(2) is thrown.
f(θ) = [cos(θ[1]) + cos(θ[2] - 1.3),
        sin(θ[1]) + sin(θ[2]) - 1.3]
J(θ) = [-sin(θ[1]) -sin(θ[2]);
         cos(θ[1])  cos(θ[2])]
θ = [pi/3, pi/7]
newton(f, J, θ)
On the other hand, when I try to solve this other system
f(x) = [(93-x[1])^2 + (63-x[2])^2 - 55.1^2,
(6-x[1])^2 + (16-x[2])^2 - 46.2^2]
J(x) = [-2*(93-x[1]) -2*(63-x[2]); -2*(6-x[1]) -2*(16-x[2])]
x = [35, 50]
newton(f, J, x)
no errors are thrown and the correct solution is returned.
Furthermore, if I first solve the second system and then try to solve the first system, which normally throws SingularException(2), I now encounter no errors but get a totally inaccurate solution to the first system.
What exactly is going wrong when I try to solve the first system and how can I resolve the error?

How to use NLopt in Julia with equality_constraint

I'm struggling to amend the Julia-specific tutorial on NLopt to meet my needs and would be grateful if someone could explain what I'm doing wrong or failing to understand.
I wish to:
Minimise the value of some objective function myfunc(x); where
x must lie in the unit hypercube (just 2 dimensions in the example below); and
the sum of the elements of x must be one.
Below I make myfunc very simple - the square of the distance from x to [2.0, 0.0] so that the obvious correct solution to the problem is x = [1.0,0.0] for which myfunc(x) = 1.0. I have also added println statements so that I can see what the solver is doing.
testNLopt = function()
    origin = [2.0, 0.0]
    n = length(origin)
    # Returns square of the distance between x and "origin", and amends grad in-place
    myfunc = function(x::Vector{Float64}, grad::Vector{Float64})
        if length(grad) > 0
            grad = 2 .* (x .- origin)
        end
        xOut = sum((x .- origin).^2)
        println("myfunc: x = $x; myfunc(x) = $xOut; ∂myfunc/∂x = $grad")
        return(xOut)
    end
    # Constrain the sum of the x's to be 1...
    sumconstraint = function(x::Vector{Float64}, grad::Vector{Float64})
        if length(grad) > 0
            grad = ones(length(x))
        end
        xOut = sum(x) - 1
        println("sumconstraint: x = $x; constraint = $xOut; ∂constraint/∂x = $grad")
        return(xOut)
    end
    opt = Opt(:LD_SLSQP, n)
    lower_bounds!(opt, zeros(n))
    upper_bounds!(opt, ones(n))
    equality_constraint!(opt, sumconstraint, 0)
    #xtol_rel!(opt,1e-4)
    xtol_abs!(opt, 1e-8)
    min_objective!(opt, myfunc)
    maxeval!(opt, 20) # to ensure code always terminates, remove this line when code working correctly?
    optimize(opt, ones(n)./n)
end
I have read this similar question and the documentation here and here, but still can't figure out what's wrong. Worryingly, each time I run testNLopt I see different behaviour, as in this screenshot, including occasions when the solver uselessly evaluates myfunc([NaN,NaN]) many times.
You aren't actually writing to the grad parameters in-place, as the comments claim;
grad = 2 .* (x .- origin)
just rebinds the local variable, not the array contents -- and I guess that's why you see those df/dx = [NaN, NaN] everywhere. The simplest way to fix that is broadcasting assignment (note the dot):
grad .= 2 .* (x .- origin)
and so on. You can read about that behaviour here and here.
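Applied to the two callbacks from the question, the fix is one character in each (a sketch showing only the changed bodies; origin is captured from the enclosing scope as in the original):
myfunc = function(x::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        grad .= 2 .* (x .- origin) # .= writes into the array NLopt passed in
    end
    return sum((x .- origin).^2)
end
sumconstraint = function(x::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        grad .= ones(length(x)) # likewise, mutate grad rather than rebinding it
    end
    return sum(x) - 1
end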

How do I evaluate the function in only one of its variables in Scilab

How do I evaluate a function in only one of its variables, that is, obtain another function by partially evaluating the original one? I have the following piece of code.
deff('[F] = fun(x, y)', 'F = x^2 - 3*y^2 + x*y^3');
fun(4, y)
I hope to get 16 - 3y^2 + 4y^3.
If what you want to do is write x = f(4,y) and later just do x(2) to get 36, that is called partial application:
Intuitively, partial function application says "if you fix the first arguments of the function, you get a function of the remaining arguments".
This is a very useful feature, common in functional programming languages such as Haskell, and even JS and Python can do it now. It is also possible in MATLAB and GNU/Octave using anonymous functions (see this answer). In Scilab, however, this feature is not available.
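For reference, the anonymous-function version mentioned above would look like this in GNU/Octave (a sketch, assuming fun is defined as in the question):
fun4 = @(y) fun(4, y); % fix the first argument at 4
fun4(2)                % ans = 36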
Workaround
Nonetheless, Scilab itself uses a workaround to carry a function along with its arguments without fully evaluating it. You can see it used in ode(), fsolve(), optim(), and others:
Create a list containing the function and the arguments for partial evaluation: list(f,arg1,arg2,...,argn)
Use another function to evaluate such a list together with the last argument: evalPartList(list(...),last_arg)
The implementation of evalPartList() can be something like this:
function y = evalPartList(fList, last_arg)
    // fList: list in which the first element is a function
    // last_arg: last argument to be applied to the function
    func = fList(1);                // extract the function from the list
    y = func(fList(2:$), last_arg); // each element of the list, from the second
                                    // to the last, becomes an argument
endfunction
You can test it on Scilab's console:
--> deff('[F] = fun(x, y)', 'F = x^2 - 3*y^2 + x*y^3');
--> x = list(fun, 4)
 x  =
   x(1)
     [F]= x(1)(x,y)
   x(2)
     4.
--> evalPartList(x, 2)
 ans  =
   36.
This is a very simple implementation for evalPartList(), and you have to be careful not to exceed or be short on the number of arguments.
In the way you're asking, you can't.
What you're looking for is called symbolic (or formal) computational mathematics, because you don't pass actual numerical values to functions.
Scilab is numerical software, so it can't do such a thing. But there is a toolbox, scimax (installation guide), that relies on the free computer algebra system wxMaxima.
BUT
An ugly, stupid, but still sort-of-working solution is to take advantage of strings:
// Here we define a function that may return a constant or a string depending on the input
function F = fun(x, y)
    fmt = '%10.3E'
    if (type(x)==type('')) & (type(y)==type(0)) // x is a string, so is F
        ys = msprintf(fmt, y)
        F = x + '^2 - 3*' + ys + '^2 + ' + x + '*' + ys + '^3'
    end
    if (type(y)==type('')) & (type(x)==type(0)) // y is a string, so is F
        xs = msprintf(fmt, x)
        F = xs + '^2 - 3*' + y + '^2 + ' + xs + '*' + y + '^3'
    end
    if (type(y)==type('')) & (type(x)==type('')) // x & y are strings, so is F
        F = x + '^2 - 3*' + y + '^2 + ' + x + '*' + y + '^3'
    end
    if (type(y)==type(0)) & (type(x)==type(0)) // x & y are constants, so is F
        F = x^2 - 3*y^2 + x*y^3
    end
endfunction
// Then we can use this 'symbolic' function
deff('F2 = fun2(y)', 'F2 = ' + fun(4, 'y'))
F2 = fun2(2) // does compute fun(4,2)
disp(F2)

Wrong Fortran Output [duplicate]

I've written a rudimentary algorithm in Fortran 95 to calculate the gradient of a function (an example of which is prescribed in the code) using central differences augmented with a procedure known as Richardson extrapolation.
function f(n,x)
  ! The scalar multivariable function to be differentiated
  integer :: n
  real(kind = kind(1d0)) :: x(n), f
  f = x(1)**5.d0 + cos(x(2)) + log(x(3)) - sqrt(x(4))
end function f
!=====!
!=====!
!=====!
program gradient
!==============================================================================!
! Calculates the gradient of the scalar function f at x = 0 using a finite    !
! difference approximation, with a low order Richardson extrapolation.        !
!==============================================================================!
  parameter (n = 4, M = 25)
  real(kind = kind(1d0)) :: x(n), xhup(n), xhdown(n), d(M), r(M), dfdxi, h0, h, gradf(n)
  h0 = 1.d0
  x = 3.d0
  ! Loop through each component of the vector x and calculate the appropriate
  ! derivative
  do i = 1,n
    ! Reset step size
    h = h0
    ! Carry out M successive central difference approximations of the derivative
    do j = 1,M
      xhup = x
      xhdown = x
      xhup(i) = xhup(i) + h
      xhdown(i) = xhdown(i) - h
      d(j) = ( f(n,xhup) - f(n,xhdown) ) / (2.d0*h)
      h = h / 2.d0
    end do
    r = 0.d0
    do k = 3,M
      r(k) = ( 64.d0*d(k) - 20.d0*d(k-1) + d(k-2) ) / 45.d0
      if ( abs(r(k) - r(k-1)) < 0.0001d0 ) then
        dfdxi = r(k)
        exit
      end if
    end do
    gradf(i) = dfdxi
  end do
  ! Print out the gradient
  write(*,*) " "
  write(*,*) " Grad(f(x)) = "
  write(*,*) " "
  do i = 1,n
    write(*,*) gradf(i)
  end do
end program gradient
In single precision it runs fine and gives me decent results. But when I try to change to double precision as shown in the code, I get an error when trying to compile claiming that the assignment statement
d(j) = ( f(n,xhup) - f(n,xhdown) ) / (2.d0*h)
is producing a type mismatch real(4)/real(8). I have tried several different declarations of double precision, appended every appropriate double precision constant in the code with d0, and I get the same error every time. I'm a little stumped as to how the function f is possibly producing a single precision number.
The problem is that f is not explicitly declared in your main program, so it is implicitly assumed to be of single precision, which is the type real(4) for gfortran.
I completely agree with the comment of High Performance Mark that you really should use implicit none in all your Fortran code, to make sure all objects are explicitly declared. That way, you would have obtained a more appropriate error message about f not being explicitly declared.
Also, you could consider two more things:
Define your function within a module and import that module in the main program. It is good practice to define all subroutines/functions only within modules, so that the compiler can make extra checks on the number and type of the arguments when you invoke the function.
You could (again in a module) introduce a constant for the precision and use it everywhere the kind of a real must be specified. Taking the example below, by changing only the line
integer, parameter :: dp = kind(1.0d0)
into
integer, parameter :: dp = kind(1.0)
you would change all your real variables from double to single precision. Also note the _dp suffix for the literal constants instead of the d0 suffix, which would automatically adjust their precision as well.
module accuracy
  implicit none
  integer, parameter :: dp = kind(1.0d0)
end module accuracy

module myfunc
  use accuracy
  implicit none
contains

  function f(n,x)
    integer :: n
    real(dp) :: x(n), f
    f = 0.5_dp * x(1)**5 + cos(x(2)) + log(x(3)) - sqrt(x(4))
  end function f

end module myfunc

program gradient
  use myfunc
  implicit none
  real(dp) :: x(n), xhup(n), xhdown(n), d(M), r(M), dfdxi, h0, h, gradf(n)
  :
end program gradient
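Assuming everything lives in one file (say gradient.f90, with the modules above the program unit and the omitted declarations filled in), a typical build would be something like:
gfortran -Wall -o gradient gradient.f90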
