Basic questions about Scilab

I am taking a numerical calculus class, and we are not required to know any Scilab programming beyond the very basics, which are taught through a booklet, since the class is mostly theoretical. While reading the booklet I found this Scilab code meant to find a root of a function through the bisection method.
The problem is, I can't find a way to make it work. I tried to call it with bissecao(x,-1,1,0.1,40), but it didn't work.
The error I got was:
at line 3 of function bissecao ( E:\Downloads\bisseccao3.sce line 3 )
Invalid index.
Since I highly doubt that the code itself is broken, and I searched for anything that seemed wrong to no avail, I guess I am probably calling it incorrectly somehow.
The code is the following:
function p = bissecao(f, a, b, TOL, N)
    i = 1
    fa = f(a)
    while (i <= N)
        // iteration of the bisection
        p = a + (b-a)/2
        fp = f(p)
        // stop condition
        if ((fp == 0) | ((b-a)/2 < TOL)) then
            return p
        end
        // bisects the interval
        i = i+1
        if (fa * fp > 0) then
            a = p
            fa = fp
        else
            b = p
        end
    end
    error('Max number of iterations exceeded!')
endfunction
Here f is a function (I guess), a and b are the limits of the interval over which we iterate, TOL is the tolerance at which the program terminates close to a zero, and N is the maximum number of iterations.
Any help on how to make this run is greatly appreciated.

Error in bissecao
The only error in your bissecao function is the call to return:
In a function, return stops the execution of the function;
[x1,..,xn]=return(a1,..,an) stops the execution of the function and
puts the local variables ai in the calling environment under the names xi.
So you should either call it without any argument (input or output), and the function will exit and return p.
Or you could call y1 = return(p), and the function will exit with p stored in y1.
It is better to use the argument-free form of return in functions, to avoid changing the values of variables in the parent/calling script/function (a possible side effect).
The argument form is more useful when interactively debugging with pause:
In pause mode, it allows returning to the lower level.
[x1,..,xn]=return(a1,..,an) returns to the lower level and puts the local
variables ai in the calling environment under the names xi.
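Applied to the code in the question, only the stop condition changes; a minimal sketch of the fix (p already holds the current midpoint, so the argument-free return is enough):
// stop condition
if ((fp == 0) | ((b-a)/2 < TOL)) then
    return // exits bissecao; the output variable p keeps its current value
end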
Error in calling bissecao
The problem may also come from your call bissecao(x,-1,1,0.1,40), because you never defined x. Fixing this by creating a function solves the problem:
function y=x(t)
    y=t+0.3
endfunction
x0=bissecao(x,-1,1,0.1,40) // changed 'return p' to 'return'
disp(x0) // gives -0.3 as expected

Related

Julia: How to change the type of an argument in the Bessel function?

Here is my code:
using Plots
using SpecialFunctions
using QuadGK
kappa = 1
B = 1
xi = (kappa/B)^4
function correlation(x)
    quadgk(q -> q * SpecialFunctions.besselj0(x*q)/(q^4 + xi), 0, 1e6)[1]/kappa
end
r = range(-20, 20, length = 1001)
plot(r, correlation(r))
I get an error on the Bessel function. I understand that the argument is the problem and that it should be of the format ::BigFloat, ::Float16, or ::Float32, but I don't know how to do that. I tried to write x .* q instead, but the problem remains the same; I get the error:
ERROR: MethodError: no method matching besselj0(::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}})
Also, I'm searching for a way to write +infinity instead of 1e6.
Just replace correlation(r) with correlation.(r) in your code to use broadcasting, as is explained here.
The core of your problem is that in Julia functions are not applied elementwise to collections by default - you usually have to opt in to that explicitly (especially when you work with numerical code). Here is a basic example:
julia> sin(1)
0.8414709848078965
julia> sin([1])
ERROR: MethodError: no method matching sin(::Array{Int64,1})
Closest candidates are:
sin(::BigFloat) at mpfr.jl:727
sin(::Missing) at math.jl:1197
sin(::Complex{Float16}) at math.jl:1145
...
Stacktrace:
[1] top-level scope at REPL[2]:1
julia> sin.([1])
1-element Array{Float64,1}:
0.8414709848078965
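Applied to the code in the question, only the plotting call needs to change; a minimal sketch:
plot(r, correlation.(r))  # the dot broadcasts correlation over every element of r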
However, in your case the correlation function is quite expensive. In such a case I usually use ProgressMeter.jl to monitor the progress of the computations (it shows how long you can expect the computations to take). So you can write:
using ProgressMeter
result = @showprogress map(correlation, r)
and use the map function to apply the correlation function to all elements of r (in this case the result will be the same as with broadcasting).
Finally, your computations will be much faster if you do not use global variables in quadgk. It is better to pass kappa and xi as arguments to the function like this:
function correlation(x, kappa, xi)
    quadgk(q -> q * SpecialFunctions.besselj0(x*q)/(q^4 + xi), 0, 1e6)[1]/kappa
end
result = @showprogress map(x -> correlation(x, kappa, xi), r)
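Regarding the +infinity part of the question (not addressed above): QuadGK accepts infinite endpoints directly, so a sketch along these lines should work, although how quickly the oscillating Bessel integrand converges on an infinite interval is worth checking:
function correlation(x, kappa, xi)
    quadgk(q -> q * SpecialFunctions.besselj0(x*q)/(q^4 + xi), 0, Inf)[1]/kappa
end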

Weird R behavior with indexing function arrays

I'm having some unexpected behaviour in R with function arrays, and I've reduced the problem to a minimal working example:
theory = c(function(p) p)
i = 1
posterior = function(p) theory[[i]](p)
i = 2
posterior(0)
This gives me an error saying that subscript i is out of bounds.
So I guess that i is somehow being used as a "free" variable in the definition of posterior so it gets updated when I redefine i. Oddly enough, this works:
theory = c(function(p) p)
i = 1
posterior = theory[[i]]
i = 2
posterior(0)
How can I avoid this? Note that not redefining i is not an option, as this stuff is going in a for loop where i is the index.
The reason this doesn't work is that you redefine i = 2, and then you are out of bounds of theory, which contains a single element. The function body is evaluated lazily: theory[[i]] only runs when posterior is called, and at that point i is looked up in the enclosing environment, where it now equals 2.
You can read some more about lazy evaluation here.
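One common way to avoid this (a sketch that is not part of the original answer) is to copy the current value of i into a local environment when each function is built, so the inner function no longer depends on the loop variable:
theory = c(function(p) p)
posteriors = list()
for (i in seq_along(theory)) {
    posteriors[[i]] = local({
        j = i                        # copy the current value of i into this environment
        function(p) theory[[j]](p)   # uses j, which is fixed from now on
    })
}
i = 2
posteriors[[1]](0)  # still returns 0; the later change to i no longer matters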

Stopping criteria for optim/SANN in R not working

Issue 1
I have an objective function, gFun(modelOutput, l, u), which returns 0 if the simulated output is in the interval [l, u]; otherwise it returns a positive(!) number.
OFfun <- function(params) {
    out <- simulate(params)
    OF <- gFun(out, 0, 5)
    return(OF)
}
The objective function is called from the optim function with some tolerance settings.
fitval=optim(par=parms,fn=OFfun,method="SANN",control = list(abstol = 1e-2))
summary(fitval)
My issue is that the optimization doesn't stop when OFfun returns 0.
I have tried with the condition below:
if (OF == 0){
    opt <- options(show.error.messages = FALSE)
    on.exit(options(opt))
    stop()
}
It works, but it doesn't return the OF back to optim, so I don't get the fitval info with the estimated parameters.
Issue 2
Another issue is that the solver sometimes crashes and aborts the entire optimisation. I would like to harvest many solution sets for different initial guesses, so I need to handle failed simulations. This is probably related to issue 1.
Any advice would be very appreciated.
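A minimal sketch of one standard pattern for issue 2 (not part of the original post; it assumes that returning a large penalty value instead of aborting is acceptable to the SANN solver): wrap the simulation in tryCatch so a failed run does not end the whole optimisation.
OFfun <- function(params) {
    tryCatch({
        out <- simulate(params)   # may crash for some parameter sets
        gFun(out, 0, 5)
    }, error = function(e) 1e10)  # large penalty instead of aborting the optimisation
}
fitval <- optim(par = parms, fn = OFfun, method = "SANN", control = list(abstol = 1e-2))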

Rstudio - Error in user-created function - Object not found

First things first: my skills in R are somewhat lacking, so there is a chance I may be using something incorrectly in the following. If I go wrong somewhere, please let me know.
I've been having a problem in RStudio where I try to create 2 functions for formulae, then use nls() to create a model from them, with which I will make a plot. When I try to run the line that creates the model, I get an error message saying an object is missing. It is always the last object in the first "formula" function, in this case 'p'.
I'll provide my code here, then explain what I am trying to do for a little context:
DATA <- read.csv(file.choose(), as.is=T)
formula <- function(m, h, g, p){(2*m)/(m+(sqrt(m^2+1)))*p*g*(h^2/2)}
formula.2 <- function(P, V, g){P*V*g}
m = 0.85
p = 766.42
g = 9.81
P = 0.962
h = DATA$lithothick
V = DATA$Vol
fit.1 <- nls(formula (P, V, g) ~ formula(m, h, g, p), data = DATA)
If I run it as shown, I get the error:
Error in (2 * m)/(m + (sqrt(m^2 + 1))) * p : 'p' is missing
However, it will complain about h instead if I rearrange the arguments in the formula to (m, g, p, h):
Error in h^2 : 'h' is missing
Now, what I'm trying to do is this; I have a .csv file with 3 thicknesses (0.002, 0.004, 0.006 meters) and 3 volumes (10, 25, 50 milliliters). I am trying to see how the rates of strength and buoyancy increase (in relation to each other) as the thickness and volume for each object (respectively) increases. I was hoping to come out with a graph showing the upward trend for each property (strength and buoyancy), as I believe them to be unequal (one exponential the other linear). I hope that isn't more confusing than clarifying, but any pointers would be GREATLY appreciated.
You cannot overload functions this way in R. What you can do is provide optional arguments (which is a kind of overloading), with the syntax function(mandatory, optional="").
For what you are trying to do, you have to use formula.2 if you want the 3-argument formula.
A workaround could be to use one function with one optional argument and check whether this argument has been used. Something like:
formula = function(m, h, g, p="") {
    if (is.numeric(p)) {
        # p was supplied: behave like the original 4-argument formula
        (2*m)/(m+(sqrt(m^2+1)))*p*g*(h^2/2)
    } else {
        # p left as the default "": behave like formula.2, i.e. P*V*g
        m*h*g
    }
}
This is ugly and a very bad way to do it (your variables do not really mean the same thing from one call to the other), but it works.
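For illustration, calling it with the constants from the question (a sketch; the data columns are replaced by single example values):
formula(0.85, 0.002, 9.81, 766.42)  # four arguments: uses the first expression
formula(0.962, 10, 9.81)            # three arguments, p left as "": falls back to P*V*g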

Behavior of optim() function in R

I'm doing maximum likelihood estimation using the R optim function.
The command I used is
optim(3, func, lower=1.0001, method="L-BFGS-B")$par
The function func has infinite value if the parameter is 1.
Thus I set the lower value to be 1.0001.
But sometime an error occurs.
Error in optim(3, func, lower = 1.0001, method = "L-BFGS-B", sx = sx, :
L-BFGS-B needs finite values of 'fn'
What happened next is hard to understand.
If I run the same command again, it gives the result 1.0001, which is the lower limit.
It seems that the optim function 'learns' that 1 is not the proper answer.
How can the optim function give the answer 1.0001 at my first run?
P.S.
I just found that this problem occurs only in the stand-alone R console. If I run the same code in RStudio, it does not occur. Very strange.
The method "L-BFGS-B" requires all computed values of the function to be finite.
It seems, for some reason, that optim is evaluating your function at the value of 1.0, giving you an Inf, then throwing an error.
If you want a quick hack, try defining a new function that gives a very high value (or low, if you're trying to maximize) for inputs of 1.
func2 <- function(x){
    if (x == 1){
        # very low value for a maximisation; use a very high value (e.g. 9999) if minimising
        return(-9999)
    } else {
        return(func(x))
    }
}
optim(3, func2, lower=1.0001, method="L-BFGS-B")$par
(Posted as answer rather than comment for now; will delete later if appropriate.)
For what it's worth, I can't get this example (with a singularity at 1) to fail, even using the default control parameters (e.g. ndeps=1e-3):
func <- function(x) 1/(x-1)*x^2
library(numDeriv)
grad(func,x=2) ## critical point at x=2
optim(par=1+1e-4,fn=func,method="L-BFGS-B",lower=1+1e-4)
Try a wide range of starting values:
svec <- 1+10^(seq(-4,2,by=0.5))
sapply(svec,optim,fn=func,method="L-BFGS-B",lower=1+1e-4)
These all work.
