Using GSL.jl integration routines in Julia: integration_qawc

Analogous to the example given in GSL.jl/examples/Quadrature.jl, I am trying to integrate a function. However, since this function has a singularity, I need to use the Cauchy weight. My idea was to use the following code:
using GSL
function Q(p)
    ws_size = 200
    ws = GSL.integration_workspace_alloc(ws_size)
    f_ = x -> 1/(x+p)
    f = GSL.@gsl_function(f_)
    result = Cdouble[0][1]
    epsrel = 1e-10
    epsabs = 1e-10
    abserr = Cdouble[0][1]
    limit = Csize_t[0][1]
    result = integration_qawc(f, 0., 1.e4, p, epsabs, epsrel, limit, ws, result, abserr)
    GSL.integration_workspace_free(ws)
    return result
end
However, I get the following error:
UndefVarError: f_ not defined
Stacktrace:
[1] (::getfield(Main, Symbol("##117#118")))(::Float64, ::Ptr{Nothing}) at /home/varantir/.julia/packages/GSL/IVE5m/src/manual_wrappers.jl:45
[2] integration_qawc at /home/varantir/.julia/packages/GSL/IVE5m/src/gen/direct_wrappers/gsl_integration_h.jl:570 [inlined]
[3] Q(::Float64) at ./In[250]:14
[4] top-level scope at In[251]:1
This seems a little strange to me, since I clearly have defined f_. Any ideas?

For some weird reason, the following version doesn't throw an error but returns 0:
function Q(p)
    ws_size = 200
    ws = GSL.integration_workspace_alloc(ws_size)
    f0(x::Float64)::Float64 = 1/(x+p)
    f = GSL.@gsl_function(f0)
    result = Cdouble[0][1]
    epsrel = 1e-10
    epsabs = 1e-10
    abserr = Cdouble[0][1]
    limit = Csize_t[0][1]
    result = integration_qawc(f, 0., 1.e4, p, epsabs, epsrel, limit, ws, result, abserr)
    GSL.integration_workspace_free(ws)
    return result
end
From the docs of integration_qawc:
The adaptive bisection algorithm of QAG is used, with modifications to ensure that subdivisions do not occur at the singular point x = c. When a subinterval contains the point x = c or is close to it then a special 25-point modified Clenshaw-Curtis rule is used to control the singularity. Further away from the singularity the algorithm uses an ordinary 15-point Gauss-Kronrod integration rule.
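One note of my own (not from the original post): the low-level integration_qawc wrapper mirrors the C API, so its return value is almost certainly the GSL status code (0 on success), while the integral itself is written into the result output argument. Cdouble[0][1] evaluates to a plain 0.0, so nothing can be read back through it, and rebinding result to the wrapper's return value then leaves you with 0. Below is a minimal sketch of how the call might look with Ref{Cdouble} outputs instead, reusing the named-function form from the second snippet (which did not throw); the exact wrapper signature is an assumption based on the underlying C function.

using GSL

function Q(p)
    ws_size = 200
    ws = GSL.integration_workspace_alloc(ws_size)
    f0(x::Float64)::Float64 = 1/(x + p)
    f = GSL.@gsl_function(f0)
    result = Ref{Cdouble}(0.0)   # output argument: the integral is written here
    abserr = Ref{Cdouble}(0.0)   # output argument: estimated absolute error
    # the wrapper's return value is the GSL status code, not the integral
    status = integration_qawc(f, 0., 1.e4, p, 1e-10, 1e-10, ws_size, ws, result, abserr)
    GSL.integration_workspace_free(ws)
    return result[]              # dereference the Ref to get the numeric result
end

Note that limit is passed as ws_size here rather than the 0 produced by Csize_t[0][1], since GSL requires the subdivision limit to be no larger than the allocated workspace size.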
An alternative, using QuadGK.jl:
using QuadGK
function G2(p)
    f(x) = 1/(x+p)
    a = 0.0
    b = 1e4
    if a < -p < b
        res, err = quadgk(f, a, -p, b, rtol=1e-10, atol=1e-10)
        return res
    else
        res, err = quadgk(f, a, b, rtol=1e-10, atol=1e-10)
        return res
    end
end
From the QuadGK docs:
The algorithm is an adaptive Gauss-Kronrod integration technique: the integral in each interval is estimated using a Kronrod rule (2*order+1 points) and the error is estimated using an embedded Gauss rule (order points). The interval with the largest error is then subdivided into two intervals and the process is repeated until the desired error tolerance is achieved.
These quadrature rules work best for smooth functions within each interval, so if your function has a known discontinuity or other singularity, it is best to subdivide your interval to put the singularity at an endpoint. For example, if f has a discontinuity at x=0.7 and you want to integrate from 0 to 1, you should use quadgk(f, 0,0.7,1) to subdivide the interval at the point of discontinuity. The integrand is never evaluated exactly at the endpoints of the intervals, so it is possible to integrate functions that diverge at the endpoints as long as the singularity is integrable (for example, a log(x) or 1/sqrt(x) singularity).
The default order is 7, i.e. a 15-point Kronrod rule, so it is comparable to the 15-point Gauss-Kronrod rule GSL uses away from the singularity.
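As a small illustration of the subdivision advice in the quoted docs (my own sketch, not part of the original answer): an integrable singularity is fine as long as it sits at an interval endpoint, because the endpoints are never evaluated.

using QuadGK

# integral of 1/sqrt(x) on [0,1] is 2: the integrand diverges at x = 0, but
# since endpoints are never evaluated, quadgk converges without special care.
val, err = quadgk(x -> 1/sqrt(x), 0, 1, rtol = 1e-10)

# For a singular point in the interior, list it as an intermediate node so it
# becomes an endpoint of the subintervals (here a made-up singularity at 0.7):
val2, err2 = quadgk(x -> 1/sqrt(abs(x - 0.7)), 0, 0.7, 1, rtol = 1e-10)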

Related

Initial state starts at y(1), how to go backwards to find y(0)? [duplicate]

I would like to solve a differential equation in R (with deSolve?) for which I do not have the initial condition, but only the final condition of the state variable. How can this be done?
The typical code is: ode(times, y, parameters, function ...) where y is the initial condition and function defines the differential equation.
Are your equations time reversible, that is, can you change your differential equations so they run backward in time? Most typically this will just mean reversing the sign of the gradient. For example, for a simple exponential growth model with rate r (gradient of x = r*x) then flipping the sign makes the gradient -r*x and generates exponential decay rather than exponential growth.
If so, all you have to do is use your final condition(s) as your initial condition(s), change the signs of the gradients, and you're done.
As suggested by @LutzLehmann, there's an even easier answer: ode can handle negative time steps, so just enter your time vector as (t_end, 0). Here's an example, using f'(x) = r*x (i.e. exponential growth). If f(1) = 3, r=1, and we want the value at t=0, analytically we would say:
x(T) = x(0) * exp(r*T)
x(0) = x(T) * exp(-r*T)
= 3 * exp(-1*1)
= 1.103638
Now let's try it in R:
library(deSolve)
g <- function(t, y, parms) { list(parms*y) }
res <- ode(3, times = c(1, 0), func = g, parms = 1)
print(res)
## time 1
## 1 1 3.000000
## 2 0 1.103639
I initially misread your question as stating that you knew both the initial and final conditions. This type of problem is called a boundary value problem and requires a separate class of numerical algorithms from standard (more elementary) initial-value problems.
library(sos)
findFn("{boundary value problem}")
tells us that there are several R packages on CRAN (bvpSolve looks the most promising) for solving these kinds of problems.
Given a differential equation
y'(t) = F(t,y(t))
over the interval [t0,tf] where y(tf)=yf is given as initial condition, one can transform this into the standard form by considering
x(s) = y(tf - s)
==> x'(s) = - y'(tf-s) = - F( tf-s, y(tf-s) )
x'(s) = - F( tf-s, x(s) )
now with
x(0) = x0 = yf.
This should be easy to code using wrapper functions and in the end some list reversal to get from x to y.
Some ODE solvers also allow negative step sizes, so that one can simply give the times for the construction of y in the descending order tf to t0 without using some intermediary x.
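For what it's worth, the same descending-time-span trick can be sketched in Julia (the language of the main question at the top). This is my own illustration, assuming the DifferentialEquations.jl package; it is not part of the original answer.

using DifferentialEquations

# u'(t) = r*u with the final value u(1) = 3 known; integrating backward just
# means giving the time span in descending order.
g(u, p, t) = p * u                           # p plays the role of r
prob = ODEProblem(g, 3.0, (1.0, 0.0), 1.0)   # u0 at t = 1, integrate back to t = 0
sol = solve(prob)
sol(0.0)                                     # approx 3*exp(-1), matching the deSolve result above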

how to specify final value (rather than initial value) for solving differential equations


Calculate the n-th derivative in any point using Scilab

I am trying to evaluate a function in Scilab using the following steps:
x=poly(0,'x')
y=(x^18+x^11)^3 // function (the function is variable)
y1=derivat(y)   // first derivative
y2=derivat(y1)  // second derivative
y3=derivat(y2)  // third derivative
I need to evaluate the 3 derivatives at any point.
I know the function evstr(expression), but it does not work with the return value of derivat.
I tried string(y), but it returns something strange.
How can I do it: either cast the return value of derivat to a string so I can evaluate it with evstr, or evaluate the n-th derivative at any point in Scilab some other way?
To evaluate numerical derivatives of almost any kind of function (of one or several variables) up to machine precision (you won't get better results if you evaluate symbolic expressions obtained by hand), you can use the complex step method (search for these terms and you will find plenty of references). For example:
function y = f(x)
    s = poly(0,'s');
    p = (s-s^2)^3;
    y = horner(p,x).*exp(-x.^2);
end
x = linspace(-1,1,100);
d = imag(f(x+complex(0,1e-100)))/1e-100;
true_d = exp(-x.^2).*(-1+x).^2.*x.^2.*(3-6*x-2*x.^2+2*x.^3);
disp(max(abs(d-true_d)))
--> disp(max(abs(d-true_d)))
1.776D-15
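The complex-step idea is language-agnostic; for comparison, here is the same computation sketched in Julia (the language of the main question), using the same test function and the analytic derivative quoted above. This is my own illustration, not part of the original answer.

# Complex-step derivative: f'(x) is approximately Im(f(x + im*h)) / h for tiny h.
f(x) = (x - x^2)^3 * exp(-x^2)
x = range(-1, 1, length = 100)
h = 1e-100
d = imag.(f.(x .+ h * im)) ./ h
true_d = @. exp(-x^2) * (-1 + x)^2 * x^2 * (3 - 6x - 2x^2 + 2x^3)
maximum(abs.(d .- true_d))   # on the order of 1e-15, as in the Scilab run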
To evaluate a symbolic polynomial at a particular point or points, use the horner command. Example:
t = 0:0.1:1
v1 = horner(y1, t)
plot(t, v1)
This is the closest I got to a solution to this problem.
He proposes using:
old = 'f';
for i=1:n
    new = 'd'+string(i)+'f';
    deff('y='+new+'(x)','y=numderivative('+old+',x)');
    old = new;
end
I know, it's horrible, but I think there is no better solution, at least in Scilab.
I found a way:
function y = deriva(f, v, n, h)
    deff("y = DF0(x)", "y="+f)
    if n == 0 then
        y = DF0(v);
    else
        for i=1:(n-1)
            deff("y=DF"+string(i)+"(x)", "y=numderivative(DF"+string(i-1)+",x,"+string(h)+",4)");
        end
        deff("y=DFN(x)", "y=numderivative(DF"+string(n-1)+",x,"+string(h)+",4)");
        y = DFN(v);
    end
endfunction

disp(deriva("x.*x", 3, 2, 0.0001));
This correctly calculates numerical derivatives of nth order. But it needs to have the function passed as a string. Errors can get pretty large, and time to compute tends to go up fast as a function of n.

set function output type based on whether optional keyword argument present

I need to make a histogram, and my data points each carry a statistical weight. The standard hist function isn't equipped to handle this. I could of course import the numpy.histogram function, which handles weighted data just fine, but I thought it would be a good exercise in learning Julia to try and augment the hist() function to accept weights as an optional (named) argument.
I started by looking at the Julia source for hist(), and was able to modify it slightly (if amateurishly -- suggestions for improvements welcome) to get it sort of working:
function sturges(n) # Sturges' formula
    n==0 && return one(n)
    iceil(log2(n))+1
end

function weightedhist!{HT}(h::AbstractArray{HT}, v::AbstractVector, edg::AbstractVector; init::Bool=true, weights::AbstractVector = ones(HT,length(v)))
    n = length(edg) - 1
    length(weights) == length(v) || error("length(weights) must equal length(v)")
    length(h) == n || error("length(h) must equal length(edg) - 1.")
    if init
        fill!(h, zero(HT))
    end
    for j=1:length(v)
        i = searchsortedfirst(edg, v[j])-1
        if 1 <= i <= n
            h[i] += weights[j]
        end
    end
    edg, h
end

weightedhist(v::AbstractVector, edg::AbstractVector; weights::AbstractVector = ones(Int,length(v))) = weightedhist!(Array(Float64, length(edg)-1), v, edg; weights=weights)

weightedhist(v::AbstractVector, n::Integer; weights::AbstractVector = ones(Int,length(v))) = weightedhist(v, histrange(v,n); weights=weights)

weightedhist(v::AbstractVector; weights::AbstractVector = ones(Int,length(v))) = weightedhist(v, sturges(length(v)); weights=weights)
If I generate some random data with
v = randn(10^5);
w = rand(length(v));
edges = floor(minimum(v)):0.1:ceil(maximum(v));
then weightedhist(v, edges; weights=w) agrees with numpy.histogram(v, edges, weights=w). If I leave out the optional keyword argument for weights, then weightedhist(v, edges) agrees with the built in hist(v, edges), and weightedhist(v) agrees with the built in hist(v), except for the fact that my function outputs floats rather than ints when no weights are provided.
I don't understand why this is the case (is h getting created as a float array? promoted?), and I'd like my function to fall back on the behavior of the built-in one as closely as possible when no weights are provided.
Can anyone suggest why my function is outputting floats, and how I might change that behavior to output ints when no weights are provided? I'd like to do this without first creating the h array and then converting it from one type to another, since I'd like the code to be as fast as possible.
If I understand correctly, when you call
weightedhist(v, edges)
you are using the first of your three "extra" definitions at the bottom.
This calls
weightedhist!(Array(Float64, length(edg)-1), v, edg; weights=weights)
so in your "main" weightedhist! the HT parameterization will be Float64, so h will be filled with HT == Float64, hence the Float64 output. So changing it to Array(eltype(weights), length(edg)-1) would be sufficient, I believe.
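Applied to the definitions in the question, the suggested change would look something like this (a sketch only, keeping the question's old-style Array(T, n) constructor):

# h now inherits its element type from weights: Int with the default
# ones(Int, length(v)), Float64 when float weights are passed.
weightedhist(v::AbstractVector, edg::AbstractVector; weights::AbstractVector = ones(Int, length(v))) =
    weightedhist!(Array(eltype(weights), length(edg)-1), v, edg; weights=weights)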

Fitting an inverse function

I have a function which looks like:
g(x) = f(x) - a^b / f(x)^b
g(x) - known function, data vector provided.
f(x) - hidden process.
a,b - parameters of this function.
From the above we get the relation:
f(x) = inverse(g(x))
My goal is to optimize parameters a and b such that f(x) would be as close as possible to a normal distribution. If we look at an f(x) Q-Q normal plot (attached), my purpose is to minimize the distance between f(x) and the straight line which represents the normal distribution, by optimizing parameters a and b.
I wrote the below code:
g_fun <- function(x) {x - a^b/x^b}
inverse = function (f, lower = 0, upper = 2000) {
    function (y) uniroot((function (x) f(x) - y), lower = lower, upper = upper)[1]
}
f_func = inverse(function(x) g_fun(x))

# let's make up an example
# g(x) values are known
g <- c(-0.016339, 0.029646, -0.0255258, 0.003352, -0.053258, -0.018971, 0.005172,
       0.067114, 0.026415, 0.051062)
# Calculate f(x) by using the inverse of g(x), when a=a0 and b=b0
for (i in 1:10) {
    f[i] <- f_fun(g[i])
}
I have two questions:
How to pass parameters a and b to the functions?
How to perform this optimization task, meaning find a and b such that f(x) would approximate a normal distribution?
Not sure how you were able to produce the Q-Q plot, since the examples you provided do not work: you are not specifying the values of a and b, and you are defining f_func but calling f_fun. Anyway, here is my answer to your questions:
How to pass parameters a and b to the functions? - Just pass them as arguments to the functions.
How to perform this optimization task, meaning find a and b such that f(x) would approximate a normal distribution? - The same way any optimization task is done: define a cost function, then minimize it.
Here is the revised code: I have added a and b as parameters, removed the inverse function and incorporated it inside f_func, which can now take vector input so no need for a for loop.
g_fun <- function(x,a,b) {x - a^b/x^b}
f_func = function(y, a, b, lower = 0, upper = 2000){
    sapply(y, function(z) { uniroot(function(x) g_fun(x,a,b) - z, lower = lower, upper = upper)$root})
}
# g(x) values are known
g <- c(-0.016339, 0.029646, -0.0255258, 0.003352, -0.053258, -0.018971, 0.005172,
0.067114, 0.026415, 0.051062)
f <- f_func(g,1,1) # using a = 1 and b = 1
#[1] 0.9918427 1.0149329 0.9873386 1.0016774 0.9737270 0.9905320 1.0025893
#[8] 1.0341199 1.0132947 1.0258569
f_func(g,2,10)
[1] 1.876408 1.880554 1.875578 1.878138 1.873094 1.876170 1.878304 1.884049
[9] 1.880256 1.882544
Now for the optimization part, it depends on what you mean by "f(x) would approximate a normal distribution". You can compare the mean square error from the Q-Q line if you want. Also, since you say approximate, how close is good enough? You can go with shapiro.test and keep searching until you find a p-value above 0.05 (beware that there may not be a solution).
shapiro.test(f_func(g,1,2))$p
[1] 0.9484821
cost <- function(x,y) shapiro.test(f_func(g,x,y))$p
Now that we have a cost function, how do we go about minimizing it? There are many, many different ways to do numerical optimization. Take a look at the optim function: http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optim.html.
optim(c(1,1),cost)
This final line does not work, but without proper data and context this is as far as I can go. Hope this helps.
