Solving a double integral associated with the multivariate normal density - R

I am trying to solve a double integral associated with the Multivariate Normal density with known mean vector and covariance matrix:
library(cubature)
mu1 <- matrix(c(3,3), nrow=2)
sigma1 <- rbind(c(4,-1), c(-1,6))
quadratic <- function(a, b) {
  X <- matrix(c(a, b), nrow = 2)
  Q <- (-1/2) * t(X - mu1) %*% solve(sigma1) %*% (X - mu1)
}
NormalPDF <- function(x1, x2) {
  f <- (1/(2*pi)) * (1/sqrt(det(sigma1))) * exp(quadratic(x1, x2))
}
# Solving for P(1 < X1 < 3, 1 < X2 < 3)
P <- adaptIntegrate(NormalPDF(x1,x2), c(1,3), c(1,3))
However, it keeps giving me the error:
Error in matrix(c(a, b), nrow = 2) : object 'x1' not found
Is there any obvious error with my code?

HubertL has pointed out that the first argument should be a function, not a function call with arguments. The function is assumed to accept a single argument "x", a length-2 vector, so NormalPDF needs to be modified both in its arguments and in its call to the helper function. Another error was in how the integration limits were set up.
Consider this:
library(cubature)
mu1 <- matrix(c(3,3), nrow=2)
sigma1 <- rbind(c(4,-1), c(-1,6))
quadratic <- function(a, b) {
  X <- matrix(c(a, b), nrow = 2)
  Q <- (-1/2) * t(X - mu1) %*% solve(sigma1) %*% (X - mu1)
}
NormalPDF <- function(x) {
  f <- (1/(2*pi)) * (1/sqrt(det(sigma1))) * exp(quadratic(x[1], x[2]))
}
# Solving for P(1 < X1 < 3, 1 < X2 < 3)
P <- adaptIntegrate(NormalPDF, lowerLimit = c(1,1), upperLimit = c(3,3))
P
#==============
$integral
[1] 0.09737084
$error
[1] 1.131395e-08
$functionEvaluations
[1] 17
$returnCode
[1] 0
This integrates the density over the square with "lower left" corner at (1,1) and "upper right" corner at (3,3). The invocation in the question would always have returned 0, since its domain was a single point. The value would need to be extracted from the list with P$integral if you were going to do anything numerical with it. It seems reasonable that the result is less than 0.25, since we are only evaluating in the quarter-plane extending down and to the left of the maximum at (3,3).
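As a cross-check (my addition, not part of the original answer), the same probability can be computed directly with pmvnorm from the mvtnorm package, assuming it is installed:
library(mvtnorm)
pmvnorm(lower = c(1, 1), upper = c(3, 3), mean = as.numeric(mu1), sigma = sigma1)
## should agree with P$integral (about 0.0974) up to the reported numerical error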

Related

R - finding roots for a cartesian product of function parameters

Given a function f(x,c,d) of x that also depends on parameters c and d, I would like to find the zeroes for a cartesian product of certain values c_1,...,c_n and d_1,...,d_m of the parameters, i.e. an x_ij such that f(x_ij,c_i,d_j)=0 for i=1,...,n and j=1,...,m. Although it is not crucial, I am applying a Newton-Raphson algorithm for the root finding:
newton.raphson <- function(f, a, b, tol = 1e-5, n = 1000) {
  require(numDeriv) # Package for computing f'(x)
  x0 <- a # Set start value to supplied lower bound
  k <- n  # Initialize for iteration results
  # Check the upper and lower bounds to see if approximations result in 0
  fa <- f(a)
  if (fa == 0.0) {
    return(a)
  }
  fb <- f(b)
  if (fb == 0.0) {
    return(b)
  }
  for (i in 1:n) {
    dx <- genD(func = f, x = x0)$D[1] # First-order derivative f'(x0)
    x1 <- x0 - (f(x0) / dx)           # Calculate next value x1
    k[i] <- x1                        # Store x1
    # Once the difference between x0 and x1 becomes sufficiently small, output the results.
    if (abs(x1 - x0) < tol) {
      root.approx <- tail(k, n = 1)
      res <- list('root approximation' = root.approx, 'iterations' = k)
      return(res)
    }
    # If Newton-Raphson has not yet reached convergence, set x1 as x0 and continue
    x0 <- x1
  }
  print('Too many iterations in method')
}
The actual function that I am interested in is more complicated, but the following example illustrates my problem.
test.function <- function(x=1,c=1,d=1){
return(c*d-x)
}
Then for any given c_i and d_j I can easily calculate the zero by
newton.raphson(function(x) test.function(x,c=c_i,d=d_j),0,1)[1]
which here is obviously just the product c_i*d_j.
Now I tried to define a function that finds for two given vectors (c_1,...,c_n) and (d_1,...,d_m) the zeroes for all combinations. For this, I tried to define
zeroes <- function(ci=1, dj=1) {
  x <- newton.raphson(function(x) test.function(x, c=ci, d=dj), 0, 1)[1]
  return(as.numeric(x))
}
and then use the outer-function, e.g.
outer(c(1,2),c(1,2,3),FUN=zeroes)
Unfortunately, this did not work. I got an error message
Error during wrapup: dims [product 6] do not match the length of object [1]
There might be also a much better solution to my problem. I am happy for any input.
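This error is the classic symptom of outer expecting a vectorized function: outer calls FUN once with two full-length vectors of arguments (here of total length 6), while zeroes handles only one (ci, dj) pair at a time and returns a length-1 result. A minimal sketch of a fix (my addition), using the same Vectorize trick that appears in the final Q&A of this page:
## Vectorize() makes zeroes() loop element-wise over the grids that
## outer() passes in, so each call still sees scalar ci and dj
outer(c(1, 2), c(1, 2, 3), FUN = Vectorize(zeroes))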

For a polynomial, get all its extrema and plot it by highlighting all monotonic pieces

Someone asked me this interesting question and I think it worthwhile posting it here, as there has not been any relevant thread on Stack Overflow.
Suppose I have polynomial coefficients in a length-n vector pc, where a polynomial of degree n - 1 for variable x can be expressed in its raw form:
pc[1] + pc[2] * x + pc[3] * x ^ 2 + ... + pc[n] * x ^ (n - 1)
The R core function polyroot can find all roots of this polynomial in the complex domain. But often we are also interested in extrema; for a univariate function, local minima and maxima turn up alternately, breaking the function into monotonic pieces.
My questions are:
How to obtain all extrema (actually all saddle points) of a polynomial in the real domain?
How to sketch this polynomial with 2-colour scheme: red for ascending pieces and green for descending pieces?
It would be good to write this up as a function so that we can easily explore / visualize a polynomial.
As an example, consider a polynomial of degree 5:
pc <- c(1, -2.2, -13.4, -5.1, 1.9, 0.52)
obtain all saddle points of a polynomial
In fact, saddle points can be found by using polyroot on the 1st derivative of the polynomial. Here is a function doing it.
SaddlePoly <- function (pc) {
  ## a polynomial needs to be at least quadratic to have saddle points
  if (length(pc) < 3L) {
    message("A polynomial needs to be at least quadratic to have saddle points!")
    return(numeric(0))
  }
  ## polynomial coefficients of the 1st derivative
  pc1 <- pc[-1] * seq_len(length(pc) - 1)
  ## roots in complex domain
  croots <- polyroot(pc1)
  ## retain roots in real domain
  ## be careful when testing 0 for floating point numbers
  rroots <- Re(croots)[abs(Im(croots)) < 1e-14]
  ## note that `polyroot` returns multiple roots with multiplicities
  ## return unique real roots (in ascending order)
  sort(unique(rroots))
}
xs <- SaddlePoly(pc)
#[1] -3.77435640 -1.20748286 -0.08654384 2.14530617
evaluate a polynomial
We need to be able to evaluate a polynomial in order to plot it. This answer of mine defined a function g that can evaluate a polynomial and its derivatives of arbitrary order. Here I copy that function in and rename it to PolyVal.
PolyVal <- function (x, pc, nderiv = 0L) {
  ## check missing arguments
  if (missing(x) || missing(pc)) stop("arguments missing with no default!")
  ## polynomial order p
  p <- length(pc) - 1L
  ## number of derivatives
  n <- nderiv
  ## early return?
  if (n > p) return(rep.int(0, length(x)))
  ## polynomial basis from degree 0 to degree `(p - n)`
  X <- outer(x, 0:(p - n), FUN = "^")
  ## initial coefficients
  ## the additional `+ 1L` is because R vectors start from index 1, not 0
  beta <- pc[n:p + 1L]
  ## factorial multiplier
  beta <- beta * factorial(n:p) / factorial(0:(p - n))
  ## matrix-vector multiplication
  base::c(X %*% beta)
}
For example, we can evaluate the polynomial at all its saddle points:
PolyVal(xs, pc)
#[1] 79.912753 -4.197986 1.093443 -51.871351
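As a quick sanity check (an addition on my part, not in the original answer), the first derivative should vanish at the saddle points, and PolyVal can verify this directly:
PolyVal(xs, pc, nderiv = 1L)
## all entries should be zero up to floating-point error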
sketch a polynomial with a 2-colour scheme for monotonic pieces
Here is a function to view / explore a polynomial.
ViewPoly <- function (pc, extend = 0.1) {
  ## get saddle points
  xs <- SaddlePoly(pc)
  ## number of saddle points (if 0 the whole polynomial is monotonic)
  n_saddles <- length(xs)
  if (n_saddles == 0L) {
    message("the polynomial is monotonic; program exits!")
    return(NULL)
  }
  ## set a reasonable xlim to include all saddle points
  if (n_saddles == 1L) xlim <- c(xs - 1, xs + 1)
  else xlim <- extendrange(xs, range(xs), extend)
  x <- c(xlim[1], xs, xlim[2])
  ## number of monotonic pieces
  k <- length(xs) + 1L
  ## monotonicity (positive for ascending and negative for descending)
  y <- PolyVal(x, pc)
  mono <- diff(y)
  ylim <- range(y)
  ## colour setting (red for ascending and green for descending)
  colour <- rep.int(3, k)
  colour[mono > 0] <- 2
  ## loop through pieces and plot the polynomial
  plot(x, y, type = "n", xlim = xlim, ylim = ylim)
  i <- 1L
  while (i <= k) {
    ## an evaluation grid between x[i] and x[i + 1]
    xg <- seq.int(x[i], x[i + 1L], length.out = 20)
    yg <- PolyVal(xg, pc)
    lines(xg, yg, col = colour[i])
    i <- i + 1L
  }
  ## add saddle points
  points(xs, y[2:k], pch = 19)
  ## return (x, y)
  list(x = x, y = y)
}
We can visualize the example polynomial in the question by:
ViewPoly(pc)
#$x
#[1] -4.07033952 -3.77435640 -1.20748286 -0.08654384 2.14530617 2.44128930
#
#$y
#[1] 72.424185 79.912753 -4.197986 1.093443 -51.871351 -45.856876
Here is an alternative solution, re-implementing SaddlePoly and PolyVal with the R package polynom.
library(polynom)
SaddlePoly <- function (pc) {
  ## a polynomial needs to be at least quadratic to have saddle points
  if (length(pc) < 3L) {
    message("A polynomial needs to be at least quadratic to have saddle points!")
    return(numeric(0))
  }
  ## polynomial coefficients of the 1st derivative
  ## pc1 <- pc[-1] * seq_len(length(pc) - 1) ## <- removed
  ## roots in complex domain
  croots <- solve(deriv(polynomial(pc))) ## <- use package "polynom"
  ## retain roots in real domain
  ## be careful when testing 0 for floating point numbers
  rroots <- Re(croots)[abs(Im(croots)) < 1e-14]
  ## note that `solve` (via `polyroot`) returns multiple roots with multiplicities
  ## return unique real roots (in ascending order)
  sort(unique(rroots))
}
xs <- SaddlePoly(pc)
#[1] -3.77435640 -1.20748286 -0.08654384 2.14530617
## a complete re-implementation using package "polynom"
PolyVal <- function (x, pc, nderiv = 0L) {
  ## check missing arguments
  if (missing(x) || missing(pc)) stop("arguments missing with no default!")
  ## create "polynomial" object
  p <- polynomial(pc)
  ## take derivatives
  i <- 0
  while (i < nderiv) {
    p <- deriv(p)
    i <- i + 1
  }
  ## evaluate "polynomial" with "predict"
  predict(p, x)
}
PolyVal(xs, pc)
#[1] 79.912753 -4.197986 1.093443 -51.871351
## use `ViewPoly` as it is
ViewPoly(pc)
#$x
#[1] -4.07033952 -3.77435640 -1.20748286 -0.08654384 2.14530617 2.44128930
#
#$y
#[1] 72.424185 79.912753 -4.197986 1.093443 -51.871351 -45.856876
In my opinion, the polynom package makes constructing a polynomial easy. The poly.calc function allows a polynomial to be constructed from its roots or from a Lagrange interpolation.
## (x - 1) ^ 3
p1 <- poly.calc(rep(1, 3))
## x * (x - 1) * (x - 2) * (x - 3)
p2 <- poly.calc(0:3)
## Lagrange interpolation through 0:4 and rnorm(5, 0:4, 1)
set.seed(0); x <- 0:4; y <- rnorm(5, 0:4, 1)
p3 <- poly.calc(x, y)
To view these polynomials, we can use the plot method from polynom (plot.polynomial) or our ViewPoly. However, the two functions use different logic when choosing xlim for the plot.
par(mfrow = c(3, 2), mar = c(4, 4, 1, 1))
## plot `p1`
plot(p1)
ViewPoly(unclass(p1))
## plot `p2`
plot(p2)
ViewPoly(unclass(p2))
## plot `p3`
plot(p3)
ViewPoly(unclass(p3))

Unsure what's causing this error in R (pmvt - Package: mvtnorm)?

I have a simple hazard function; the line causing the error is marked.
h <- function(t,u) {
  x <- 1 - Sa(t)
  y <- 1 - Sm(u)
  invx <- as.numeric(qt(x,df=d1))
  invy <- as.numeric(qt(x,df=d1))
  [ERROR LINE] copula <- pmvt(lower=as.numeric(cbind(-9999,-9999)),upper=cbind(invx,invy),df=d1,corr=matrix(cbind(1,d2,d2,1),byrow=T,ncol=2) )
  density <- dmvt(cbind(invx,invy),sigma=matrix(cbind(1,d2,d2,1),byrow=T,ncol=2),df=d1)
  num <- (sa(t)*sm(u))*density/dt(invx,df=d1)/dt(invy,df=d1)
  den <- 1 - x - y + copula
  hazard <- num/den
  return(hazard)
}
This hazard function is then called by a likelihood function:
# log Likelihood function for each individual car i
lli <- function(data) {
  result <- 0;
  # for all claims, evaluate hazard function at that point
  if (nrow(data) > 2) {
    for (k in 1:nrow(data)) {
      if (data[k,3] == 1) {
        result <- result + log(h(data[k,2],data[k,1]));
      }
    }
  }
  # integrate hazard function over areas between claims
  for (k in 1:(nrow(data)-1)) {
    integral <- quad2d(h,data[k,2],data[k+1,2],data[k,1],data[k+1,1]);
    result <- result - integral;
  }
  return(result)
}
This likelihood function is in turn called by a third function that runs over my entire data set; however, it is the function above that causes the error, not the one below.
# log Likelihood function over all vehicles
ll <- function(x) {
  # Unpack parameters
  d1 <<- x[1];
  d2 <<- x[2];
  total <- 0;
  # Get log likelihood for each vehicle
  for (i in 1:length(alldata)) {
    total <- total + lli(alldata[[i]]);
    #print(sprintf("Found candidate solution %d value: %f",i,total));
  }
  #print(sprintf("Found candidate solution value: %f",total));
  if (is.nan(total)) { # If it is undefined, make it a large negative number
    total <- -2147483647;
  }
  return(-1*total); # Minimise instead of maximise
}
Error message is as follows:
> ll(cbind(50,0.923))
Error in checkmvArgs(lower = lower, upper = upper, mean = delta, corr = corr, :
‘diag(corr)’ and ‘lower’ are of different length
I kept getting this same error when using pmvnorm, and ended up having to use the pbivnorm package to get around it. I can't find an alternative package for the bivariate t distribution, though. I don't understand what the problem is: when I call h(t,u) by itself it executes without a problem, but when lli(data) calls h(t,u), it doesn't work. What's even more bizarre is that lower and diag(corr) are the same length:
> length(as.numeric(cbind(-9999,-9999)))
[1] 2
> length(diag(matrix(cbind(1,d2,d2,1),byrow=T,ncol=2)))
[1] 2
I apologize for the messy code. I don't use R much. Anyway this has me completely stumped.
Data file is here: https://files.fm/u/yx9pw2b3
Additional code I forgot to include, basically some constants and marginal CDF functions:
Marginals.R:
p1 <- 0.4994485;
p2 <- 0.2344439;
p3 <- 0.1151654;
p4 <- 0.1509421;
b1 <- 0.7044292
t1 <- 1713.3170267
mu1 <- 7.014415
sig1 <- 1.394735
mu2 <- 6.926146
sig2 <- 1.056647
mu3 <- 6.7995896
sig3 <- 0.7212853
b2 <- 0.6444582
t2 <- 762.9962093
b3 <- 1.494303
t3 <- 410.828780
b1 <- 0.903
t1 <- 864.896
b2 <- 0.9109
t2 <- 314.2946
# Marginal survival distribution and density
Sa <- function(t) {return(exp(-(t / t1) ** b1))}
Sm <- function(u) {return(exp(-(u / t2) ** b2))}
sa <- function(t) {return((t / t1) ** b1 * b1 * exp(-(t / t1) ** b1) / t ) }
sm <- function(u) {return((u / t2) ** b2 * b2 * exp(-(u / t2) ** b2) / u ) }
Summary:
The problem is a length mismatch between lower and upper when calling pmvt: upper has length 2048 while lower has length 2.
Reasoning:
1. pmvt checks the incoming parameters by calling checkmvArgs in the mvtnorm package.
2. In checkmvArgs, lower, upper and mean are put together by rec <- cbind(lower, upper, mean). Here the new data rec has 2048 rows instead of 2.
3. lower is then replaced by lower <- rec[, "lower"], so lower now has length 2048 instead of 2.
4. Given that corr is still a 2 * 2 matrix, the error occurs when checking length(corr) != length(lower).
Solution:
invx <- as.numeric(qt(x,df=d1))
invy <- as.numeric(qt(x,df=d1))
upper is meant to be a length-2 vector, hence invx and invy need to be single numbers.
As I am not sure what upper range you want to define, I cannot take it further. One possibility is:
invx <- as.numeric(qt(x,df=d1))
invy <- as.numeric(qt(x,df=d1))
copula <- pmvt(lower=as.numeric(cbind(-9999,-9999)),upper=range(c(invx,invy)),df=d1,corr=matrix(c(1,d2,d2,1),byrow=T,ncol=2) )
This uses the range of invx and invy as the input, so the dmvt call would not be affected.
Note:
As the value of a is not provided, the next line below the error line (the call to dmvt) also failed.
Edit:
To make the issue more specific:
1. quad2d generates a Gauss-Legendre quadrature grid, which by default has length 32 in each dimension over the given range. And,
2. your function h is then called with the x and y values from this Gauss-Legendre grid. Hence, the t and u seen inside h are not single numbers; they are whole grids (32 × 32 = 1024 points, so cbind(invx, invy) holds 2048 values, which is where the 2048 in the summary comes from).
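Given that, a minimal sketch of a workaround (my suggestion, not from the original answer and untested on the asker's data) is to wrap h so each (t, u) pair gets its own scalar call, assuming pracma's quad2d evaluates the integrand on a matrix grid:
## hypothetical wrapper: h itself can only handle scalar (t, u)
h_vec <- function(t, u) {
  z <- mapply(h, t, u)  # one scalar call to h per quadrature point
  dim(z) <- dim(t)      # restore the grid shape that quad2d expects
  z
}
## then inside lli(), replace quad2d(h, ...) with quad2d(h_vec, ...)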

Finding mean of standard normal distribution in a given interval

I want to find mean of standard normal distribution in a given interval.
For example, if I divide standard normal distribution into two ([-Inf:0] [0:Inf]) I want to get the mean of each half.
The following code does almost exactly what I want:
divide <- 2
boundaries <- qnorm(seq(0,1,length.out=divide+1))
t <- sort(rnorm(100000))
means.1 <- rep(NA,divide)
for (i in 1:divide) {
  means.1[i] <- mean(t[(t > boundaries[i]) & (t < boundaries[i+1])])
}
But I need a more precise (and elegant) method to calculate these numbers (means.1).
I tried the following code, but it did not work (maybe because of gaps in my probability knowledge).
divide <- 2
boundaries <- qnorm(seq(0,1,length.out=divide+1))
means.2 <- rep(NA,divide)
f <- function(x) {x*dnorm(x)}
for (i in 1:divide) {
  means.2[i] <- integrate(f, lower=boundaries[i], upper=boundaries[i+1])$value
}
Any ideas?
Thanks in advance.
The problem is that the integral of dnorm(x) over the interval (-Inf, 0) isn't 1; that's why you got the wrong answer. To correct it, you must divide the result by the integral of the density over the interval (here 0.5), like:
func <- function(x, ...) x * dnorm(x, ...)
integrate(func, -Inf, 0, mean=0, sd=1)$value / (pnorm(0, mean=0, sd=1) - pnorm(-Inf, mean=0, sd=1))
Adapting it to different intervals should be easy.
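For reference (my addition, not part of the original answer), the truncated standard normal mean also has a closed form, E[X | a < X < b] = (dnorm(a) - dnorm(b)) / (pnorm(b) - pnorm(a)), so integrate can be skipped entirely. truncNormMean below is a hypothetical helper illustrating this:
truncNormMean <- function(a, b) {
  ## closed-form mean of a standard normal truncated to (a, b)
  (dnorm(a) - dnorm(b)) / (pnorm(b) - pnorm(a))
}
truncNormMean(-Inf, 0)  # -0.7978846, i.e. -sqrt(2/pi)
truncNormMean(0, Inf)   #  0.7978846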
Thanks for answering my question.
I combined all the answers as I understand them:
divide <- 5
boundaries <- qnorm(seq(0,1,length.out=divide+1))
# My original thinking
t <- sort(rnorm(1e6))
means.1 <- rep(NA,divide)
for (i in 1:divide) {
  means.1[i] <- mean(t[((t>boundaries[i])&(t<boundaries[i+1]))])
}
# Based on #DWin
t <- sort(rnorm(1e6))
means.2 <- tapply(t, findInterval(t, boundaries), mean)
# Based on #Rcoster
means.3 <- rep(NA,divide)
f <- function(x, ...) x * dnorm(x, ...)
for (i in 1:divide) {
  means.3[i] <- integrate(f, boundaries[i], boundaries[i+1])$value / (pnorm(boundaries[i+1]) - pnorm(boundaries[i]))
}
# Based on #Kith (fitdistr is in package MASS)
library(MASS)
t <- sort(rnorm(1e6))
means.4 <- rep(NA,divide)
for (i in 1:divide) {
  means.4[i] <- fitdistr(t[t > boundaries[i] & t < boundaries[i+1]], densfun="normal")$estimate[1]
}
Results
> means.1
[1] -1.4004895486 -0.5323784986 -0.0002590746 0.5313539906 1.3978177100
> means.2
[1] -1.3993590768 -0.5329465789 -0.0002875593 0.5321381745 1.3990997391
> means.3
[1] -1.399810e+00 -5.319031e-01 1.389222e-16 5.319031e-01 1.399810e+00
> means.4
[1] -1.399057073 -0.531946615 -0.000250952 0.531615180 1.400086731
I believe #Rcoster's is the one I wanted. The rest are creative approaches compared to mine, but still approximate.
Thanks.
You can use a combination of fitdistr and vector indexing.
Here's an example of how to get mean and std of just the positive values:
library("MASS")
x = rnorm(10000)
fitdistr(x[x > 0], densfun="normal")
or just the values in the interval (0,2):
fitdistr(x[x > 0 & x < 2], densfun="normal")
Let's say your cutpoints are -1, 0, 1, and 2, and you are interested in the means of the sections of a simulated standard normal.
samp <- rnorm(1e5)
(res <- tapply(samp, findInterval(samp, c( -1, 0, 1, 2)), mean) )
# 0 1 2 3 4
#-1.5164151 -0.4585519 0.4608587 1.3836470 2.3824633
Please do note that the labeling could be improved. One improvement could be:
names(res) <- paste("[", c(-Inf, -1, 0, 1, 2, Inf)[-6], " , ",
c(-Inf, -1, 0, 1, 2, Inf)[-1], ")", sep="")
> res
[-Inf , -1) [-1 , 0) [0 , 1) [1 , 2) [2 , Inf)
-1.5278185 -0.4623743 0.4621885 1.3834442 2.3835116
Using the distrEx and distr packages:
library(distrEx)
E(Truncate(Norm(mean=0, sd=1), lower=0, upper=Inf))
# [1] 0.797884
(See vignette(distr) in the distrDoc package for an excellent overview of the suite of distr and related packages.)
Or, using just base R, here's an alternative that constructs a discrete approximation of the expectation within the interval between lb and ub. The bases of the approximating rectangles are adjusted so that they all have equal areas (i.e. so that the probability of a point falling in each one of them is identical).
intervalMean <- function(lb, ub, n=1e5, ...) {
  ## Get x-values at n evenly-spaced quantiles between the lower and upper bounds
  xx <- qnorm(seq(pnorm(lb, ...), pnorm(ub, ...), length = n), ...)
  ## Calculate the expectation
  mean(xx[is.finite(xx)])
}
## Your example
intervalMean(lb=0, ub=1)
# [1] 0.4598626
## The mean of the complete normal distribution
intervalMean(-Inf, Inf)
## [1] -6.141351e-17
## Right half of standard normal distribution
intervalMean(lb=0, ub=Inf)
# [1] 0.7978606
## Right half of normal distribution with mean 0 and standard deviation 100
intervalMean(lb=0, ub=Inf, mean=0, sd=100)
# [1] 79.78606
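As a sanity check (my addition), these half-normal means have the exact value sd * sqrt(2/pi), so the discrete approximation above is accurate to about four decimal places:
sqrt(2 / pi)        # 0.7978846, exact mean of the right half of the standard normal
100 * sqrt(2 / pi)  # 79.78846, matches the sd = 100 case above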

Efficient way of calculating lots of matrices

I'm trying to write a program that does the following: given two intervals A and B, for every (a,b) with a in A and b in B,
1. create a variance matrix ymat, depending on (a,b)
2. calculate the (multivariate normal) density of some vector y with mean 0 and variance matrix ymat
I learned that using loops is bad in R, so I wanted to use outer(). Here are my two functions:
y_mat <- function(n, lambda, theta, sigma) {
  L <- diag(n);
  L[row(L) == col(L) + 1] <- -1;
  K <- t(1/n * L - theta*diag(n)) %*% (1/n * L - theta*diag(n));
  return(sigma^2*diag(n) + 1/lambda*K);
}
make_plot <- function(y, sigma, theta, lambda) {
  n <- length(y)
  sig_intv <- seq(.1, 2*sigma, .01);
  th_intv <- seq(-abs(2*theta), abs(2*theta), .01);
  z <- outer(sig_intv, th_intv, function(s,t){ dmvnorm(y, rep(0,n), y_mat(n, lambda, theta=t, sigma=s)) })
  contour(sig_intv, th_intv, z);
}
The shape of the variance matrix isn't relevant for this question. n and lambda are just two scalars, as are sigma and theta.
When I try
make_plot(y,.5,-3,10)
I get the following error message:
Error in t(1/n * L - theta * diag(n)) :
dims [product 25] do not match the length of object [109291]
In addition: Warning message:
In theta * diag(n) :
longer object length is not a multiple of shorter object length
Could someone enlighten me as to what's going wrong? Am I maybe going about this the wrong way?
The third argument of outer should be a vectorized function. Wrapping it with Vectorize should suffice:
library(mvtnorm) # for dmvnorm
make_plot <- function(y, sigma, theta, lambda) {
  n <- length(y)
  sig_intv <- seq(.1, 2*sigma, .01)
  th_intv <- seq(-abs(2*theta), abs(2*theta), .01)
  z <- outer(
    sig_intv, th_intv,
    Vectorize(function(s,t){ dmvnorm(y, rep(0,n), y_mat(n, lambda, theta=t, sigma=s)) })
  )
  contour(sig_intv, th_intv, z)
}
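For what it's worth (an aside, not from the original answer), Vectorize is a thin wrapper around mapply, so the z line inside make_plot could equivalently build the grid explicitly:
## equivalent to the outer(..., Vectorize(...)) call above:
## outer() pairs each s with each t; replicate the grids the same way
z <- matrix(
  mapply(
    function(s, t) dmvnorm(y, rep(0, n), y_mat(n, lambda, theta = t, sigma = s)),
    rep(sig_intv, times = length(th_intv)),
    rep(th_intv, each = length(sig_intv))
  ),
  nrow = length(sig_intv)
)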
