I just started using the 'wbs' package, and I thought I would get a better sense of it by coding its pseudocode in R myself before relying on the package. So I'm currently translating the wild binary segmentation pseudocode into R.
function WildBinSeg(s, e, ζT)
  if e − s < 1 then
    STOP
  else
    M_{s,e} := set of those indices m for which [s_m, e_m] ∈ F_T^M is such that [s_m, e_m] ⊆ [s, e]
    (Optional: augment M_{s,e} := M_{s,e} ∪ {0}, where [s_0, e_0] = [s, e])
    (m_0, b_0) := argmax_{m ∈ M_{s,e}, b ∈ {s_m, ..., e_m − 1}} |X~^b_{s_m, e_m}|
    if |X~^{b_0}_{s_{m_0}, e_{m_0}}| > ζT then
      add b_0 to the set of estimated change-points
      WildBinSeg(s, b_0, ζT)
      WildBinSeg(b_0 + 1, e, ζT)
    else
      STOP
    end if
  end if
end function
I got confused at the line:
M_{s,e} := set of those indices m for which [s_m, e_m] ∈ F_T^M is such that [s_m, e_m] ⊆ [s, e]
I know the pseudocode as a whole is a function, but I'm not sure whether I should create another function for this particular line, since it involves two operations. Can anyone help me code this line?
You first need to write a function which draws M intervals and finds the maximiser of the absolute value of the CUSUM statistic in each interval (for details see the WBS paper).
In the R code below, I assume that the output of such a function is stored in the 'res' variable, an M by 4 matrix. The first two columns contain the left and right endpoints of the intervals drawn (s_m, e_m), the third contains the maximiser of the CUSUM statistic for each interval (b_m), and the last contains the value of the corresponding CUSUM statistic (X~^{b_m}_{s_m, e_m}).
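In case it helps to see what such a function might look like, here is a minimal sketch (my own, not part of the wbs package; the helper names cusum.max and draw.intervals are mine) that draws M random intervals and records, for each, its endpoints, the maximiser of the absolute CUSUM statistic and the statistic's value, using the standard CUSUM formula from the WBS paper. In practice, wbs(x)$res already returns a matrix of exactly this form.
cusum.max <- function(x, s, e) {
  n <- e - s + 1
  b <- s:(e - 1)
  left <- cumsum(x[s:(e - 1)])        # partial sums x[s] + ... + x[b]
  right <- sum(x[s:e]) - left         # sums x[b+1] + ... + x[e]
  stat <- sqrt((e - b) / (n * (b - s + 1))) * left -
    sqrt((b - s + 1) / (n * (e - b))) * right
  id <- which.max(abs(stat))
  c(b[id], stat[id])                  # maximiser and its CUSUM value
}
draw.intervals <- function(x, M) {
  res <- matrix(NA_real_, M, 4)
  for (m in 1:M) {
    ends <- sort(sample(length(x), 2))   # random interval [s_m, e_m] with s_m < e_m
    res[m, 1:2] <- ends
    res[m, 3:4] <- cusum.max(x, ends[1], ends[2])
  }
  res
}
With this, something like res <- draw.intervals(x, 5000) could be fed into the WildBinSeg function below.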
WildBinSeg <- function(s, e, threshold, res){
  if(e-s < 1) return(NULL)
  else{
    #we leave only intervals contained in [s,e]
    res <- res[res[,1] >= s & res[,2] <= e, , drop=FALSE]
    #place for the optional augmentation
    #check if there are any intervals left
    if(nrow(res) == 0) return(NULL)
    else{
      #we find the maximum
      max.id <- which.max(abs(res[,4]))
      b0 <- res[max.id, 3]
      Xb0 <- res[max.id, 4]
      if(abs(Xb0) > threshold)
        return(c(
          WildBinSeg(s, b0, threshold, res),
          b0,
          WildBinSeg(b0+1, e, threshold, res)
        ))
      else return(NULL)
    }
  }
}
This function returns the locations of the change-points. You can check how it works using this piece of code.
require(wbs)
set.seed(12)
#we generate a piecewise constant function plus Gaussian noise vector
x <- rnorm(1000)
x[1:500] <- x[1:500]+1
res <- wbs(x)$res[,1:4]
#we set the threshold to the recommended value
threshold <- 1.3*sqrt(2*log(1000))
WildBinSeg(1,1000,threshold,res)
I want to integrate a function involving a while loop in R. I have pasted an MWE here. Could anyone please advise how to get rid of the warning messages when integrating such a function?
Thank You
myfun <- function(X, a, b, kmin, kmax){
  term <- 0
  k <- 1
  while(k < kmax | term < 10000){
    term <- term + a * b * X^k
    k <- k+1
  }
  fx <- exp(X) * term
  return(fx)
}
a <- 5
b <- 4
kmax <- 20
integrate(myfun, lower = 0, upper = 10, a = a, b = b, kmax = kmax)
Produces a warning, accessed via warnings():
In while (k < kmax | term < 10000) { ... :
the condition has length > 1 and only the first element will be used
From the integrate() documentation:
f must accept a vector of inputs and produce a vector of function evaluations at those points.
This is the crux of the problem here, which you can see by running myfun(c(1, 2), a, b, kmin, kmax) and reproducing a similar warning. What's happening is that integrate() wants to pass a vector of inputs to myfun in X; this means that inside your while loop, term will become a vector as well. This creates a problem when the while loop kicks back to the evaluation stage, because now the condition k < kmax | term < 10000 has a vector structure as well (since term does), which while doesn't like.
This warning is very good in this case, because it strongly suggests that integrate() isn't doing what you want it to do. Your goal here isn't to get rid of the warning messages; the function as written simply won't work with integrate() due to the while loop structure.
Your choices for how to proceed are to either (1) rewrite the function in a way that doesn't use a while loop, or (2) hard-code some numeric integration yourself, perhaps with a for loop. The best way to use R is to vectorize everything and to avoid constructs like while and for whenever possible.
Finally, I'll note that there seems to be some problem with the underlying function, since myfun(0.5, a, b, kmin, kmax) does not converge (note the problem with the mathematics when the supplied X term is less than 1), so you won't be able to integrate it on the interval [0, 10] no matter what you do.
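For completeness, here is one possible sketch of option (1), under the assumption that the intended sum is simply the first kmax - 1 terms a*b*X^k (i.e. dropping the term < 10000 condition, which is also what makes the loop run forever when X < 1). The name myfun_vec is mine; the point is that it accepts a vector X, as integrate() requires.
# vectorised version: accepts a vector X and sums the first kmax - 1 terms
myfun_vec <- function(X, a, b, kmax) {
  # outer(X, k) is a length(X) x (kmax - 1) matrix of X^k values
  powers <- outer(X, seq_len(kmax - 1), `^`)
  exp(X) * a * b * rowSums(powers)
}
integrate(myfun_vec, lower = 0, upper = 10, a = a, b = b, kmax = kmax)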
I used kernel estimation to get a non-parametric probability density function. Then I want to compare the tail 'distance' between two kernel distributions of continuous variables, using Kullback-Leibler divergence. I have tried the following code:
kl_l <- function(x,y) {
  integrand <- function(x,y) {
    f.x <- fitted(density(x, bw="nrd0"))
    f.y <- fitted(density(y, bw="nrd0"))
    return((log(f.x)-log(f.y))*f.x)
  }
  return(integrate(integrand, lower=-Inf, upper=quantile(density(x, bw="nrd0"),0.25))$value)
  #the Kullback-Leibler equation
}
When I run kl_l(a,b) for a, b = 19 continuous variables, it returns an error
Error in density(y, bw = "nrd0") : argument "y" is missing, with no default
Is there any way to calculate this?
(If anyone wants to see the actual equation: https://www.bankofengland.co.uk/-/media/boe/files/working-paper/2019/attention-to-the-tails-global-financial-conditions-and-exchange-rate-risks.pdf page 13.)
In short, I think you just need to move the f.x and f.y outside the integrand (and possibly replace fitted with approxfun):
kl_l <- function(x, y) {
  f.x <- approxfun(density(x, bw = "nrd0"))
  f.y <- approxfun(density(y, bw = "nrd0"))
  integrand <- function(z) {
    return((log(f.x(z)) - log(f.y(z))) * f.x(z))
  }
  return(integrate(integrand, lower = -Inf, upper = quantile(density(x, bw = "nrd0"), 0.25))$value)
  #the Kullback-Leibler equation
}
Expanding a little:
Looking at the paper you referenced, it appears as though you need to first create the two fitted distributions f and g. So if your variable a contains observations under the 1-standard-deviation increase in global financial conditions, and b contains the observations under average global financial conditions, you can create two functions as in your example:
f <- approxfun(density(a))
g <- approxfun(density(b))
Then define the integrand:
integrand <- function(x) log(f(x) / g(x)) * f(x)
The upper bound:
upper <- quantile(density(b, bw = "nrd0"), 0.25)
And finally do the integration on x within the specified bounds. Note that each value of x in the numerical computation has to go into both f and g; in your function kl_l, the x and y were separately going into the integrand, which I think is incorrect; and in any case, integrate will only have operated on the first variable.
integrate(integrand, lower = -Inf, upper = upper)$value
One thing to check for is that approxfun returns NA for values outside the range specified in the density, which can mess up your operation, so you'll need to adjust for those (if you expect the density to go to zero, for example).
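For example, one way to handle those NAs (a sketch of what I mean, assuming you want the contribution outside the estimated support to be treated as zero) is to clamp the integrand:
integrand <- function(x) {
  fx <- f(x)
  gx <- g(x)
  # outside the range covered by density()/approxfun(), fx or gx is NA;
  # treat the contribution there (and anywhere a density is non-positive) as 0
  ifelse(is.na(fx) | is.na(gx) | fx <= 0 | gx <= 0, 0, log(fx / gx) * fx)
}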
I have two input matrices, dt (10x3) and wt (3x3), that I need to use to find the optimal decision matrix Par (same dimensions as dt, i.e. 10x3) so as to maximize an objective function. The R code below gives some direction on the problem (sample inputs used here):
#Input Matrices
dt <- matrix(runif(30),10,3)
wt <- matrix(c(1,0,0,0,2,0,0,0,1),3,3) #weights
#objective function
Obj <- function(Par) {
  P = matrix(Par, nrow = 10, byrow=F) # Reshape
  X = t((dt%*%wt)[,1]) %*% P[,1]
  Y = t((dt%*%wt)[,2]) %*% P[,2]
  Z = t((dt%*%wt)[,3]) %*% P[,3]
  as.numeric(X+Y+Z) #maximize
}
Now I am struggling to apply the following constraints to the problem :
1) The matrix Par can only have binary values (0 or 1)
2) rowSums(Par) = 1 (basically, a row can have a 1 in only one of the three columns)
3) sum(Par[,1]) <= 5, sum(Par[,2]) <= 6, & sum(Par[,3]) <= 4
4) X/(X+Y+Z) < 0.35, & Y/(X+Y+Z) < 0.4 (X,Y,Z are defined in the objective function)
I tried coding the constraints with constrOptim, but I am not sure how to input binary and integer constraints. I am reading up on lpSolve, but have not been able to figure it out. Any help is much appreciated. Thanks!
I believe this is indeed a MIP, so there are no issues with convexity. If I am correct, the model can look like this:
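A sketch of one such formulation (my reconstruction from the constraints listed in the question, not the original model; $c = \mathrm{dt}\,\mathrm{wt}$ are the objective coefficients, $x_{ij} \in \{0,1\}$ the decision variables, $u = (5, 6, 4)$ the column limits, and the strict ratio inequalities are relaxed to $\le$ for the solver):
$\max \sum_{i,j} c_{ij}\, x_{ij}$
$\text{s.t.} \quad \sum_{j} x_{ij} = 1 \quad \text{for all } i$
$\sum_{i} x_{ij} \le u_j \quad \text{for all } j$
$\sum_{i} c_{i1} x_{i1} \le 0.35 \sum_{i,j} c_{ij} x_{ij}$
$\sum_{i} c_{i2} x_{i2} \le 0.40 \sum_{i,j} c_{ij} x_{ij}$
$x_{ij} \in \{0,1\}$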
This model can be easily transcribed into R. Note that LP/MIP solvers do not use functions for the objective and constraints (as opposed to NLP solvers); in R one typically builds up matrices of LP coefficients.
Note: I had to make the limits on the column sums much larger (I used 50,60,40).
Based on Erwin's response, I am able to formulate the model using lpSolve in R. However, I am still struggling to add the final constraint to the model (the 4th constraint in my question above). Here's what I have been able to code so far:
#lpSolveAPI provides make.lp, set.type, add.constraint, etc.
library(lpSolveAPI)
#input dimension
r <- 10
c <- 3
#input matrices
dt <- matrix(runif(r*c),r,c)
wt <- matrix(c(1,0,0,0,2,0,0,0,1),3,3) #weights
#column controller
c.limit <- c(60,50,70)
#create structure for lpSolve
ncol <- r*c
lp.create <- make.lp(ncol=ncol)
set.type(lp.create, columns=1:ncol, type = c("binary"))
#create objective values
obj.vals <- as.vector(t(dt%*%wt))
set.objfn(lp.create, obj.vals)
lp.control(lp.create,sense='max')
#Add constraints to ensure sum of parameters for every row (rowSum) <= 1
for (i in 1:r){
  add.constraint(lp.create, xt=c(1,1,1),
                 indices=c(3*i-2,3*i-1,3*i), rhs=1, type="<=")
}
#Add constraints to ensure sum of parameters for every column (colSum) <= column limit (defined above)
for (i in 1:c){
  add.constraint(lp.create, xt=rep(1,r),
                 indices=seq(i,ncol,by=c), rhs=c.limit[i], type="<=")
}
#Add constraints to ensure the sum of the column objective (t((dt%*%wt)[,i])%*%P[,i]) <= the limits defined in the problem
#NOT SURE HOW TO APPLY A CONSTRAINT THAT IS DEPENDENT ON THE OBJECTIVE FUNCTION
solve(lp.create)
get.objective(lp.create) #20
final.par <- matrix(get.variables(lp.create), ncol = c, byrow=T) # Reshape
Any help that can get me to the finish line is much appreciated :)
Thanks
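One possible way to add that last constraint (a sketch, not part of the original thread; the helpers ratio.coeffs and eps are mine): since X, Y and Z are themselves linear in the binary variables, X/(X+Y+Z) < 0.35 is equivalent to (1 - 0.35)*X - 0.35*Y - 0.35*Z < 0, which can be passed to add.constraint like the other rows, with a small epsilon standing in for the strict inequality.
C <- dt %*% wt
ratio.coeffs <- function(w) {
  # w[j] multiplies every variable in column j; the result is ordered like
  # obj.vals (row-major), so it matches the model's variable ordering
  as.vector(t(C * matrix(w, nrow = r, ncol = c, byrow = TRUE)))
}
eps <- 1e-9
add.constraint(lp.create, xt = ratio.coeffs(c(1 - 0.35, -0.35, -0.35)),
               indices = 1:ncol, rhs = -eps, type = "<=")  # X/(X+Y+Z) < 0.35
add.constraint(lp.create, xt = ratio.coeffs(c(-0.4, 1 - 0.4, -0.4)),
               indices = 1:ncol, rhs = -eps, type = "<=")  # Y/(X+Y+Z) < 0.4
These two rows would be added before calling solve(lp.create).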
In numerical analysis, we students are required to implement code in R that, given a function f(x), finds its Fourier interpolation $t_N(x)$ and computes the interpolation error
$||f(x)-t_{N}(x)|| = \int_{0}^{2\pi} |f(x)-t_{N}(x)|^2 \, dx$
for a variety of different $N$.
I first tried to compute the d-coefficients according to this formula:
$d = \frac{1}{N} M y$
with M denoting the DFT matrix and y denoting a vector of equidistant function values with
$y_j = f(x_j)$ and
$x_j = \frac{2\pi j}{N}$
for $j = 0, \dots, N-1$.
My goal was to come up with a sum that can be described by
$t_{N}(x) = \sum_{k=0}^{N-1} d_k e^{ikx},$
which would be easier to integrate later on, term by term.
f <- function(x) 3/(6+4*cos(x)) #first function to compare with
g <- function(x) sin(32*x) #second one
xj <- function(x,n) 2*pi*x/n
M <- function(n){
  w = exp(-2*pi*1i/n)
  m = outer(0:(n-1),0:(n-1))
  return(w^m)
}
y <- function(n){
  f(xj(0:(n-1),n))
}
transformFunction <- function(n, f){
  d = 1/n * t(M(n)) %*% f(xj(0:(n-1),n))
  script <- paste(d[1])
  for(i in 2:n)
    script <- paste0(script, paste0("+",d[i],"*exp(1i*x*",i,")"))
  #trans <- sum(d[1:n] * exp(1i*x*(0:(n-1))))
  return(script)
}
The main purpose of the transform function was, initially, to return a function - or rather a mathematical expression - which could then be used to define my Fourier interpolation function. The problem, given my fairly limited knowledge, is that I cannot integrate functions that still have sums nested in them (which is why I commented out the corresponding line in the code).
Out of absolute desperation I then tried to paste the summands together as text, one after another, only to parse them again as an expression.
So the main question that remains is: how do I return mathematical expressions in a manner that allows me to use them as a function and later on integrate them?
I am sincerely sorry for any misunderstanding or confusion, as well as my seemingly amateurish coding.
Thanks in advance!
A function in R can return any class, so specifically also objects of class function. Hence, you can make trans a function of x and return that.
Since the integrate function requires a vectorized function, we use Vectorize before outputting.
transformFunction <- function(n, f){
  d = 1/n * t(M(n)) %*% f(xj(0:(n-1),n))
  ## Output function
  trans <- function(x) sum(d[1:n] * exp(1i*x*(0:(n-1))))
  ## Vectorize output for the integrate function
  Vectorize(trans)
}
To integrate, now simply make a new variable with the output of transformFunction:
myint <- transformFunction(n = 10,f = f)
Test: (integrate can only handle real-valued functions)
integrate(function(x) Re(myint(x)),0,2)$value +
1i*integrate(function(x) Im(myint(x)),0,2)$value
# [1] 1.091337-0.271636i
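Building on that, a possible next step (my sketch, following the error definition in the question; the name interpError is mine) is to compute the interpolation error over [0, 2*pi] for a given N:
# squared L2 error between f and its Fourier interpolant of order n
interpError <- function(n, f) {
  tN <- transformFunction(n, f)
  integrate(function(x) Mod(f(x) - tN(x))^2, lower = 0, upper = 2*pi)$value
}
interpError(10, f)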
Refer to the R code below. The function (someRfunction) operates on a vector and returns a scalar value. The data are pairs (x,y), where x and y are vectors of length n, which may be large.
I want to know the value x* such that the result of someRfunction applied to the y values for which x > x* is maximized. The function operates on y values and is non-monotonic in x*. I need to evaluate it for every candidate x* (i.e. each element of x). Speed is not an issue if this is executed once, but the code will be executed many times in a simulation. Is there any way to make this code more efficient/faster?
### x and y are vectors of length n
### sort x and y such that they are ordered by descending x
xord <- x[order(-x)]
yord <- y[order(-x)]
maxf <- -99999
maxcut <- NA
for (i in 1:n) {
  ### yi is a subvector of y that corresponds to y[x>x{i}]
  ### where x{i} is the (n-i+1)th order statistic of x
  yi <- yord[1:(i-1)]
  fxi <- someRfunction(yi)
  if (fxi > maxf) {
    maxf <- fxi
    maxcut <- xord[i]
  }
}
Thanks.
Edit: let someRfunction(yi)=t.test(yi)$statistic.
If you can say anything more about the function, particularly whether it is smooth and whether its gradient can be determined, you will get a better answer. At the moment the only speed increase available is a modest one: pre-allocate a vector to hold the results, drop the if-max clause, and then call which.max() on the vector afterwards. You might also want to look at the function optimx in the "optimx" package.
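To make that suggestion concrete, a minimal sketch (x and y as in the question; someRfunction taken from the edit, so each subvector needs at least two observations):
### pre-allocate a result vector, fill it in the loop, take which.max once at the end
someRfunction <- function(yi) t.test(yi)$statistic
n <- length(x)
xord <- x[order(-x)]
yord <- y[order(-x)]
fvals <- rep(NA_real_, n)
for (i in 3:n) {                     # i = 3 gives the first subvector of length 2
  fvals[i] <- someRfunction(yord[1:(i - 1)])
}
maxcut <- xord[which.max(fvals)]     # which.max skips the leading NAs
maxf <- max(fvals, na.rm = TRUE)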