The integrand has the sum of a vector in R

I need to integrate a function, integrand, which is the product of two terms A and B: A = 2/(upper - lower), and B is the sum of a vector that depends on the input parameter.
If I have
X = 7,
N = 50,
Ck # a vector of N elements,
uk # a vector of N elements,
upper = 10,
lower = -10
and my R code is as follows:
integrand <- function(y) {
  df <- matrix(NA, nrow = N, ncol = 1)
  res <- NA
  for (k in 1:N)
    df[k] <- Ck[k] * cos(y - lower) * uk[k]
  res <- 2 / (upper - lower) * sum(df)
  return(res)
}
integrate(function(x) { integrand(x) }, upper = X, lower = lower)$value
I got an error message after running the code:
Error in integrate(function(x) { :
  evaluation of function gave a result of wrong length
What is my mistake?
Additionally, if df[k] <- Ck[k]*cos((y - lower)*uk[k]), may I write the code as:
integrand <- function(y) {
  df <- Ck * cos((y - lower) * uk)
  2 * sum(df) / (upper - lower)
}
integrate(Vectorize(integrand), upper = X, lower = lower)$value
THANKS!

Use
integrand <- function(y) {
  mat <- tcrossprod(Ck * uk, cos(y - lower))
  2 * colSums(mat) / (upper - lower)
}
Explanation:
If you read the documentation of function integrate, you see that f must be a vectorized function (i.e. you give it a vector argument and it returns a vector of the same length).
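For illustration, a quick self-contained check with made-up data (the Ck and uk values here are random, purely to show the mechanics):
set.seed(1)
N <- 50; X <- 7; upper <- 10; lower <- -10
Ck <- runif(N); uk <- runif(N)
integrand <- function(y) {
  mat <- tcrossprod(Ck * uk, cos(y - lower))
  2 * colSums(mat) / (upper - lower)
}
length(integrand(c(0, 1, 2)))  # 3, the same length as the input
integrate(integrand, lower = lower, upper = X)$value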

Related

Why can't I use uniroot with plot?

I am working on code that uses the uniroot function to approximate the root of an equation. I am trying to plot the behaviour of the function being passed through uniroot as the value of a free variable changes:
library(Deriv)

f1 <- function(s) {
  (1 - 2*s)^(-3/2)*exp((8*s)/(1 - 2*s))
}
f2 <- function(s) {
  log(f1(s))
}
f3 <- Deriv(f2, 's')
f4 <- Deriv(f3, 's')
f5 <- Deriv(f4, 's')

upp_s <- 1/2 - 1e-20

f_est <- function(x) {
  f3a <- function(s) { f3(s = s) - x }
  s_ <- uniroot(f3a,
                lower = -9,
                upper = upp_s)$root
  return(s_)
}

plot(f_est, from = 0, to = 100, col = "red", main = "header")
The output of f_est works as expected. However, when passed through the plot function, uniroot seems to break:
> plot(f_est, from = 0, to = 100, col = "red", main = "header")
Error in uniroot(f3a, lower = -9, upper = upp_s) :
  f() values at end points not of opposite sign
In addition: Warning messages:
1: In if (is.na(f.lower)) stop("f.lower = f(lower) is NA") :
  the condition has length > 1 and only the first element will be used
2: In if (is.na(f.upper)) stop("f.upper = f(upper) is NA") :
  the condition has length > 1 and only the first element will be used
The function is set up such that the endpoints specified in uniroot are always of opposite sign, and that there is always exactly one real root. I have also checked to confirm that the endpoints are non-missing when f_est is run by itself. I've tried vectorising the functions involved to no avail.
Why is this happening?
I was able to get most of the way there with
upp_s <- 0.497
plot(Vectorize(f_est), from = 0.2, to = 100)
Not only is 1/2 - epsilon exactly equal to 1/2 for values of epsilon that are too small (due to floating-point rounding), but I also found that f3() gives NaN for values >= 0.498. Setting upp_s to 0.497 worked OK.
plot() applied to a function calls curve(), which needs a function that can take a vector of x values.
The curve broke with "f() values at end points not of opposite sign" if I started the curve from 0.1; I didn't dig in further and try to diagnose what was going wrong.
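A quick way to see the diagnosis directly (using the definitions above): f_est works for a single x, but uniroot's internal if() checks assume scalar function values, so a vector x warns or errors depending on the R version.
f_est(1)             # scalar input: returns one root
try(f_est(c(1, 2)))  # vector input: uniroot's scalar checks misbehave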
PS. It is generally more numerically stable and efficient to do computations directly on the log scale where possible. In this case, that means using
f2 <- function(s) { (-3/2)*log(1-2*s) + (8*s)/(1-2*s) }
instead of
f1 <- function(s) {
  (1 - 2*s)^(-3/2)*exp((8*s)/(1 - 2*s))
}
f2_orig <- function(s) {
  log(f1(s))
}
## check
all.equal(f2(0.25), f2_orig(0.25)) ## TRUE
Doing this and setting the lower bound of uniroot() to -500 lets us get pretty close to the zero boundary (although it looks both analytically and computationally as though the function diverges to -∞ as x goes to 0).
f3 <- Deriv(f2, 's')
upp_s <- 1/2 - 1e-10
lwr_a <- -500

f_est <- function(x) {
  f3a <- function(s) { f3(s = s) - x }
  s_ <- uniroot(f3a,
                lower = lwr_a,
                upper = upp_s)$root
  return(s_)
}

plot(Vectorize(f_est), from = 0.005, to = 100, log = "x")
You can also solve this analytically, or ask caracas (an R interface to sympy) to do it for you:
library(caracas)
x <- symbol("x"); s <- symbol("s")
## peek at f3() guts to find the expression for the derivative;
## could also do the whole thing in caracas/sympy
solve_sys((11 + 16*(s/(1 - s*2)))/(1 - s*2), x, list(s))
sol <- function(x) { (2*x - sqrt(32*x + 9) - 3)/(4*x) }
curve(sol, add = TRUE, col = 2)

Setting up a Horner polynomial in R

I am trying to set up a function in R that computes a polynomial
P(x) = c[1] + c[2]*x + c[3]*x^2 + ... + c[n-1]*x^(n-2) + c[n]*x^(n-1)
for various values of x and a given set of coefficients c.
Horner's method is to
Set b[n] = c[n]
For i = n-1, n-2, ..., 2, 1, set b[i] = b[i+1]*x + c[i]
Return b[1], which equals P(x)
What I have so far:
hornerpoly1 <- function(x, coef, output = tail(coef, n = 1), exp = seq_along(coef) - 1) {
  for (i in 1:tail(exp, n = 1)) {
    (output*x) + head(tail(coef, n = i), n = 1)
  }
}

hornerpoly <- function(x, coef) {
  exp <- seq_along(coef) - 1
  output <- tail(coef, n = 1)
  if (length(coef) < 2) {
    stop("Must be more than one coefficient")
  }
  sapply(x, hornerpoly1, coef, output, exp)
}
I also need to error-check the length of coef; that's what the if statement is for, but I am not struggling with that part. When I try to compute this function for x = 1:3 and coef = c(4,16,-1), I get three NULL statements, and I can't figure out why. Any help on how to better construct this function or remedy the NULL output is appreciated. Let me know if I can make anything more clear.
The NULL output comes from hornerpoly1: its body is a single for loop whose result is never assigned or returned, and a for loop itself evaluates to NULL, so that is what each sapply() call hands back. How about the following instead:
Define a function that takes x as the argument at which to evaluate the polynomial, and coef as the vector of coefficients in increasing order of degree, matching your definition of P(x). So the vector coef = c(-1, 16, 4) corresponds to P(x) = 4*x^2 + 16*x - 1.
The Horner algorithm is implemented in the following function:
f.horner <- function(x, coef) {
  n <- length(coef)
  b <- rep(0, n)
  b[n] <- coef[n]
  while (n > 1) {
    n <- n - 1
    b[n] <- coef[n] + b[n + 1] * x
  }
  return(b[1])
}
We evaluate the polynomial at x = 1:3 for coef = c(-1, 16, 4):
sapply(1:3, f.horner, c(-1, 16, 4))
#[1] 19 47 83
Some final comments:
Note that the check on the length of coef is handled implicitly by the statement while (n > 1) {...}: we go through the coefficients starting from the last and stop once we reach the first, so a length-1 coef (a constant polynomial) simply returns coef[1].
You don't need to save the intermediate b values as a vector in the function; this is purely for (my) educational/trouble-shooting purposes. It's easy to rewrite the code to store only b's last value and update it every iteration. You could then also vectorise f.horner to take a vector of x values instead of only a scalar, as in the sketch below.
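For instance, a minimal sketch of that rewrite, which keeps only the running value of b and, as a side effect, accepts a vector of x values directly:
f.horner2 <- function(x, coef) {
  b <- coef[length(coef)]
  for (i in (length(coef) - 1):1) {
    b <- coef[i] + b * x  # b becomes a vector whenever x is one
  }
  b
}
f.horner2(1:3, c(-1, 16, 4))
#[1] 19 47 83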

Non-conformable arguments in R

I am rewriting in R, for practice, an algorithm I originally wrote in C++: the Finite Difference Method. I am pretty new to R, so I don't know all the rules regarding vector/matrix multiplication. For some reason I am getting a non-conformable arguments error when I do this:
ST_u <- matrix(0, M, 1)
ST_l <- matrix(0, M, 1)
for (i in 1:M) {
  Z <- matrix(gaussian_box_muller(i), M, 1)
  ST_u[i] <- (S0 + delta_S)*exp((r - (sigma*sigma)/2.0)*T + sigma*sqrt(T) %*% Z)
  ST_l[i] <- (S0 - delta_S)*exp((r - (sigma*sigma)/2.0)*T + sigma*sqrt(T) %*% Z)
}
I get this error:
Error in sqrt(T) %*% Z : non-conformable arguments
Here is my whole code:
gaussian_box_muller <- function(n) {
  theta <- runif(n, 0, 2 * pi)
  rsq <- rexp(n, 0.5)
  x <- sqrt(rsq) * cos(theta)
  return(x)
}

d_j <- function(j, S, K, r, v, T) {
  return((log(S/K) + (r + (-1^(j - 1))*0.5*v*v)*T)/(v*(T^0.5)))
}

call_delta <- function(S, K, r, v, T) {
  return(S * dnorm(d_j(1, S, K, r, v, T)) - K*exp(-r*T) * dnorm(d_j(2, S, K, r, v, T)))
}

Finite_Difference <- function(S0, K, r, sigma, T, M, delta_S) {
  ST_u <- matrix(0, M, 1)
  ST_l <- matrix(0, M, 1)
  for (i in 1:M) {
    Z <- matrix(gaussian_box_muller(i), M, 1)
    ST_u[i] <- (S0 + delta_S)*exp((r - (sigma*sigma)/2.0)*T + sigma*sqrt(T) %*% Z)
    ST_l[i] <- (S0 - delta_S)*exp((r - (sigma*sigma)/2.0)*T + sigma*sqrt(T) %*% Z)
  }
  Delta <- matrix(0, M, 1)
  totDelta <- 0
  for (i in 1:M) {
    if (ST_u[i] - K > 0 && ST_l[i] - K > 0) {
      Delta[i] <- ((ST_u[i] - K) - (ST_l[i] - K))/(2*delta_S)
    } else {
      Delta <- 0
    }
    totDelta <- totDelta + exp(-r*T)*Delta[i]
  }
  totDelta <- totDelta * 1/M
  Var <- 0
  for (i in 1:M) {
    Var <- Var + (Delta[i] - totDelta)^2
  }
  Var <- Var*1/M
  cat("The Finite Difference Delta is: ", totDelta)
  call_Delta_a <- call_delta(S, K, r, sigma, T)
  bias <- abs(call_Delta_a - totDelta)
  cat("The bias is: ", bias)
  cat("The Variance of the Finite Difference method is: ", Var)
  MSE <- bias*bias + Var
  cat("The mean squared error is thus: ", MSE)
}
S0 <- 100.0
delta_S <- 0.001
K <- 100.0
r <- 0.05
sigma <- 0.2
T <- 1.0
M <- 10
result1 <- Finite_Difference(S0,K,r,sigma,T,M,delta_S)
I can't seem to figure out the problem, any suggestions would be greatly appreciated.
In R, the %*% operator is reserved for multiplying two conformable matrices. As one special case, you can also use it to multiply a vector by a matrix (or vice versa), if the vector can be treated as a row or column vector that conforms to the matrix; as a second special case, it can be used to multiply two vectors to calculate their inner product.
However, one thing it cannot do is perform scalar multiplication. Scalar multiplication of vectors or matrices always uses the plain * operator. Specifically, in the expression sqrt(T) %*% Z, the first term sqrt(T) is a scalar and the second, Z, is a matrix. If what you intend to do here is multiply the matrix Z by the scalar sqrt(T), then this should just be written sqrt(T) * Z.
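A two-line illustration with a small made-up Z:
T <- 1.0
Z <- matrix(rnorm(3), 3, 1)
sqrt(T) * Z          # scalar times matrix: elementwise scaling, works
try(sqrt(T) %*% Z)   # treated as 1 x 1 %*% 3 x 1: non-conformable arguments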
When I made this change, your program still didn't work because of another bug -- S is used but never defined -- but I don't understand your algorithm well enough to attempt a fix.
A few other comments on the program not directly related to your original question:
The first loop in Finite_Difference looks suspicious: gaussian_box_muller(i) generates a vector of length i as i varies in the loop from 1 up to M, and forcing these vectors into a column matrix of length M to generate Z is probably not doing what you want. It will "reuse" the values in a cycle to populate the matrix. Try these to see what I mean:
matrix(gaussian_box_muller(1),10,1) # all one value
matrix(gaussian_box_muller(3),10,1) # cycle of three values
You also use loops in many places where R's vector operations would be easier to read and (typically) faster to execute. For example, your definition of Var is equivalent to:
Var <- sum((Delta - totDelta)^2)/M
and the definitions of Delta and totDelta could also be written in this simplified fashion.
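For instance, the two Delta loops could collapse to something like this (a sketch reusing the names from the question; note it also sidesteps the Delta <- 0 slip in your else branch, which overwrites the whole vector rather than a single element):
Delta <- ifelse(ST_u > K & ST_l > K, (ST_u - ST_l)/(2*delta_S), 0)
totDelta <- mean(exp(-r*T) * Delta)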
I'd suggest Googling for "vector and matrix operations in r" or something similar and reading some tutorials. Vector arithmetic in particular is idiomatic R, and you'll want to learn it early and use it often.
You might find it helpful to consider the rnorm function to generate random Gaussians.
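For instance, the entire Box-Muller helper can be replaced with a single call:
Z <- matrix(rnorm(M), M, 1)  # M independent N(0, 1) draws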
Happy R-ing!

fzero function from MATLAB to R

I want to use R's fzero function to find the roots of a function. The problem gets complicated, as the function in question calls some other functions, which in turn call others. I do have MATLAB code that does it, and I am trying to translate it to R, but I cannot make it work. My experience with MATLAB is limited, so it's probable I just missed some feature of the MATLAB code while translating. My ultimate goal is a working R equivalent of the MATLAB code. Any hints will be highly appreciated!
The error I get is in the function psi():
Error in (-t(I) * pi^2) %*% time : non-conformable arguments
although the sizes of the matrices do match, and this part of the code works with some naive input when run in isolation.
NB: I have tried using mrdivide (R's equivalent of MATLAB's right matrix division) in some places, but with no effect.
NB2: I obtain the same error trying function uniroot instead of fzero.
# Global parameters:
N = 140
A2 = a 256 x 256 matrix of data
I = a vector of length 256: (0, 1, 2^2, 3^2, 4^2, ..., 255^2)
# ----------------------------------------------------------------
Working MATLAB code:
fzero( @(t)(t - evolve(t)), [0, 0.1])

function [out, time] = evolve(t)
  global N
  Sum_func = func([0,2], t) + func([2,0], t) + 2*func([1,1], t);
  time = (2*pi*N*Sum_func)^(-1/3);
  out = (t - time)/time;
end

function out = func(s, t)
  global N
  if sum(s) <= 4
    Sum_func = func([s(1)+1, s(2)], t) + func([s(1), s(2)+1], t);
    const = (1 + 1/2^(sum(s)+1))/3;
    time = (-2*const*K(s(1))*K(s(2))/N/Sum_func)^(1/(2+sum(s)));
    out = psi(s, time);
  else
    out = psi(s, t);
  end
end

function out = psi(s, Time)
  global I A2
  % s is a vector
  w = exp(-I*pi^2*Time).*[1, .5*ones(1, length(I)-1)];
  wx = w.*(I.^s(1));
  wy = w.*(I.^s(2));
  out = (-1)^sum(s)*(wy*A2*wx')*pi^(2*sum(s));
end

function out = K(s)
  out = (-1)^s*prod((1:2:2*s-1))/sqrt(2*pi);
end
# ----------------------------------------------------------------
My attempt at R translation (not working):
fzero(subtract_evolve, c(0, 0.1))

K <- function(s) {
  out <- (-1)^s * prod(seq(from = 1, to = 2*s - 1, by = 2))/sqrt(2*pi)
  return(out)
}

psi <- function(s, time) {
  w <- (exp((-t(I) * pi^2) %*% time)) *
    t(c(cbind(1, 0.5*ones(1, length(I) - 1))))
  wx <- t(w * (I^s[1]))
  wy <- t(w * (I^s[2]))
  out <- (-1)^sum(s) * (wy %*% A2 %*% t(wx)) * pi^(2*sum(s))
  return(out)
}

func <- function(s, t) {
  if (sum(s) <= 4) {
    sum_func <- func(c(s[1]+1, s[2]), t) + func(c(s[1], s[2]+1), t)
    const <- (1 + 1/2^(sum(s)+1))/3
    time <- (-2 * const * K(s[1]) * K(s[2]) / N / sum_func)^(1/(2+sum(s)))
    out <- psi(s, time)
  } else {
    out <- psi(s, t)
  }
  return(out)
}

evolve <- function(t) {
  sum_func <- func(c(0,2), t) + func(c(2,0), t) + 2*func(c(1,1), t)
  time <- (2*pi*N*Sum_func)^(-1/3)
  out <- (t - time)/time
  return(c(out, time))
}

subtract_evolve <- function(t) {
  return(t - evolve(t))
}
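The reported error points at the same %*%-versus-* distinction discussed in the previous question: in psi(), time is a scalar, so MATLAB's elementwise product -I*pi^2*Time should translate to plain * in R, not %*%. A minimal sketch of psi() with that change (untested against the full algorithm, and assuming I and A2 are defined as above):
psi <- function(s, time) {
  ## time is a scalar, so scale elementwise rather than with %*%
  w  <- exp(-I * pi^2 * time) * c(1, rep(0.5, length(I) - 1))
  wx <- w * I^s[1]
  wy <- w * I^s[2]
  (-1)^sum(s) * drop(wy %*% A2 %*% wx) * pi^(2 * sum(s))
}
Two further traps worth checking while debugging: evolve() assigns sum_func but reads Sum_func (R is case-sensitive, so it fails with "object 'Sum_func' not found"), and K(0) errors because seq(from = 1, to = -1, by = 2) has the wrong sign in 'by', whereas MATLAB's 1:2:-1 silently yields an empty vector whose prod() is 1.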

Function minimization in R with constraints on a function of parameters

Many libraries are available in R to perform minimisation. However, all the ones I could find (e.g. Rcgmin or optimx) only allow lower and upper bounds on the input parameters:
opt_Params <- Rcgmin(par = Params_init,
                     fn = cost_func,
                     gr = params_grad,
                     lower = min_par,
                     upper = max_par)
I'm looking for something different: boundaries not on the input parameters, but on the values of a function that takes them.
Concretely, my cost_func is a cost function that measures the sum of the squared residuals between my (fixed) observed data Y_obs and the prediction from my fitted parameters Y_calc:
cost_func <- function(Params) {
  X <- Params[1:(num_items*num_features)]
  dim(X) <- c(num_items, num_features)
  Theta <- Params[(num_items*num_features + 1):length(Params)]
  dim(Theta) <- c(num_users, num_features)
  Y_calc <- X %*% t(Theta)
  J <- (Y_calc - Y_obs) * (Y_calc - Y_obs)
  cost <- sum(rowSums(J))
  return(cost)
}
Minimising the cost function, I can ensure that my predicted Y_calc get ever closer to Y_obs.
This however allows for arbitrary values in Y_calc. What I would like to do is to impose the same boundaries as I know to be present in Y_obs (between 0 and 10 - this is a collaborative filtering algorithm).
So I'm not trying to put constraints on my input parameters, but on a function of them (namely, on every element of Y_calc <- X %*% t(Theta)).
Is there an R minimisation library in which this is possible? Or do I need to change my approach?
I think you can try something like this:
library(DEoptim)

cost_func <- function(Params) {
  X <- Params[1:(num_items * num_features)]
  dim(X) <- c(num_items, num_features)
  Theta <- Params[(num_items * num_features + 1):length(Params)]
  dim(Theta) <- c(num_users, num_features)
  Y_calc <- X %*% t(Theta)
  if (any(Y_calc < 0 | Y_calc > 10)) {
    return(10^30)  # reject parameter sets that predict outside [0, 10]
  } else {
    J <- (Y_calc - Y_obs) * (Y_calc - Y_obs)
    cost <- sum(rowSums(J))
    return(cost)
  }
}
DEoptim(fn = cost_func,
        lower = min_par,
        upper = max_par)  # DEoptim is population-based: it takes bounds, not a starting par
If a parameter set generates a Y_calc with any element outside the range 0 to 10, the objective function returns a very high value, so the DEoptim algorithm will not retain that set of parameters as a candidate solution.
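If you would rather stay with a gradient-based optimiser such as Rcgmin, a common alternative is to soften the hard rejection into a smooth penalty added to the cost. A sketch (lambda is a made-up tuning weight; num_items, num_users, num_features and Y_obs are as above):
cost_func_pen <- function(Params, lambda = 1e4) {
  X <- matrix(Params[1:(num_items * num_features)], num_items, num_features)
  Theta <- matrix(Params[-(1:(num_items * num_features))], num_users, num_features)
  Y_calc <- X %*% t(Theta)
  ## quadratic penalty for every element of Y_calc outside [0, 10]
  penalty <- sum(pmax(Y_calc - 10, 0)^2 + pmax(-Y_calc, 0)^2)
  sum((Y_calc - Y_obs)^2) + lambda * penalty
}
Unlike the hard 10^30 cutoff, this keeps the objective continuous and differentiable, which gradient-based methods need.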
