I have just found a very subtle bug in my R code. The following code takes a list of objects as input and creates new fields on each of them.
Each object originally has four fields (w, p, s, u), and I then create more: beta, phi, etc. The ordinary variables are fine. However, the dynamically created functions (Q, K, K1, K2) are not correct. If I have two nigs, nigs[[1]] and nigs[[2]], the functions Q, K, K1 and K2 of nigs[[1]] turn out to be the same as those of nigs[[2]]!
I just found this bug and would like to know how to get this code correct (while keeping its elegance:) Thanks!
D <- length(nigs)
for (i in 1:D) {
  w <- nigs[[i]]$w
  p <- nigs[[i]]$p
  s <- nigs[[i]]$s
  u <- nigs[[i]]$u
  nigs[[i]]$beta      <- beta      <- w / s * p * (1-p^2)^(-1/2)
  nigs[[i]]$phi       <- phi       <- w^2 / s^2
  nigs[[i]]$z         <- z         <- (x-u)/s
  nigs[[i]]$alpha_bar <- alpha_bar <- w * (1-p^2)^(-1/2)
  nigs[[i]]$y_bar     <- y_bar     <- sqrt(1+z^2)
  nigs[[i]]$Q  <- Q  <- function(t) { sqrt(1 - (2*beta*t+t^2)/phi) }
  nigs[[i]]$K  <- K  <- function(t) { u*t - w*Q(t) + w }
  nigs[[i]]$K1 <- K1 <- function(t) { (u + w * (beta+t) / (Q(t)*phi)) }
  nigs[[i]]$K2 <- K2 <- function(t) { qt <- Q(t); w/(qt * phi) + w * (beta+t)^2 / (qt^3 * phi^2) }
}
EDIT
The primary error I made was assuming that `for` and `{ }` introduce new scopes, in which case w, p, s, u would be fresh variables on every iteration. They are not: in R, only functions introduce new scopes. This scoping rule is different from C/Java.
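A minimal illustration of the difference (my example, not from the original code): a variable assigned inside a for loop body is still visible after the loop, while a variable assigned inside a function body is not:

for (i in 1:3) { tmp <- i }     # the loop body runs in the enclosing scope
tmp                             # [1] 3 -- tmp leaked out of the loop

f <- function() { inner <- 1 }  # a function body gets its own scope
f()
exists("inner")                 # [1] FALSE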
That is the normal behavior of lexical scoping. You can use a closure instead:
f <- list()
g <- list()
for (i in 1:2) {
  j <- i * 2
  f[[i]] <- function() print(j)  # j is looked up only when called, in the shared scope
  g[[i]] <- (function() { j <- j; function() print(j) })()  # j is captured now
}
then,
> for (i in 1:2) f[[i]]()
[1] 4
[1] 4
> for (i in 1:2) g[[i]]()
[1] 2
[1] 4
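Applied to the original code, the same trick keeps the elegance: wrap the loop body in local() (or an immediately invoked function), so each iteration gets its own private copies of w, p, s and u for the functions to close over. A minimal sketch showing only Q and K; the other fields follow the same pattern:

for (i in seq_along(nigs)) {
  nigs[[i]] <- local({
    n <- nigs[[i]]
    # each local() call creates a fresh environment, so the functions
    # below capture this iteration's w, p, s, u -- not the last one's
    w <- n$w; p <- n$p; s <- n$s; u <- n$u
    n$beta <- beta <- w / s * p * (1-p^2)^(-1/2)
    n$phi  <- phi  <- w^2 / s^2
    n$Q <- Q <- function(t) sqrt(1 - (2*beta*t+t^2)/phi)
    n$K <- function(t) u*t - w*Q(t) + w
    n
  })
}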
In object-oriented terminology each nigs[[i]] is an object, and the functions Q, K, etc. are methods which act on the object's properties w, p, etc. Using the proto package we make each nigs[[i]] a proto object and then update the object as indicated. Note that all methods take the object as the first argument: if p is a proto object containing method Q, then p$Q(t) means to look up Q in p and then run it with the arguments p and t, so p$Q(t) is the same as with(p, Q(p, t)). Thus we have added the extra first argument to each of the methods below. See the proto home page for more.
library(proto)

# initialize
x <- 1
nigs <- lapply(1:2, function(i) proto(w = i/3, p = i/3, s = i/3, u = i/3))

for(p in nigs) with(p, {
  beta <- w / s * p * (1-p^2)^(-1/2)
  phi <- w^2 / s^2
  z <- (x-u)/s
  alpha_bar <- w * (1-p^2)^(-1/2)
  y_bar <- sqrt(1+z^2)
  Q <- function(., t) { sqrt(1 - (2*beta*t+t^2)/phi) }
  K <- function(., t) { u*t - w*.$Q(t) + w }
  K1 <- function(., t) { (u + w * (beta+t) / (.$Q(t)*phi)) }
  K2 <- function(., t) {
    qt <- .$Q(t)
    w/(qt * phi) + w * (beta+t)^2 / (qt^3 * phi^2)
  }
})
EDIT: A second possible design is to create a parent object, meths, to hold the methods, instead of defining them over again in each separate proto object. In that case, within each method we must be sure to use the properties of the object passed in the first argument, since the methods and properties are now located in different objects:
meths <- proto(
  Q = function(., t) sqrt(1 - (2*.$beta*t+t^2)/.$phi),
  K = function(., t) .$u*t - .$w*.$Q(t) + .$w,
  K1 = function(., t) (.$u + .$w * (.$beta+t) / (.$Q(t)*.$phi)),
  K2 = function(., t) {
    qt <- .$Q(t)
    .$w/(qt * .$phi) + .$w * (.$beta+t)^2 / (qt^3 * .$phi^2)
  }
)
# initialize - meths$proto(...) defines a proto object whose parent is meths
x <- 1
nigs <- lapply(1:2, function(i) meths$proto(w = i/3, p = i/3, s = i/3, u = i/3))

for(p in nigs) with(p, {
  beta <- w / s * p * (1-p^2)^(-1/2)
  phi <- w^2 / s^2
  z <- (x-u)/s
  alpha_bar <- w * (1-p^2)^(-1/2)
  y_bar <- sqrt(1+z^2)
})
Now the following works: R looks up Q in nigs[[1]], does not find it there, so it looks into the parent, meths, and runs the Q found there. In nigs[[1]]$Q(.1) the call implicitly passes nigs[[1]] to Q as its first argument, and since every property in the body of Q is referenced relative to that first argument, everything works:
> nigs[[1]]$Q(.1)
[1] 0.9587958
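Since each proto object carries its own properties while the methods live once in meths, the same call on nigs[[2]] uses nigs[[2]]'s w, p, s, u and gives a different value:

nigs[[2]]$Q(.1)  # same Q method from meths, but nigs[[2]]'s properties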
How can I write this equation inside R as a function?
Maximize z = 200 * x1^(2/3) * x2^(1/3),
subject to: 20*x1 + 170*x2 = 20000
# ATTEMPT
library(Rsolnp)

fn <- function(h, s){
  z = 200 * x[1]^(2/3) * x[2]^(1/3)
  return(-z)
}

# constraint z1: 20*x1 + 170*x2 = 20000
eqn <- function(x) {
  z1 = 20*x[1] + 170*x[2]
  return(c(z1))
}
constraints = c(20000)

x0 <- c(1, 1) # set up initial values
sol1 <- solnp(x0, fun = fn, eqfun = eqn, eqB = constraints)
sol1$pars
In R we use the keyword function, and we pass the necessary parameters. For example, in this case:
R <- function(h, s) 200 * h^(2/3) * s^(1/3)
We now have a function called R that takes arguments h and s and gives us an output.
For example, we could do:
R(27, 8)
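Putting this together with the Rsolnp attempt above, a minimal sketch of the full optimisation (assuming the objective and constraint stated in the question; note that solnp passes the whole parameter vector as a single argument, which is why a signature like fn(h, s) will not work there):

library(Rsolnp)

# objective: solnp minimises, so return the negated production function
fn <- function(x) -200 * x[1]^(2/3) * x[2]^(1/3)

# equality constraint: 20*x1 + 170*x2 must equal 20000
eqn <- function(x) 20*x[1] + 170*x[2]

sol <- solnp(c(1, 1), fun = fn, eqfun = eqn, eqB = c(20000))
sol$pars  # the optimal (x1, x2)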
I am not very familiar with R. I have been trying to use the implementation of the adaptive rejection sampling method in R (package ars) to sample from a distribution whose log-density, up to an additive constant and as implemented below, is

f(x) = a * sum_{r=1}^{k} (1-x)^r / r + (a-1)*log(x) + k*log(1-x),  0 < x < 1.

Here is my R code:
library(ars)

g1 <- function(x, r) { (1/r) * ((1-x)^r) }

f1 <- function(x, a, k) {
  add <- 0
  for(i in 1:k) {
    add <- add + g1(x, i)
  }
  res <- (a * add) + (a-1)*log(x) + k*log(1-x)
  return(res)
}

g2 <- function(x, r) { (1-x)^(r-1) }

f1prima <- function(x, a, k) {
  add <- 0
  for(i in 1:k) {
    add <- add - g2(x, i)
  }
  res <- (a * add) + (a-1)/x - k/(1-x)
  return(res)
}

mysample1 <- ars(20, f1, f1prima, x = c(0.001, 0.09), m = 2, emax = 128,
                 lb = TRUE, xlb = 0.0, ub = TRUE, xub = 1, a = 0.5, k = 100)
The function is log-concave, but I get different error messages when I run ars, and fiddling around with the input parameters doesn't help here. Any suggestion would be appreciated.
First, as you already noticed, your function is not well defined at x=0 and x=1.0, so a useful interval would be something like 0.01...0.99, not 0.0...1.0.
Second, I don't like the idea of computing hundreds of terms in that summation.
So, a good idea might be to express the sum in closed form, starting with the derivative: the sum

sum_{i=0}^{N-1} q^i

is obviously a geometric series and can be replaced with (1 - q^N)/(1 - q), where q = 1-x.
This is the derivative, so to get the corresponding term in the function itself, just integrate it. http://www.wolframalpha.com/input/?i=integrate+(1-q%5EN)%2F(1-q)+dq will return the Gauss hypergeometric function 2F1 plus a logarithm:

-q^(N+1) * 2F1(1, N+1; N+2; q) / (N+1) - log(1-q)

NB: it is the same integral as the Beta one before, but dealing with it was a bit more cumbersome.
So, code to compute those terms:
library(gsl)
library(ars)
library(ggplot2)

# 2F1, with a transformation to map arguments with x < 0 into the
# region where gsl's hyperg_2F1 converges
Gauss2F1 <- function(a, b, c, x) {
  ifelse(x >= 0.0 & x < 1.0,
         hyperg_2F1(a, b, c, x),
         hyperg_2F1(c - a, b, c, 1.0 - 1.0/(1.0 - x))/(1.0 - x)^b)
}

# closed form of the sum, via 2F1
f1sum <- function(x, N) {
  q <- 1.0 - x
  - q^(N+1) * Gauss2F1(1, N+1, N+2, q)/(N+1) - log(1.0 - q)
}
# direct summation of sum_{k=1}^{N} q^k / k, for comparison
f1sum.1 <- function(x, N) {
  q <- 1.0 - x
  res <- rep(0.0, length.out = length(x))
  s <- rep(1.0, length.out = length(x))
  for(k in 1:N) {
    s <- s * q                      # s now holds q^k
    res <- res + s / as.numeric(k)  # accumulate q^k / k
  }
  res
}
f1 <- function(x, a, N) {
  a * f1sum(x, N) + (a - 1.0)*log(x) + N*log(1.0 - x)
}

f1.1 <- function(x, a, N) {
  a * f1sum.1(x, N) + (a - 1.0)*log(x) + N*log(1.0 - x)
}
# derivative of the sum: d/dx sum q^r/r = -sum_{r=1}^{N} q^(r-1) = -(1-q^N)/(1-q)
f1primesum <- function(x, N) {
  q <- 1.0 - x
  -(1.0 - q^N)/(1.0 - q)
}

# same, by direct summation
f1primesum.1 <- function(x, N) {
  q <- 1.0 - x
  res <- rep(0.0, length.out = length(x))
  s <- rep(1.0, length.out = length(x))
  for(k in 1:N) {
    res <- res + s
    s <- s * q
  }
  -res
}
f1prime <- function(x, a, N) {
  a * f1primesum(x, N) + (a - 1.0)/x - N/(1.0 - x)
}

f1prime.1 <- function(x, a, N) {
  a * f1primesum.1(x, N) + (a - 1.0)/x - N/(1.0 - x)
}
p <- ggplot(data.frame(x = c(0, 1)), aes(x = x)) +
  stat_function(fun = f1, args = list(0.5, 100), colour = "#4271AE") +
  stat_function(fun = f1.1, args = list(0.5, 100), colour = "#1F3552") +
  scale_x_continuous(name = "X", breaks = seq(0, 1, 0.2), limits = c(0.001, 0.5)) +
  scale_y_continuous(name = "F") +
  ggtitle("Log-concave function")
p
As you can see, I've implemented both versions - one using direct summation and another using the analytical form of the sums, with data computed for a=0.5, N=100.
First, there is a bit of a difference between the direct sum and the 2F1 form - I attribute it to precision loss in the summation.
Second, and more important: the function is NOT log-concave, so no wonder ars() is failing left and right. See the graph produced by the plotting code above.
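A quick way to confirm the lack of log-concavity numerically (my addition, not in the original answer) is a finite-difference second derivative of f1 on a grid; log-concavity requires it to be <= 0 everywhere:

xs <- seq(0.02, 0.98, by = 0.02)
h  <- 1e-4
d2 <- (f1(xs + h, 0.5, 100) - 2*f1(xs, 0.5, 100) + f1(xs - h, 0.5, 100)) / h^2
range(d2)  # both signs present => f1 is not log-concave on (0, 1)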
I want to use R's fzero function to find the roots of a function. The problem gets complicated, as the function in question calls other functions, which in turn call yet others. I do have MATLAB code that does it, and I am trying to translate it to R, but cannot make it work. My experience with MATLAB is limited, so it's probable I just missed some feature of the MATLAB code while translating. My ultimate goal is to obtain a working R equivalent of the MATLAB code. Any hints will be highly appreciated!
The error I got is in function psi():
Error in (-t(I) * pi^2) %*% time : non-conformable arguments
Although the sizes of the matrices do match, and this part of the code works with some naive input when run in isolation.
NB: I have tried using mrdivide (R's equivalent of MATLAB's right matrix division) in some places, but with no effect.
NB2: I obtain the same error trying function uniroot instead of fzero.
# Global parameters:
N = 140
A2 = a 256 x 256 matrix with data
I = a vector of size 256: (0, 1, 2^2, 3^2, 4^2, ..., 255^2)
# ----------------------------------------------------------------
MATLAB working code:
fzero( @(t)(t-evolve(t)), [0,0.1])

function [out,time]=evolve(t)
    global N
    Sum_func = func([0,2],t) + func([2,0],t) + 2*func([1,1],t);
    time=(2*pi*N*Sum_func)^(-1/3);
    out=(t-time)/time;
end

function out=func(s,t)
    global N
    if sum(s)<=4
        Sum_func=func([s(1)+1,s(2)],t)+func([s(1),s(2)+1],t);
        const=(1+1/2^(sum(s)+1))/3;
        time=(-2*const*K(s(1))*K(s(2))/N/Sum_func)^(1/(2+sum(s)));
        out=psi(s,time);
    else
        out=psi(s,t);
    end
end

function out=psi(s,Time)
    global I A2
    % s is a vector
    w=exp(-I*pi^2*Time).*[1,.5*ones(1,length(I)-1)];
    wx=w.*(I.^s(1));
    wy=w.*(I.^s(2));
    out=(-1)^sum(s)*(wy*A2*wx')*pi^(2*sum(s));
end

function out=K(s)
    out=(-1)^s*prod((1:2:2*s-1))/sqrt(2*pi);
end
# ----------------------------------------------------------------
My attempt at R translation (not working):
library(pracma)  # provides fzero() and ones()

fzero(subtract_evolve, c(0, 0.1))

K <- function(s) {
  out <- (-1)^s * prod(seq(from = 1, to = 2*s-1, by = 2))/sqrt(2*pi)
  return(out)
}

psi <- function(s, time) {
  w <- (exp((-t(I) * pi^2) %*% time)) *
    t(c(cbind(1, 0.5*ones(1, length(I)-1))))
  wx <- t(w * (I^s[1]))
  wy <- t(w * (I^s[2]))
  out <- (-1)^sum(s) * (wy %*% A2 %*% t(wx)) * pi^(2*sum(s))
  return(out)
}

func <- function(s, t) {
  if (sum(s) <= 4) {
    sum_func <- func(c(s[1]+1, s[2]), t) + func(c(s[1], s[2]+1), t)
    const <- (1+1/2^(sum(s)+1))/3
    time <- (-2 * const * K(s[1]) * K(s[2]) / N / sum_func)^(1/(2+sum(s)))
    out <- psi(s, time)
  } else {
    out <- psi(s, t)
  }
  return(out)
}

evolve <- function(t) {
  sum_func <- func(c(0,2), t) + func(c(2,0), t) + 2*func(c(1,1), t)
  time <- (2*pi*N*sum_func)^(-1/3)
  out <- (t-time)/time
  return(c(out, time))
}

subtract_evolve <- function(t) {
  return(t - evolve(t))
}
Trying to run a simple ROI optimisation in R, but after hours of fiddling I'm at a loss. I keep getting the error:
Error in .check_function_for_sanity(F, n) :
cannot evaluate function 'F' using 'n' = 5 parameters.
Here is the sample code:
library(ROI)
library(nloptr)
library(ROI.plugin.nloptr)
#Generate some random data for this example
set.seed(3142)
myRet <- matrix(runif(100 * 5, -0.1, 0.1), ncol = 5)
myCovMatrix <- cov(myRet)
# Sample weights
w <- rep(1/ncol(myRet), ncol(myRet))
#Define functions for the optimisation
diversificationRatio <- function(w, covMatrix) {
  weightedAvgVol <- sum(w * sqrt(diag(covMatrix)))
  portfolioVariance <- (w %*% covMatrix %*% w)[1,1]
  # negated ratio is the last expression, hence the return value
  - 1 * weightedAvgVol / sqrt(portfolioVariance)
}

# Check that the objective function works:
diversificationRatio(w, myCovMatrix)

# Now construct the F_objective
foo <- F_objective(F = diversificationRatio, n = ncol(myRet))
Any ideas on how many parameters to pass to n?
F_objective expects a function with only one argument, so you have to write a wrapper function.
#Define functions for the optimisation
diversificationRatio <- function(w, covMatrix) {
  weightedAvgVol <- sum(w * sqrt(diag(covMatrix)))
  portfolioVariance <- (w %*% covMatrix %*% w)[1,1]
  - 1 * weightedAvgVol / sqrt(portfolioVariance)
}

# Wrap it, since F_objective expects a function of a single argument:
wrapper <- function(x) diversificationRatio(x, myCovMatrix)

# Now construct the optimisation problem with the F_objective
o <- OP(F_objective(F = wrapper, n = ncol(myRet)))
ROI_applicable_solvers(o)
start <- runif(ncol(myRet))
s <- ROI_solve(o, solver = "nloptr", start = start, method = "NLOPT_LD_SLSQP")
s
solution(s)
Below is the code I have. It works for primitive functions such as sin. However, when using a function called gllik, it returns an error in f(y0): unused argument (y0). I'm not sure how to correct this.
newton_search2 <- function(f, h, guess, conv = 0.001) {
  y0 <- guess
  N <- 100
  i <- 1; y1 <- y0
  p <- numeric(N)
  while (i <= N) {
    # central finite-difference approximation to f'(y0)
    make_derivative <- function(f, h) {
      (f(y0 + h) - f(y0 - h)) / (2*h)
    }
    y1 <- y0 - f(y0)/make_derivative(f, h)
    p[i] <- y1
    i <- i + 1
    if (abs(y1 - y0) < conv) break
    y0 <- y1
  }
  return(p[i-1])
}
The gllik function is as follows:
x <- rgamma(n = 30, shape = 4.5)

gllik <- function() {
  s <- sum(log(x))
  n <- length(x)
  function(a) {
    (a - 1) * s - n * lgamma(a)
  }
}
The code I used was:
newton_search2(gllik, 0.001, mean(x), conv = 0.001)
I'm not sure how to fix the error or get the correct answer, which is supposed to be 4.5 (the maximum likelihood estimate of a).
The problem is that gllik does not take any arguments; furthermore, it returns a function, not a value.
Perhaps what you want to do is the following?
gllik <- function(a) {
  s <- sum(log(x))
  n <- length(x)
  return((a - 1) * s - n * lgamma(a))
}
EDIT: An alternative solution is to just use the returned function. While this type of construction is often elegant, it does seem like overkill in this case:
newton_search2(gllik(), 0.001, mean(x), conv = 0.001)
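To see why this works: gllik() evaluates s and n once and returns a one-argument function of a, which is exactly the shape newton_search2 expects. A quick check, using the definitions above:

f <- gllik()  # f is function(a) (a - 1)*s - n*lgamma(a), with s and n baked in
f(4.5)        # a single argument, so no more "unused argument (y0)" error
newton_search2(gllik(), 0.001, mean(x), conv = 0.001)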