I'm wondering why my object CI doesn't return the full output (the 11 pairs of answers) from the for() loop in the following code. Instead, CI contains 11 single numbers.
N = 30 ; df = 118 ; d = 1
f <- function (ncp, alpha, q, df) {
  abs(suppressWarnings(pt(q = d*sqrt(N), df = df, ncp, lower.tail = FALSE)) -
        alpha)
}
a = mapply(c, as.list(20:30), as.list(-20:-30), SIMPLIFY = FALSE) # a list of paired values
CI <- numeric(length(a))
for(i in 1:length(a)){
  CI[i] = sapply(c(0.025, 0.975),
                 function(x) optimize(f, interval = a[[i]], alpha = x, q = d*sqrt(N), df = df, tol = 1e-10)[[1]])
}
CI # just returns one number from each of the 11 pairs expected!
How about a matrix? CI is a numeric vector, so CI[i] can hold only a single number, while each iteration produces two values (the 2.5% and 97.5% limits); R keeps the first and drops the second with a warning. Store the results in an 11 x 2 matrix instead:
N = 30 ; df = 118 ; d = 1
f <- function (ncp, alpha, q, df) {
  abs(suppressWarnings(pt(q = d*sqrt(N), df = df, ncp, lower.tail = FALSE)) -
        alpha)
}
a = mapply(c, as.list(20:30), as.list(-20:-30), SIMPLIFY = FALSE) # a list of paired values
CI <- matrix(NA, 11, 2)
for(i in 1:length(a)){
  CI[i,] = sapply(c(0.025, 0.975),
                  function(x) optimize(f, interval = a[[i]], alpha = x, q = d*sqrt(N), df = df, tol = 1e-10)[[1]])
}
CI
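Equivalently, you can drop the preallocation and the loop and let sapply assemble the matrix. A minimal sketch using the same f, a, d, N and df as above (t() flips the 2 x 11 result into the desired 11 x 2 shape):
CI <- t(sapply(a, function(iv)
  sapply(c(0.025, 0.975),
         function(x) optimize(f, interval = iv, alpha = x, q = d*sqrt(N), df = df, tol = 1e-10)[[1]])))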
I'm fitting linear models with MatrixModels:::lm.fit.sparse and MatrixModels::glm4 (also sparse).
However, these functions return only the coefficients, residuals and fitted.values.
What's the fastest and easiest way to get or calculate other quantities such as standard errors, t-values, p-values and predicted values?
I use the data from the MatrixModels:::lm.fit.sparse example.
I built a custom function summary_sparse to perform a summary for this model.
All matrix operations are performed with the Matrix package.
Results are compared with the dense-type model.
Note that lm.fit.sparse has to be evaluated with method = "chol" to get proper results.
Functions:
summary_sparse <- function(l, X) {
  # (X'X)^-1 from the Cholesky factor of the cross-product
  XXinv <- Matrix::chol2inv(Matrix::chol(Matrix::crossprod(X)))
  # standard errors: sqrt of diag((X'X)^-1 * sigma^2), with sigma^2
  # estimated from the residual sum of squares
  se <- sqrt(Matrix::diag(XXinv * sum(l$residuals^2)/(nrow(X) - ncol(X))))
  ts <- l$coef/se
  pvals <- 2*c(1 - pnorm(abs(ts)))  # two-sided p-values
  list(coef = l$coef, se = se, t = ts, p = pvals)
}
predict_sparse <- function(X, coef) {
  X %*% coef
}
Application:
dd <- expand.grid(a = as.factor(1:3),
                  b = as.factor(1:4),
                  c = as.factor(1:2),
                  d = as.factor(1:8))
n <- nrow(dd <- dd[rep(seq_len(nrow(dd)), each = 10), ])
set.seed(17)
dM <- cbind(dd, x = round(rnorm(n), 1))
## randomly drop some
n <- nrow(dM <- dM[- sample(n, 50),])
dM <- within(dM, {
  A <- c(2,5,10)[a]
  B <- c(-10,-1, 3:4)[b]
  C <- c(-8,8)[c]
  D <- c(10*(-5:-2), 20*c(0, 3:5))[d]
  Y <- A + B + A*B + C + D + A*D + C*x + rnorm(n)/10
  wts <- sample(1:10, n, replace=TRUE)
  rm(A,B,C,D)
})
X <- Matrix::sparse.model.matrix( ~ (a+b+c+d)^2 + c*x, data = dM)
Xd <- as(X,"matrix")
fmDense <- lm(dM[,"Y"]~Xd-1)
ss <- summary(fmDense)
r1 <- MatrixModels:::lm.fit.sparse(X, y = dM[,"Y"], method = "chol")
f <- summary_sparse(r1, X)
all.equal(do.call(cbind, f), ss$coefficients, check.attributes = F)
#TRUE
all.equal(predict_sparse(X, r1$coef)@x, predict(fmDense), check.attributes = F, check.names = F)
#TRUE
I have written the following code.
library(quantreg)
# return the G function:
G = function(m, N, gamma) {
  Tm = m * N
  k = 1:Tm
  Gvalue = sqrt(m) * (1 + k/m) * (k/(m + k))^gamma
  return(Gvalue)
}
# matrix square root of A via its eigendecomposition
sqroot <- function(A) {
  e = eigen(A)
  v = e$vectors
  val = e$values
  sq = v %*% diag(sqrt(val)) %*% solve(v)
  return(t(sq))
}
fa = function(m, N, a) {
  Tm = m * N
  k = 1:Tm
  t = (m + k)/m
  f_value = (t - 1) * t * (a^2 + log(t/(t - 1)))
  return(sqrt(f_value))
}
m = 50
N = 2
n = 50*3
x1 = matrix(runif(n, 0, 1), ncol = 1)
x = cbind(1, x1)
beta = c(1, 1)
xb = x %*% beta
pr = 1/(1+exp(-xb))
y = rbinom(n,1,pr)
# calculate statistic:
stat = function(y, x, m, N, a) {
  y_train = y[1:m]
  x_train = x[(1:m),]
  y_test = y[-(1:m)]
  x_test = x[-(1:m),]
  fit = glm(y ~ 0 + x, family="binomial")
  coef = coef(fit)
  log_predict = predict(fit, type="response")
  sigma = sqrt(1/(m-1)* sum((y_train - log_predict)^2))
  Jvalue = t(x_train) %*% x_train/m * sigma^2
  Jsroot = sqroot(Jvalue)
  fvalue = fa(m, N, a)
  score1 = apply((x_test * as.vector((y_test - x_test %*% coef))), 2, cumsum)
  statvalue1 = t(solve(Jsroot) %*% t(score1))/fvalue/sqrt(m)
  statmax1 = pmax(abs(statvalue1[, 1]), abs(statvalue1[, 2]))
  result = list(stat = statmax1)
  return(result)
}
m = 50
N = 2
a = 2.795
value = stat(y, x, m, N, a)
value
I want to perform a bootstrap to obtain B = 999 values of the statistic. I use the following R code, but it produces the error "Error in statistic(data, original, ...) :
argument "m" is missing, with no default".
library(boot)
data1 = data.frame(y = y, x = x1, m = m , N = N, a = a)
head(data1)
boot_value = boot(data1, statistic = stat, R = 999)
Can anyone give me a hint? Also, can I get the bootstrap results in matrix format, since the stat function returns 100 values?
There are different kinds of bootstrapping. If you want to draw 999 samples from your data with replacement, each of the same size as your data, you can just use replicate; no package is needed.
We put the data to be resampled into a data frame. It looks to me like m, N and a remain constant, so we just pass them along as plain arguments.
data2 <- data.frame(y=y, x=x)
The stat function needs to be adapted to unpack y and the x matrix from that data frame. At the bottom we remove the list() call to get just a vector back; unname() strips the names so we get plain numbers.
stat2 <- function(data, m, N, a) {
  y_train <- data[1:m, 1]
  x_train <- as.matrix(data[1:m, 2:3])
  y_test <- data[-(1:m), 1]
  x_test <- as.matrix(data[-(1:m), 2:3])
  y <- data[, "y"]
  x <- as.matrix(data[, 2:3])
  fit <- glm(y ~ 0 + x, family="binomial")
  coef <- coef(fit)
  log_predict <- predict(fit, type="response")
  sigma <- sqrt(1/(m-1) * sum((y_train - log_predict)^2))
  Jvalue <- t(x_train) %*% x_train/m * sigma^2
  Jsroot <- sqroot(Jvalue)
  fvalue <- fa(m, N, a)
  score1 <- apply((x_test * as.vector((y_test - x_test %*% coef))), 2, cumsum)
  statvalue1 <- t(solve(Jsroot) %*% t(score1))/fvalue/sqrt(m)
  statmax1 <- pmax(abs(statvalue1[, 1]), abs(statvalue1[, 2]))
  result <- unname(statmax1)
  return(result)
}
replicate is a cousin of sapply, designed for repeated evaluation. In the call we simply sample the rows 999 times and already get a matrix back; as with sapply, we need to transpose the result.
res <- t(replicate(999, stat2(data2[sample(1:nrow(data2), nrow(data2), replace=TRUE), ], m, N, a)))
Result
As a result we get 999 bootstrap replications in the rows, with the 100 values of the statistic in the columns.
str(res)
# num [1:999, 1:100] 0.00205 0.38486 0.10146 0.12726 0.47056 ...
The code also runs quite fast.
user system elapsed
3.46 0.01 3.49
Note that there are different kinds of bootstrapping: e.g. sometimes just part of the sample is resampled, weights are used, clustering is applied, etc. Since you attempted to use boot, though, the method shown should match its default behavior.
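If you do want to stay with boot, the error occurs because boot calls statistic(data, indices, ...) and only forwards extra named arguments that are given to boot() itself. A minimal, untested sketch wrapping stat2 from above (boot_stat is a hypothetical helper name):
library(boot)
# boot passes the resampled row indices as the second argument
boot_stat <- function(data, indices, m, N, a) stat2(data[indices, ], m, N, a)
boot_value <- boot(data2, statistic = boot_stat, R = 999, m = m, N = N, a = a)
str(boot_value$t)  # 999 x 100 matrix of bootstrap statistics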
Fisher's exact test is related to the hypergeometric distribution, so I would expect these two commands to return identical p-values. Can anyone explain what I'm doing wrong that makes them not match?
#data (variable names chosen to match dhyper() argument names)
x = 14
m = 20
n = 41047
k = 40
#Fisher test, alternative = 'greater'
(fisher.test(matrix(c(x, m-x, k-x, n-(k-x)),2,2), alternative='greater'))$p.value
#returns 2.01804e-39
#hypergeometric distribution, lower.tail = F, i.e. P[X > x]
phyper(x, m, n, k, lower.tail = F, log.p = F)
#returns 5.115862e-43
In this case, the actual call to phyper that is relevant is phyper(x - 1, m, n, k, lower.tail = FALSE). Look at the source code for fisher.test relevant to your call of fisher.test(matrix(c(x, m-x, k-x, n-(k-x)),2,2), alternative='greater'). At line 138, PVAL is set to:
switch(alternative, less = pnhyper(x, or),
    greater = pnhyper(x, or, upper.tail = TRUE),
    two.sided = {
        if (or == 0) as.numeric(x == lo)
        else if (or == Inf) as.numeric(x == hi)
        else {
            relErr <- 1 + 10^(-7)
            d <- dnhyper(or)
            sum(d[d <= d[x - lo + 1] * relErr])
        }
    })
Since alternative = 'greater', PVAL is set to pnhyper(x, or, upper.tail = TRUE). You can see pnhyper defined on line 122. Here or = 1, which is passed to ncp, so the call reduces to phyper(x - 1, m, n, k, lower.tail = FALSE). The x - 1 is the key: the test needs P[X >= x], but lower.tail = FALSE gives the strict tail P[X > x], so the argument is shifted down by one.
With your values:
x = 14
m = 20
n = 41047
k = 40
phyper(x - 1, m, n, k, lower.tail = FALSE)
# [1] 2.01804e-39
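You can confirm the off-by-one directly: adding the point mass P[X = x] back onto the strict upper tail recovers the Fisher p-value.
# P[X >= x] = P[X > x] + P[X = x]
phyper(x, m, n, k, lower.tail = FALSE) + dhyper(x, m, n, k)
# [1] 2.01804e-39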
I have an R function that provides the 95% confidence interval for the ncp (non-centrality parameter) of a t distribution.
Via simulation in R, is it possible to show that, in the long run, the CIs from this function capture a given TRUE ncp (here 2, the same as the input t) 95% of the time?
(I appreciate any ideas as to how to do this.)
CI.ncp <- function(t, N){
  f <- function (ncp, alpha, q, df) {
    abs(suppressWarnings(pt(q = t, df = N - 1, ncp, lower.tail = FALSE)) - alpha)
  }
  sapply(c(0.025, 0.975),
         function(x) optim(1, f, alpha = x, q = t, df = N - 1,
                           control = list(reltol = (.Machine$double.eps)))[[1]])
}
#Example of Use:
CI.ncp(t = 2, N = 20) # gives: -0.08293755 4.03548862
#(in the long-run 95% of the time, "2" is contained within these
# two numbers, how to show this in R?)
Here is what I have tried with no success:
fun <- function(t = 2, N = 20){
  ncp = rt(1, N - 1, t)
  CI.ncp(t = 2, N = 20)
  mean(ncp <= 2 & 2 <= ncp)
}
R <- 1000
sim <- t(replicate(R, fun()))
coverage <- mean(sim[,1] <= 2 & 2 <= sim[,2])
The problem is that we need to feed the random ncp obtained in fun into CI.ncp:
fun <- function(t = 2, N = 20){
  ncp = rt(1, N - 1, t)
  CI.ncp(t = ncp, N = 20)
}
R <- 1e4
sim <- t(replicate(R, fun()))
coverage <- mean(sim[,1] <= 2 & 2 <= sim[,2])
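If the interval is doing its job, coverage should come out close to the nominal 0.95 (compare the 0.941 obtained with the MBESS approach below).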
I would use package MBESS.
#install.packages("MBESS")
library(MBESS)
fun <- function(t = 2, N = 20, alpha = 0.95){
  x = rt(1, N - 1, t)
  conf.limits.nct(x, df = N, conf.level = alpha)[c(1, 3)]
}
set.seed(5221)
R <- 1000
sim <- t(replicate(R, fun()))
head(sim)
coverage <- mean(sim[,1] <= 2 & 2 <= sim[,2])
coverage
[1] 0.941
I am wondering how I could make my function Bpp accept a vector for its first argument t.
Bpp = function(t, n1, n2 = NULL){
  N = ifelse(is.null(n2), n1, n1*n2/(n1+n2))
  df = ifelse(is.null(n2), n1 - 1, n1 + n2 - 2)
  H1 = integrate(function(delta) dcauchy(delta, 0, sqrt(2)/2)*dt(t, df, delta*sqrt(N)), -Inf, Inf)[[1]]
  H0 = dt(t, df)
  BF10 = H1/H0
  p.value = 2*(1-pt(abs(t), df))
  list(BF10 = BF10, p.value = p.value)
}
Bpp(t = -6:6, 20, 20) ## This gives an error because `t` is now a vector
Looks like I could give a quick answer without testing. Use the following in your Bpp:
# joint density
joint <- function(delta, t) dcauchy(delta, 0, sqrt(2)/2) * dt(t, df, delta*sqrt(N))
# marginal density of `t`
marginal.t <- function (t) integrate(joint, lower = -Inf, upper = Inf, t = t)[[1]]
H1 <- sapply(t, marginal.t)
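Putting that into the function, here is a sketch of the fully vectorized version (Bpp2 is just a name for this variant; pt and dt are already vectorized in t, so only the integral needs the sapply):
Bpp2 <- function(t, n1, n2 = NULL){
  N  <- ifelse(is.null(n2), n1, n1*n2/(n1+n2))
  df <- ifelse(is.null(n2), n1 - 1, n1 + n2 - 2)
  # joint density
  joint <- function(delta, t) dcauchy(delta, 0, sqrt(2)/2) * dt(t, df, delta*sqrt(N))
  # marginal density of `t`: one integral per element of t
  marginal.t <- function(t) integrate(joint, lower = -Inf, upper = Inf, t = t)[[1]]
  H1 <- sapply(t, marginal.t)
  H0 <- dt(t, df)
  list(BF10 = H1/H0, p.value = 2*(1 - pt(abs(t), df)))
}
Bpp2(t = -6:6, 20, 20)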
So here we could also use Vectorize; how would that look?
Use your original Bpp:
Bpp <- Vectorize(Bpp, vectorize.args = "t")
Bpp(-6:6, 20, 20)
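Note that Vectorize is just a wrapper around mapply, so it still loops in R much like the sapply solution; it buys a cleaner call syntax, not extra speed.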