I am new to Stack Overflow and this post is probably very basic. I get an unexpected "index out of bounds" error using the "gmm" package. More specifically, I am using the gel function of that package, and I need to supply the argument "g", which is a function that returns a matrix. The function that I pass to the "g" argument works perfectly on its own, but not as an argument of the gel function. I am aware there are closely related questions:
https://stackoverflow.com/search?q=index+out+of+bounds+r
However, none of these helped me fix the issue I am facing.
I attach a reproducible example.
Thanks in advance.
rm(list=ls())
install.packages("gmm")
library(mvtnorm)
library(gmm)
#set.seed(1)
########################################
#functions declaration and construction#
########################################
moment.function <- function(data, alpha) {
  instrus.index <- length(alpha) + 1
  data <- as.matrix(data)
  nbr.instrus <- ncol(data) - instrus.index
  data1 <- data[, 1] - data[, (2:instrus.index)] %*% alpha
  data1 <- matrix(rep(data1, nbr.instrus), nrow(total.data), nbr.instrus)
  g.fun <- data[, -(1:instrus.index)] * data1
  #g.fun <- t(data[,-(1:instrus.index)])%*%(data[,1]-data[,(2:instrus.index)]%*%alpha)
  return(g.fun)
}
##################
#DGP construction#
##################
#set params
n <- 70
beta1 <- 1
beta2 <- 1
beta.first.stage <- 0.1
rho <- 0.1
cov.exo.instrus <- 0.3
sigma2.epsilon <- 0.1
sigma2.V <- 0.1
sigma2.simus <-0.01
Sigma <- rbind(c(1, cov.exo.instrus, cov.exo.instrus),
               c(cov.exo.instrus, 1, cov.exo.instrus),
               c(cov.exo.instrus, cov.exo.instrus, 1))
#generate obs according to DGP
#instruments and exogenous covariates
X <- rmvnorm(n, rep(0,3), Sigma)
#two disturbance terms
epsilon<-rnorm(n,0,sigma2.epsilon)
V <- rnorm(n,0,sigma2.V)
#endogenous regressor
Y2 <- beta.first.stage*(X[,2]+X[,3])+V
#outcome variable with structural error term
#h()=()^2
Y1 <- beta1*X[,1]+beta2*(Y2^2+sigma2.V-V^2-2*beta.first.stage*(X[,2]+X[,3])*V)+epsilon
#matrices for the finite-dimensional case
second.stage.vars <- cbind(Y1,X[,1],Y2^2)
total.data <- cbind(second.stage.vars,X)
###################################
#simulations in the finite-dimensional case
#with gel there is a problem
gel(moment.function, total.data, c(1.5, 1.5))
#moment.function alone has no problem
moment.function(total.data,c(1.5,1.5))
The gel function (like gmm) expects the arguments for the parameters and the data the other way round, with the parameters first, i.e. your moment function should be
moment.function <- function(alpha, data) {
## function body
}
With that change your example works for me.
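For instance, a minimal sketch of the corrected function (the body is the same as the original; only the argument order is swapped, and nrow(data) replaces the global total.data so the function is self-contained):
moment.function <- function(alpha, data) {
  instrus.index <- length(alpha) + 1
  data <- as.matrix(data)
  nbr.instrus <- ncol(data) - instrus.index
  data1 <- data[, 1] - data[, (2:instrus.index)] %*% alpha
  data1 <- matrix(rep(data1, nbr.instrus), nrow(data), nbr.instrus)
  g.fun <- data[, -(1:instrus.index)] * data1
  return(g.fun)
}
gel(moment.function, total.data, c(1.5, 1.5))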
Searching Stack Overflow for similar questions, I notice this has been asked several times but hasn't really been properly answered. Perhaps with help from other users this post can become a helpful guide to programming a numerical estimate of the parameters of a multivariate normal distribution.
I know, I know! The closed form solutions are available and trivial to implement. In my case I am interested in modifying the likelihood function for a specific purpose and I don't expect an exact analytic solution so this is a test case to check the procedure.
So here is my attempt. Please comment, especially if I am missing opportunities for optimization. Note, I'm not a statistician, so I'd appreciate any pointers.
ll_multN <- function(theta, X) {
  # theta = c(mu, diag(Sigma), Sigma[upper.tri(Sigma)])
  # X is an n x k dataset
  # log-likelihood:
  #   logL = -(nk/2)*log(2*pi) - (n/2)*log(det(Sigma))
  #          - (1/2)*sum_i( t(X_i - mu) %*% solve(Sigma) %*% (X_i - mu) )
  # the summation over i is performed with an apply call
  n <- nrow(X)
  k <- ncol(X)
  # def mu
  mu.vec <- theta[1:k]
  # def Sigma
  Sigma.diag <- theta[(k+1):(2*k)]
  Sigma.offd <- theta[(2*k+1):length(theta)]
  Sigma <- matrix(NA, k, k)
  Sigma[upper.tri(Sigma)] <- Sigma.offd
  Sigma <- t(Sigma)
  Sigma[upper.tri(Sigma)] <- Sigma.offd
  diag(Sigma) <- Sigma.diag
  # compute summation
  sum_i <- sum(apply(X, 1, function(x) (matrix(x,1,k)-mu.vec) %*% solve(Sigma) %*% t(matrix(x,1,k)-mu.vec)))
  # compute log likelihood
  logl <- -.5*n*k*log(2*pi) - .5*n*log(det(Sigma))
  logl <- logl - .5*sum_i
  return(-logl)
}
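For reference, the objective coded above is the multivariate normal log-likelihood,

\ell(\mu,\Sigma) = -\frac{nk}{2}\log(2\pi) - \frac{n}{2}\log\lvert\Sigma\rvert - \frac{1}{2}\sum_{i=1}^{n}(x_i-\mu)^\top \Sigma^{-1}(x_i-\mu),

and the function returns -\ell so that optim minimises it.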
The simulated dataset is generated using the rmvnorm() function from the package "mvtnorm". The random positive-definite covariance matrix is generated using the additional function Posdef() (taken from here: https://stat.ethz.ch/pipermail/r-help/2008-February/153708)
library(mvtnorm)
Posdef <- function (n, ev = runif(n, 0, 5)) {
  # generates a random positive definite covariance matrix
  Z <- matrix(ncol=n, rnorm(n^2))
  decomp <- qr(Z)
  Q <- qr.Q(decomp)
  R <- qr.R(decomp)
  d <- diag(R)
  ph <- d / abs(d)
  O <- Q %*% diag(ph)
  Z <- t(O) %*% diag(ev) %*% O
  return(Z)
}
set.seed(2)
n <- 1000 # number of data points
k <- 3 # number of variables
mu.tru <- sample(0:3, k, replace=T) # random mean vector
Sigma.tru <- Posdef(k) # random covariance matrix
eigen(Sigma.tru)$val # check positive def (all lambda > 0)
# Generate simulated dataset
X <- rmvnorm(n, mean=mu.tru, sigma=Sigma.tru)
# initial parameter values
# initial parameter values
pars.init <- c(mu=rep(0,k), sig_ii=rep(1,k), sig_ij=rep(0, k*(k-1)/2))
# limits for optimization algorithm
eps <- .Machine$double.eps # a small value for bounding the parameter space to avoid things such as log(0)
lower.bound <- c(rep(-Inf,k),           # bound on mu
                 rep(eps,k),            # bound on sigma_ii
                 rep(-Inf,k*(k-1)/2))   # bound on sigma_ij, i=/=j (one per off-diagonal term)
upper.bound <- c(rep(Inf,k),            # bound on mu
                 rep(100,k),            # bound on sigma_ii
                 rep(100,k*(k-1)/2))    # bound on sigma_ij, i=/=j
system.time(
  o <- optim(pars.init,
             ll_multN, X=X, method="L-BFGS-B",
             lower = lower.bound,
             upper = upper.bound)
)
plot(x=c(mu.tru, diag(Sigma.tru), Sigma.tru[upper.tri(Sigma.tru)]),
     y=o$par,
     xlab="Parameter",
     ylab="Estimate",
     pch=20)
abline(c(0,1), col="red", lty=2)
This currently runs on my laptop in
user system elapsed
47.852 24.014 24.611
and gives this graphical output:
[Plot: estimated parameter values against the true values (means, variances, covariances), with the y = x line for reference]
In particular, any advice on limit setting or algorithm choice would be much appreciated.
Thanks
I am running survdiff from the survival package and the p-value is 0.02. I would like to see it with more precision (i.e. 0.02xxxx). Is there an argument that I can pass to specify the number of digits of the p-value? I read the documentation for the survival package and did not find any mention of how to specify it.
survdiff(surv_object~access_sam2$Area_mTLSHL)
Credits.
The computation of the p-value for objects of class "survdiff" is not completely obvious. I had to look at what the print method for objects of that class does in order to understand how the degrees of freedom are computed.
The code below is a simplification of the code of print.survdiff and therefore the credits go to
citation("survival")
#
#Therneau T (2015). _A Package for Survival Analysis
#in S_. version 2.38, <URL:
#https://CRAN.R-project.org/package=survival>.
#
#Terry M. Therneau, Patricia M. Grambsch (2000).
#_Modeling Survival Data: Extending the Cox Model_.
#Springer, New York. ISBN 0-387-98784-3.
#
#To see these entries in BibTeX format, use
#'print(<citation>, bibtex=TRUE)', 'toBibtex(.)', or
#set 'options(citation.bibtex.max=999)'.
The code itself can be seen in the sources or by running
getAnywhere("print.survdiff")
Now for the question's problem.
I have written a generic pvalue function to make it easier to call a method for objects of the class returned by survdiff. The example is taken from the help page of that function.
The return value is a named list with 3 members; the names are self-explanatory. One of them, chisq, is a repetition of a value returned by survdiff; I have included it for the sake of completeness.
pvalue <- function(x, ...) UseMethod("pvalue")

pvalue.survdiff <- function (x, ...)
{
  if (length(x$n) == 1) {
    df <- 1
    pval <- pchisq(x$chisq, 1, lower.tail = FALSE)
  } else {
    if (is.matrix(x$obs)) {
      otmp <- rowSums(x$obs)
      etmp <- rowSums(x$exp)
    } else {
      otmp <- x$obs
      etmp <- x$exp
    }
    df <- sum(etmp > 0) - 1
    pval <- pchisq(x$chisq, df, lower.tail = FALSE)
  }
  list(chisq = x$chisq, p.value = pval, df = df)
}
srv <- survdiff(Surv(futime, fustat) ~ rx, data = ovarian)
pvalue(srv)
#$chisq
#[1] 1.06274
#
#$p.value
#[1] 0.3025911
#
#$df
#[1] 1
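With this, the p-value can be printed to whatever precision is needed, for example:
print(pvalue(srv)$p.value, digits = 10)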
I am not sure about the survival package and you did not provide reproducible code (please do so next time). But in general, if you want to see more digits, what you need to do is
print(value, digits= n)
# n is the number of digits you want to see
In your case it is
print(survdiff(surv_object~access_sam2$Area_mTLSHL), 6)
I have an array of outputs from hundreds of segmented linear models (made using the segmented package in R). I want to be able to use these outputs on new data, using the predict function. To be clear, I do not have the segmented linear model objects in my workspace; I just saved and reimported the relevant outputs (e.g. the coefficients and breakpoints). For this reason I can't simply use the predict.segmented function from the segmented package.
Below is a toy example based on this link that seems promising, but does not match the output of the predict.segmented function.
library(segmented)
set.seed(12)
xx <- 1:100
zz <- runif(100)
yy <- 2 + 1.5*pmax(xx-35,0) - 1.5*pmax(xx-70,0) +
15*pmax(zz-0.5,0) + rnorm(100,0,2)
dati <- data.frame(x=xx,y=yy,z=zz)
out.lm<-lm(y~x,data=dati)
o <- segmented(out.lm, seg.Z=~x, psi=list(x=c(30,60)),
               control=seg.control(display=FALSE))
# Note that coefficients with U in the name are differences in slopes, not slopes.
# Compare:
slope(o)
coef(o)[2] + coef(o)[3]
coef(o)[2] + coef(o)[3] + coef(o)[4]
# prediction
pred <- data.frame(x = 1:100)
pred$dummy1 <- pmax(pred$x - o$psi[1,2], 0)
pred$dummy2 <- pmax(pred$x - o$psi[2,2], 0)
pred$dummy3 <- I(pred$x > o$psi[1,2]) * (coef(o)[2] + coef(o)[3])
pred$dummy4 <- I(pred$x > o$psi[2,2]) * (coef(o)[2] + coef(o)[3] + coef(o)[4])
names(pred)[-1]<- names(model.frame(o))[-c(1,2)]
# compute the prediction, using standard predict function
# computing confidence intervals further
# suppose that the breakpoints are fixed
pred <- data.frame(pred, predict(o, newdata= pred,
interval="confidence"))
# Try prediction using the predict.segment version to compare
test <- predict.segmented(o)
plot(pred$fit, test, ylim = c(0, 100))
abline(0,1, col = "red")
# At least one segment not being predicted correctly?
Can I use the base R predict() function (not the predict.segmented() function) with the coefficients and breakpoints saved from segmented linear models?
UPDATE
I figured out that the code above has issues (don't use it). Through some reverse engineering of the predict.segmented() function, I produced the design matrix and used that to predict values instead of using the predict() function directly. I do not consider this a full answer to the original question yet, because predict() can also produce confidence intervals for the prediction and I have not yet implemented that -- the question is still open for someone to add confidence intervals.
library(segmented)
## Define function for making matrix of dummy variables (this is based on code from predict.segmented())
dummy.matrix <- function(x.values, x_names, psi.est = TRUE, nameU, nameV, diffSlope, est.psi) {
  # This function creates a model matrix with dummy variables for a segmented lm with two breakpoints.
  # Inputs (obj below refers to the fitted segmented lm object):
  #   x.values:  the x values of the segmented lm
  #   x_names:   the name of the column of x values
  #   psi.est:   legacy from the predict.segmented function; leave it set to TRUE
  #   nameU:     names (class character) of the 3rd and 4th coefs, which are "U1.x" "U2.x" for an lm with two breaks. Example: names(c(obj$coef[3], obj$coef[4]))
  #   nameV:     names (class character) of the 5th and 6th coefs, which are "psi1.x" "psi2.x" for an lm with two breaks. Example: names(c(obj$coef[5], obj$coef[6]))
  #   diffSlope: the coefficients (class numeric) with the slope differences; called U1.x and U2.x for an lm with two breaks. Example: c(obj$coef[3], obj$coef[4])
  #   est.psi:   the estimated breakpoints (class numeric) from segmented.lm. Example: c(obj$psi[1,2], obj$psi[2,2])
  n <- length(x.values)
  k <- length(est.psi)
  PSI <- matrix(rep(est.psi, rep(n, k)), ncol = k)
  newZ <- matrix(x.values, nrow = n, ncol = k, byrow = FALSE)
  dummy1 <- pmax(newZ - PSI, 0)
  if (psi.est) {
    V <- ifelse(newZ > PSI, -1, 0)
    dummy2 <- if (k == 1) V * diffSlope else V %*% diag(diffSlope)
    newd <- cbind(x.values, dummy1, dummy2)
    colnames(newd) <- c(x_names, nameU, nameV)
  } else {
    newd <- cbind(x.values, dummy1)
    colnames(newd) <- c(x_names, nameU)
  }
  # if (!x_names %in% names(coef(obj.seg)))
  #   newd <- newd[, -1, drop = FALSE]
  return(newd)
}
## Test dummy matrix function----------------------------------------------
set.seed(12)
xx <- 1:100
zz <- runif(100)
yy <- 2 + 1.5*pmax(xx-35,0) - 1.5*pmax(xx-70,0) + 15*pmax(zz-0.5,0) + rnorm(100,0,2)
dati <- data.frame(x=xx, y=yy, z=zz)
out.lm <- lm(y~x, data=dati)
# 1 segmented variable, 2 breakpoints: you have to specify starting values (vector) for psi:
o <- segmented(out.lm, seg.Z=~x, psi=c(30,60),
               control=seg.control(display=FALSE))
slope(o)
plot.segmented(o)
summary(o)
# Test dummy matrix fn with the same dataset
newdata <- dati
nameU1 <- c("U1.x", "U2.x")
nameV1 <- c("psi1.x", "psi2.x")
diffSlope1 <- c(o$coef[3], o$coef[4])
est.psi1 <- c(o$psi[1,2], o$psi[2,2])
test <- dummy.matrix(x.values = newdata$x, x_names = "x", psi.est = TRUE,
nameU = nameU1, nameV = nameV1, diffSlope = diffSlope1, est.psi = est.psi1)
# Predict response variable using matrix multiplication
col1 <- matrix(1, nrow = dim(test)[1])
test <- cbind(col1, test) # Now test is the same as model.matrix(o)
predY <- coef(o) %*% t(test)
plot(predY[1,])
lines(predict.segmented(o), col = "blue") # good, predict.segmented gives same answer
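For completeness, here is a rough sketch of how confidence intervals could be added under the same "breakpoints treated as fixed" assumption, using the design matrix test built above and the coefficient covariance matrix from vcov(o); this is only a sketch and I have not verified it against predict.segmented():
# pointwise standard errors of the fitted values, treating the breakpoints as fixed
se.fit <- sqrt(diag(test %*% vcov(o) %*% t(test)))
crit <- qt(0.975, df.residual(o))
ci.lower <- as.vector(predY) - crit * se.fit
ci.upper <- as.vector(predY) + crit * se.fit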
I'm working on an assignment for my Machine Learning course, and as part of it I'm trying to implement a neural network. Since it's for school, I have to implement the algorithm manually, and not use any of the neuralnet packages available.
I've been using the material in "Learning from Data" along with the CalTech lectures that follow it on youtube.
I've put together the algorithm in R to the best of my ability, but something is going wrong along the way. I haven't been able to implement a stopping rule based on the change in the cost function, so for now I've just fixed the number of iterations as a constant.
Edit
Hey guys, thanks for the responses. I can see I left out a lot of needed information. Sorry about that; I don't really know what I was thinking.
The data I'm using is simply "toy data" generated from the sinc function sinc(x)=sin(x)/x.
The problem I'm having specifically is that the estimates that I get at the end of the algorithm are completely off from the real values, and they are significantly different every time I run the algorithm. It seems like I've put the algorithm together the way the book states, but I can't see where the problem is.
Edit 2
Added the data to the code so it can be run without doing anything extra. I also separated the individual parts of the function. As I mentioned in a comment, I was able to numerically verify the partial derivatives, so I think that part is ok. The problem I have is when I need to update the weights in order to train the network.
It's not in this part of the code, but I thought that in order to update the weights, you simply take the old weight and subtract the partial derivative of that weight scaled by the learning rate (wNew = wOld - eta*djdwOld)? A rough sketch of the update loop I have in mind is included after the code below.
theta <- function(a){
  a / (1+abs(a)) # softsign, a sigmoid-shaped function, used as the non-linearity
}
theta.prime <- function(a){
  1 / (1+abs(a))^2 # derivative of the softsign
}
x <- c( 5.949110, -1.036600, 3.256780, 7.824520, -3.606010, 3.115640, -7.786960,
-7.598090, 2.083880, 3.983000, 8.060120, 7.879760, -2.456670,
-2.152720, 3.471950, 3.567960, -4.232630, 6.831610, -9.486860, 8.692330,
-1.551860, 0.917305, 4.669480, -7.760430, 2.835410)
y <- c(-0.10804400, 0.78264000, -0.05313330, 0.13484700, -0.05522470, -0.05758530,
0.19566100, 0.13846000, 0.43534100, -0.16861400, 0.10625000,
0.08427310, 0.27012900, 0.44004800, -0.00880575, -0.10711400, -0.18671100,
0.01158470, 0.02767190, 0.06319830, 0.61802000, 0.87124300,
-0.25668100, 0.06160800, 0.10575700)
inputlayer <- 1
outputlayer <- 1
hiddenlayer <- 2
w1 <- t(matrix(rnorm(hiddenlayer,0,.01),hiddenlayer,inputlayer))
w2 <- matrix(rnorm(hiddenlayer,0,.01),hiddenlayer,outputlayer)
### Forwardprop ###
forward <- function(x, w1, w2, theta){
  s2 <- x %*% w1
  a2 <- apply(s2, c(1,2), theta)
  s3 <- a2 %*% w2
  yhat <- apply(s3, c(1,2), theta)
  return(yhat)
}
### Forwardpropagation maunally ###
s2 <- x%*%w1
a2 <- apply(s2,c(1,2),theta)
s3 <- a2%*%w2
yhat <- apply(s3,c(1,2),theta)
### Error function ###
#yhat <- forward(x,w1,w2,theta)
E <- sum((y-yhat)^2)/(length(x))
### Backward Propagation ###
delta3 <- (-2*(y-yhat)) * apply(s3,c(1,2),theta.prime)
djdw2 <- t(a2) %*% delta3
delta2 <- delta3 %*% t(w2) * apply(s2,c(1,2),theta.prime)
djdw1 <- t(x)%*%delta2
### Numerically estimated gradients ###
e <- 1e-8
numgrad1 <- matrix(0,1,2)
eps <- matrix(0,1,2)
w1e <- matrix(0,1,2)
for(j in 1:2) {
  eps[1,j] <- e
  w1e <- w1 + eps
  loss2 <- sum((y-forward(x,w1e,w2,theta))^2)
  w1e <- w1
  loss1 <- sum((y-forward(x,w1e,w2,theta))^2)
  numgrad1[1,j] <- (loss2 - loss1)/(e)
  eps[1,j] <- 0
}
numgrad2 <- matrix(0,2,1)
eps <- matrix(0,2,1)
w2e <- matrix(0,2,1)
for(j in 1:2) {
  eps[j,1] <- e
  w2e <- w2 + eps
  loss2 <- sum((y-forward(x,w1,w2e,theta))^2)
  w2e <- w2
  loss1 <- sum((y-forward(x,w1,w2e,theta))^2)
  numgrad2[j,1] <- (loss2 - loss1)/(e)
  eps[j,1] <- 0
}
# Comparison of our gradients from backpropagation
# and numerical estimation.
c(djdw1,djdw2)
c(numgrad1,numgrad2)
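Here is the rough sketch of the update loop mentioned above; eta and the number of iterations are arbitrary placeholders, and the activation is applied directly to the matrices since it is vectorised:
eta <- 0.01
for (iter in 1:5000) {
  # forward pass
  s2 <- x %*% w1
  a2 <- theta(s2)
  s3 <- a2 %*% w2
  yhat <- theta(s3)
  # backward pass (same as above)
  delta3 <- (-2*(y - yhat)) * theta.prime(s3)
  djdw2 <- t(a2) %*% delta3
  delta2 <- delta3 %*% t(w2) * theta.prime(s2)
  djdw1 <- t(x) %*% delta2
  # gradient-descent step: wNew = wOld - eta*djdwOld
  w1 <- w1 - eta * djdw1
  w2 <- w2 - eta * djdw2
}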
Novice here. I am fitting a negative binomial model on count data, where Y is the count of events, D is the treatment, and X is a logarithmic offset:
out <- glm.nb(y ~ d + offset(log(x)), data=d1)
I would like to bootstrap the confidence interval of the first difference between D=1 and D=0. I've gotten this far, but I'm not sure if it is the correct approach:
holder <- matrix(NA, 1200, 1)
out <- glm.nb(y ~ d + offset(log(x)), data=d1)
for (i in 1:1200){
  q <- sample(1:nrow(d1), 1)
  d2 <- d1[q,]
  d1_1 <- d1_2 <- d2
  d1_1$d <- 1
  d1_2$d <- 0
  d1pred <- predict(out, d1_1, type="response")
  d2pred <- predict(out, d1_2, type="response")
  holder[i,1] <- (d1pred[1] - d2pred[1])
}
mean(holder)
Is this the correct way to bootstrap the first difference?
Generally, your approach is ok, but you can do it in a more R-ish way. Firstly, if you are serious about bootstrapping, you can employ the boot library and benefit from more compact code, no loops, and many other advanced options.
In your case it can look like this:
## Packages
library(MASS)  # for glm.nb
library(boot)
## Data generation
N <- 100
set.seed(1)
d1 <- data.frame(y=rbinom(N, N, 0.5),
                 d=rbinom(N, 1, 0.5),
                 x=rnorm(N, 10, 3))
## Model
out <- glm.nb(y ~ d + offset(log(x)), data=d1)
## Statistic function (what we are bootstrapping)
## Returns the difference between D=1 and D=0
diff <- function(x, i, model){
  v1 <- v2 <- x[i,]
  v1$d <- 1
  v2$d <- 0
  predict(model, v1, type="response") - predict(model, v2, type="response")
}
## Bootstrapping itself
b <- boot(d1, diff, R=5e3, model=out)
mean(b$t)
Now b$t holds the bootstrapped values. See names(b) and/or ?boot for extra information.
Bootstrapping is a time-consuming operation, and one of the obvious advantages of the boot library is its support for parallel operation. It's as easy as:
b <- boot(d1, diff, R=5e3, model=out, parallel="multicore", ncpus=2)
If you are on Windows, use parallel="snow" instead.
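Since you asked for a confidence interval of the first difference, one option (a sketch using the percentile method; other types are available) is to pass the boot object to boot.ci:
# percentile bootstrap CI for the first component of the statistic
boot.ci(b, type = "perc")
# the index argument selects other components of the statistic, e.g.
boot.ci(b, type = "perc", index = 2)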