Coefficient of determination for several models in R

I have several nonlinear regression models (nls) saved as a1, a2, ..., a_n. I would like to get a vector of the corresponding coefficients of determination.
E.g.
y <- c(1.0385, 1.0195, 1.0176)
x <- c(3,4,5)
data <- data.frame(x,y)
b1 <- function(x,a,b) {a/b^x}
b2 <- function(x,a,b) {a^b^x}
a1 <- nls(y ~ b1(x,a,b), data = data, start = c(a=0.9, b=0.6))
a2 <- nls(y ~ b2(x,a,b), data = data, start = c(a=0.9, b=0.6))
I can get both coefficients of determination using
a <- sum(residuals(a1)^2)
b <- sum((y - mean(y))^2)
1 - (a/b)
#[1] 0.8198396
a <- sum(residuals(a2)^2)
b <- sum((y - mean(y))^2)
1 - (a/b)
#[1] 0.9066859
but what if I have, let's say, 20 models?
I tried to use a for loop, but it didn't work for me because the objects are of class nls, neither vectors nor matrices.

Use a list of all your results and then apply a function to it:
results <- list(a1,a2)
b <- sum((y - mean(y))^2)
1 - (sapply(results,function(x) sum(residuals(x)^2) ) / b )
#[1] 0.8198396 0.9066859
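If the fits only exist as separate objects named a1, a2, ..., the list can also be built programmatically instead of being typed out. A minimal sketch, assuming 20 such objects (a1 to a20) are present in the workspace:
## collect a1, ..., a20 into a named list and compute all R^2 values at once
results <- mget(paste0("a", 1:20))
b <- sum((y - mean(y))^2)
1 - sapply(results, function(m) sum(residuals(m)^2)) / b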

Related

Quadratic Programming Setting up Constraint

I am trying to set up a simple OLS model with constraints on the coefficients in R. The code below works; however, it demonstrates
y = c + a1x1 + a2x2 + a3x3 with constraint a1+a2 = 1
I would like to revise this constraint to:
a1*a2 - a3 = 0
Thanks for your help!
WORKING CODE:
set.seed(1000)
n <- 20
x1 <- seq(100,length.out=n)+rnorm(n,0,2)
x2 <- seq(50,length.out=n)+rnorm(n,0,2)
x3 <- seq(10,length.out=n)+rnorm(n,0,2)
constant <- 100
ymat <- constant + .5*x1 + .5*x2 + .75*x3 + rnorm(n,0,4)
xmat <- cbind(x1,x2,x3)
X <- cbind(rep(1,n),xmat) # explicitly include vector for constant
bh <- solve(t(X)%*%X)%*%t(X)%*%ymat
XX <- solve(t(X)%*%X)
cmat <- matrix(1,1,1)
Q <- matrix(c(0,1,1,0),ncol(X),1) # a1+a2=1 for y = c + a1x1 + a2x2 + a3x3
bc <- bh-XX%*%Q%*%solve(t(Q)%*%XX%*%Q)%*%(t(Q)%*%bh-cmat)
library(quadprog)
d <- t(ymat) %*% X
Rinv = solve(chol(t(X)%*%X))
qp <- solve.QP(Dmat=Rinv, dvec=d, Amat=Q, bvec=cmat, meq=1, factorized=TRUE)
qp
cbind(bh,qp$unconstrained.solution)
cbind(bc,qp$solution)
Assuming the problem is to minimize ||ymat - X b||^2 subject to b[2] * b[3] == b[4], we can substitute for b[4], giving the unconstrained nls problem shown below. The b in the code below holds the first three elements of the full coefficient vector, and b[4] is recovered by multiplying its last two elements together. No packages are used.
fm <- nls(ymat ~ X %*% c(b, b[2] * b[3]), start = list(b = 0:2))
fm
giving:
Nonlinear regression model
model: ymat ~ X %*% c(b, b[2] * b[3])
data: parent.frame()
b1 b2 b3
76.9718 0.6275 0.7598
residual sum-of-squares: 204
Number of iterations to convergence: 4
Achieved convergence tolerance: 6.555e-06
To compute b4:
prod(coef(fm)[-1])
## [1] 0.476805
Note
In a similar way the original problem (to minimize the same objective but with the original constraint) can be reduced to an unconstrained problem and solved using nls via substitution:
nls(ymat ~ X %*% c(b[1], b[2], 1-b[2], b[3]), start = list(b = 0:2))
giving:
Nonlinear regression model
model: ymat ~ X %*% c(b[1], b[2], 1 - b[2], b[3])
data: parent.frame()
b1 b2 b3
105.3186 0.3931 0.7964
residual sum-of-squares: 222.3
Number of iterations to convergence: 1
Achieved convergence tolerance: 4.838e-08
It would even be possible to reparameterize to make this original problem solvable by lm:
lm(ymat ~ I(X[, 2] - X[, 3]) + X[, 4] + offset(X[, 3]))
giving
Call:
lm(formula = ymat ~ I(X[, 2] - X[, 3]) + X[, 4] + offset(X[, 3]))
Coefficients:
(Intercept) I(X[, 2] - X[, 3]) X[, 4]
105.3186 0.3931 0.7964
G. Grothendieck - thank you for your response. Unfortunately this didn't work for me.
I decided to work out the Lagrangian by hand, which turned out to be too complicated for me to solve.
Then I realized:
a1*a2 - a3 = 0
a1*a2 = a3
ln(a1*a2) = ln(a3)
ln(a1) + ln(a2) - ln(a3) = 0
This leaves me with an additive constraint (in the log-transformed coefficients), which I can solve with the quadprog package.
Maybe you can try the code below, which uses fmincon():
library(pracma)
library(NlcOptim)
# define objective function
fn <- function(v) norm(ymat - as.vector(xmat %*% v), "2")
# the constraint a1*a2 - a3 = 0
heq1 <- function(v) prod(v[1:2]) - v[3]
# solve for a1, a2 and a3
res <- fmincon(0:2, fn, heq = heq1)
which gives
> res$par
[1] 1.9043754 -0.1781830 -0.3393272
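As a quick sanity check (a sketch, assuming res from the fmincon() call above), the equality constraint and the achieved objective value can be inspected directly:
## should be (close to) zero if the constraint a1*a2 - a3 = 0 is satisfied
prod(res$par[1:2]) - res$par[3]
## residual norm at the constrained solution
fn(res$par)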

Predict function for lm object in R

Why are prediction_me and prediction_R not equal? I'm attempting to follow the formula given by Lemma 5 here. Does the predict function use a different formula, have I made a mistake in my computation somewhere, or is it just rounding error? (the two are pretty close)
set.seed(100)
# generate data
x <- rnorm(100, 10)
y <- 3 + x + rnorm(100, 5)
data <- data.frame(x = x, y = y)
# fit model
mod <- lm(y ~ x, data = data)
# new observation
data2 <- data.frame(x = rnorm(5, 10))
# prediction for new observation
d <- as.matrix(cbind(1, data[,-2]))
d2 <- as.matrix(cbind(1, data2))
fit <- d2 %*% mod$coefficients
t <- qt(1 - .025, mod$df.residual)
s <- summary(mod)$sigma
half <- as.vector(t*s*sqrt(1 + d2%*%solve(t(d)%*%d, t(d2))))
prediction_me <- cbind(fit, fit - half, fit + half)
prediction_R <- predict(mod, newdata = data2, interval = 'prediction')
prediction_me
prediction_R
Your current code is almost fine. Just note that the formula in Lemma 5 is for a single newly observed x. For this reason, half contains not only the relevant variances but also covariances, while you need only the former. Thus, as.vector should be replaced with diag:
half <- diag(t * s * sqrt(1 + d2 %*% solve(t(d) %*%d , t(d2))))
prediction_me <- cbind(fit, fit - half, fit + half)
prediction_R <- predict(mod, newdata = data2, interval = 'prediction')
range(prediction_me - prediction_R)
# [1] 0 0
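A side note, not needed for correctness: with many new observations, the diag() version builds the full matrix of covariances only to discard its off-diagonal entries. The needed diagonal can be computed directly; a sketch equivalent to the line above:
## diag(A %*% B %*% t(A)) equals rowSums((A %*% B) * A), so the full matrix is never formed
half2 <- t * s * sqrt(1 + rowSums((d2 %*% solve(t(d) %*% d)) * d2))
all.equal(half, half2)  # should be TRUE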

Is there a cleaner way of getting individual regression estimates out from a multifactor model in R

I have solved what I want to get out of my code, but I'm in search of a cleaner way of getting this result out. Are there any built-in functions that I don't know about?
We have 2 correlated variables and a lot of binomial factors (around 200),
here illustrated with just f1 and f2:
x <- rnorm(100)
y <- rnorm(100)
f1 <- rbinom(100, 1, 0.5)
f2 <- rbinom(100, 1, 0.5)
# which gives the possible groups:
group <- rep(NA, 100)
group[which(f1 & f2)] <- "A"
group[which(!f1 & f2)] <- "B"
group[which(f1 & !f2)] <- "C"
group[which(!f1 & !f2)] <- "D"
df <- data.frame(group,y,x,f1,f2)
We run a model selection, adding and removing terms and interactions, and end up with a model; here, say, both f1 and f2 and their interactions with x came out as predictors:
m <- glm(y ~ x * f1 + x * f2)
Then my aim is to make a simple linear model output for each group i.e.:
y = a * x + b
# The possible groups:
groups <- data.frame(groups = c("A", "B", "C", "D"), f1=c(1,0,1,0), f2=c(1,1,0,0))
interactions <- grep(":", attr(m$terms, "term.labels"))
factors <- attr(m$terms, "term.labels")[-c(1,interactions)]
interaction.terms <- substring(attr(m$terms, "term.labels")[interactions], 3)
functions <- data.frame(groups$groups, intercept=NA, slope=NA)
for(i in seq(along=groups$groups)) {
  intercept <- coef(m)["(Intercept)"] + sum(groups[i, factors]*coef(m)[factors])
  slope <- coef(m)["x"] + sum(groups[i, interaction.terms]*coef(m)[paste("x:", interaction.terms, sep="")])
  functions[i, "intercept"] <- intercept
  functions[i, "slope"] <- slope
}
Which gives an output like this:
> functions
groups.groups intercept slope
1 A -0.10932806 -0.07468630
2 B -0.37755949 -0.17769345
3 C 0.23635139 0.18406047
4 D -0.03188004 0.08105332
The output is correct and is what I would like, so that is fine. I just think this method is quite a complicated mess, and I can't seem to find a cleaner way of getting these functions out.
I would probably recommend using predict() for this. The intercept is just the value at x=0, and the slope is the difference in the values between x=1 and x=0. So you can do
int <- predict(m, cbind(groups,x=0))
t1 <- predict(m, cbind(groups,x=1))
data.frame(group=groups$groups, int=int, slope=t1-int)
You didn't set a seed for your example so your exact results aren't reproducible, but if you do set.seed(15) before the sample generation, you should get
group int slope
1 A -0.08372785 -0.16037708
2 B -0.03904330 0.14322623
3 C 0.16455660 -0.02951151
4 D 0.20924114 0.27409179
with both methods.
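A quick check that the two approaches really do agree (a sketch, assuming functions from the question and int/t1 from the predict() version were computed on the same simulated data):
all.equal(unname(functions$intercept), unname(int))       # should be TRUE
all.equal(unname(functions$slope), unname(t1 - int))      # should be TRUE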

Two models in one WinBUGS script

I am conducting a Bayesian analysis using WinBUGS from R. I need to combine two WinBUGS scripts into one; however, I am receiving an error message (Variable x2 is not defined in model or in data set). Here is the WinBUGS code:
model{
  # Model's likelihood
  for (i in 1:n) {
    tto[i] ~ dnorm( mu[i], tau )   # stochastic component
    b[i] ~ dnorm(0.0, tau2)
    # link and linear predictor
    mu[i] <- 1 - (beta.concern2*concern2[i] + beta.concern3*concern3[i] + b[i])
  }
  for (i in 1:1002) {
    # Linear regression on logit
    logit(p[i]) <- beta.concern2*x2[i,1] + beta.concern2*x2[i,2]
    # Likelihood function for each data point
    y2[i] ~ dbern(p[i])
  }
  s2 <- 1/tau
  s  <- sqrt(s2)
  a2 <- 1/tau2
  a  <- sqrt(a2)
}
where x2 is a 1002 x 2 matrix and y2 is a vector.
This is the R code defining the data:
combined.data <- list(n=n,tto=tto,concern2=concern2,
concern3=concern3,y2=y2, x2=x2)
Does anyone know what is wrong?
I'm going to be making quite a few assumptions here...
Perhaps you could add a diagram illustrating the relationships between the variables, and which are deterministic vs stochastic. I find this helpful when making models in BUGS. Also, it would be helpful to have the dimensions of all your data, the meaning of n and perhaps some context or detail on what you're modelling and the nodes in which you're interested.
I'm guessing that y is a binary (0,1) vector of length 1002, and has corresponding values for x2[,1] and x2[,2] (herein x1, x2) and concern2, concern3 (herein c2, c3) and tto i.e.
nrow(x2) == 1002
Here's some sample data with nrow == 10 to work with:
y <- sample(x=c(0,1), size=10, replace=TRUE, prob=c(0.5,0.5))
x2 <- matrix(rnorm(20), nrow=10, ncol=2)
c2 <- rnorm(10)
c3 <- rnorm(10)
tto <- rnorm(10)
It appears that you're trying to determine the values of beta.concern2 (herein b2) for both columns of x2 in the logit. I'm not sure why you'd want to fit the same parameter to two different predictors. In case this is a typo, I'm giving b2 and b3 as parameters instead. I hope you'll be able to adapt this to your needs.
The products of these values of b2, b3 (stochastic) with c2, c3 (given) are used to generate a variable mu, which also has an error term. (I'm presuming b[i] (herein b1[i]) is a normally distributed error term.)
Then tto is a normally distributed variable which depends on the value of mu, and itself has an error term. I have set the precision of the error terms as being equal in both cases.
So for such a model:
require(rjags)
### The data
dataList <- list(
  x1 = x2[,1],
  x2 = x2[,2],
  y = y,
  c2 = c2,
  c3 = c3,
  tto = tto,
  nRowX = nrow(x2)
)
### make sure logistic model can be fitted
f1 <- stats::glm(dataList$y ~ dataList$x1 + dataList$x2 -1, family=binomial(logit))
show(f1)
### set some approximate initial values
b1Init <- 0.1         # arbitrary
b2Init <- f1$coef[1]  # the glm fit has no intercept, so its coefficients are x1, x2
b3Init <- f1$coef[2]
initsList <- list(
  b1 = b1Init,
  b2 = b2Init,
  b3 = b3Init)
### Model: varying parameters (b2, b3) per observation; 2x error terms
modelstring <- "
model {
  for(i in 1:nRowX){
    tto[i] ~ dnorm(mu[i], prec)
    mu[i] <- 1 - (b1 + b2*c2[i] + b3*c3[i])
    y[i] ~ dbern(L[i])    # L for logit
    L[i] <- 1/(1+exp(- ( b2*x1[i] + b3*x2[i]) ))
  }
  b1 ~ dnorm(0, prec)     # precision
  prec <- 1/sqrt(SD)      # convert to Std Deviation
  SD <- 0.5
  b2 ~ dnorm(0, 1.4)      # arbitrary
  b3 ~ dnorm(0, 1.4)
}
"
writeLines(modelstring,con="model.txt")
parameters <- c("b1","b2","b3") # to monitor
adaptSteps <- 1e4 # "tune in" samplers
burnInSteps <- 2e4 # "burn in" samplers
nChains <- 3
numSavedSteps <-2e3
thinSteps <- 1 # Steps to "thin" (1=keep every step).
nPerChain <- ceiling(( numSavedSteps * thinSteps ) / nChains) # Steps per chain
if (exists("jagsModel")) rm(jagsModel)  # remove any previous model object
jagsModel <- rjags::jags.model(
  "model.txt", data=dataList,
  inits=initsList, n.chains=nChains,
  n.adapt=adaptSteps)
stats::update(jagsModel, n.iter=burnInSteps)
### MCMC chain
MCMC1 <- as.matrix(rjags::coda.samples(
  jagsModel, variable.names=parameters,
  n.iter=nPerChain, thin=thinSteps))
### Extract chain values
b2Sample <- as.vector(MCMC1[,grep("b2",colnames(MCMC1))])
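From there, posterior summaries follow with base functions; a minimal sketch, assuming b2Sample from above:
mean(b2Sample)                             # posterior mean of b2
quantile(b2Sample, c(0.025, 0.5, 0.975))   # median and 95% credible interval
## the other monitored parameters can be extracted the same way, e.g.
b3Sample <- as.vector(MCMC1[, grep("b3", colnames(MCMC1))])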

profile confidence intervals in R: mle2

I am trying to use the command mle2 in the package bbmle. I am looking at p. 2 of "Maximum likelihood estimation and analysis with the bbmle package" by Bolker. Somehow I fail to enter the right start values. Here's the reproducible code:
l.lik.probit <- function(par, ivs, dv){
  Y <- as.matrix(dv)
  X <- as.matrix(ivs)
  K <- ncol(X)
  b <- as.matrix(par[1:K])
  phi <- pnorm(X %*% b)
  sum(Y * log(phi) + (1 - Y) * log(1 - phi))
}
n=200
set.seed(1000)
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- rnorm(n)
x4 <- rnorm(n)
latentz<- 1 + 2.0 * x1 + 3.0 * x2 + 5.0 * x3 + 8.0 * x4 + rnorm(n,0,5)
y <- latentz
y[latentz < 1] <- 0
y[latentz >=1] <- 1
x <- cbind(1,x1,x2,x3,x4)
values.start <-c(1,1,1,1,1)
foo2<-mle2(l.lik.probit, start=list(dv=0,ivs=values.start),method="BFGS",optimizer="optim", data=list(Y=y,X=x))
And this is the error I get:
Error in mle2(l.lik.probit, start = list(Y = 0, X = values.start), method = "BFGS", :
some named arguments in 'start' are not arguments to the specified log-likelihood function
Any idea why? Thanks for your help!
You've missed a couple of things, but the most important is that by default mle2 takes a list of parameters; you can make it take a parameter vector instead, but you have to work a little bit harder.
I have tweaked the code slightly in places. (I changed the log-likelihood function to a negative log-likelihood function, without which this would never work!)
l.lik.probit <- function(par, ivs, dv){
  K <- ncol(ivs)
  b <- as.matrix(par[1:K])
  phi <- pnorm(ivs %*% b)
  -sum(dv * log(phi) + (1 - dv) * log(1 - phi))
}
n <- 200
set.seed(1000)
dat <- data.frame(x1=rnorm(n),
                  x2=rnorm(n),
                  x3=rnorm(n),
                  x4=rnorm(n))
beta <- c(1,2,3,5,8)
mm <- model.matrix(~x1+x2+x3+x4,data=dat)
latentz<- rnorm(n,mean=mm%*%beta,sd=5)
y <- latentz
y[latentz < 1] <- 0
y[latentz >=1] <- 1
x <- mm
values.start <- rep(1,5)
Now we do the fit. The main thing is to specify vecpar=TRUE and to use parnames to let mle2 know the names of the elements in the parameter vector ...
library("bbmle")
names(values.start) <- parnames(l.lik.probit) <- paste0("b",0:4)
m1 <- mle2(l.lik.probit, start=values.start,
           vecpar=TRUE,
           method="BFGS", optimizer="optim",
           data=list(dv=y, ivs=x))
As pointed out above, for this particular example you have just re-implemented probit regression (although I understand that you now want to extend this to allow for heteroscedasticity in some way ...)
dat2 <- data.frame(dat,y)
m2 <- glm(y~x1+x2+x3+x4,family=binomial(link="probit"),
data=dat2)
As a final note, I would say that you should check out the parameters argument, which allows you to specify a sub-linear model for any one of the parameters, and the formula interface:
m3 <- mle2(y~dbinom(prob=pnorm(eta), size=1),
           parameters=list(eta~x1+x2+x3+x4),
           start=list(eta=0),
           data=dat2)
PS: confint(m1) appears to work fine (giving profile CIs as requested) with this set-up.
ae <- function(x,y) all.equal(unname(coef(x)),unname(coef(y)),tol=5e-5)
ae(m1,m2) && ae(m2,m3)
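To actually obtain the profile confidence intervals mentioned above, a minimal usage sketch with the m1 fit from the vecpar set-up:
confint(m1)          # profile CIs for b0..b4
plot(profile(m1))    # likelihood profile plots, if wanted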

Resources