I have a question about minimizing the sum of squared residuals to estimate theta in the regression function below. I do not want to use any built-in optimization functions or packages in R; I want to write the iterative algorithm myself.
The regression function is: y_k=exp(-theta |x_k|)+e_k, for k=1,...,n
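For reference, the update in my loop is meant to be the Newton-Raphson step for the least-squares objective (assuming I took the derivatives correctly):

S(theta) = sum_k (y_k - exp(-theta|x_k|))^2

theta_new = theta - S'(theta)/S''(theta)
          = theta + [sum_k |x_k| exp(-theta|x_k|) (y_k - exp(-theta|x_k|))] / [sum_k |x_k|^2 exp(-theta|x_k|) (y_k - 2 exp(-theta|x_k|))]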
Here is my code, but it gives me the following error for some sets of x and y. Thanks in advance for your suggestions!
Error in if (abs(dif) < 10^(-5)) break :
  missing value where TRUE/FALSE needed
Code:
# Starting value from log-linearizing y ~ exp(-theta*|x|)
theta <- -sum(log(abs(y)))/sum(abs(x))

# Alg: Newton-Raphson iteration
rep <- 1
while (rep <= 1000) {
  Ratio <- sum((abs(x)*exp(-theta*abs(x)))*(y - exp(-theta*abs(x)))) /
           sum((abs(x)^2*exp(-theta*abs(x)))*(y - 2*exp(-theta*abs(x))))
  if (is.na(Ratio)) {
    thetanew <- theta
  } else {
    thetanew <- theta + Ratio
  }
  dif <- thetanew - theta
  theta <- thetanew
  if (abs(dif) < 10^(-5)) break
  rep <- rep + 1
}
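One thing I noticed while tracing the code (I am not sure it is the whole story): if theta ever becomes Inf (for example, when some y is 0, the starting value -sum(log(abs(y)))/sum(abs(x)) is already Inf), then Ratio is 0/0 = NaN, the is.na(Ratio) branch sets thetanew <- theta, and dif <- Inf - Inf is NaN, so abs(dif) < 10^(-5) evaluates to NA and the if() throws exactly this error. A guard like the following keeps the loop from crashing, though it only masks the bad iterate rather than fixing the starting value:

if (!is.finite(dif) || abs(dif) < 10^(-5)) break  # stop on NaN/Inf instead of erroring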
I am trying to reproduce the SPSS output for the significance of a linear trend among means when equal variances are not assumed.
I have gratefully used code from http://www-personal.umich.edu/~gonzo/coursenotes/file3.pdf to create a function for calculating separate-variance contrasts, which, based on my searching, corresponds to the “equal variances not assumed” output in SPSS.
My problem/goal:
I am only assessing orthogonal polynomial trends (mostly linear). I want to adapt the function so that its contrast argument can take pre-made contrast matrices rather than requiring me to specify the coefficients manually each time (room for typos!).
Code for creating the function from the notes:
sepvarcontrast <- function(dv, group, contrast) {
  means <- c(by(dv, group, mean))
  vars  <- c(by(dv, group, var))
  ns    <- c(by(dv, group, length))
  ihat <- contrast %*% means
  t.denominator <- sqrt(contrast^2 %*% (vars/ns))
  t.welch <- ihat/t.denominator
  num.contrast <- ifelse(is.null(dim(contrast)), 1, dim(contrast)[1])
  df.welch <- rep(0, num.contrast)
  if (is.null(dim(contrast))) contrast <- t(as.matrix(contrast))
  for (i in 1:num.contrast) {
    num <- (contrast[i,]^2 %*% (vars))^2
    den <- sum((contrast[i,]^2 * vars)^2 / (ns-1))
    df.welch[i] <- num/den
  }
  p.welch <- 2*(1 - pt(abs(t.welch), df.welch))
  result <- list(ihat = ihat, se.ihat = t.denominator, t.welch = t.welch,
                 df.welch = df.welch, p.welch = p.welch)
  return(result)
}
I would like to be able to use the function like this:
# Create a polynomial contrast matrix for 5 groups, then save
contr.mat5 <- contr.poly(5)
# Calculate separate variance
sepvarcontrast(dv, group, contrast = contr.mat5)
I have tried those exact commands to see if they would work, but receive Error in contrast %*% means : non-conformable arguments. I have played around with the code but I can't get it to work.
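One thing I noticed while experimenting (a guess; I have not verified the results against SPSS): contr.poly() stores the contrasts in columns, not rows (contr.poly(5) is a 5 x 4 matrix), while sepvarcontrast() computes contrast %*% means and later loops over rows, so it seems to expect one contrast per row. Transposing the matrix at least makes the dimensions conform:

contr.mat5 <- contr.poly(5)
dim(contr.mat5)  # 5 4: groups in rows, contrasts in columns
sepvarcontrast(dv, group, contrast = t(contr.mat5))  # contrasts as rows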
All suggestions are appreciated! I am still learning how to create a reprex...
I am trying to construct an MLE algorithm for a bivariate normal case. I am stuck somewhere: the script seems to contain no error, but when I run it, it ends with a warning.
I have a sample of size n (a fixed constant; I have tried n = 100, but it can be anything else) from a bivariate normal distribution with mean vector (0, 0) and covariance matrix matrix(c(2.2, 1.8, 1.8, 3), 2, 2).
I've tried several optimization functions (including nlm(), mle(), spg() and optim()) to maximize the likelihood function (or, equivalently, to minimize the negative log-likelihood), but I get warnings or errors.
require(MASS)
require(tmvtnorm)
require(BB)
require(matrixcalc)
I've defined the first likelihood function as follows:
bvrt_ll <- function(mu, sigma, rho, sample)
{
  n <- nrow(sample)
  mu_hat <- c(mu[1], mu[2])
  p <- length(mu)
  if (sigma[1] > 0 && sigma[2] > 0)
  {
    if (rho <= 1 && rho >= -1)
    {
      sigma_hat <- matrix(c(sigma[1]^2
                           ,sigma[1]*sigma[2]*rho
                           ,sigma[1]*sigma[2]*rho
                           ,sigma[2]^2), 2, 2)
      stopifnot(is.positive.definite(sigma_hat))
      neg_likelihood <- (n*p/2)*log(2*pi) + (n/2)*log(det(sigma_hat)) +
        0.5*sum(((sample - mu_hat) %*% solve(sigma_hat) %*% t(sample - mu_hat)))
      return(neg_likelihood)
    }
  }
  else NA
}
I preferred this one since I could set the constraints for the sigmas and rho, but when I use mle():
> mle(minuslogl = bvrt_ll, start = list(mu = mu_est, sigma = sigma_est, rho = rho_est),
+     method = "BFGS")
Error in optim(start, f, method = method, hessian = TRUE, ...) :
  (list) object cannot be coerced to type 'double'
I also tried nlm() and spg() (the latter from package BB), but they did not help either. I tried the same function without defining the constraints (inside the likelihood, not in the optimization function) and got some results, but with warnings: both nlm() and spg() said the process failed because the covariance matrix was not positive definite, even though the true one is. I think that was due to the iteration: an intermediate covariance matrix may not have been positive definite, given that I did not define the constraints.
So, as a result, I need to construct an MLE algorithm for the bivariate normal. Where am I making a mistake?
NOTE: I also tried the optimization functions with the following (I am not sure I did it correctly):
neg_likelihood <- function(mu, sigma, rho)
{
  if (rho >= -1 && rho <= 1)
  {
    -sum(mvtnorm::dmvnorm(x = sample_10, mean = mu
                         ,sigma = matrix(c(sigma[1]^2
                                          ,sigma[1]*sigma[2]*rho, sigma[1]*sigma[2]*rho
                                          ,sigma[2]^2), 2, 2), log = T))
  }
  else NA
}
Any help is appreciated.
Thanks.
EDIT: mu is a vector of length 2 specifying the population means, sigma is a vector of length 2 specifying the population standard deviations of the random variables, and rho is a scalar, the correlation coefficient between the two r.v.s.
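EDIT 2: One workaround I am experimenting with (I am not sure it is the right approach): the "(list) object cannot be coerced to type 'double'" error seems to come from mle() forwarding start to optim(), which needs a flat numeric vector, so vector-valued entries like mu = mu_est in the start list break the coercion. A wrapper with scalar parameters at least gets past that error (sample_10 is my data matrix; an unconstrained method like BFGS can still step outside rho in [-1, 1], so bounds via method = "L-BFGS-B" may also be needed):

# untested sketch: unpack the vector parameters into scalars for mle()/optim()
bvrt_ll_scalar <- function(mu1 = 0, mu2 = 0, s1 = 1, s2 = 1, rho = 0)
  bvrt_ll(c(mu1, mu2), c(s1, s2), rho, sample_10)

mle(minuslogl = bvrt_ll_scalar,
    start = list(mu1 = 0, mu2 = 0, s1 = 1.5, s2 = 1.7, rho = 0.5),
    method = "BFGS")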
You can do it in closed form, so there is no need for numerical optimization; see the Wikipedia article on the multivariate normal. Just use colMeans and cov, and take note of the method argument in help("cov") and this comment:
The denominator n - 1 is used which gives an unbiased estimator of the
(co)variance for i.i.d. observations. These functions return NA when
there is only one observation (whereas S-PLUS has been returning NaN),
and fail if x has length zero.
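In code, that might look like this minimal sketch (samp is a stand-in name for your n x 2 sample matrix; the rescaling reflects that cov() divides by n - 1 while the MLE divides by n):

n <- nrow(samp)
mu_hat    <- colMeans(samp)           # MLE of the mean vector
sigma_hat <- cov(samp) * (n - 1) / n  # rescale cov()'s n - 1 denominator to the MLE's n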
I am quite new to sampling posterior distributions (and therefore to the Bayesian approach) using an MCMC technique based on the Metropolis-Hastings algorithm.
I am using the mcmc library in R for this. My distribution is multidimensional. To check that the metrop algorithm works for a multivariate distribution, I first ran it successfully on a multivariate Student-t distribution (package mvtnorm, function dmvt).
Now I want to apply the same thing to my own multivariate distribution (2 variables, x and y), but it doesn't work; I get the error: Error in X[, 1] : incorrect number of dimensions
Here is my code:
library(mcmc)
library(mvtnorm)

my.seed <- 123

logprior <- function(X, ...)
{
  ifelse((-50.0 <= X[,1] & X[,1] <= 50.0) & (-50.0 <= X[,2] & X[,2] <= 50.0),
         return(0), return(-Inf))
}

logpost <- function(X, ...)
{
  log.like <- log(exp(-((X[,1]^2 + X[,2]^2 - 4)/10)^2) * sin(4*atan(X[,2]/X[,1])))
  log.prior <- logprior(X)
  log.post <- log.like + log.prior  # if flat prior, the posterior distribution is the likelihood one
  return(log.post)
}

x <- seq(-5, 5, 0.15)
y <- seq(-5, 5, 0.15)
X <- cbind(x, y)

#out <- metrop(function(X) dmvt(X, df = 3, log = TRUE), 0, blen = 100, nbatch = 100)  # this works
out <- metrop(function(X) logpost(X), c(0,0), blen = 100, nbatch = 100)
out <- metrop(out)
out$accept
So I tried to keep the same format as in the working example, but it still fails with the error mentioned above.
Another odd thing is that applying logpost to X directly works perfectly.
Thanks in advance for your help, best
The metrop function passes individual samples, and therefore a simple vector to logpost, not a matrix (which is what X is). Hence, the solution is to change X[,1] and X[,2] to X[1] and X[2], respectively.
I ran it like this, and it leads to other issues (X[2]/X[1] is NaN for the initialization), but that has more to do with your specific likelihood model and is out of the scope of your question.
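For completeness, a sketch of that change (the only substantive edits are the vector indexing, an initial value moved off the origin since atan(X[2]/X[1]) is NaN at c(0,0), and a guard that treats the regions where the sin factor makes the density negative as zero probability, so the log returns -Inf instead of NaN):

logprior <- function(X, ...) {
  if (-50 <= X[1] && X[1] <= 50 && -50 <= X[2] && X[2] <= 50) 0 else -Inf
}
logpost <- function(X, ...) {
  f <- exp(-((X[1]^2 + X[2]^2 - 4)/10)^2) * sin(4*atan(X[2]/X[1]))
  log.like <- if (f > 0) log(f) else -Inf  # -Inf, not NaN, where the 'density' is <= 0
  log.like + logprior(X)  # flat prior: the posterior is the likelihood
}
out <- metrop(logpost, initial = c(1, 1), nbatch = 100, blen = 100)
out$accept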
I'm trying to get a 95% confidence interval around some predicted values, but haven't been able to achieve this.
Basically, I estimated a growth curve like this:
set.seed(123)
dat <- data.frame(size = rnorm(50, 10, 3), age = rnorm(50, 5, 2))
S <- function(t, ts, C, K) ((C*K)/(2*pi))*sin(2*pi*(t - ts))
sommers <- function(t, Linf, K, t0, ts, C)
  Linf*(1 - exp(-K*(t - t0) - S(t, ts, C, K) + S(t0, ts, C, K)))
model <- nls(size ~ sommers(age, Linf, K, t0, ts, C), data = dat,
             start = list(Linf = 10, K = 4.7, t0 = 2.2, C = 0.9, ts = 0.1))
I have independent size measurements for which I would like to predict age. The inverse of the function is not very straightforward, so I calculated it numerically like this:
model.out <- coef(model)
S.out <- function(t)
  ((model.out[[4]]*model.out[[2]])/(2*pi))*sin(2*pi*(t - model.out[[5]]))
sommers.out <- function(t)
  model.out[[1]]*(1 - exp(-model.out[[2]]*(t - model.out[[3]]) - S.out(t) + S.out(model.out[[3]])))
inverse <- function(f, lower = -100, upper = 100) {
  function(y) uniroot(function(x) f(x) - y, lower = lower, upper = upper)[1]
}
sommers.inverse <- inverse(sommers.out, 0, 25)
x <- sommers.inverse(10)  # this works with my complete dataset, but not with this fake one
Although this works fine, I need the 95% confidence interval around this estimate (x). For linear models there is, for example, predict(..., interval = "confidence"). I could also somehow bootstrap the function to get the quantiles associated with the parameters (I didn't find out how), and then use the extremes of those to calculate the maximum and minimum predictable values. But that doesn't really seem like the right way of doing this...
Any help would be greatly appreciated.
EDIT after answer:
So this worked (explained in Ben Bolker's book; see the answer below):
library(MASS)  # for mvrnorm
vmat <- mvrnorm(1000, mu = coef(mfit), Sigma = vcov(mfit))
dist <- numeric(1000)
for (i in 1:1000) { dist[i] <- sommers_inverse(9.938, vmat[i,]) }
quantile(dist, c(0.025, 0.975))
On the rather bad fake data I gave, this of course works rather horribly. But on the real data (which I have trouble recreating here), it is OK!
Unless I'm mistaken, you're going to have to use either regular (parametric) bootstrapping or a method called "population predictive intervals" (e.g., see section 5 of chapter 7 of Bolker 2008), which assumes that the sampling distributions of your parameters are multivariate Normal. However, I think you may have bigger problems, unless I've somehow messed up your model in adapting it ...
Generate data (note that random data may actually be bad for testing your model; see below ...):
set.seed(123)
dat <- data.frame(size = rnorm(50, 10, 3), age = rnorm(50, 5, 2))
S <- function(t, ts, C, K) ((C*K)/(2*pi))*sin(2*pi*(t - ts))
sommers <- function(t, Linf, K, t0, ts, C)
  Linf*(1 - exp(-K*(t - t0) - S(t, ts, C, K) + S(t0, ts, C, K)))
Plot the data and the initial curve estimate:
plot(size~age,data=dat,ylim=c(0,16))
agevec <- seq(0,10,length=1001)
lines(agevec,sommers(agevec,Linf=10,K=4.7,t0=2.2,ts=0.1,C=0.9))
I had trouble with nls so I used minpack.lm::nls.lm, which is slightly more robust. (There are other options here, e.g. calculating the derivatives and providing the gradient function, or using AD Model Builder or Template Model Builder, or using the nls2 package.)
For nls.lm we need a function that returns the residuals:
sommers_fn <- function(par, dat) {
  with(c(as.list(par), dat), size - sommers(age, Linf, K, t0, ts, C))
}
library(minpack.lm)
mfit <- nls.lm(fn = sommers_fn,
               par = list(Linf = 10, K = 4.7, t0 = 2.2, C = 0.9, ts = 0.1),
               dat = dat)
coef(mfit)
coef(mfit)
## Linf K t0 C ts
## 10.6540185 0.3466328 2.1675244 136.7164179 0.3627371
Here's our problem:
plot(size ~ age, data = dat, ylim = c(0, 16))
lines(agevec, sommers(agevec, Linf = 10, K = 4.7, t0 = 2.2, ts = 0.1, C = 0.9))
with(as.list(coef(mfit)), {
  lines(agevec, sommers(agevec, Linf, K, t0, ts, C), col = 2)
  abline(v = t0, lty = 2)
  abline(h = c(0, Linf), lty = 2)
})
With this kind of fit, the results of the inverse function are going to be extremely unstable, as the inverse function is many-to-one, with the number of inverse values depending sensitively on the parameter values ...
sommers_pred <- function(x, pars) {
  with(as.list(pars), sommers(x, Linf, K, t0, ts, C))
}
sommers_pred(6, coef(mfit))  ## s(6) = 9.93

sommers_inverse <- function(y, pars, lower = -100, upper = 100) {
  uniroot(function(x) sommers_pred(x, pars) - y, c(lower, upper))$root
}
sommers_inverse(9.938, coef(mfit))  ## 0.28
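One quick way to see the many-to-one behavior is to plot the fitted curve minus the target value and look at the zero crossings:

curve(sommers_pred(x, coef(mfit)) - 9.938, from = 0, to = 10, n = 1001)
abline(h = 0, lty = 2)  # each crossing is a distinct 'inverse' age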
If I pick my interval very carefully I can get back the correct answer ...
sommers_inverse(9.938, coef(mfit), 5.5, 6.2)
Maybe your model will be better behaved with more realistic data. I hope so ...
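If you wanted the regular parametric-bootstrap route mentioned at the top instead, a rough (untested) sketch might look like this, reusing mfit, sommers_fn, sommers_pred, sommers_inverse, and dat from above; the tryCatch guards against replicates where uniroot() finds no root in the (deliberately narrow) bracket:

set.seed(101)
pars  <- coef(mfit)
sigma <- sd(mfit$fvec)  # residual SD of the fit (fvec holds the residuals)
boot_est <- replicate(1000, {
  sim <- dat
  sim$size <- sommers_pred(dat$age, pars) + rnorm(nrow(dat), 0, sigma)  # simulate from the fit
  bfit <- nls.lm(fn = sommers_fn, par = as.list(pars), dat = sim)       # refit to simulated data
  tryCatch(sommers_inverse(9.938, coef(bfit), 5.5, 6.2), error = function(e) NA)
})
quantile(boot_est, c(0.025, 0.975), na.rm = TRUE)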