R loess: coefficients of global "parametric" terms

Is there a way to extract the coefficients of globally fitted terms in local regression modeling?
Maybe I misunderstand the role of globally fitted terms in the function loess, but what I would like to do is the following:
# baseline:
x <- sin(seq(0.2,0.6,length.out=100)*pi)
# noise:
x_noise <- rnorm(length(x),0,0.1)
# known structure:
x_1 <- sin(seq(5,20,length.out=100))
# signal:
y <- x + x_1*0.25 + x_noise
# fit loess model:
x_seq <- seq_along(x)
mod <- loess(y ~ x_seq + x_1,parametric="x_1")
The fit works fine; however, how can I extract the estimated coefficient of the globally fitted term x_1 (i.e. some value near 0.25 for the example above)?

Finally, I found a solution to my problem using the function gam from the package gam:
require(gam)
mod2 <- gam(y ~ lo(x_seq,span=0.75,degree=2) + x_1)
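With this gam fit, the estimate for the linear term can be read off directly; a quick check (assuming coef() exposes the parametric coefficients, as it does for lm-style fits):
coef(mod2)["x_1"]   # should be close to the true coefficient 0.25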
However, the fits from the two models are not exactly the same (which might be due to different control settings?)...
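To see how close the two fits are, one can overlay them; a small sketch (the plotting choices are mine):
plot(x_seq, y, pch = 16, col = "grey")
lines(x_seq, fitted(mod), col = "red")    # loess fit
lines(x_seq, fitted(mod2), col = "blue")  # gam fit
legend("topleft", legend = c("loess", "gam"), col = c("red", "blue"), lty = 1)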

Related

Categorical Regression with Centered Levels

R's standard way of doing regression on categorical variables is to select one factor level as a reference level and constrain the effect of that level to be zero. Instead of constraining a single level effect to be zero, I'd like to constrain the sum of the coefficients to be zero.
I can hack together coefficient estimates for this manually after fitting the model the standard way:
x <- lm(data = mtcars, mpg ~ factor(cyl))
z <- c(coef(x), "factor(cyl)4" = 0)
y <- mean(z[-1])
z[-1] <- z[-1] - y
z[1] <- z[1] + y
z
## (Intercept) factor(cyl)6 factor(cyl)8 factor(cyl)4
## 20.5021645 -0.7593074 -5.4021645 6.1614719
But that leaves me without standard error estimates for the former reference level that I just added as an explicit effect, and I need to have those as well.
I did some searching and found the contrasts functions, and tried
lm(data = mtcars, mpg ~ C(factor(cyl), contr = contr.sum))
but this still only produces two effect estimates. Is there a proper way to change which constraint R uses in linear regression on categorical variables?
Think I've figured it out. Using contrasts actually is the right way to go about it; you just need to do a little work to get the results into a convenient-looking form. Here's the fit:
fit <- lm(data = mtcars, mpg ~ C(factor(cyl), contr = contr.sum))
The contrast matrix cs <- contr.sum(levels(factor(mtcars$cyl))) is then used to get the effect estimates and their standard errors.
The effect estimates just come from multiplying the contrast matrix by the effect estimates lm spits out, like so:
cs %*% coef(fit)[-1]
The standard errors come from the contrast matrix and the variance-covariance matrix of the coefficients, like so:
sqrt(diag(cs %*% vcov(fit)[-1,-1] %*% t(cs)))
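Putting the pieces together, reusing fit and cs from above (a small sketch; the row order follows levels(factor(mtcars$cyl))):
eff <- drop(cs %*% coef(fit)[-1])                      # sum-to-zero effect estimates
se  <- sqrt(diag(cs %*% vcov(fit)[-1, -1] %*% t(cs)))  # their standard errors
cbind(effect = eff, se = se)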

Partial residual plot based on model average coefficients in R

I'm using the R package MuMIn to do multimodel inference and the function model.avg to average the coefficients estimated by a set of models. To visually compare the data to the estimated relationships based on the averaged coefficients, I want to use partial residual plots, similar to the ones created by the crPlots function of the car package. I've tried three ways and I'm not sure whether any is appropriate. Here is a demonstration.
library(MuMIn)
# Loading the data
data(Cement)
# Creating a full model with all the covariates we are interested in
fullModel <- lm(y ~ ., data = Cement, na.action=na.fail)
# Getting all possible models based on the covariates of the full model
muModel <- dredge(fullModel)
# Averaging across all models
avgModel <- model.avg(muModel)
# Getting the averaged coefficients
coefMod <- coef(avgModel)
coefMod
# (Intercept) X1 X2 X4 X3
# 65.71487660 1.45607957 0.61085531 -0.49776089 -0.07148454
Option 1: Using crPlots
library(car) # For crPlots
# Creating a duplicate of the fullModel
hackModel <- fullModel
# Changing the coefficients to the averaged coefficients
hackModel$coefficients <- coefMod[names(coef(fullModel))]
# Changing the residuals
hackModel$residuals <- Cement$y - predict(hackModel)
# Plot the hacked model vs the full model
layout(matrix(1:8, nrow=2, byrow=TRUE))
crPlots(hackModel, layout=NA)
crPlots(fullModel, layout=NA)
Notice that the crPlots of the full model and of the hacked version with the averaged coefficients are different.
The question here is: Is this appropriate? The results rely on a hack that I found in this answer. Do I need to change parts of the model other than the residuals and the coefficients?
Option 2: Homemade plots
# Partial residuals: residuals(hacked model) + beta*x
# X1
# Get partial residuals
prX1 <- resid(hackModel) + coefMod["X1"]*Cement$X1
# Plot the partial residuals
plot(prX1 ~ Cement$X1)
# Add modeled relationship
abline(a=0,b=coefMod["X1"])
# X2 - X4
plot(resid(hackModel) + coefMod["X2"]*X2 ~ X2, data=Cement); abline(a=0,b=coefMod["X2"])
plot(resid(hackModel) + coefMod["X3"]*X3 ~ X3, data=Cement); abline(a=0,b=coefMod["X3"])
plot(resid(hackModel) + coefMod["X4"]*X4 ~ X4, data=Cement); abline(a=0,b=coefMod["X4"])
The plot looks different from the ones produced by the crPlots above.
The partial residuals have similar patterns, but their values and modeled relationships are different. The difference in values appears to be due to the fact that crPlots uses centered partial residuals (see this answer for a discussion of partial residuals in R). This brings me to my third option.
Option 3: Homemade plots with centered partial residuals
# Get the centered partial residuals
pRes <- resid(hackModel, type='partial')
# X1
# Plot the partial residuals
plot(pRes[,"X1"] ~ Cement$X1)
# Plot the component - modeled relationship
lines(coefMod["X1"]*(X1-mean(X1))~X1, data=Cement)
# X2 - X4
plot(pRes[,"X2"] ~ Cement$X2); lines(coefMod["X2"]*(X2-mean(X2))~X2, data=Cement)
plot(pRes[,"X3"] ~ Cement$X3); lines(coefMod["X3"]*(X3-mean(X3))~X3, data=Cement)
plot(pRes[,"X4"] ~ Cement$X4); lines(coefMod["X4"]*(X4-mean(X4))~X4, data=Cement)
Now we have values similar to the crPlots above, but the relationships are still different. The difference may be related to the intercepts, but I'm not sure what I should use instead of 0.
Any suggestions of which method is more appropriate? Is there a more straightforward way to get the partial residual plots based on model averaged coefficients?
Many thanks!
From looking at the crPlot.lm source code, it looks like only residuals(model, type="partial"), predict(model, type="terms", term=var), and functions that find the names of the variables are used on the model object. It also looks like the relationship is regressed, as @BenBolker suggested; the code used in crPlot.lm is abline(lm(partial.res[,var]~.x), lty=2, lwd=lwd, col=col.lines[1]). Thus, I think that changing the coefficients and residuals of the model is sufficient to be able to use crPlots on it. I can now also reproduce the results in a homemade way.
library(MuMIn)
# Loading the data
data(Cement)
# Creating a full model with all the covariates we are interested in
fullModel <- lm(y ~ ., data = Cement, na.action=na.fail)
# Getting all possible models based on the covariates of the full model
muModel <- dredge(fullModel)
# Averaging across all models
avgModel <- model.avg(muModel)
# Getting the averaged coefficients
coefMod <- coef(avgModel)
# Option 1 - crPlots
library(car) # For crPlots
# Creating a duplicate of the fullModel
hackModel <- fullModel
# Changing the coefficients to the averaged coefficients
hackModel$coefficients <- coefMod[names(coef(fullModel))]
# Changing the residuals
hackModel$residuals <- Cement$y - predict(hackModel)
# Plot the crPlots and the regressed homemade version
layout(matrix(1:8, nrow=2, byrow=TRUE))
par(mar=c(3.5,3.5,0.5,0.5), mgp=c(2,1,0))
crPlots(hackModel, layout=NA, ylab="Partial Res", smooth=FALSE)
# Option 4 - Homemade centered and regressed
# Get the centered partial residuals
pRes <- resid(hackModel, type='partial')
# X1 - X4: plot partial residuals and use lm for the relationship
plot(pRes[,"X1"] ~ Cement$X1); abline(lm(pRes[,"X1"]~Cement$X1))
plot(pRes[,"X2"] ~ Cement$X2); abline(lm(pRes[,"X2"]~Cement$X2))
plot(pRes[,"X3"] ~ Cement$X3); abline(lm(pRes[,"X3"]~Cement$X3))
plot(pRes[,"X4"] ~ Cement$X4); abline(lm(pRes[,"X4"]~Cement$X4))

Modifying a curve to prevent singular gradient matrix at initial parameter estimates

I want to use y=a^(b^x) to fit the data below,
y <- c(1.0385, 1.0195, 1.0176, 1.0100, 1.0090, 1.0079, 1.0068, 1.0099, 1.0038)
x <- c(3,4,5,6,7,8,9,10,11)
data <- data.frame(x,y)
When I use the non-linear least squares procedure,
f <- function(x,a,b) {a^(b^x)}
(m <- nls(y ~ f(x,a,b), data = data, start = c(a=1, b=0.5)))
it produces an error: singular gradient matrix at initial parameter estimates. The result should be roughly a = 1.1466, b = 0.6415, so there shouldn't be a problem with the initial parameter estimates, as I have defined them as a=1, b=0.5.
I have read in other threads that it can help to transform the curve. I was thinking about something like log(y) = log(a) * b^x, but I don't know how to specify such a function. Any ideas?
I will expand my comment into an answer.
If I use the following:
y <- c(1.0385, 1.0195, 1.0176, 1.0100, 1.0090, 1.0079, 1.0068, 1.0099, 1.0038)
x <- c(3,4,5,6,7,8,9,10,11)
data <- data.frame(x,y)
f <- function(x,a,b) {a^b^x}
(m <- nls(y ~ f(x,a,b), data = data, start = c(a=0.9, b=0.6)))
or
(m <- nls(y ~ f(x,a,b), data = data, start = c(a=1.2, b=0.4)))
I obtain:
Nonlinear regression model
model: y ~ f(x, a, b)
data: data
a b
1.0934 0.7242
residual sum-of-squares: 0.0001006
Number of iterations to convergence: 10
Achieved convergence tolerance: 3.301e-06
I always obtain an error if I use 1 as a starting value for a, perhaps because 1 raised to any power is 1, so at a = 1 the curve does not respond to changes in b and the gradient matrix becomes singular.
As for automatically generating starting values, I am not familiar with a procedure to do that. One method I have read about is to simulate curves and use starting values that generate a curve that appears to approximate your data.
Here is the plot generated from the above parameter estimates, using the following code. I admit that the lower-right portion of the line could perhaps fit a little better:
setwd('c:/users/mmiller21/simple R programs/')
jpeg(filename = "nlr.plot.jpeg")
plot(x,y)
curve(1.0934^(0.7242^x), from=0, to=11, add=TRUE)
dev.off()
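As a side note, the linearizing transformation mentioned in the question can also be given to nls directly. A minimal sketch, reparameterizing la = log(a) and using starting values of my own choosing near the estimates above (this is not part of the original answer):
# fit log(y) = la * b^x, where la = log(a)
m_log <- nls(log(y) ~ la * b^x, data = data, start = c(la = 0.1, b = 0.6))
exp(coef(m_log)["la"])   # back-transform to recover a
coef(m_log)["b"]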

change null hypothesis in lmtest in R

I have a linear model generated using lm. I use the coeftest function in the package lmtest to test a hypothesis with my desired vcov from the sandwich package. The default null hypothesis is beta = 0. What if I want to test beta = 1, for example? I know I can simply take the estimated coefficient, subtract 1 and divide by the provided standard error to get the t-stat for my hypothesis. However, there must be functionality for this already in R. What is the right way to do this?
MWE:
require(lmtest)
require(sandwich)
set.seed(123)
x = 1:10
y = x + rnorm(10)
mdl = lm(y ~ x)
z = coeftest(mdl, df=Inf, vcov=NeweyWest)
b = z[2,1]
se = z[2,2]
mytstat = (b-1)/se
print(mytstat)
The formally correct way to do this:
require(multcomp)
zed = glht(model=mdl, linfct=matrix(c(0,1), nrow=1, ncol=2), rhs=1, alternative="two.sided", vcov.=NeweyWest)
summary(zed)
Use an offset of -1*x
mdl<-lm(y~x)
mdl2 <- lm(y ~ x-offset(x) )
> mdl
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept) x
0.5255 0.9180
> mdl2
Call:
lm(formula = y ~ x - offset(x))
Coefficients:
(Intercept) x
0.52547 -0.08197
You can look at summary(mdl2) to see the p-value for the x term, which now tests the hypothesis that the original slope equals 1 (the standard error is the same as in mdl).
As far as I know, there is no built-in function to test the model coefficients against an arbitrary value (1 in your case). There is the offset trick presented in the other answer, but it's not that straightforward (and always be careful with such model modifications). So your expression (b-1)/se is actually a good way to do it (see the short sketch after the notes below).
I have two notes on your code:
You can use summary(mdl) to get the t-test for 0.
You are using lmtest with a covariance structure (which will change the t-test values), but your original lm model doesn't account for it. Perhaps this could be a problem? Perhaps you should fit a model that specifies the correlation structure from the start (e.g. gls from the nlme package rather than plain lm).
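To complete the manual route from the question, the two-sided p-value for beta = 1 can be read off mytstat directly; a quick sketch, assuming the normal reference distribution implied by df=Inf in the coeftest call:
pval <- 2 * pnorm(-abs(mytstat))   # two-sided p-value for H0: beta = 1
print(pval)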

How to manually specify outer knots for smoother in gam (mgcv package)

I am fitting GAM models to data using the mgcv package in R. Some of my predictors are circular, so I am using a periodic smoother. I run into an issue in cross-validation where my holdout dataset can contain values outside the range of the training data. Since gam automatically chooses knots for the smooths, this leads to an error (see my related question here -- thanks to @nograpes and @DWin for their explanations of the errors there).
How can I manually specify the outer knots in a periodic smooth?
Example code
The first block generates some data.
library(mgcv)
set.seed(223) # produces error.
# set.seed(123) # no error.
# generate data:
x <- runif(100,min=-pi,max=pi)
linPred <- 2*cos(x) # value of the linear predictor
theta <- 1 / (1 + exp(-linPred)) # inverse-logit link
y <- rbinom(100,1,theta)
plot(x,theta)
df <- data.frame(x=x,y=y)
The next block fits the GAM model with the periodic smooth:
gamFit <- gam(y ~ s(x,bs="cc",k=5),data=df,family=binomial())
summary(gamFit)
plot(gamFit)
I'm sure the knots can be set somewhere in the specification of the smoother term s(x,bs="cc",k=5), but this is not obvious to me from the help for gam or from googling.
This block will fit some holdout data and produce the error if you set the seed as above:
# predict y values for new data:
x.2 <- runif(100,min=-pi,max=pi)
df.2 <- data.frame(x=x.2)
predict(gamFit,newdata=df.2)
Ideally, I would only set the outer knots and let gam pick the rest.
Apologies if this question is better for CrossValidated than SO.
Try this:
gamFit <- gam(y ~ s(x, bs = "cc", k = 5),
              knots = list(x = seq(-pi, pi, len = 5)),
              data = df, family = binomial())
You will find a worked example at:
?smooth.construct.cr.smooth.spec
I learned in testing this code that the k parameter in s() needs to match the length of the knot sequence supplied for x in the knots list (len = 5 above). I had thought, incorrectly, that the knots argument would get passed on to s().
You can do this in {mgcv} now, and have been able to for some years (though perhaps not at the time the question was posed and answered). Using the model in @IRTFM's answer, one can just specify the outer knots for a cyclic CRS:
gamFit <- gam(y ~ s(x, bs = "cc"),
              knots = list(x = c(-pi, pi)),
              data = df, family = binomial())
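With the boundary knots pinned at -pi and pi, the prediction step from the question should no longer fail; a quick check, reusing the holdout data frame df.2 generated above:
# holdout x values now fall inside the knot range
predict(gamFit, newdata = df.2)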
