My current problem is to calculate the variance explained by the different variables of a generalized additive model (GAM) with R.
I followed the explanation given by Wood here:
https://stat.ethz.ch/pipermail/r-help/2007-October/142743.html
But I would like to do it with three variables.
I tried this:
library(mgcv)
set.seed(0)
n<-400
x1 <- runif(n, 0, 1)
x2 <- runif(n, 0, 1)
x3 <- runif(n, 0, 1)
f1 <- function(x) exp(2 * x) - 3.75887
f2 <- function(x) 0.2*x^11*(10*(1-x))^6+10*(10*x)^3*(1-x)^10
f3 <- function(x) 0.008*x^2 - 1.8*x + 874
f <- f1(x1) + f2(x2) + f3(x3)
e <- rnorm(n, 0, 2)
y <- f + e
b <- gam(y ~ s(x1, k = 3)+s(x2, k = 3)+ s(x3, k = 3))
b3 <- gam(y ~ s(x1) + s(x2), sp = c(b$sp[1], b$sp[2]))
b2 <- gam(y ~ s(x1) + s(x3), sp = c(b$sp[1], b$sp[3]))
b1 <- gam(y ~ s(x2) + s(x3), sp = c(b$sp[2], b$sp[3]))
b0 <- gam(y~1)
(deviance(b1)-deviance(b))/deviance(b0)
(deviance(b2)-deviance(b))/deviance(b0)
(deviance(b3)-deviance(b))/deviance(b0)
But I don't understand the results. For example, the model with only x1 and x2 has a smaller deviance than the model with all three explanatory variables.
Is the method I used to extract the variance explained by each variable correct with three variables?
Does it mean that there is a confounding effect in the global model? Or is there another explanation?
Thanks a lot.
You did something wrong here:
b <- gam(y ~ s(x1, k = 3) + s(x2, k = 3) + s(x3, k = 3))
b3 <- gam(y ~ s(x1) + s(x2), sp = c(b$sp[1], b$sp[2]))
b2 <- gam(y ~ s(x1) + s(x3), sp = c(b$sp[1], b$sp[3]))
b1 <- gam(y ~ s(x2) + s(x3), sp = c(b$sp[2], b$sp[3]))
Why did you set k = 3 in the first line while not setting k = 3 for the rest? Without specifying k, s() takes the default value k = 10. Now you have a problem: b1, b2 and b3 are not nested in b.
In Simon Wood's original example, he left k unspecified, so that k = 10 is taken for all s() terms. In fact, you can vary the k values, but you must guarantee that you always use the same k for the same covariate (to ensure nesting). For example, you can do:
b <- gam(y ~ s(x1, k = 4) + s(x2, k = 6) + s(x3, k = 3))
b3 <- gam(y ~ s(x1, k = 4) + s(x2, k = 6), sp = c(b$sp[1], b$sp[2])) ## dropping s(x3) from b
b2 <- gam(y ~ s(x1, k = 4) + s(x3, k = 3), sp = c(b$sp[1], b$sp[3])) ## dropping s(x2) from b
b1 <- gam(y ~ s(x2, k = 6) + s(x3, k = 3), sp = c(b$sp[2], b$sp[3])) ## dropping s(x1) from b
Then let's do:
(deviance(b1)-deviance(b))/deviance(b0)
# [1] 0.2073421
(deviance(b2)-deviance(b))/deviance(b0)
# [1] 0.4323154
(deviance(b3)-deviance(b))/deviance(b0)
# [1] 0.02094997
The positive values imply that dropping any model term inflates the deviance, which is sensible since the true model contains all three terms.
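As a quick sanity check (a small sketch, not part of the original decomposition), the total share of deviance explained by the full model can be compared with what summary() reports:
## total proportion of deviance explained by the full model b
(deviance(b0) - deviance(b)) / deviance(b0)
## summary.gam reports the same quantity as "Deviance explained"
summary(b)$dev.expl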
I am using the 'KFAS' package in R to estimate a state-space model with the Kalman filter. My measurement and transition equations are:
y_t = b_0 + b_1 * xx_t + Z_t * x_t + ε_t (measurement)
x_t = T_t * x_{t-1} + R_t * η_t (transition),
with ε_t ~ N(0, H_t) and η_t ~ N(0, Q_t),
where xx_t are covariates. I have read this question and wrote the following code:
library(KFAS)
set.seed(100)
xx <- rnorm(200)
beta0 <- 0.1
beta1 <- 0.1
eps <- rt(200, 4, 1)
y <- as.matrix(beta0 + beta1*xx + arima.sim(n = 200, list(ar = 0.6), innov = rnorm(200)*sqrt(0.5)) + eps)
Zt <- 1
Ht <- matrix(NA)
Tt <- matrix(NA)
Rt <- 1
Qt <- matrix(NA)
ss_model <- SSModel(y ~ xx + SSMcustom(Z = Zt, T = Tt, R = Rt,
Q = Qt), H = Ht)
updatefn <- function(pars, model) {
model$H[1] <- pars[1]
model$T[1] <- pars[2]
model$Q[1] <- pars[3]
model
}
fit <- fitSSM(ss_model, c(1, 0.5, 1), updatefn, method = "L-BFGS-B",
lower = c(0, -0.99, 0), upper = c(100, 0.99, 100))
I get the error
Error in is.SSModel(do.call(updatefn, args = c(list(inits, model), update_args)), :
System matrices (excluding Z) contain NA or infinite values, covariance matrices contain values larger than 1e+07
I have tried changing the initial vector to c(1, 0.5, 1, 1, 1), but it returns the same message. Does anyone know how I can do this?
Thanks!
I'm trying to run a simple OLS regression with a restriction that the sum of the coefficients of two variables add up to 1.
I want:
Y = α + β1 * x1 + β2 * x2 + β3 * x3,
where β1 + β2 = 1
I have found how to impose a relation between coefficients such as:
β1 = 2 * β2
But I haven't found how to impose a restriction like:
β1 = 1 - β2
How would I do it in this simple example?
data <- data.frame(
A = c(1,2,3,4),
B = c(3,2,2,3),
C = c(3,3,2,3),
D = c(5,3,3,4)
)
lm(D ~ A + B + C, data = data)
Thanks!
β1 + β2 = 1
To have β1 + β2 = 1 the model you have to fit is
fit <- lm(Y ~ offset(x1) + I(x2 - x1) + x3, data = df)
That is,
Y = α + 1 * x1 + β2 * (x2 - x1) + β3 * x3,
which you get by substituting β1 = 1 - β2 into the original model:
Y = α + (1 - β2) * x1 + β2 * x2 + β3 * x3 = α + x1 + β2 * (x2 - x1) + β3 * x3.
The new covariate is x2 - x1, and offset(x1) fixes the coefficient of x1 at 1.
β1 + β2 + β3 = 1
fit <- lm(Y ~ offset(x1) + I(x2 - x1) + I(x3 - x1), data = df)
Y = α + x1 + β2 * (x2 - x1) + β3 * (x3 - x1)
after substituting β1 = 1 - β2 - β3
β1 + β2 + β3 + ... = 1
I think the pattern is clear... you just subtract one variable, x1, from each of the remaining variables (x2, x3, ...) and fix the coefficient of that variable, x1, at 1.
Example β1 + β2 = 1
# Data
df <- iris[, 1:4]
colnames(df) <- c("Y", paste0("x", 1:3))
# β1 + β2 = 1
fit <- lm(Y ~ offset(x1) + I(x2 - x1) + x3, data = df)
coef_2 <- coef(fit)
beta_1 <- 1 - coef_2[2]
beta_2 <- coef_2[2]
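To assemble the full set of coefficients on the original scale, a small sketch continuing the fit above (names follow the df built above):
alpha <- unname(coef_2[1])   # intercept
beta_3 <- unname(coef_2[3])  # coefficient of x3, which is unconstrained
c(alpha = alpha, beta_1 = unname(beta_1), beta_2 = unname(beta_2), beta_3 = beta_3)
## beta_1 + beta_2 equals 1 by construction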
1) CVXR We can compute the coefficients using CVXR directly by specifying the objective and constraint. We assume that D is the response, the coefficients of A and B must sum to 1, b[1] is the intercept and b[2], b[3] and b[4] are the coefficients of A, B and C respectively.
library(CVXR)
b <- Variable(4)
X <- cbind(1, as.matrix(data[-4]))
obj <- Minimize(sum((data$D - X %*% b)^2))
constraints <- list(b[2] + b[3] == 1)
problem <- Problem(obj, constraints)
soln <- solve(problem)
bval <- soln$getValue(b)
bval
## [,1]
## [1,] 1.6428605
## [2,] -0.3571428
## [3,] 1.3571428
## [4,] -0.1428588
The objective is the residual sum of squares and it equals:
soln$value
## [1] 0.07142857
2) pracma We can also use the pracma package to compute the coefficients. We specify the X matrix, response vector, the constraint matrix (in this case the vector given as the third argument is regarded as a one row matrix) and the right hand side of the constraint.
library(pracma)
lsqlincon(X, data$D, Aeq = c(0, 1, 1, 0), beq = 1) # X is from above
## [1] 1.6428571 -0.3571429 1.3571429 -0.1428571
3) limSolve This package can also solve for the coefficients of regression problems with constraints. The arguments are the same as in (2).
library(limSolve)
lsei(X, data$D, c(0, 1, 1, 0), 1)
giving:
$X
A B C
1.6428571 -0.3571429 1.3571429 -0.1428571
$residualNorm
[1] 0
$solutionNorm
[1] 0.07142857
$IsError
[1] FALSE
$type
[1] "lsei"
4) nls This can be formulated as a problem for nls with the B coefficient equal to one minus the A coefficient.
nls(D ~ b0 + b1 * A + (1-b1) * B + b2 * C, data,
start = list(b0 = 1, b1 = 1, b2 = 1))
## D ~ b0 + b1 * A + (1 - b1) * B + b2 * C
## data: data
## b0 b1 b2
## 1.6429 -0.3571 -0.1429
## residual sum-of-squares: 0.07143
##
## Number of iterations to convergence: 1
## Achieved convergence tolerance: 2.803e-08
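The B coefficient is implied by the constraint as one minus the A coefficient; a small sketch of recovering it from the nls fit (stored here in fit_nls, a name not used above):
fit_nls <- nls(D ~ b0 + b1 * A + (1-b1) * B + b2 * C, data,
start = list(b0 = 1, b1 = 1, b2 = 1))
1 - coef(fit_nls)[["b1"]]
## about 1.357, matching the B coefficient from the other approaches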
Check
We can double check the above by using the lm approach in the other answer:
lm(D ~ I(A-B) + C + offset(B), data)
giving:
Call:
lm(formula = D ~ I(A - B) + C + offset(B), data = data)
Coefficients:
(Intercept) I(A - B) C
1.6429 -0.3571 -0.1429
The I(A - B) coefficient equals the coefficient of A in the original formulation, and one minus it is the coefficient of B. We see that all approaches lead to the same coefficients.
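As a final sanity check, a one-liner using the CVXR solution computed above confirms that the A and B coefficients satisfy the constraint:
bval[2] + bval[3]
## [1] 1 (up to solver tolerance)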
I want to write an MLE for a finite mixture model in R, but the coefficients estimated by my model are not the same as the coefficients estimated by the flexmix package. I wonder if you can point out my mistakes.
My code is as follows:
#prepare data
slope1 <- -.3; slope2 <- .3; slope3 <- 1.8; slope4 <- 0.5; intercept1 <- 1.5
age <- sample(seq(18,60,len=401), 200)
grade <- sample(seq(0,100,len=401), 200)
not_smsa <- sample(seq(-2,2,len=401), 200)
unemployment <- rnorm(200,mean=0,sd=1)
wage <- intercept1 + slope1*age +slope2*grade + slope3*not_smsa + rnorm(length(age),0,.15)
y <- wage
X <- cbind(1, age , grade , not_smsa)
mydata <- cbind.data.frame(X,y)
anso <- lm(wage ~ age + grade + not_smsa,
data = mydata)
vi <- c(coef(anso),0.01,0.02,0.03,0.04,0.1)
#function
fmm <- function(beta) {
mu1 <- c(X %*% beta[1:4])
mu2 <- c(X %*% beta[5:8])
p1 <- 1 / (1 + exp(-beta[9]))
p2 <- 1-p1
llk <- p1*dnorm(y,mu1)+p2*dnorm(y,mu2)
-sum(log(llk),na.rm=T)
}
fit <- optim(vi,fmm , method = "BFGS", control = list(maxit=50000), hessian = TRUE)
fit$par
library(flexmix)
flexfit <- flexmix(wage ~ age + grade + not_smsa, data = mydata, k = 2)
flexfit$par
c1 <- parameters(flexfit,component=1)
c2 <- parameters(flexfit, component=2)
Are there any mistakes in my code?
I have fixed the mistakes in my code; the parameters of the main function needed some constraints.
fmm <- function(pars) {
beta1 <- pars[1:4]
sigma1 <- log(1 + exp(pars[5]))   # softplus keeps sigma1 positive
beta2 <- pars[6:9]
sigma2 <- log(1 + exp(pars[10]))  # softplus keeps sigma2 positive
p1 <- 1 / (1 + exp(-pars[11]))    # logistic keeps the mixing weight in (0, 1)
p2 <- 1 - p1
mu1 <- c(X %*% beta1)
mu2 <- c(X %*% beta2)
llk <- p1*dnorm(y, mu1, sigma1) + p2*dnorm(y, mu2, sigma2)
-sum(log(llk), na.rm = TRUE)
}
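With this parameterization there are 11 parameters (4 regression coefficients plus one scale per component, plus the mixing weight), so the starting vector must have length 11. A minimal sketch of the corresponding optim call, with illustrative (not carefully chosen) starting values:
vi2 <- c(coef(anso), 0, coef(anso) + 0.1, 0, 0)  # beta1, sigma1, beta2, sigma2, logit(p1)
fit <- optim(vi2, fmm, method = "BFGS", control = list(maxit = 50000), hessian = TRUE)
fit$par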
data is:
d <- data.frame(x = rnorm(100, 0, 1),
y = rnorm(100, 0, 1),
z = rnorm(100, 0, 1))
function to fit 4 models
library(splines)
func <-function(d){
fit1 <- lm( y~ x + z, data = d)
fit2 <- lm( y~x + I(z^2), data = d)
fit3 <- lm( y~poly(x,3) + z, data = d)
fit4 <- lm( y~ns(x, 3) + z, data = d)
l <- list(fit1, fit2, fit3, fit4)
names(l) <- paste0("fit", 1:4)
return(l)
}
mods <- func(d)
mods[[1]]
library(stargazer)
stargazer(mods, type = "text")
I want to construct the actual regression equation for each of the models by taking the fitted parameters and the independent variables automatically within R, if that is possible. For example, for the fit1 model, intercept = -0.20612, x = 0.17443, z = 0.03203, so the equation would be something like y = -0.206 + 0.174x + 0.032z, and I want to list these equations for all models in a table along with common summary statistics like R2, p-value, adj. R2, number of observations, etc. stargazer is not showing me my desired output, so I want to make sure whether there is any way to do this in R without doing it manually in Excel.
Thanks in advance!
We can map through mods using @J.R.'s regEq function here and broom::glance to get each model's R2, p-value, and adj. R2.
library(purrr)
library(broom)
map_dfr(mods,
function(x) data.frame('Eq'=regEq(lmObj = x, dig = 3), broom::glance(x), stringsAsFactors = FALSE),
.id='Model')
Model Eq r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
1 fit1 y = 0.091 - 0.022*x - 0.027*z 0.0012601436 -0.01933243 1.028408 0.06119408 0.9406769 3 -143.1721 294.3441 304.7648
2 fit2 y = 0.093 - 0.022*x - 0.003*I(z^2) 0.0006154188 -0.01999045 1.028740 0.02986619 0.9705843 3 -143.2043 294.4087 304.8294
3 fit3 y = 0.093 - 0.248*poly(x, 3)1 - 0.186*poly(x, 3)2 - 0.581*poly(x, 3)3 - 0.031*z 0.0048717358 -0.03702840 1.037296 0.11627016 0.9764662 5 -142.9909 297.9819 313.6129
4 fit4 y = 0.201 + 0.08*ns(x, 3)1 - 0.385*ns(x, 3)2 - 0.281*ns(x, 3)3 - 0.031*z 0.0032813558 -0.03868575 1.038125 0.07818877 0.9887911 5 -143.0708 298.1416 313.7726
deviance df.residual
1 102.5894 97
2 102.6556 97
3 102.2184 95
4 102.3818 95
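regEq above is the helper from the linked answer and is not reproduced in this post; a minimal sketch of what such a helper might look like (hypothetical implementation, details may differ from the original) is:
regEq <- function(lmObj, dig = 3) {
co <- round(coef(lmObj), dig)
resp <- all.vars(formula(lmObj))[1]
rhs <- paste(co[-1], names(co)[-1], sep = "*", collapse = " + ")
eq <- paste0(resp, " = ", co[1], " + ", rhs)
gsub("\\+ -", "- ", eq)  # print "- 0.02*x" rather than "+ -0.02*x"
}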
The problem is that your models do not all fit the same tabular layout; for example, fit3 returns 5 coefficient estimates while fit1 returns just 3.
If you are comfortable with lists, I would suggest they are a great way of storing this kind of information.
library(broom)
library(tidyverse)
library(splines)
d <- data.frame(x = rnorm(100, 0, 1),
y = rnorm(100, 0, 1),
z = rnorm(100, 0, 1))
func <-function(d){
fit1 <- lm( y~ x + z, data = d)
fit2 <- lm( y~x + I(z^2), data = d)
fit3 <- lm( y~poly(x,3) + z, data = d)
fit4 <- lm( y~ns(x, 3) + z, data = d)
l <- list(fit1, fit2, fit3, fit4)
names(l) <- paste0("fit", 1:4)
return(l)
}
mods <- func(d)
list_representation <- map(mods, tidy)
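If a single table is wanted after all, a short sketch that stacks the tidy outputs while keeping the model name (bind_rows comes from dplyr, loaded via tidyverse above):
bind_rows(list_representation, .id = "model")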
Assuming mods as shown in the Note at the end, and that what is wanted is a character vector giving a text representation of the formulas with the coefficients substituted in, we have the following.
The fit2text function takes a fitted object and outputs a character string with the text representation of the formula. The round argument gives the number of digits that the coefficients are rounded to in the result. The rmI argument, if TRUE, removes any I(...) wrapper and just leaves the ... inside, assuming, for ease of implementation, that the expression inside does not contain any parentheses. If FALSE, then I() is not removed.
Other statistics can be extracted from summary(mods[[1]]) or broom::glance(mods[[1]])
fit2text <- function(fit, round = 2, rmI = TRUE) {
fo <- formula(fit)
resp <- all.vars(fo)[1]
co <- round(coef(fit), round)
labs <- c(if (attr(terms(fit), "intercept") == 1) "", labels(fit))
p <- gsub("\\+ *-", "- ", paste(resp, "~ ", paste(paste(co, labs), collapse = " + ")))
p2 <- if (rmI) gsub("I\\(([^)]+)\\)", "\\1", p) else p
gsub(" +", " ", p2)
}
sapply(mods, fit2text)
giving:
fit1
"y ~ -0.11 - 0.05 x + 0.03 z"
fit2
"y ~ -0.07 - 0.05 x - 0.04 z^2"
fit3
"y ~ -0.11 - 0.43 poly(x, 3) - 1.05 z + 0.27 + 0.04 poly(x, 3)"
fit4
"y ~ -0.55 + 0.23 ns(x, 3) + 0.79 z - 0.25 + 0.04 ns(x, 3)"
Note
The code in the question was not reproducible because the library calls were missing, it used random numbers without set.seed, and there were some further errors in the code. For clarity, we provide the following reproducible code that we used to produce the input for the above answer.
library(splines)
set.seed(123)
d <- data.frame(x = rnorm(100, 0, 1),
y = rnorm(100, 0, 1),
z = rnorm(100, 0, 1))
# function to fit 4 models
func <-function(d){
fit1 <- lm( y~ x + z, data = d)
fit2 <- lm( y~x + I(z^2), data = d)
fit3 <- lm( y~poly(x,3) + z, data = d)
fit4 <- lm( y~ns(x, 3) + z, data = d)
l <- list(fit1, fit2, fit3, fit4)
names(l) <- paste0("fit", 1:4)
return(l)
}
mods <- func(d)
I seem to have a problem with the splines::ns() function in R.
I created a simple dummy problem
dat <- data.frame(t <- seq(0, 6, .01),
x <- rnorm(length(t), sd = 1),
y <- 5 + t - x^2 + rnorm(length(t), sd = .33))
lm(y ~ t + I(x^2), data = dat)
library(splines)
lm(y ~ t + ns(x, knots = c(0), Boundary.knots = c(-3, 3)), data = dat)
While the first model works fine, the second one fails to identify the intercept correctly. What am I missing here?
There is nothing wrong, because you are not fitting exactly the same model, and they are not even equivalent.
To explain the difference you see, it is sufficient to use a simpler example with a single covariate x. We generate data from a quadratic polynomial, 5 + x + x^2, then fit several models.
set.seed(0)
x <- rnorm(500, mean = 1) ## `x` with non-zero mean
y <- 5 + x + x * x + rnorm(500, sd = 0.5)
library(splines)
fit1 <- lm(y ~ x + I(x^2))
#(Intercept) x I(x^2)
# 4.992 1.032 0.980
fit2 <- lm(y ~ poly(x, degree = 2))
#(Intercept) poly(x, degree = 2)1 poly(x, degree = 2)2
# 7.961 70.198 28.720
fit3 <- lm(y ~ bs(x, degree = 2, df = 2))
#(Intercept) bs(x, degree = 2, df = 2)1 bs(x, degree = 2, df = 2)2
# 6.583 -8.337 20.650
fit4 <- lm(y ~ ns(x, df = 2))
#(Intercept) ns(x, df = 2)1 ns(x, df = 2)2
# 5.523 10.737 21.265
The first 3 models are not the same in terms of parameterization, but they are equivalent: they are all fitting a quadratic polynomial with 3 degrees of freedom. To see their equivalence, we check their fitted values:
sum(abs(fit1$fitted - fit2$fitted))
# [1] 1.54543e-13
sum(abs(fit1$fitted - fit3$fitted))
# [1] 2.691181e-13
To see the difference in parameterization, we look at the design matrix:
X1 <- model.matrix(~ x + I(x^2))
X2 <- model.matrix(~ poly(x, degree = 2))
X3 <- model.matrix(~ bs(x, degree = 2, df = 2))
par(mfrow = c(3,3), oma = rep.int(1,4), mar = c(4, 4, 0, 0))
plot(x, X1[, 1], cex = 0.2)
plot(x, X1[, 2], cex = 0.2)
plot(x, X1[, 3], cex = 0.2)
plot(x, X2[, 1], cex = 0.2)
plot(x, X2[, 2], cex = 0.2)
plot(x, X2[, 3], cex = 0.2)
plot(x, X3[, 1], cex = 0.2)
plot(x, X3[, 2], cex = 0.2)
plot(x, X3[, 3], cex = 0.2)
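As a numeric counterpart to the plots, a small sketch verifying that the three design matrices span the same column space: projecting X2 and X3 onto the columns of X1 leaves essentially zero residuals.
max(abs(lm.fit(X1, X2)$residuals))
max(abs(lm.fit(X1, X3)$residuals))
## both are on the order of machine precision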
Since the design matrices are not the same (either in shape or in scale), you will not end up with the same set of coefficients. In case you are surprised, let's try an even simpler example:
x1 <- x - mean(x)
test <- lm(y ~ x1 + I(x1^2))
#(Intercept) x1 I(x1^2)
# 7.003 2.991 0.980
sum(abs(fit1$fitted - test$fitted))
# [1] 1.24345e-13
Here, I have just applied a simple transformation to x, and the result is different (but still equivalent).
The 4th model, fit4, is fitting a natural cubic spline with 3 degrees of freedom, so it is not equivalent to any of the previous models. We can check the fitted values:
sum(abs(fit1$fitted - fit4$fitted))
# [1] 39.36563
Ignoring ns() completely, you are missing two things:
1) The comment above explaining how to define a data frame:
t <- seq(0, 6, .01)
x <- rnorm(length(t), sd = 1)
y <- 5 + t - x^2 + rnorm(length(t), sd = .33)
df <- data.frame(t, x, y)
rm(t, x, y)
2) The way you are calling your models:
lm(y ~ t + I(t^2), data=df)
lm(y ~ splines::ns(t, knots = c(0), Boundary.knots = c(-3, 3)), data=df)
The first model does not properly identify what you think it does.