Exactly like this question (link), but how do you also get the R squared value for each model?
Sample data
test <- data.frame(row = 1:16,
                   plot = c(1,1,1,1,1,2,2,2,3,3,3,3,3,3,3,3),
                   logT = c(1.092,1.091,1.0915,1.09,1.08,1.319,1.316,1.301,1.2134,1.213,1.21,1.22,1.23,1.20,1.19,1.19),
                   utc_datetime = c("2020-03-05T00:00:00Z","2020-03-05T00:30:00Z","2020-03-05T01:00:00Z","2020-03-05T01:30:00Z","2020-03-05T02:00:00Z",
                                    "2020-03-06T01:00:00Z","2020-03-06T01:30:00Z","2020-03-06T02:00:00Z",
                                    "2020-03-10T02:00:00Z","2020-03-10T02:30:00Z","2020-03-10T03:00:00Z","2020-03-10T03:30:00Z","2020-03-10T04:00:00Z","2020-03-10T04:30:00Z","2020-03-10T05:00:00Z","2020-03-10T05:30:00Z"),
                   hrs_since = c(1,2,3,4,5,1,2,3,1,2,3,4,5,6,7,8))
A deeper explanation of the data I am dealing with is here, but I believe the sample data provided above should suffice. Ideally, I would want to use utc_datetime as the x axis/IV value, but no code I've tried works with it, so I created the hrs_since variable, which works.
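(For what it's worth, here is a minimal sketch of parsing utc_datetime so it could serve as the IV directly; hrs_elapsed is an illustrative name, and it assumes the timestamps are quoted strings as in the sample data above:)
# parse ISO-8601 timestamps to POSIXct (UTC)
test$utc_datetime <- as.POSIXct(test$utc_datetime, format = "%Y-%m-%dT%H:%M:%SZ", tz = "UTC")
# hours elapsed since each plot's first reading, usable in lm() like hrs_since
test$hrs_elapsed <- ave(as.numeric(test$utc_datetime), test$plot,
                        FUN = function(t) (t - min(t)) / 3600)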
I am looking for an output dataframe that looks something like this:
plot  slope(coeff)  r2 value  rsd
   1           2.1       .96  .01
   2           1.3       .85  .01
   3            .8       .99  .02
When I run the code below...
output <- ddply(test, "plot", function(x) {
  model <- lm(logT ~ hrs_since, data = x)
  coef(model)
})
I create a dataframe that looks like this:
plot  (Intercept)  hrs_since
   1          2.1        .96
   2          1.3        .85
   3           .8        .99
But when I add summary(model)$r.squared to it, such as below...
output <- ddply(test, "plot", function(x) {
  model <- lm(logT ~ hrs_since, data = x)
  coef(model)
  summary(model)$r.squared
})
I create a dataframe that looks like this:
plot    V1
   1  0.98
   2  0.97
   3  0.89
Here the correct R squared value has been added as column V1 of "output", but I have for some reason lost the coefficient columns. Ideally, I also want to add rsd and maybe st.dev columns, but I have not attempted those yet because getting the R squared and coefficient columns correct is the most important part. Also, I originally tried r.squared(model) instead of summary(model)$r.squared on the line below coef(model), but this resulted in the error "Error in UseMethod("pmodel.response") : no applicable method for 'pmodel.response' applied to an object of class "lm"".
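(In case it helps later readers: an R function returns only its last evaluated expression, so coef(model) above is computed and then discarded. A minimal sketch that returns both statistics from one ddply call, assuming plyr is loaded:)
output <- ddply(test, "plot", function(x) {
  model <- lm(logT ~ hrs_since, data = x)
  # a named vector becomes columns (Intercept), hrs_since, r.squared
  c(coef(model), r.squared = summary(model)$r.squared)
})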
Also, I tried the method below; it worked, but the coefficients were not among the parameters returned for each plot:
output <- test %>%
  group_by(plot) %>%
  do(glance(lm(logT ~ hrs_since, data = .)))
Thank you in advance!
Here's an approach that nests each of the models in a dataframe and captures the results in the dataframe as well. It then uses the broom package to extract the statistics. Two different broom functions extract the intercept and r2, so I run them separately and combine the results into one dataframe.
library(tidyverse)  # dplyr, tidyr, purrr; broom is called with broom:: below

dat_all <- data.frame()
# nest each plot's data as a separate dataframe
for (p in unique(test$plot)) {
  data <- data.frame(logT = test$logT[test$plot == p],
                     hrs_since = test$hrs_since[test$plot == p])
  dd <- data.frame(plot = p, data = data) %>%  # columns become data.logT, data.hrs_since
    group_by(plot) %>%
    nest()
  dat_all <- rbind(dat_all, dd)
}
myModel <- function(x) {
  lm(data.logT ~ data.hrs_since, data = x)
}

# use map to run the model on each of the nested dataframes
dat_all <- dat_all %>%
  mutate(model = map(data, myModel))
# extract the intercepts
i <- dat_all %>%
  mutate(tidy = map(model, broom::tidy)) %>%
  unnest(tidy) %>%
  filter(term == "(Intercept)") %>%
  select(plot, intercept = estimate)

# extract r2
r <- dat_all %>%
  mutate(glance = map(model, broom::glance)) %>%
  unnest(glance) %>%
  select(plot, r.squared)

# combine statistics by plot
results <- i %>%
  left_join(r, by = "plot")
   plot intercept r.squared
  <dbl>     <dbl>     <dbl>
1     1      1.10     0.618
2     2      1.33     0.871
3     3      1.22     0.380
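For comparison, here is a more compact sketch of the same idea in one pipeline; it assumes tidyr >= 1.0 for the nest(data = -plot) syntax, and it pulls the slope rather than the intercept, since that is what the desired output asks for:
library(tidyverse)

results2 <- test %>%
  nest(data = -plot) %>%  # one row per plot
  mutate(model = map(data, ~ lm(logT ~ hrs_since, data = .x)),
         slope = map_dbl(model, ~ coef(.x)[["hrs_since"]]),
         r2    = map_dbl(model, ~ summary(.x)$r.squared),
         rsd   = map_dbl(model, sigma)) %>%  # residual standard deviation
  select(plot, slope, r2, rsd)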
I was wondering why lm() says "5 coefs not defined because of singularities" and then gives all NA in the summary output for those 5 coefficients.
Note that all my predictors are categorical.
Is there anything wrong with my data or my code for these 5 coefficients? How can I fix this?
d <- read.csv("https://raw.githubusercontent.com/rnorouzian/m/master/v.csv", h = T) # Data
nms <- c("Age","genre","Length","cf.training","error.type","cf.scope","cf.type","cf.revision")
d[nms] <- lapply(d[nms], as.factor) # make factor
vv <- lm(dint~Age+genre+Length+cf.training+error.type+cf.scope+cf.type+cf.revision, data = d)
summary(vv)
First 6 lines of output:
Coefficients: (5 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.17835 0.63573 0.281 0.779330
Age1 -0.04576 0.86803 -0.053 0.958010
Age2 0.46431 0.87686 0.530 0.596990
Age99 -1.64099 1.04830 -1.565 0.118949
genre2 1.57015 0.55699 2.819 0.005263 **
genre4 NA NA NA NA ## for example, this row is all `NA`s; there are 4 more like it!
As others noted, one problem is that you seem to have multicollinearity. Another is that there are missing values in your dataset. The missing values should probably just be removed. As for the correlated variables, you should inspect your data to identify the collinearity and remove the offending variables. Deciding which variables to remove and which to retain is a very domain-specific topic. However, if you wish, you could use regularisation and fit a model while retaining all variables. This also allows you to fit a model when n (the number of samples) is less than p (the number of predictors).
The code below demonstrates how to examine the correlation structure within your data and to identify which variables are most correlated (thanks to this answer). I've also included an example of fitting such a regularised model, using L2 regularisation (commonly known as ridge regression).
d <- read.csv("https://raw.githubusercontent.com/rnorouzian/m/master/v.csv", h = T) # Data
nms <- c("Age","genre","Length","cf.training","error.type","cf.scope","cf.type","cf.revision")
d[nms] <- lapply(d[nms], as.factor) # make factor
vv <- lm(dint~Age+genre+Length+cf.training+error.type+cf.scope+cf.type+cf.revision, data = d)
df <- d
df[] <- lapply(df, as.numeric)
cor_mat <- cor(as.matrix(df), use = "complete.obs")
library("gplots")
heatmap.2(cor_mat, trace = "none")
## https://stackoverflow.com/questions/22282531/how-to-compute-correlations-between-all-columns-in-r-and-detect-highly-correlate
library("tibble")
library("dplyr")
library("tidyr")
d2 <- df %>%
  as.matrix() %>%
  cor(use = "complete.obs") %>%
  ## set the diagonal (a vs a) to NA, then remove
  (function(x) {
    diag(x) <- NA
    x
  }) %>%
  as.data.frame() %>%
  rownames_to_column(var = "var1") %>%
  gather(var2, value, -var1) %>%
  filter(!is.na(value)) %>%
  ## sort by decreasing absolute correlation
  arrange(-abs(value))
## 2 pairs of variables are almost exactly correlated!
head(d2)
#> var1 var2 value
#> 1 id study.name 0.9999430
#> 2 study.name id 0.9999430
#> 3 Location timed 0.9994082
#> 4 timed Location 0.9994082
#> 5 Age ed.level 0.7425026
#> 6 ed.level Age 0.7425026
## Remove some variables here, or maybe try regularized regression (see below)
library("glmnet")
## glmnet requires matrix input
X <- d[, c("Age", "genre", "Length", "cf.training", "error.type", "cf.scope", "cf.type", "cf.revision")]
X[] <- lapply(X, as.numeric)
X <- as.matrix(X)
ind_na <- apply(X, 1, function(row) any(is.na(row)))
X <- X[!ind_na, ]
y <- d[!ind_na, "dint"]
glmnet <- glmnet(
x = X,
y = y,
## alpha = 0 is ridge regression
alpha = 0)
plot(glmnet)
Created on 2019-11-08 by the reprex package (v0.3.0)
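If you do go the ridge route, the usual next step (a sketch, not part of the reprex above) is to choose the penalty by cross-validation with cv.glmnet, reusing the X and y built above:
cvfit <- cv.glmnet(x = X, y = y, alpha = 0)
coef(cvfit, s = "lambda.min")  # coefficients at the lambda minimising CV error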
In such a situation you can use the "olsrr" package in R for stepwise regression analysis. Here is sample code for a stepwise regression analysis in R:
library("olsrr")
#Load the data
d <- read.csv("https://raw.githubusercontent.com/rnorouzian/m/master/v.csv", h = T)
# stepwise regression
vv <- lm(dint ~ Age + genre + Length + cf.training + error.type + cf.scope + cf.type + cf.revision, data = d)
summary(vv)
k <- ols_step_both_p(vv, pent = 0.05, prem = 0.1)
# stepwise regression plot
plot(k)
# final model
k$model
It will give you exactly the same output as SPSS.
As shown in the following example, what I want to achieve is to run the regression many times, with R recording the estimates of did in one data.frame each time.
Each time, I change the year condition in ifelse, e.g. ifelse(mydata$year >= 1993, 1, 0), so each run is a different regression. The line being changed is:
mydata$time = ifelse(mydata$year >= 1994, 1, 0)
Can anyone help? My basic code is below (the data can be downloaded through a browser if R returns errors):
library(foreign)
mydata = read.dta("http://dss.princeton.edu/training/Panel101.dta")
mydata$time = ifelse(mydata$year >= 1994, 1, 0)
mydata$treated = ifelse(mydata$country == "E" | mydata$country == "F" | mydata$country == "G", 1, 0)
mydata$did = mydata$time * mydata$treated
didreg = lm(y ~ treated + time + did, data = mydata)
summary(didreg)
Generally, if you want to repeat a process many times with different input each time, you need a function. The following function takes a scalar year_value as its input, creates local variables for the regression, and returns the estimates for the model term did.
foo <- function (year_value) {
  ## create local variables from `mydata`
  y <- mydata$y
  treated <- as.numeric(mydata$country %in% c("E", "F", "G"))  ## use `%in%`
  time <- as.numeric(mydata$year >= year_value)  ## use `year_value`
  did <- time * treated
  ## run regression using local variables
  didreg <- lm(y ~ treated + time + did)
  ## return estimate for model term `did`
  coef(summary(didreg))["did", ]
}
foo(1993)
# Estimate Std. Error t value Pr(>|t|)
#-2.784222e+09 1.504349e+09 -1.850782e+00 6.867661e-02
Note that there are several places where your original code can be improved: using %in% instead of multiple |, and using as.numeric instead of ifelse to coerce a boolean to numeric.
Now you need something like a loop to iterate this function over several different values of year_value. I would use lapply.
## raw list of result from `lapply`
year_of_choice <- 1993:1994 ## taken for example
result <- lapply(year_of_choice, foo)
## rbind them into a matrix
data.frame(year = year_of_choice, do.call("rbind", result), check.names = FALSE)
# year Estimate Std. Error t value Pr(>|t|)
#1 1993 -2784221881 1504348732 -1.850782 0.06867661
#2 1994 -2519511630 1455676087 -1.730819 0.08815711
Note: don't include year 1990 (the minimum of the year variable) as a choice, otherwise time will be a vector of 1s, identical to the intercept. The resulting model is rank-deficient and you will get a "subscript out of bounds" error. R versions since 3.5.0 have a new complete argument to the generic function coef, so for stability we may use
coef(summary(didreg), complete = TRUE)["did", ]
But you should see all NA or NaN for year 1990.
Here is another option: we create a matrix covering all the years, join it to mydata, gather to long format, nest by grouping, then run the regressions and extract the estimates. Note that "gt_et_**" stands for "greater than or equal to ...".
library(foreign)
library(dplyr)
library(tidyr)
library(purrr)
mydata = read.dta("http://dss.princeton.edu/training/Panel101.dta")
yrs <- min(mydata$year):max(mydata$year)
mtrx <- matrix(0, length(yrs), length(yrs))
mtrx[lower.tri(mtrx, diag = TRUE)] <- 1
df <- mtrx %>% as.data.frame() %>% mutate(year = yrs)
colnames(df) <- c(paste0("gt_et_", df$year), "year")
models <- df %>%
  full_join(., mydata, by = "year") %>%
  gather(mod, time, gt_et_1990:gt_et_1999) %>%
  nest(-mod) %>%
  mutate(data = map(data, ~ mutate(.x,
                                   treated = ifelse(country == "E" | country == "F" | country == "G", 1, 0),
                                   did = time * treated)),
         mods = map(data, ~ lm(y ~ treated + time + did, data = .x) %>% summary() %>% coef())) %>%
  unnest(mods %>% map(broom::tidy)) %>%
  filter(.rownames == "did") %>%
  select(-.rownames)
models
#> mod Estimate Std..Error t.value Pr...t..
#> 1 gt_et_1991 -2309823993 2410140350 -0.95837738 0.34137018
#> 2 gt_et_1992 -2036098728 1780081308 -1.14382344 0.25682856
#> 3 gt_et_1993 -2784221881 1504348732 -1.85078222 0.06867661
#> 4 gt_et_1994 -2519511630 1455676087 -1.73081886 0.08815711
#> 5 gt_et_1995 -2357323806 1455203186 -1.61992760 0.11001662
#> 6 gt_et_1996 250180589 1511322882 0.16553749 0.86902697
#> 7 gt_et_1997 405842197 1619653548 0.25057346 0.80292231
#> 8 gt_et_1998 -75683039 1852314277 -0.04085864 0.96753194
#> 9 gt_et_1999 2951694230 2452126428 1.20372840 0.23299421
Created on 2018-09-01 by the reprex package (v0.2.0).
In the past I've used the lm function with both matrix-type and data.frame-type data. But I guess this is the first time I've tried to use predict with a model fitted without a data.frame, and I can't figure out how to make it work.
I read some other questions (such as Getting Warning: "'newdata' had 1 row but variables found have 32 rows" on predict.lm) and I'm pretty sure my problem is related to the coefficient names I get after fitting the model. For some reason the coefficient names are a paste of the matrix name with the column names, and I haven't been able to find out how to fix that.
library(tidyverse)
library(MASS)

set.seed(1)
label <- sample(c(T, F), nrow(Boston), replace = T, prob = c(.6, .4))

x.train <- Boston %>% dplyr::filter(., label) %>%
  dplyr::select(-medv) %>% as.matrix()
y.train <- Boston %>% dplyr::filter(., label) %>%
  dplyr::select(medv) %>% as.matrix()
x.test <- Boston %>% dplyr::filter(., !label) %>%
  dplyr::select(-medv) %>% as.matrix()
y.test <- Boston %>% dplyr::filter(., !label) %>%
  dplyr::select(medv) %>% as.matrix()

fit_lm <- lm(y.train ~ x.train)
fit_lm2 <- lm(medv ~ ., data = Boston, subset = label)

predict(object = fit_lm, newdata = x.test %>% as.data.frame()) %>% length()
predict(object = fit_lm2, newdata = x.test %>% as.data.frame()) %>% length()
# they return different numbers of predictions:
# the first finds x.train in the calling environment, so its result length matches x.train, not x.test
Any help will be welcome.
I can't fix your tidyverse code because I don't work with that package, but I can explain why predict fails in the first case.
Let me just use the built-in dataset trees for a demonstration:
head(trees, 2)
# Girth Height Volume
#1 8.3 70 10.3
#2 8.6 65 10.3
The normal way to use lm is
fit <- lm(Girth ~ ., trees)
The variable names (on the RHS of ~) are
attr(terms(fit), "term.labels")
#[1] "Height" "Volume"
You need to provide these variables in the newdata when using predict.
predict(fit, newdata = data.frame(Height = 1, Volume = 2))
# 1
#11.16125
Now if you fit a model using a matrix:
X <- as.matrix(trees[2:3])
y <- trees[[1]]
fit2 <- lm(y ~ X)
attr(terms(fit2), "term.labels")
#[1] "X"
The variable you need to provide in newdata for predict is now X, not Height or Volume. Note that since X is a matrix variable, you need to protect it with I() when feeding it to a data frame.
newdat <- data.frame(X = I(cbind(1, 2)))
str(newdat)
#'data.frame': 1 obs. of 1 variable:
# $ X: AsIs [1, 1:2] 1 2
predict(fit2, newdat)
# 1
#11.16125
It does not matter that cbind(1, 2) has no column names. What is important is that this matrix is named X in newdat.
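As an aside, here is a sketch of how to sidestep the naming issue entirely (assuming the X and y defined above): fit from a data frame in the first place, so that predict sees ordinary column names.
dat <- data.frame(y = y, X)  # columns: y, Height, Volume
fit3 <- lm(y ~ ., data = dat)
attr(terms(fit3), "term.labels")
#[1] "Height" "Volume"
predict(fit3, newdata = data.frame(Height = 1, Volume = 2))
# 1
#11.16125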
I have the following data frame:
input.df <- dplyr::data_frame(x = rnorm(4),
                              y = rnorm(4),
                              `z 1` = rnorm(4))
I would like to do a multiple regression for each column with the other columns and extract the R-squared from each model. This means that I could run the following code:
summary(lm(x ~ ., data = input.df))
summary(lm(y ~ ., data = input.df))
summary(lm(`z 1` ~ ., data = input.df))
And note down the R-squared.
I'd like to automate this task and get a two-column data frame where the first column is the dependent variable and the second column is the R-squared.
This is what I've tried:
n <- ncol(input.df)

replicate(n, input.df, simplify = F) %>%
  dplyr::bind_rows() %>%
  dplyr::mutate(group = rep(names(.), each = nrow(.) / n)) %>%
  dplyr::group_by(group) %>%
  dplyr::do({
    tgt.var <- .$group[1]
    # How do I get the formula to interpret . as all variables?
    lm(get(tgt.var) ~ ., data = .) %>%
      broom::glance() %>%
      dplyr::select(r.squared)
  })
I've put a comment on the part where I am stuck. I get the following error:
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : contrasts can be applied only to factors with 2 or more levels
I think you've overcomplicated building your dataframe a little. There is no need for replicate, as you are running all the regressions on the same dataset. You could just use map from purrr; the idea is to try something like
library(purrr)
names(input.df) %>%
  map(~ lm(get(.) ~ ., data = input.df))
This runs without errors but doesn't give the desired result. The reason is that get(.) gets added as a new variable in the dataset, so for example the first regression is x ~ x + y + `z 1`, which is not what we want. This can easily be fixed by changing the formula in lm as follows:
names(input.df) %>%
  map(~ lm(formula(paste0('`', ., '` ~ .')), data = input.df))
(Note the need to include the escape backticks because of the name of your third variable; otherwise they wouldn't have been necessary.) This now gives the desired results. If you don't want to keep everything and only want to extract r2, you can just do
names(input.df) %>%
  map(~ lm(formula(paste0('`', ., '` ~ .')), data = input.df)) %>%
  map(summary) %>%
  map_dbl('r.squared')
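And to get the two-column data frame described in the question, a small sketch building on the pipeline above:
r2 <- names(input.df) %>%
  map(~ lm(formula(paste0('`', ., '` ~ .')), data = input.df)) %>%
  map(summary) %>%
  map_dbl('r.squared')

# two columns: the dependent variable and its model's R-squared
data.frame(dependent = names(input.df), r.squared = r2)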
Not certain how to resolve your issue directly. Here's an alternative method to derive a data.frame with dependent variable and r.sq for separate models.
cond <- matrix(c(1,0,0,0,1,0,0,0,1), ncol = 3)
colnames(cond) <- colnames(input.df)
cond
     x y z 1
[1,] 1 0   0
[2,] 0 1   0
[3,] 0 0   1
xy <- lapply(1:nrow(cond), function(v)
  list(y = colnames(cond)[which(cond[v, ] == 1)] %>% paste0("`", ., "`"),
       x = colnames(cond)[which(cond[v, ] == 0)] %>% paste0("`", ., "`") %>% paste(., collapse = "+")))

lm.form <- lapply(1:length(xy), function(v) paste(xy[[v]]$y, xy[[v]]$x, sep = "~") %>% as.formula)
lm.mod <- lapply(lm.form, function(v) lm(v, data = input.df))

data.frame(pred = lapply(xy, function(v) v["x"]) %>% unlist,
           r.sq = lapply(lm.mod, function(v) summary(v)$r.sq) %>% unlist)
pred r.sq
1 `y`+`z 1` 0.5806704
2 `x`+`z 1` 0.8500431
3 `x`+`y` 0.8335421
My question is very similar to this one, but the problem I am facing has a twist that those answers do not address. Specifically, I am estimating a spatial model, y = rho * lw * y + X * beta. Because the observations are related through the matrix lw, I must apply the model to the entire X matrix simultaneously. Because those answers operate row-wise, they do not apply here.
Here is MWE data, consisting of twenty points across three groups and a spatial weights matrix:
library(spdep)

# Coordinates
pointcoords <- data.frame(x = runif(n = 20, min = 10, max = 100),
                          y = runif(n = 20, min = 10, max = 100),
                          ID = as.character(1:20))
pointsSP <- SpatialPoints(pointcoords[, 1:2])

# Weights matrix
lw <- nb2listw(knn2nb(knearneigh(pointsSP, k = 4, RANN = FALSE),
                      row.names = pointcoords$ID))

# Data
MyData <- data.frame(ID = rep(1:20, each = 3),
                     Group = rep(1:3, times = 20),
                     DV = rnorm(60), IV = rnorm(60))
I can estimate the models by Group with dplyr:
library(dplyr)
models <- MyData %>% group_by(Group) %>%
  do(lm = lm(DV ~ IV, data = .),
     sar = lagsarlm(DV ~ IV, data = ., listw = lw))
Predicting to new data with this answer operates on a row-wise basis, which works fine for the lm objects:
MyData2 <- data.frame(ID = rep(1:20, each = 3),
                      Group = rep(1:3, times = 20),
                      IV = rnorm(60))
MyData2 %>% left_join(models) %>% rowwise %>%
  mutate(lmPred = predict(lm, newdata = list("IV" = IV))) %>% head()
#Joining by: "Group"
#Source: local data frame [6 x 6]
#Groups:
# ID Group IV lm sar lmPred
#1 1 1 -0.8930794 <S3:lm> <S3:sarlm> -0.21378814
#2 1 2 -1.6637963 <S3:lm> <S3:sarlm> 0.42547796
#3 1 3 0.5243841 <S3:lm> <S3:sarlm> -0.23372996
#4 2 1 -0.1956969 <S3:lm> <S3:sarlm> -0.20860280
#5 2 2 0.8149920 <S3:lm> <S3:sarlm> 0.14771431
#6 2 3 -0.3000439 <S3:lm> <S3:sarlm> 0.05082524
But not for the sar models:
MyData2 %>% left_join(models) %>% rowwise %>%
  mutate(sarPred = predict(sar, newdata = list("IV" = IV), listw = lw)) %>% head()
#Joining by: "Group"
#Error in if (nrow(newdata) != length(listw$neighbours)) stop("mismatch between newdata and spatial weights") :
#  argument is of length zero
I think there should be a better way of doing this, without joining the models to every row. Also, creating a list object for newdata won't work if you have several or changing predictor variables. It seems the dplyr way should be something like this:
MyData2 %>% group_by(Group) %>%
  mutate(sarPred = predict(models$sar[[Group]], newdata = ., listw = lw))
But the [[Group]] index isn't quite right.
I ended up doing this with do in dplyr, going through the models data.frame rowwise. I believe it does what you want, although the output doesn't contain the new data used for the predictions. I did add Group to the output, though, as it seemed necessary to keep the groups separated.
models %>%
  do(data.frame(Group = .$Group,
                predlm = predict(.$lm, newdata = filter(MyData2, Group == .$Group)),
                predsar = predict(.$sar, newdata = filter(MyData2, Group == .$Group), listw = lw)))
EDIT
Playing around with adding the explanatory variable into the output data.frame, the following works, although there is likely a better way to do it.
models %>%
  do(data.frame(Group = .$Group,
                IV = select(filter(MyData2, Group == .$Group), IV),
                predlm = predict(.$lm, newdata = filter(MyData2, Group == .$Group)),
                predsar = predict(.$sar, newdata = filter(MyData2, Group == .$Group), listw = lw)))
I'm putting this out there because it does do what I want it to, even if it needs a for loop (gasp):
predictobj <- list()
for (i in models$Group) {
  predictobj[[i]] <- predict.sarlm(models$sar[[i]],
                                   newdata = filter(MyData2, Group == i),
                                   listw = lw)
}
Anybody have a dplyr solution?
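For what it's worth, an untested sketch of the same loop translated to purrr (assuming the models data.frame and MyData2 from above):
library(purrr)
# one prediction vector per Group, exactly as in the for loop
predictobj <- map(models$Group,
                  ~ predict.sarlm(models$sar[[.x]],
                                  newdata = filter(MyData2, Group == .x),
                                  listw = lw))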