Separating categorical variables in a linear regression - r

I fit a linear model using two categorical variables and one numerical variable as follows:
library(dplyr)
data(iris)
iris2 <- iris %>%
  mutate(petal_type = if_else(Petal.Length > 4, "petal_long", "petal_short"),
         sepal_type = if_else(Sepal.Length > 6, "flower_long", "flower_short"))
lm(Sepal.Width ~ sepal_type*petal_type + petal_type*Petal.Width, data = iris2)
# Coefficients:
# (Intercept)                                    2.48793
# sepal_typeflower_short                        -0.08721
# petal_typepetal_short                          1.51070
# Petal.Width                                    0.27131
# sepal_typeflower_short:petal_typepetal_short  -0.28965
# petal_typepetal_short:Petal.Width             -1.19334
I want to separate out the two categorical variables to get an estimate of the intercept and slope for each combination of dummy variables. I would like to create a table like this, describing the relationship between Sepal.Width and Petal.Width:
sepal_type petal_type Intercept_estimate Slope_estimate
flower_short petal_short
flower_short petal_long
flower_long petal_short
flower_long petal_long
I can do this by hand using different contrasts, but is there an easy way?
Thanks!

library(dplyr)
library(tidyr)
iris2 %>%
  group_by(petal_type, sepal_type) %>%
  summarise(model = list(coef(lm(Sepal.Width ~ Petal.Width))),
            .groups = 'drop') %>%
  unnest_wider(model)
  petal_type  sepal_type   `(Intercept)` Petal.Width
  <chr>       <chr>                <dbl>       <dbl>
1 petal_long  flower_long           2.33      0.355
2 petal_long  flower_short          2.86     -0.0186
3 petal_short flower_long           2.8      NA
4 petal_short flower_short          3.62     -0.922
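(The NA slope in the petal_short/flower_long row just reflects that this group has too few points to estimate a slope.)
If you would rather keep a single model, a hedged alternative is to fit the full three-way interaction and let the emmeans package (an extra package, not used above) recover the per-cell intercepts and slopes; the sparse petal_short/flower_long cell will still be problematic:
library(emmeans)
fit <- lm(Sepal.Width ~ sepal_type * petal_type * Petal.Width, data = iris2)
# per-cell slopes of Sepal.Width on Petal.Width
emtrends(fit, ~ sepal_type * petal_type, var = "Petal.Width")
# per-cell intercepts, i.e. predictions at Petal.Width = 0
emmeans(fit, ~ sepal_type * petal_type, at = list(Petal.Width = 0))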


Adding 'List' Objects to Word document using the Officer package

First time posting here.
I'm trying to get some statistical results into a Word document using the officer package. I understand that the body_add_* functions seem to work only on data frames. However, functions and tests like gvlma and ncvTest return lists with unconventional dimensions, so I'm unable to use the tidyr package to tidy the lists before turning them into a data frame with data.frame(). So I need help adding these blocks of text, which are lists, to a Word document.
So far I have this; the ADF test outputs a very nice list that is easily convertible to a data frame:
library(officer)
library(flextable)
# ADF test into a data frame (adf comes from my session)
adf_df <- data.frame(adf)
adf_df
ft <- flextable(data = adf_df) %>%
  theme_booktabs() %>%
  autofit()
# Output table into Word doc
doc <- read_docx() %>%
  body_add_flextable(value = ft) %>%
  body_add_par(gvlma)
fileout <- "test.docx"
print(doc, target = fileout)
The body_add_par(gvlma) line gives the error:
Warning messages:
1: In if (grepl("<|>", x)) { :
  the condition has length > 1 and only the first element will be used
2: In charToRaw(enc2utf8(x)) :
  argument should be a character vector of length 1
  all but the first element will be ignored
gvlma outputs as a list and here is the output:
Call:
lm(formula = PD ~ ., data = dataset)
Coefficients:
  (Intercept)  WorldBank_Oil
        1.282         -1.449
ASSESSMENT OF THE LINEAR MODEL ASSUMPTIONS
USING THE GLOBAL TEST ON 4 DEGREES-OF-FREEDOM:
Level of Significance = 0.05
Call:
gvlma(x = model)
                    Value p-value                Decision
Global Stat        4.6172  0.3289 Assumptions acceptable.
Skewness           0.1858  0.6664 Assumptions acceptable.
Kurtosis           0.1812  0.6703 Assumptions acceptable.
Link Function      1.7823  0.1819 Assumptions acceptable.
Heteroscedasticity 2.4678  0.1162 Assumptions acceptable.
Replicating the error with the iris data set:
library(officer); library(flextable)
adf_df <- iris
ft <- flextable(data = adf_df) %>%
  theme_booktabs() %>%
  autofit()
gvlma <- lm(Petal.Length ~ Sepal.Length + Sepal.Width, data = iris)
# Output table into Word doc
doc <- read_docx() %>%
  body_add_flextable(value = ft) %>%
  body_add_par(gvlma)
Warning messages:
1: In if (grepl("<|>", x)) { :
  the condition has length > 1 and only the first element will be used
2: In charToRaw(enc2utf8(x)) :
  argument should be a character vector of length 1
  all but the first element will be ignored
The issue here is that the linear model is kept as a list object, which is efficient for pulling out test parameters or model statistics, but not great as static output.
One way to work around this is to use the commands from library(broom):
library(broom)
gvlma2 <- tidy(gvlma)
gvlma3 <- glance(gvlma)
doc <- read_docx() %>%
  body_add_flextable(value = ft) %>%
  body_add_flextable(value = flextable(gvlma2)) %>%
  body_add_flextable(value = flextable(gvlma3))
fileout <- "test.docx"
print(doc, target = fileout)
gvlma2:
# A tibble: 3 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -2.52 0.563 -4.48 1.48e- 5
2 Sepal.Length 1.78 0.0644 27.6 5.85e-60
3 Sepal.Width -1.34 0.122 -10.9 9.43e-21
gvlma3:
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC deviance df.residual
<dbl> <dbl> <dbl> <dbl> <dbl> <int> <dbl> <dbl> <dbl> <dbl> <int>
1 0.868 0.866 0.646 482. 2.74e-65 3 -146. 300. 312. 61.4 147
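If you need the raw printed output rather than a tidied table, a minimal sketch (assuming the doc and gvlma objects above) is to capture the text that print() would show and add it line by line, since body_add_par() expects a length-1 character:
# capture.output() turns any printable object into a character vector
txt <- capture.output(print(gvlma))
for (line in txt) {
  doc <- body_add_par(doc, line, style = "Normal")
}
print(doc, target = "test.docx")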

Run multiple regressions grouped by column in R [duplicate]

I want to do a linear regression in R using the lm() function. My data is an annual time series with one field for year (22 years) and another for state (50 states). I want to fit a regression for each state so that at the end I have a vector of lm responses. I can imagine doing for loop for each state then doing the regression inside the loop and adding the results of each regression to a vector. That does not seem very R-like, however. In SAS I would do a 'by' statement and in SQL I would do a 'group by'. What's the R way of doing this?
Since 2009, dplyr has been released, which provides a very nice way to do this kind of grouping, closely resembling what SAS does.
library(dplyr)
d <- data.frame(state = rep(c('NY', 'CA'), c(10, 10)),
                year = rep(1:10, 2),
                response = c(rnorm(10), rnorm(10)))
fitted_models = d %>% group_by(state) %>% do(model = lm(response ~ year, data = .))
# Source: local data frame [2 x 2]
# Groups: <by row>
#
# state model
# (fctr) (chr)
# 1 CA <S3:lm>
# 2 NY <S3:lm>
fitted_models$model
# [[1]]
#
# Call:
# lm(formula = response ~ year, data = .)
#
# Coefficients:
# (Intercept) year
# -0.06354 0.02677
#
#
# [[2]]
#
# Call:
# lm(formula = response ~ year, data = .)
#
# Coefficients:
# (Intercept) year
# -0.35136 0.09385
To retrieve the coefficients and the R-squared/p-value, one can use the broom package. This package provides:
three S3 generics: tidy, which summarizes a model's
statistical findings such as coefficients of a regression;
augment, which adds columns to the original data such as
predictions, residuals and cluster assignments; and glance, which
provides a one-row summary of model-level statistics.
library(broom)
fitted_models %>% tidy(model)
# Source: local data frame [4 x 6]
# Groups: state [2]
#
# state term estimate std.error statistic p.value
# (fctr) (chr) (dbl) (dbl) (dbl) (dbl)
# 1 CA (Intercept) -0.06354035 0.83863054 -0.0757668 0.9414651
# 2 CA year 0.02677048 0.13515755 0.1980687 0.8479318
# 3 NY (Intercept) -0.35135766 0.60100314 -0.5846187 0.5749166
# 4 NY year 0.09385309 0.09686043 0.9689519 0.3609470
fitted_models %>% glance(model)
# Source: local data frame [2 x 12]
# Groups: state [2]
#
# state r.squared adj.r.squared sigma statistic p.value df
# (fctr) (dbl) (dbl) (dbl) (dbl) (dbl) (int)
# 1 CA 0.004879969 -0.119510035 1.2276294 0.0392312 0.8479318 2
# 2 NY 0.105032068 -0.006838924 0.8797785 0.9388678 0.3609470 2
# Variables not shown: logLik (dbl), AIC (dbl), BIC (dbl), deviance (dbl),
# df.residual (int)
fitted_models %>% augment(model)
# Source: local data frame [20 x 10]
# Groups: state [2]
#
# state response year .fitted .se.fit .resid .hat
# (fctr) (dbl) (int) (dbl) (dbl) (dbl) (dbl)
# 1 CA 0.4547765 1 -0.036769875 0.7215439 0.4915464 0.3454545
# 2 CA 0.1217003 2 -0.009999399 0.6119518 0.1316997 0.2484848
# 3 CA -0.6153836 3 0.016771076 0.5146646 -0.6321546 0.1757576
# 4 CA -0.9978060 4 0.043541551 0.4379605 -1.0413476 0.1272727
# 5 CA 2.1385614 5 0.070312027 0.3940486 2.0682494 0.1030303
# 6 CA -0.3924598 6 0.097082502 0.3940486 -0.4895423 0.1030303
# 7 CA -0.5918738 7 0.123852977 0.4379605 -0.7157268 0.1272727
# 8 CA 0.4671346 8 0.150623453 0.5146646 0.3165112 0.1757576
# 9 CA -1.4958726 9 0.177393928 0.6119518 -1.6732666 0.2484848
# 10 CA 1.7481956 10 0.204164404 0.7215439 1.5440312 0.3454545
# 11 NY -0.6285230 1 -0.257504572 0.5170932 -0.3710185 0.3454545
# 12 NY 1.0566099 2 -0.163651479 0.4385542 1.2202614 0.2484848
# 13 NY -0.5274693 3 -0.069798386 0.3688335 -0.4576709 0.1757576
# 14 NY 0.6097983 4 0.024054706 0.3138637 0.5857436 0.1272727
# 15 NY -1.5511940 5 0.117907799 0.2823942 -1.6691018 0.1030303
# 16 NY 0.7440243 6 0.211760892 0.2823942 0.5322634 0.1030303
# 17 NY 0.1054719 7 0.305613984 0.3138637 -0.2001421 0.1272727
# 18 NY 0.7513057 8 0.399467077 0.3688335 0.3518387 0.1757576
# 19 NY -0.1271655 9 0.493320170 0.4385542 -0.6204857 0.2484848
# 20 NY 1.2154852 10 0.587173262 0.5170932 0.6283119 0.3454545
# Variables not shown: .sigma (dbl), .cooksd (dbl), .std.resid (dbl)
Here's an approach using the plyr package:
d <- data.frame(
  state = rep(c('NY', 'CA'), 10),
  year = rep(1:10, 2),
  response = rnorm(20)
)
library(plyr)
# Break up d by state, then fit the specified model to each piece and
# return a list
models <- dlply(d, "state", function(df)
  lm(response ~ year, data = df))
# Apply coef to each model and return a data frame
ldply(models, coef)
# Print the summary of each model
l_ply(models, summary, .print = TRUE)
Here's one way using the lme4 package (note that xyplot() comes from lattice):
library(lme4)
library(lattice)
d <- data.frame(state = rep(c('NY', 'CA'), c(10, 10)),
                year = rep(1:10, 2),
                response = c(rnorm(10), rnorm(10)))
xyplot(response ~ year, groups = state, data = d, type = 'l')
fits <- lmList(response ~ year | state, data = d)
fits
#------------
Call: lmList(formula = response ~ year | state, data = d)
Coefficients:
   (Intercept)        year
CA -1.34420990  0.17139963
NY  0.00196176 -0.01852429
Degrees of freedom: 20 total; 16 residual
Residual standard error: 0.8201316
In my opinion, a mixed linear model is a better approach for this kind of data. The fixed effect in the code below gives the overall trend, the random effects indicate how the trend for each individual state differs from the global trend, and the correlation structure takes the temporal autocorrelation into account. Have a look at Pinheiro & Bates (Mixed-Effects Models in S and S-PLUS).
library(nlme)
lme(response ~ year, random = ~ year | state, correlation = corAR1(form = ~ year), data = d)
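To see the state-level trends such a fit implies, here is a minimal sketch using the d data frame from the earlier answers (with only two states this is purely illustrative):
library(nlme)
fit <- lme(response ~ year, random = ~ year | state,
           correlation = corAR1(form = ~ year), data = d)
fixef(fit)  # the global intercept and trend (fixed effects)
ranef(fit)  # each state's deviation from the global trend
coef(fit)   # fixed + random: one intercept/slope row per state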
A nice solution using data.table was posted on CrossValidated by @Zach.
I'd just add that it is also possible to obtain the coefficient of determination (r^2) iteratively:
## make fake data
library(data.table)
set.seed(1)
dat <- data.table(x = runif(100), y = runif(100), grp = rep(1:2, 50))
## calculate r^2 per group
dat[, summary(lm(y ~ x))$r.squared, by = grp]
grp V1
1: 1 0.01465726
2: 2 0.02256595
as well as all the other output from summary(lm):
dat[, list(r2 = summary(lm(y ~ x))$r.squared,
           f = summary(lm(y ~ x))$fstatistic[1]), by = grp]
grp r2 f
1: 1 0.01465726 0.714014
2: 2 0.02256595 1.108173
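Going one step further, a sketch that returns the whole per-group coefficient table by wrapping it in a list of columns (the column names here are my own choice):
dat[, {
  cf <- summary(lm(y ~ x))$coefficients
  list(term = rownames(cf), estimate = cf[, 1],
       std.error = cf[, 2], p.value = cf[, 4])
}, by = grp]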
I think it's worthwhile to add the purrr::map approach to this problem.
library(tidyverse)
d <- data.frame(state = rep(c('NY', 'CA'), c(10, 10)),
                year = rep(1:10, 2),
                response = c(rnorm(10), rnorm(10)))
d %>%
  group_by(state) %>%
  nest() %>%
  mutate(model = map(data, ~lm(response ~ year, data = .)))
See @Paul Hiemstra's answer for further ideas on using the broom package with these results.
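For instance, a minimal sketch that pulls tidy coefficient tables back out of the nested models with purrr and broom (assuming d from above):
library(broom)
d %>%
  group_by(state) %>%
  nest() %>%
  mutate(model = map(data, ~ lm(response ~ year, data = .x)),
         tidied = map(model, tidy)) %>%
  select(state, tidied) %>%
  unnest(tidied)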
I know my answer comes a bit late, but I was looking for similar functionality. It would seem the built-in function by() in R can also do the grouping easily:
?by contains the following example, which fits a model per group and extracts the coefficients with sapply:
require(stats)
## now suppose we want to extract the coefficients by group
tmp <- with(warpbreaks,
            by(warpbreaks, tension,
               function(x) lm(breaks ~ wool, data = x)))
sapply(tmp, coef)
## make fake data
ngroups <- 2
group <- 1:ngroups
nobs <- 100
dta <- data.frame(group = rep(group, each = nobs),
                  y = rnorm(nobs * ngroups),
                  x = runif(nobs * ngroups))
head(dta)
#--------------------
group y x
1 1 0.6482007 0.5429575
2 1 -0.4637118 0.7052843
3 1 -0.5129840 0.7312955
4 1 -0.6612649 0.9028034
5 1 -0.5197448 0.1661308
6 1 0.4240346 0.8944253
#------------
## function to extract the results of one model
foo <- function(z) {
  ## coef and se in a data frame
  mr <- data.frame(coef(summary(lm(y ~ x, data = z))))
  ## put row names (predictors/indep variables)
  mr$predictor <- rownames(mr)
  mr
}
## see that it works
foo(subset(dta,group==1))
#=========
Estimate Std..Error t.value Pr...t.. predictor
(Intercept) 0.2176477 0.1919140 1.134090 0.2595235 (Intercept)
x -0.3669890 0.3321875 -1.104765 0.2719666 x
#----------
## one option: use command by
res <- by(dta,dta$group,foo)
res
#=========
dta$group: 1
Estimate Std..Error t.value Pr...t.. predictor
(Intercept) 0.2176477 0.1919140 1.134090 0.2595235 (Intercept)
x -0.3669890 0.3321875 -1.104765 0.2719666 x
------------------------------------------------------------
dta$group: 2
Estimate Std..Error t.value Pr...t.. predictor
(Intercept) -0.04039422 0.1682335 -0.2401081 0.8107480 (Intercept)
x 0.06286456 0.3020321 0.2081387 0.8355526 x
## using package plyr is better
library(plyr)
res <- ddply(dta,"group",foo)
res
#----------
group Estimate Std..Error t.value Pr...t.. predictor
1 1 0.21764767 0.1919140 1.1340897 0.2595235 (Intercept)
2 1 -0.36698898 0.3321875 -1.1047647 0.2719666 x
3 2 -0.04039422 0.1682335 -0.2401081 0.8107480 (Intercept)
4 2 0.06286456 0.3020321 0.2081387 0.8355526 x
The lm() call above is a simple example. By the way, I imagine that your database has columns in the following form:
year state var1 var2 y ...
In my point of view, you can use the following code (base R's by() again; the data frame is passed explicitly to lm() rather than attached):
# data = your data frame; state is the column identifying the states
modell <- by(data, data$state, function(d) lm(y ~ I(1/var1) + I(1/var2), data = d))
lapply(modell, summary)
The question seems to be about how to call regression functions with formulas which are modified inside a loop.
Here is how you can do it in R (using the diamonds dataset):
library(ggplot2)
strCols <- names(diamonds)
formula <- list(); model <- list()
for (i in 1:1) {  # widen the range to loop over more predictors
  formula[[i]] <- paste0(strCols[7], " ~ ", strCols[7 + i])
  model[[i]] <- glm(as.formula(formula[[i]]), data = diamonds)
  # then you can plot the results or anything else ...
  png(filename = sprintf("diamonds_price=glm(%s).png", strCols[7 + i]))
  par(mfrow = c(2, 2))
  plot(model[[i]])
  dev.off()
}
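A related base R trick, sketched under the same setup: reformulate() builds the formula object directly from column names instead of pasting strings:
# price ~ x, built programmatically (strCols as defined above)
f <- reformulate(termlabels = strCols[8], response = strCols[7])
m <- glm(f, data = ggplot2::diamonds)
coef(m)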

The intercept of a categorical multiple regression in R is not the mean value?

Let's say I have 2 (categorical) variables and one continuous:
library(tidyverse)
set.seed(123)
ds <- data.frame(
  depression = rnorm(90, 10, 2),
  schooling_dummy = c(0, 1, 2),
  sex_dummy = c(0, 1)
)
When I regress depression on sex (0 or 1), the intercept is 10.0436, which is the mean for sex = 0. OK!
ds %>% group_by(sex_dummy) %>%
  summarise(formatC(mean(depression), format = "f", digits = 4))
# A tibble: 2 x 2
  sex_dummy `formatC(mean(depression), format = "f", digits = 4)`
      <dbl> <chr>
1         0 10.0436
2      1.00 10.1640
The same thing happens when I regress depression on schooling: the intercept value is 10.4398, the same as the mean for schooling = 0.
ds %>% group_by(schooling_dummy) %>%
  summarise(formatC(mean(depression), format = "f", digits = 4))
# A tibble: 3 x 2
  schooling_dummy `formatC(mean(depression), format = "f", digits = 4)`
            <dbl> <chr>
1               0 10.4398
2            1.00 9.7122
3            2.00 10.1593
Now, when I compute a model with both variables, why is the intercept not the mean when both groups are 0? The regression intercept is 10.3796, but the mean when sex = 0 and schooling = 0 is 10.32548:
ds %>% group_by(schooling_dummy, sex_dummy) %>%
  summarise(formatC(mean(depression), format = "f", digits = 5))
# A tibble: 6 x 3
# Groups: schooling_dummy [?]
  schooling_dummy sex_dummy `formatC(mean(depression), format = "f", digits = 5)`
            <dbl>     <dbl> <chr>
1               0         0 10.32548
2               0      1.00 10.55404
3            1.00         0 9.59305
4            1.00      1.00 9.83139
5            2.00         0 10.21218
6            2.00      1.00 10.10648
When I predict the model when both are 0:
predict(mod3, data.frame(sex_dummy=0, schooling_dummy=0))
1
10.37956
This result is related to depression (of course...) but still not what I was expecting, since the intercept should be the expected value when all predictors are 0 (Reference: https://www.theanalysisfactor.com/interpret-the-intercept/). The same point is made in a previous forum post.
I am aware that my variables are categorical, and I'm adjusting my script accordingly; you can reproduce the issue using the code below.
Thanks!
library(tidyverse)
set.seed(123)
ds <- data.frame(
  depression = rnorm(90, 10, 2),
  schooling_dummy = c(0, 1, 2),
  sex_dummy = c(0, 1)
)
mod <- lm(data = ds, depression ~ relevel(factor(sex_dummy), ref = "0"))
summary(mod)
ds %>% group_by(sex_dummy) %>%
  summarise(formatC(mean(depression), format = "f", digits = 4))
mod2 <- lm(data = ds, depression ~ relevel(factor(schooling_dummy), ref = "0"))
summary(mod2)
ds %>% group_by(schooling_dummy) %>%
  summarise(formatC(mean(depression), format = "f", digits = 4))
mod3 <- lm(data = ds, depression ~ relevel(factor(sex_dummy), ref = "0") +
             relevel(factor(schooling_dummy), ref = "0"))
summary(mod3)
ds %>% group_by(schooling_dummy, sex_dummy) %>%
  summarise(formatC(mean(depression), format = "f", digits = 5))
predict(mod3, data.frame(sex_dummy = 0, schooling_dummy = 0))
There are two errors in your thinking (although your R code works, so it's not a programming error).
First and foremost, you violated your own statement: you have not dummy coded schooling, because it does not contain only zeroes and ones; it has 0, 1 and 2.
Second, you forgot the interaction effect in your lm model. Without the interaction term, the model forces the sex effect to be identical at every schooling level, so the fitted value at sex = 0, schooling = 0 is a compromise across cells rather than that cell's mean.
Try this...
library(tidyverse)
set.seed(123)
ds <- data.frame(
  depression = rnorm(90, 10, 2),
  schooling_dummy = c(0, 1, 2),
  sex_dummy = c(0, 1)
)
# if you explicitly make these variables factors, not integers, R will do the right thing with them
ds$schooling_dummy <- factor(ds$schooling_dummy)
ds$sex_dummy <- factor(ds$sex_dummy)
ds %>% group_by(schooling_dummy, sex_dummy) %>%
  summarise(formatC(mean(depression), format = "f", digits = 5))
# you need an asterisk in your lm model to include the interaction term
lm(depression ~ schooling_dummy * sex_dummy, data = ds)
The results give you the mean(s) you were expecting...
Call:
lm(formula = depression ~ schooling_dummy * sex_dummy, data = ds)
Coefficients:
(Intercept) schooling_dummy1 schooling_dummy2
10.325482 -0.732433 -0.113305
sex_dummy1 schooling_dummy1:sex_dummy1 schooling_dummy2:sex_dummy1
0.228561 0.009778 -0.334254
And FWIW, you can avoid this sort of accidental misuse of categorical variables if your data is coded as characters to begin with. So if your data is coded this way:
ds <- data.frame(
  depression = rnorm(90, 10, 2),
  schooling = c("A", "B", "C"),
  sex = c("Male", "Female")
)
you're less likely to make the same mistake, plus the results are easier to read...
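For instance, a minimal sketch of the same model on the character-coded data; lm() coerces character columns to factors, so the coefficient names carry the level labels directly:
set.seed(123)
ds2 <- data.frame(
  depression = rnorm(90, 10, 2),
  schooling = c("A", "B", "C"),
  sex = c("Male", "Female")
)
lm(depression ~ schooling * sex, data = ds2)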

Running multiple regressions in R for different IDs with panel data [closed]

I am new to R, but I would need to run several simple regressions for different IDs with panel data.
I have 4 columns: 1. ID, 2. time, 3. Y, 4. X, and I would need to regress Y ~ X for each ID. I have 100 IDs with 120 time periods per ID, so I would need to run 100 simple regressions with 120 observations each.
How can I do it?
Thank you very much in advance!!!
We can either use data.table:
library(data.table)
setDT(df1)[, coef(lm(Y ~ X)), by = ID]
Or, to get the p-values:
setDT(df1)[, summary(lm(Y ~ X))$coef[, 4], by = ID]
If we are using broom, we can get more columns of output:
library(broom)
setDT(df1)[, glance(lm(Y ~ X)), by = ID]
Or with broom/dplyr:
library(dplyr)
df1 %>%
  group_by(ID) %>%
  do(model = lm(Y ~ X, .)) %>%
  glance(model)
Reproducible example:
data(iris)
iris %>%
  group_by(Species) %>%
  do(model = lm(Sepal.Width ~ Petal.Width, .)) %>%
  glance(model)
# Species r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC deviance df.residual
# <fctr> <dbl> <dbl> <dbl> <dbl> <dbl> <int> <dbl> <dbl> <dbl> <dbl> <int>
#1 setosa 0.0541735 0.03446878 0.3724741 2.749265 1.038211e-01 2 -20.546993 47.093987 52.830056 6.659375 48
#2 versicolor 0.4408943 0.42924626 0.2370691 37.851387 1.466661e-07 2 2.043799 1.912403 7.648472 2.697685 48
#3 virginica 0.2891514 0.27434209 0.2747206 19.524930 5.647610e-05 2 -5.326334 16.652669 22.388738 3.622626 48
and with data.table/broom
as.data.table(iris)[, glance(lm(Sepal.Width~Petal.Width)), by = Species]
# Species r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC deviance df.residual
#1: setosa 0.0541735 0.03446878 0.3724741 2.749265 1.038211e-01 2 -20.546993 47.093987 52.830056 6.659375 48
#2: versicolor 0.4408943 0.42924626 0.2370691 37.851387 1.466661e-07 2 2.043799 1.912403 7.648472 2.697685 48
#3: virginica 0.2891514 0.27434209 0.2747206 19.524930 5.647610e-05 2 -5.326334 16.652669 22.388738 3.622626 48
The nlme package has lmList:
library(nlme)
fm <- lmList(Y ~ X | ID, DF, pool = FALSE)
or use pool = TRUE (the default) if you want a common pooled variance. Also check out these methods which operate on "lmList" class objects:
methods(class = "lmList")
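For example, a quick sketch assuming the fm fit above: coef() gives the fitted coefficients per group:
coef(fm)  # a data frame with one (Intercept)/X row per ID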

Regression in R with Groups [duplicate]

This question already has answers here:
Linear Regression and group by in R (10 answers)
Closed 6 years ago.
I have imported a CSV with 3 columns: 2 columns for Y and X, and a third column which identifies the category for X (I have 20 groups/categories). I am able to run a regression at the overall level, but I want to run the regression for the 20 categories separately and store the coefficients.
I tried the following:
list2env(split(sample, sample$CATEGORY_DESC), envir = .GlobalEnv)
Now I have 20 data frames; how do I run a regression on each of them and store the coefficients somewhere?
Since no data was provided, I am generating some sample data to show how you can run multiple regressions and store the output using the dplyr and broom packages.
In the following, there are 20 groups and different x/y values per group. Twenty regressions are run, and the output of these regressions is returned as a data frame:
library(dplyr)
library(broom)
df <- data.frame(group = rep(1:20, 10),
                 x = rep(1:20, 10) + rnorm(200),
                 y = rep(1:20, 10) + rnorm(200))
df %>% group_by(group) %>% do(tidy(lm(x ~ y, data = .)))
Sample output:
Source: local data frame [40 x 6]
Groups: group [20]
group term estimate std.error statistic p.value
<int> <chr> <dbl> <dbl> <dbl> <dbl>
1 1 (Intercept) 0.42679228 1.0110422 0.4221310 0.684045203
2 1 y 0.45625124 0.7913256 0.5765657 0.580089051
3 2 (Intercept) 1.99367392 0.4731639 4.2134955 0.002941805
4 2 y 0.05101438 0.1909607 0.2671460 0.796114398
5 3 (Intercept) 3.14391308 0.8417638 3.7349114 0.005747126
6 3 y 0.08418715 0.2453441 0.3431391 0.740336702
Quick solution with lmList (package nlme):
library(nlme)
lmList(x ~ y | group, data=df)
Call:
Model: x ~ y | group
Data: df
Coefficients:
(Intercept) y
1 0.4786373 0.04978624
2 3.5125369 -0.94751894
3 2.7429958 -0.01208329
4 -5.2231576 2.24589181
5 5.6370824 -0.24223131
6 7.1785581 -0.08077726
7 8.2060808 -0.18283134
8 8.9072851 -0.13090764
9 10.1974577 -0.18514527
10 6.0687105 0.37396911
11 9.0682622 0.23469187
12 15.1081915 -0.29234452
13 17.3147636 -0.30306692
14 13.1352411 0.05873189
15 6.4006623 0.57619151
16 25.4454182 -0.59535396
17 22.0231916 -0.30073768
18 27.7317267 -0.54651597
19 10.9689733 0.45280604
20 23.3495704 -0.14488522
Degrees of freedom: 200 total; 160 residual
Residual standard error: 0.9536226
Borrowed the data df from @Gopala's answer.
Consider also a base solution with lapply() (the anonymous-function argument is named g so it doesn't shadow the x column used in the formula):
regressionList <- lapply(unique(df$group),
                         function(g) lm(x ~ y, df[df$group == g, ]))
And only the coefficients:
coeffList <- lapply(unique(df$group),
                    function(g) lm(x ~ y, df[df$group == g, ])$coefficients)
Even a list of summaries:
summaryList <- lapply(unique(df$group),
                      function(g) summary(lm(x ~ y, df[df$group == g, ])))
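A small follow-up sketch: name the list by group so the results are easy to index, then bind the coefficients into one matrix:
names(coeffList) <- unique(df$group)
do.call(rbind, coeffList)  # one (Intercept)/y row per group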
