Multiple linear regression: Plot a straight line with confidence intervals - r

Here is my question:
1) I ran a multiple linear regression, something like:
lm(attitude~quality+price+location+Income)
I mainly care about the relationship between attitude and quality, and other variables are control variables.
2) Then I wanted to do a scatter plot between attitude and quality. It is easy:
Q <- ggplot(data = data, aes(x = quality, y = attitude))
Q + geom_point(size = 1)
3) I further wanted to plot the fitted line between x and y, where the slope is the partial regression coefficient from the multiple linear regression. That is, it should be b1 in the formula attitude = b1*quality + b2*price + b3*location + b4*Income, rather than b in the simple regression attitude = b*quality. Therefore, the following code is not what I want, as it plots a line with slope b rather than b1.
Q <- Q + geom_smooth(method = lm)
Someone asked a very similar question, see here
The answer provided looks like this (replaced with my variables):
Q <- Q + geom_smooth(data=data, aes(x=quality, y=attitude, ymin=lcl, ymax=ucl))
However, that is a LOWESS plot (as you can see from the figure in that post), not a straight-line plot.
My question: how can I add a straight line with slope b1, together with its confidence band?

If you want to see b1, you should draw a partial regression (added-variable) plot, as far as I know.
In this case,
regress attitude on every predictor except quality,
regress quality on the other predictors,
and plot the residuals of these two fits against each other:
library(dplyr)    # for tibble() and %>%
library(ggplot2)

X <- tibble(
  x = residuals(lm(quality ~ . - attitude, data = data)),   # quality vs. the other predictors
  y = residuals(lm(attitude ~ . - quality, data = data))    # attitude vs. the other predictors
)

X %>%
  ggplot(aes(x, y)) +
  geom_smooth(method = "lm")
The slope of this line equals b1 (this is the Frisch-Waugh-Lovell theorem), although the plotted points are residuals rather than the original x, y values.
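For completeness, here is a minimal sketch spelling the same idea out with the variable names from your question, adding the points and drawing the straight partial-regression line with its confidence band via geom_smooth(method = "lm"). It assumes your data frame is called data and contains exactly the columns attitude, quality, price, location and Income.
library(ggplot2)

# Added-variable (partial regression) plot for quality:
# resid_x = part of quality not explained by the other predictors
# resid_y = part of attitude not explained by the other predictors
av <- data.frame(
  resid_x = residuals(lm(quality  ~ price + location + Income, data = data)),
  resid_y = residuals(lm(attitude ~ price + location + Income, data = data))
)

# The slope of this lm fit equals b1 from the full multiple regression;
# geom_smooth() draws the straight line plus its confidence band.
ggplot(av, aes(resid_x, resid_y)) +
  geom_point(size = 1) +
  geom_smooth(method = "lm")
You can check the equality by comparing coef(lm(resid_y ~ resid_x, data = av))[2] with coef(lm(attitude ~ quality + price + location + Income, data = data))["quality"].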

Related

Adding fixed effects regression line to ggplot

I am plotting panel data using ggplot and I want to add the regression line for my fixed effects model "fixed" to the plot. This is the current code:
# Fixed Effects Model in plm
fixed <- plm(progenyMean ~ damMean, data=finalDT, model= "within", index = c("sireID", "cropNum"))
# Plotting Function
plotFunction <- function(Data){
  ggplot(Data, aes(x = damMean, y = progenyMean)) +
    geom_point() +
    geom_smooth(method = "lm", se = TRUE, formula = fixed)
}
However, the plot doesn't recognise the geom_smooth() and there is no regression line on the plot.
Is it possible to plot a regression line for a fixed effects model here?
OP: please include a reproducible example in your next question so that we can help you better. In this case, I'll answer using the same dataset that is used on Princeton's site here, since I'm not too familiar with the data structure needed to support the plm() function from the plm package. I wish the dataset were one that is more dependably available... but hopefully this example remains illustrative even if the dataset is no longer available.
library(foreign)
library(plm)
library(ggplot2)
library(dplyr)
library(tidyr)
Panel <- read.dta("http://dss.princeton.edu/training/Panel101.dta")
fixed <-plm(y ~ x1, data=Panel, index=c("country", "year"), model="within")
my_lm <- lm(y ~ x1, data=Panel) # including for some reference
Example: Plotting a Simple Linear Regression
Note that I've also fit a standard linear model - this is to show you how you can extract the values and plot a line from it that matches geom_smooth(). Here's an example plot of the data plus a line drawn with the lm() fit used by geom_smooth().
plot <- Panel %>%
  ggplot(aes(x1, y)) + geom_point() + theme_bw() +
  geom_smooth(method = "lm", alpha = 0.1, color = 'gray', size = 4)
plot
If you want to plot a line that matches the linear regression from geom_smooth(), you can use geom_abline() and specify slope= and intercept=. Those come directly from the my_lm object:
> my_lm
Call:
lm(formula = y ~ x1, data = Panel)
Coefficients:
(Intercept)           x1
  1.524e+09    4.950e+08
Extracting those values from my_lm$coefficients gives us our slope and intercept (note that the named vector has the intercept in the first position and the slope in the second). You'll see our new blue line runs directly over the top of the geom_smooth() line - which is why I made that one so fat :).
plot + geom_abline(
slope=my_lm$coefficients[2],
intercept = my_lm$coefficients[1], color='blue')
Plotting line from plm()
The same strategy can be used to plot the line from your model fit with plm(). Here it's even simpler, since the within model from plm() has no overall intercept (the within transformation demeans the data), so we can treat the intercept as 0:
> fixed
Model Formula: y ~ x1
Coefficients:
x1
2475617827
Well, then it's pretty easy to plot in the same way:
plot + geom_abline(slope=fixed$coefficients, color='red')
In your case, I'd try this:
ggplot(Data, aes(x = damMean, y = progenyMean)) +
  geom_point() +
  geom_abline(slope = fixed$coefficients)
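One caveat: geom_abline() defaults to intercept = 0, while the within estimator is really fit on entity-demeaned data, so a line through the origin of the raw scatter can be misleading. A minimal sketch (using the Panel example from above, with dplyr assumed for the demeaning) that draws the plm slope in the demeaned space where it actually applies:
library(dplyr)
library(ggplot2)

# Within ("fixed effects") estimation works on entity-demeaned data, so
# demean y and x1 within each country and draw the plm slope through the origin.
Panel_demeaned <- Panel %>%
  group_by(country) %>%
  mutate(y_within  = y  - mean(y),
         x1_within = x1 - mean(x1)) %>%
  ungroup()

ggplot(Panel_demeaned, aes(x1_within, y_within)) +
  geom_point() +
  geom_abline(slope = coef(fixed)[["x1"]], intercept = 0, color = "red") +
  theme_bw()
The same demeaning (by sireID) would apply to damMean and progenyMean in your own data.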

Not getting a smooth curve using ggplot2

I am trying to fit a mixed effects model using the lme4 package. Unfortunately I cannot share the data I am working with, and I couldn't find a toy data set relevant to my problem. So here are the steps I have followed so far:
First I plotted the overall trend of the data as follows:
p21 <- ggplot(data = sub_data, aes(x = age_cent, y = y))
p21+ geom_point() + geom_smooth()
Based on this, there seems to be some nonlinear trend in the data. Hence I tried to fit a quadratic model as follows:
sub_data$age_cent=sub_data$age-mean((sub_data)$age)
sub_data$age_centsqr=(sub_data$age-mean((sub_data)$age))^2
m1= lmer(y ~ 1 + age_cent + age_centsqr +(1 | id) , sub_data, REML = TRUE)
In the above model I only included a random intercept because I don't have enough data to include both a random slope and a random intercept. Then I extracted the predictions of this model at the population level as follows:
pred1=predict(m1,re.form=NA)
Next I plotted these predictions along with a smooth quadratic function like this:
p21 + geom_point() +
  geom_smooth(method = "lm", formula = y ~ I(x) + I(x^2), col = "red") +
  geom_line(aes(y = pred1, group = id), col = "blue", lwd = 0.5)
In the above plot, the curve corresponding to the predictions is not smooth. Can anyone help me figure out the reason for that? Am I doing anything wrong here?
Update:
As eipi10 pointed out, this may be due to fitting different curves for different people.
But when I tried the same thing using a toy data set from the lme4 package, I got the same curve for each person, as follows:
m1 <- lmer(Reaction ~ 1+I(Days) + (1+ Days| Subject) , data = sleepstudy)
pred1new1=predict(m1,re.form=NA)
p21 <- ggplot(data = sleepstudy, aes(x = Days, y = Reaction))
p21+ geom_point() + geom_smooth()
p21+ geom_point() + geom_smooth()+ geom_line(aes(y=pred1new1,group = Subject) ,col="red", lwd = 0.5)
What may be the reason for the different results? Is this due to imbalance in the data?
The data I used were collected at 3 time points, and some people did not have observations at all 3 time points, whereas the toy data set is balanced.
Thank you
tl;dr use expand.grid() or something like it to generate a balanced/evenly spaced sample for every group (if you have a strongly nonlinear curve you may want to generate a larger/more finely spaced set of x values than in the original data)
You could also take a look at the sjPlot package, which does a lot of this stuff automatically ...
You need both an unbalanced data set and a non-linear (e.g. polynomial) model for the fixed effects to see this effect.
If the model is linear, then you don't notice missing values, because the linear interpolation done by geom_line() works perfectly.
If the data are balanced, then there are no gaps to get weirdly filled by linear interpolation.
Generate an example with quadratic effects and an unbalanced data set; fit the model
library(lme4)
set.seed(101)
dd <- expand.grid(id = factor(1:10), x = 1:10)
dd$y <- simulate(~ poly(x, 2) + (poly(x, 2) | id),
                 newdata = dd,
                 family = gaussian,
                 newparams = list(beta = c(0, 0, 0.1),
                                  theta = rep(0.1, 6),
                                  sigma = 1))[[1]]
## subsample randomly (missing values)
dd <- dd[sort(sample(nrow(dd), size = round(0.7 * nrow(dd)))), ]
m1 <- lmer(y ~ poly(x, 2) + (poly(x, 2) | id), data = dd)
Naive prediction and plot:
dd$pred1 <- predict(m1,re.form=NA)
library(ggplot2)
p11 <- (ggplot(data = dd, aes(x = x, y = y))
+ geom_point() + geom_smooth(method="lm",formula=y~poly(x,2))
)
p11 + geom_line(aes(y=pred1,group = id) ,col="red", lwd = 0.5)
Now generate a balanced data set. This version generates 51 evenly spaced points between the min and max - this will be useful if the original data are unevenly spaced. If you have NA values in your x variable, don't forget na.rm=TRUE ...
pframe <- with(dd, expand.grid(id = levels(id), x = seq(min(x), max(x), length.out = 51)))
Make predictions, and overlay them on the original plot:
pframe$pred1 <- predict(m1,newdata=pframe,re.form=NA)
p11 + geom_line(data=pframe,aes(y=pred1,group = id) ,col="red", lwd = 0.5)
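The sjPlot route mentioned in the tl;dr might look something like the sketch below (hedged: plot_model() with type = "pred" builds the prediction grid and confidence band for you via ggeffects, but argument details can vary between package versions):
library(sjPlot)

# Population-level predictions for x (with confidence band), overlaid on the raw data;
# sjPlot/ggeffects construct the evenly spaced prediction grid internally.
plot_model(m1, type = "pred", terms = "x", show.data = TRUE)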

Different slope in 'regression' between ggplot (using geom_smooth(method = "lm")) and the lm() function

I am using a data-set (Panel).
With this data-set I conduct the following:
1)
ols <- lm(CapNormChange ~ Policychanges, data = Panel)
summary(ols)
plot(Panel$CapNormChange, Panel$Policychanges,
pch=19, xlab="CapNormChange", ylab="Policychanges")
abline(lm(Panel$CapNormChange~Panel$Policychanges),lwd=3, col="blue")
and 2)
p2 <- ggplot(data = Panel, mapping = aes(x = CapNormChange, y = Policychanges))
p2 + geom_point(alpha=0.3) + geom_smooth(method = "lm", se=F, color="orange")
I thought that the slopes of the lines from geom_smooth() and from abline() in the first plot would be the same, and would also correspond to the coefficient on Policychanges in the OLS regression.
However, this is not the case! Instead, the ggplot line has a higher intercept (I tested it with a different dataset). I really don't understand this; could somebody please give some advice?
In 1) you use CapNormChange as y-variable and Policychanges as x-variable. It's always y ~ x. This doesn't match what you do in the plot command. In 2) you do it the other way around.
OLS regression assumes that only y-values have associated errors. Thus, swapping x and y changes the fit. If you want the same results from both, you'd need orthogonal regression.
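To see the difference concretely, here is a hedged sketch (assuming your Panel data frame with the columns CapNormChange and Policychanges): the two OLS fits with x and y swapped generally give different lines, and an orthogonal (total least squares) fit via the first principal component gives yet another slope.
# OLS is not symmetric in x and y: these two slopes generally differ.
fit_yx <- lm(Policychanges ~ CapNormChange, data = Panel)  # y ~ x as in the ggplot call
fit_xy <- lm(CapNormChange ~ Policychanges, data = Panel)  # swapped, as in ols/abline()
coef(fit_yx)[2]
1 / coef(fit_xy)[2]   # only equal to the slope above if the points lie exactly on a line

# Orthogonal (total least squares) slope from the first principal component:
pc <- prcomp(Panel[, c("CapNormChange", "Policychanges")])
pc$rotation[2, 1] / pc$rotation[1, 1]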

Curve fitting "best fit in 3d " with matlab or R

I have a problem with fitting a curve to a 3D point set (or point cloud) in space. When I look at curve fitting tools, they mostly create a surface when given a point set [x,y,z]. But that is not what I want: I would like to fit a curve to the point set, not a surface.
So please tell me what the best solution is for curve fitting in space (3D).
In particular, my data look like a polynomial curve in 3D.
The equation is
z ~ ax^2 + bxy + cy^2 + d
and there are no pre-estimated coefficients [a,b,c,d].
Thanks.
xyz <- read.table( text="x y z
518315,750 4328698,260 101,139
518315,429 4328699,830 101,120
518315,570 4328700,659 101,139
518315,350 4328702,050 101,180
518315,389 4328702,849 101,190
518315,239 4328704,020 101,430", header=TRUE, dec=",")
sample image is here
With a bit of data we can now demonstrate a rather hackish effort in the direction you suggest, although this really is estimating a surface, despite your best efforts to convince us otherwise:
xyz <- read.table(text="x y z
518315,750 4328698,260 101,139
518315,429 4328699,830 101,120
518315,570 4328700,659 101,139
518315,350 4328702,050 101,180
518315,389 4328702,849 101,190
518315,239 4328704,020 101,430", header=TRUE, dec=",")
lm( z ~ I(x^2)+I(x*y) + I(y^2), data=xyz)
#---------------
Call:
lm(formula = z ~ I(x^2) + I(x * y) + I(y^2), data = xyz)
Coefficients:
(Intercept)       I(x^2)     I(x * y)       I(y^2)
 -1.182e+05   -3.187e-07    9.089e-08           NA
With these data the columns x^2, x*y and y^2 are numerically collinear (x and y vary only in their last few digits), so lm() cannot estimate the y^2 coefficient and reports NA for it. You can also use nls() to estimate parameters for non-linear surfaces.
I suppose that you want to fit a parametrized curve of this type:
r(t) = a + bt + ct^2
Therefore, you will have to do three independent fits:
x = ax + bx*t + cx*t^2
y = ay + by*t + cy*t^2
z = az + bz*t + cz*t^2
and obtain nine fitting parameters ax, ay, az, bx, by, bz, cx, cy, cz. Your data contain the positions x, y, z, and you also need to include a time variable t = 1, 2, 3, ..., N, assuming that the points are sampled at equal time intervals.
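In R, those three independent fits might look like the following sketch (assuming the small xyz table read in above, with the rows ordered in time so that t can simply be the row index):
# Parameter t: assume the rows are ordered and equally spaced in time.
xyz$t <- seq_len(nrow(xyz))

# Three independent quadratic fits, one per coordinate:
# x(t) = ax + bx*t + cx*t^2, and likewise for y(t) and z(t).
fit_x <- lm(x ~ t + I(t^2), data = xyz)
fit_y <- lm(y ~ t + I(t^2), data = xyz)
fit_z <- lm(z ~ t + I(t^2), data = xyz)

# Evaluate the fitted curve on a fine grid of t, e.g. for a 3D plot.
tt <- data.frame(t = seq(1, nrow(xyz), length.out = 100))
curve3d <- data.frame(x = predict(fit_x, tt),
                      y = predict(fit_y, tt),
                      z = predict(fit_z, tt))
head(curve3d)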
If the 'time' parameter of your data points is unknown/random, then I suppose that you will have to estimate it yourself as another fitting parameter, one per data point. So what I suggest is the following:
1) Assume some reasonable parameters a, b, c.
2) Write a function which calculates the time t_i of each data point by minimizing the squared distance between that point and the tentative curve r(t).
3) Calculate the sum of all (r(t_i) - R_i)^2 between the curve and your dataset R. This will be your fitting score, or figure of merit.
4) Use Matlab's genetic algorithm ga() routine to obtain optimal a, b, c which minimize the figure of merit defined above.
Good luck!

Predict Future values using polynomial regression in R

I was trying to predict future values of a sample using polynomial regression in R. The y values within the sample form a wave pattern.
For example
x = 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16
y= 1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4
But when the graph is plotted for future values, the resulting y values are completely different from what was expected. Instead of a wave pattern, I was getting a graph where the y values keep increasing.
futurY = 17,18,19,20,21,22
I tried different degrees of polynomial regression, but the predicted results for futurY were drastically different from what was expected.
The following is the sample R code which was used to get the results:
dfram <- data.frame('x'=c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16))
dfram$y <- c(1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4)
plot(dfram$x, dfram$y, type="l", lwd=3)
pred <- data.frame('x'=c(17,18,19,20,21,22))
myFit <- lm(y ~ poly(x,5), data=dfram)
newdata <- predict(myFit, pred)
print(newdata)
plot(pred[,1],data.frame(newdata)[,1],type="l",col="red", lwd=3)
Is this the correct technique to be used for predicting the unknown future y values OR should I be using other techniques like forecasting?
# Reproducing your data frame
dfram <- data.frame("x" = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16),
"y" = c(1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4))
From your graph I read off the phase and period of the signal by eye. There are better ways of estimating these automatically (one rough option is sketched after the next code block).
# Phase and period
fase = 1
per = 10
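One of those automatic options could be the raw periodogram: take the frequency with the most power and invert it. This is only a sketch; with just 16 samples the frequency resolution is very coarse, so treat the result as a starting guess to refine rather than an exact period.
# Rough automatic estimate of the dominant period (in samples) from the periodogram.
sp <- spec.pgram(dfram$y, taper = 0, plot = FALSE)
period_est <- 1 / sp$freq[which.max(sp$spec)]
period_est  # coarse estimate; refine by eye or with a finer-resolution method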
In the linear model function I've put the triangular signal equations.
fit <- lm(y ~ I((((trunc((x - fase)/(per/2)) %% 2) * 2) - 1) * ((x - fase) %% (per/2))) +
              I((((trunc((x - fase)/(per/2)) %% 2) * 2) - 1) * ((per/2) - ((x - fase) %% (per/2)))),
          data = dfram)
# Predict the old data
p_olddata <- predict(fit,type="response")
# Predict the new data
newdata <- data.frame('x'=c(17,18,19,20,21,22))
p_newdata <- predict(fit,newdata,type="response")
# Plotting old and new data
plot(x = c(dfram$x, newdata$x),
     y = c(p_olddata, p_newdata),
     col = c(rep("blue", length(p_olddata)), rep("green", length(p_newdata))),
     xlab = "x",
     ylab = "y")
lines(dfram)
Where the black line is the original signal, the blue circles are the prediction for the original points and the green circles are the prediction for the new data.
The graph shows a perfect fit for the model because there is no noise in the data. A real dataset will have noise, so the fit will not look as clean as this.
