I have some data and I plot it in R.
After that, I draw the loess fit for that data.
Here is the code:
data <- read.table("D:/data.csv", header=TRUE, sep=",", na.strings="NA", dec=".", strip.white=TRUE)
ur <- subset(data, select = c(users,responseTime))
ur <- ur[with(ur, order(users, responseTime)), ]
plot(ur, xlab="Users", ylab="Response Time (ms)")
lines(ur)
loess_fit <- loess(responseTime ~ users, ur)
lines(ur$users, predict(loess_fit), col = "blue")
Here's an image of my plot:
How can I get the equation of this regression?
For example: responseTime = 68 + 45 * users.
Thanks.
You can use the loess_fit object from your code to predict the response time. If you want to estimate the average response time for 230 users, you could do:
predict(loess_fit, newdata=data.frame(users=230))
Here is an interesting blog post on this subject.
EDIT: If you want to make predictions for values outside your data, you need a theory or further assumptions. The simplest assumption would be a linear fit:
lm_fit <- lm(responseTime ~ users, data=ur)
predict(lm_fit, newdata=data.frame(users=400))
However, your data may show heteroscedasticity (non-constant variance) and non-normal residuals; you might want to check whether that is the case. If it is, then a robust linear fitting procedure such as rlm from the MASS package, or a generalized linear model (glm), might be worth a try. I am not an expert on that; maybe someone else here or at Cross Validated can provide better help.
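For example, a minimal sketch of the robust route, assuming the ur data frame from the question:
# robust linear fit as an alternative to plain lm
library(MASS)
rlm_fit <- rlm(responseTime ~ users, data = ur)
predict(rlm_fit, newdata = data.frame(users = 400))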
The loess.demo function in the TeachingDemos package shows the logic underlying the loess fit, which can help you understand what is going on and why there is no simple prediction equation. For predicting, however, there is a predict method that works with loess fits. You can also find the linear equation that predicts for a specific value of x (but it will be different for each value of x you want to predict for).
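For instance, a quick sketch of running that demo on the data from the question (assuming the ur data frame is still in your workspace):
# interactively explore how loess builds its local fits
library(TeachingDemos)
loess.demo(ur$users, ur$responseTime)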
I have a dataframe which is a time series. I am using the function lm to build a multivariate regression model.
linearmodel <- lm(Y~X1+X2+X3, data = data)
I want to plot the residuals of this linearmodel on the y-axis and time on the x-axis using a simple function, with the lm() object as the input.
Standard residual-plotting functions like the one in the car package (car::residualPlot) put residuals on the y-axis and fitted values on the x-axis.
Ideally, I need the residuals on the y-axis and the timescale on the x-axis. But I understand that lm() is time-agnostic, so I can live with the residuals on the y-axis in the same order as the input data and nothing meaningful on the x-axis.
Is there a plotting function I can use by passing the linearmodel object into it (not something where I extract the residuals and use ggplot2)? For example: plot <- plotresidualsinorder(linearmodel) should give me the residuals on the y-axis in the same order as the input data.
I ultimately want to use this plot in R Shiny.
My research led me to the car package, which is wonderful in its own right, but it doesn't have a function that solves my problem.
Many thanks in advance for the help.
You can use the residual-plot approach. For the proposed solution, apply the lm function to a formula that describes your Y variable in terms of X1 + X2 + X3, and save the fitted model in a linearmodel variable. Then compute the residuals with the resid function. In your case, the following may be representative of your problem.
Proposed solution:
linearmodel <- lm(Y~X1+X2+X3, data = data)
lm_resid <- resid(linearmodel)
# residuals in the same order as the input data (index on the x-axis)
plot(seq_along(lm_resid), lm_resid,
     ylab="Residuals", xlab="Observation order",
     main="Data")
abline(0, 0)
For help on how the resid function works, you can try:
help(resid)
Calisto's solution will work, but there is a simpler and more straightforward option: the lm object already contains the regression residuals, so you can simply do:
plot(XTime, linearmodel$residuals, main = "Residuals")
XTime is the Date variable of your dataset; you may need to format it with the POSIX functions: https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/as.POSIX*
Add parameters as you need to use it in R Shiny.
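If you want the single-function interface you described, a hypothetical wrapper (the name plot_residuals_in_order is made up here, and the time argument defaults to input order) could look like:
# wrap the one-liner so it can be called from a Shiny app
plot_residuals_in_order <- function(fit, time = seq_along(resid(fit))) {
  plot(time, resid(fit), ylab = "Residuals", xlab = "Time", main = "Residuals")
  abline(0, 0)
}
plot_residuals_in_order(linearmodel)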
My prof decided that our first experience with coding would be trying to fit the function z(t) = A(1-e^(-t/T)) to a given data set from class using R. I'm completely lost. I keep using the lm and nls functions without quite knowing how they work. So far, I have the data graphed, but I have no clue how to get any sort of line more complicated than
mod3 <- lm(y ~ I(x^1/5))
pre3 <- predict(mod3)
lines(pre3)
To sum up: how do I find the A and T parameters? Do I use nls for the formula? Anything helps. I'll include a picture of the graph and the data; please ignore the random lines on the plot. [graph depicting my dataset] [dataset I have to use]
One could attempt to transform your expression into a linear relationship, but sometimes it is easier to just let the computer do the work. As mentioned in the comments, R has the nls function to perform nonlinear regression.
Here is an example using some dummy data. Supply the nls function with your equation, the data frame containing the data, and initial estimates of the parameters.
See comments for additional details.
#create dummy data
A <- 0.8
T1 <- 13
t <- seq(2, 50, 3)
z <- A * (1 - exp(-t / T1))
z <- z + rnorm(length(z), 0, 0.005)  #add noise

#put the data in a data frame
df <- data.frame(t, z)

#solve the non-linear model
model <- nls(z ~ A * (1 - exp(-t / Tc)), data = df, start = list(A = 1, Tc = 1))
print(summary(model))

#predict
pred_y <- predict(model, data.frame(t))

#plot
plot(x = t, y = z)
lines(y = pred_y, x = t, col = "blue")
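The estimated A and T are then available directly from the fitted model:
#extract the estimated parameters (A and Tc)
coef(model)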
I have been reading a tutorial from https://www.datanovia.com/en/lessons/anova-in-r/ on how to perform an ANOVA test in R. However, my question is about checking the normality of the distribution in general.
There is an option to make a QQ plot with the ggqqplot function, but I do not know how to call it correctly. From what I can see in the tutorial on datanovia, they use the residuals from the linear model:
# Build the linear model
model <- lm(weight ~ group, data = PlantGrowth)
# Create a QQ plot of residuals
ggqqplot(residuals(model))
Then I performed the same test this way:
ggqqplot(PlantGrowth, "weight")
I expected to see the same result; however, the results differ.
From the documentation of ggqqplot it is not clear to me which way of calling it is correct. Does someone have an explanation?
Thanks :D
You can do ggqqplot(x) as long as x is a vector of numeric values; the function takes a numeric vector and produces the QQ plot, e.g. ggqqplot(iris$Sepal.Length). Your two calls differ because residuals(model) contains the weights after the group means have been removed, while ggqqplot(PlantGrowth, "weight") plots the raw weight values pooled across groups, so the two QQ plots will not match.
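For the PlantGrowth case, a minimal sketch of both variants (ggqqplot comes from the ggpubr package):
# QQ plot of model residuals vs. QQ plot of the raw variable
library(ggpubr)
model <- lm(weight ~ group, data = PlantGrowth)
ggqqplot(residuals(model))    # weight after removing group means
ggqqplot(PlantGrowth$weight)  # raw weight values, groups pooled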
I'm working with a mgcv::gam model in R to generate predictions in which the relationship between time (year) and the outcome variable (out) varies. For example, in one scenario, I'd like to force time to affect the outcome variable in a linear manner, in another a marginally decreasing manner, and in another, I'd like to specify specific slopes of the time-outcome interaction. I'm unsure how to force the prediction to treat the interaction between time and the outcome variable in a specific manner:
res <- gam(out ~ s(time) + s(GEOID, bs='re'), data = df, method = "REML")
pred <- predict(res, newdata = ndf, type = "response", se.fit = TRUE)
There isn't an interaction between time and out here; time has a potentially non-linear effect on out.
Are we talking about trying to force certain shapes for the function of time? If so, you will need to estimate different models; use time if you want a linear effect:
res_lin <- gam(out ~ time + s(GEOID, bs='re'), data = df, method = "REML")
and look at shape-constrained P-splines to enforce monotonicity or concave/convex relationships.
The scam package has these sorts of constraints and uses mgcv with GCV smoothness selection to fit the shape constrained models.
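A minimal sketch of that idea; I am assuming bs = "mpi" (monotone increasing P-splines) here, and that scam accepts the mgcv-style random effect term unchanged, so verify both on your data:
# shape-constrained fit: monotone increasing effect of time
library(scam)
res_mono <- scam(out ~ s(time, bs = "mpi") + s(GEOID, bs = "re"), data = df)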
As for specifying a specific slope for the linear effect of time, I think you'll need to include time as an offset in the model. Say the slope you want is 0.5: I think you need + offset(I(0.5*time)), because an offset has, by definition, a coefficient of 1. I would double-check this, though, as I might have messed up my thinking here.
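Putting that together (same caveat as above, so verify that the offset behaves as intended):
# force a fixed slope of 0.5 for time via an offset (coefficient fixed at 1)
res_fixed <- gam(out ~ offset(I(0.5 * time)) + s(GEOID, bs = "re"),
                 data = df, method = "REML")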
I am using the lowess function to fit a regression between two variables x and y. Now I want to know the fitted value at a new value of x. For example, how do I find the fitted value at x = 2.5 in the following example? I know loess can do that, but I want to reproduce someone's plot, and he used lowess.
set.seed(1)
x <- 1:10
y <- x + rnorm(x)
fit <- lowess(x, y)
plot(x, y)
lines(fit)
Local regression (lowess) is a non-parametric statistical method; it's not like linear regression, where you can use the model directly to estimate new values.
You'll need to take the values from the function (that's why it only returns a list to you) and choose your own interpolation scheme, then use that scheme to predict your new points.
A common technique is spline interpolation (but there are others):
https://www.r-bloggers.com/interpolation-and-smoothing-functions-in-base-r/
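For the example in the question, a minimal sketch using base R's approx (linear) or spline interpolation over the lowess output:
# interpolate the lowess curve at a new point, x = 2.5
set.seed(1)
x <- 1:10
y <- x + rnorm(x)
fit <- lowess(x, y)
approx(fit$x, fit$y, xout = 2.5)$y  # linear interpolation
spline(fit$x, fit$y, xout = 2.5)$y  # spline interpolation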
EDIT: I'm pretty sure the predict function does the interpolation for you. However, I can't find any information about what exactly predict uses, so I've tried to trace the source code.
https://github.com/wch/r-source/blob/af7f52f70101960861e5d995d3a4bec010bc89e6/src/library/stats/R/loess.R
else { ## interpolate
## need to eliminate points outside original range - not in pred_
I'm sure the R code calls the underlying C implementation, but it's not well documented, so I don't know what algorithm it uses.
My suggestion: either trust the predict function or roll your own interpolation algorithm.