I have a question about simulating a conditional distribution.
Suppose
X ~ N(0,1)
Y | X ~ N(rX, 1-r^2)
I want to simulate the distribution of Y conditional on X.
Here r is the correlation, and it can be changed as needed.
The code for simulating X would be as follows:
sd.x <- 1
mean.x <- 0
z2 <- rnorm(1000)
x <- sd.x*z2 + mean.x
But I have no idea how to simulate the Y distribution.
I'd appreciate any help.
It seems you are in the case of a linear regression ...
You can write Y = rX + epsilon, where epsilon follows N(0, 1-r^2).
You can check that Y has the properties you are looking for ...
So, in R, to complete your code, something like this should be enough:
r <- 0.8
y <- r*x + rnorm(1000, mean = 0, sd = sqrt(1 - r^2))
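You can quickly check that the simulated y behaves as intended (a quick sanity check using the x and y above, not part of the original answer):
mean(y)    # close to 0
sd(y)      # close to 1
cor(x, y)  # close to r = 0.8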
Either use the mvrnorm function from the MASS package, like this:
library(MASS)
sample <- mvrnorm(1000, mu = c(0, 0), Sigma = matrix(c(1, r, r, 1), 2, 2))
Or, as a more general approach, simulate X and then simulate Y for each value of X:
sample <- data.frame(X = rnorm(1000))
sample$Y <- sapply(sample$X, function(x){
  rnorm(1, mean = r*x, sd = sqrt(1 - r^2))
})
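For the data-frame version you can confirm the conditional structure by regressing Y on X: the slope should be close to r and the residual standard deviation close to sqrt(1 - r^2) (a quick check, not from the original answer):
fit <- lm(Y ~ X, data = sample)
coef(fit)["X"]  # approximately r
sigma(fit)      # approximately sqrt(1 - r^2)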
How can I simulate data so that the coefficients recovered by lm are exactly particular pre-determined values and the residuals are normally distributed? For example, could I generate data so that lm(y ~ 1 + x) will yield (Intercept) = 1.500 and x = 4.000? I would like the solution to be versatile enough to work for multiple regression with continuous x (e.g., lm(y ~ 1 + x1 + x2)), but there are bonus points if it works for interactions as well (lm(y ~ 1 + x1 + x2 + x1*x2)). Also, it should work for small N (e.g., N < 200).
I know how to simulate random data generated by these parameters (see e.g. here), but that randomness carries over into variation in the estimated coefficients, e.g., Intercept = 1.488 and x = 4.067.
Related: it is possible to generate data that yields pre-determined correlation coefficients (see here and here). So I'm asking whether the same can be done for multiple regression.
One approach is to use perfectly symmetrical noise. The noise cancels itself out, so the estimated parameters are exactly the input parameters, yet the residuals still appear normally distributed.
x <- 1:100
y <- cbind(1, x) %*% c(1.5, 4)  # exact linear predictor: 1.5 + 4*x
eps <- rnorm(100)               # noise
x <- c(x, x)                    # each x value appears twice
y <- c(y + eps, y - eps)        # noise added once with each sign, so it cancels
fit <- lm(y ~ x)
coef(fit)
# (Intercept)           x
#         1.5         4.0
plot(fit)
Residuals are normally distributed...
... but exhibit an abnormally perfect symmetry!
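The symmetry can also be seen numerically: each residual has an exact mirror image, so paired residuals sum to (numerically) zero. A quick check using the objects above:
max(abs(residuals(fit)[1:100] + residuals(fit)[101:200]))
# essentially 0, up to floating-point noise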
EDIT by OP: I wrote up general-purpose code exploiting the symmetrical-residuals trick. It scales well to more complex models. This example also shows that it works for categorical predictors and interaction effects.
library(dplyr)
# Data and residuals
df = tibble(
  # Predictors
  x1 = 1:100,                    # continuous
  x2 = rep(c(0, 1), each = 50),  # dummy-coded categorical
  # Generate y from the model, including the interaction term
  y_model = 1.5 + 4 * x1 - 2.1 * x2 + 8.76543 * x1 * x2,
  noise = rnorm(100)             # residuals
)
# Do the symmetrical-residuals trick
# This is copy-and-paste ready, no matter model complexity.
df = bind_rows(
  df %>% mutate(y = y_model + noise),
  df %>% mutate(y = y_model - noise)  # mirrored
)
# Check that it works
fit <- lm(y ~ x1 + x2 + x1*x2, df)
coef(fit)
# (Intercept) x1 x2 x1:x2
# 1.50000 4.00000 -2.10000 8.76543
You could do rejection sampling:
set.seed(42)
tol <- 1e-8
x <- 1:100
continue <- TRUE
while (continue) {
  # keep drawing new noise until the fitted coefficients are within tolerance of the targets
  y <- cbind(1, x) %*% c(1.5, 4) + rnorm(length(x))
  if (sum((coef(lm(y ~ x)) - c(1.5, 4))^2) < tol) continue <- FALSE
}
coef(lm(y ~ x))
# (Intercept)           x
#    1.500013    4.000023
Obviously, this is a brute-force approach and the smaller the tolerance and the more complex the model, the longer this will take. A more efficient approach should be possible by providing residuals as input and then employing some matrix algebra to calculate y values. But that's more of a maths question ...
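For what it's worth, here is a sketch of the matrix-algebra route mentioned above (my own sketch, not from the answer): make the noise orthogonal to the design matrix by regressing it on the predictors and keeping only the residuals. OLS then recovers the input coefficients exactly, and the projected noise is still a linear combination of normal draws (though no longer exactly independent).
x <- 1:100
eps <- rnorm(100)
# project out the part of the noise lying in the column space of (1, x);
# the leftover is exactly orthogonal to the design matrix
eps_perp <- residuals(lm(eps ~ x))
y <- cbind(1, x) %*% c(1.5, 4) + eps_perp
coef(lm(y ~ x))
# (Intercept)           x
#         1.5         4.0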
I'm trying to figure out a way to find the minimum/maximum from a fitted quadratic model. In this case the minimum.
x.lm <- lm(Y ~ X + I(X^2))
Edit: To clarify, I can already find the minimum y through min(predict(x.lm)). How can I translate this to its corresponding x value?
Check this out. The idea is that you take the fitted values from the x.lm fit.
#example data
X <- 1:100
Y <- 1:100 + rnorm(n = 100, mean = 0, sd = 4)
x.lm <- lm(Y ~ X + I(X^2))
fits <- x.lm$fitted.values #getting fits, you can take residuals,
# and other parameters too
# I guess you are looking for this.
min.fit = min(fits)
max.fit = max(fits)
After the follow-up question in the edit:
df <- cbind(X, Y, fits)
df <- as.data.frame(df)
index <- which.min(df$fits)  # very useful command
row.in.df <- df[index,]
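As an aside (not part of the original answer), for a quadratic fit you can also get the x location of the extremum analytically from the coefficients, since b0 + b1*x + b2*x^2 has its vertex at x = -b1 / (2*b2). This is only meaningful if the vertex falls within the range of the observed X.
b <- coef(x.lm)                   # (Intercept), X, I(X^2)
x.vertex <- -b[2] / (2 * b[3])    # minimum if b[3] > 0, maximum if b[3] < 0
y.vertex <- b[1] + b[2]*x.vertex + b[3]*x.vertex^2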
I am attempting to take a linear model fitted to empirical data, e.g.:
set.seed(1)
x <- seq(from = 0, to = 1, by = .01)
y <- x + .25*rnorm(101)
model <- (lm(y ~ x))
summary(model)
# R^2 is .6208
Now, what I would like to do is use the predict function (or something similar) to create, from x, a vector of predicted values that shares the error of the original relationship between x and y. Using predict alone gives perfectly fitted values, so R^2 is 1, e.g.:
y2 <- predict(model)
summary(lm(y2 ~ x))
# R^2 is 1
I know that I can use predict(model, se.fit = TRUE) to get the standard errors of the prediction, but I haven't found an option to incorporate those into the prediction itself, nor do I know exactly how to incorporate these standard errors into the predicted values to give the correct amount of error.
Hopefully someone here can point me in the right direction!
How about simulate(model)?
set.seed(1)
x <- seq(from = 0, to = 1, by = .01)
y <- x + .25*rnorm(101)
model <- (lm(y ~ x))
y2 <- predict(model)
y3 <- simulate(model)
matplot(x,cbind(y,y2,y3),pch=1,col=1:3)
If you need to do it by hand you could use
y4 <- rnorm(nobs(model), mean = predict(model),
            sd = summary(model)$sigma)
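Either way, you can check that the simulated responses recover roughly the original amount of scatter instead of fitting perfectly (a quick check using the objects above):
summary(lm(y3[, 1] ~ x))$r.squared  # roughly 0.6, not 1
summary(lm(y4 ~ x))$r.squared       # likewise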
I would like to know how to simulate quantities of interest out of a regression model estimated using either the arm or the rstanarm packages in R. I am a newbie in Bayesian methods and R and have been using the Zelig package for some time. I asked a similar question before, but I would like to know if it is possible to simulate those quantities using the posterior distribution estimated by those packages.
In Zelig you can set the values you want for the independent variables and it calculates the results for the outcome variable (expected value, probability, etc.). An example:
# Creating a dataset:
set.seed(10)
x <- rnorm(100,20,10)
z <- rnorm(100,10,5)
e <- rnorm(100,0,1)
y <- 2*x+3*z+e
df <- data.frame(x,z,e,y)
# Loading Zelig
require(Zelig)
# Model
m1.zelig <- zelig(y ~ x + z, model="ls", data=df)
summary(m1.zelig)
# Simulating z = 10
s1 <- setx(m1.zelig, z = 10)
simulation <- sim(m1.zelig, x = s1)
summary(simulation)
So Zelig keeps x at its mean (20.56), and simulates the quantity of interest with z = 10. In this case, y is approximately 71.
The same model using arm:
# Model
require(arm)
m1.arm <- bayesglm(y ~ x + z, data=df)
summary(m1.arm)
And using rstanarm:
# Model
require(rstanarm)
m1.stan <- stan_glm(y ~ x + z, data=df)
print(m1.stan)
Is there any way to simulate z = 10 and x equals to its mean with the posterior distribution estimated by those two packages and get the expected value of y? Thank you very much!
In the case of bayesglm, you could do
sims <- arm::sim(m1.arm, n = 1000)
y_sim <- rnorm(n = 1000, mean = sims@coef %*% t(as.matrix(s1)), sd = sims@sigma)
mean(y_sim)
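If you would rather not rely on Zelig's setx object, you can also write the design row out by hand; a minimal sketch, assuming an intercept, x at its sample mean, and z = 10:
x_row <- c(1, mean(df$x), 10)  # intercept, x at its mean, z = 10
y_sim <- rnorm(n = 1000, mean = sims@coef %*% x_row, sd = sims@sigma)
mean(y_sim)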
For the (unreleased) rstanarm, it would be similar
sims <- as.matrix(m1.stan)
y_sim <- rnorm(n = nrow(sims), mean = sims[, 1:(ncol(sims)-1)] %*% t(as.matrix(s1)),
               sd = sims[, ncol(sims)])
mean(y_sim)
In general for Stan, you could pass s1 as a row_vector and utilize it in a generated quantities block of a .stan file like
generated quantities {
  real y_sim;
  y_sim = normal_rng(s1 * beta, sigma);
}
in which case the posterior distribution of y_sim would appear when you print the posterior summary.
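For completeness, in released versions of rstanarm you can also draw from the posterior predictive distribution directly with posterior_predict(), supplying the covariate values of interest as newdata (a sketch, assuming the m1.stan fit above):
new_data <- data.frame(x = mean(df$x), z = 10)
y_rep <- posterior_predict(m1.stan, newdata = new_data)
mean(y_rep)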
Say we have a linear model f1 that was fit to some x and y data points:
f1 <- lm(y ~ x,data=d)
How can I generate new y values at new x values (that are different from the old x values but are within the range of the old x values) using this f1 fit in R?
stats:::simulate.lm allows you to sample from a linear model fitted with lm. (In contrast to the approach of @Bulat, this uses an unbiased estimate of the residual variance.) To simulate at different values of the independent variable, you could hack around like this:
# simulate example data
x <- runif(20, 0, 100)
y <- 5*x + rnorm(20, 0, 10)
df <- data.frame(x, y)
# fit linear model
mod <- lm(y ~ x, data = df)
# new values of the independent variable
x_new <- 1:100
# replace the fitted values of the model object with predictions for the new data ("hack")
mod$fitted.values <- predict(mod, data.frame(x = x_new))
# simulate() samples appropriate noise and adds it to the model's `fitted.values`
y_new <- simulate(mod)[, 1]  # simulate() can return multiple samples (as columns); we only need one
# visualize original data ...
plot(df)
# ... alongside simulated data at new values of the independent variable (x)
points(x_new, y_new, col="red")
(original data in black, simulated in red)
I am looking at the same problem.
In simple terms, it can be done by sampling from the residuals:
mod <- lm(y ~ x, data = df)
x_new <- c(5) # value that you need to simulate for.
pred <- predict(mod, newdata=data.frame(x = x_new))
err <- sample(mod$residuals, 1)
y <- pred + err
There is also a simulate(fit, nsim = 10) function, but note that it simulates at the model's original x values rather than at new ones.
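If you need simulated values at several new x values, the same residual-sampling idea extends by drawing residuals with replacement (my own sketch, following the approach above):
x_new <- c(5, 25, 60)  # hypothetical new x values
pred <- predict(mod, newdata = data.frame(x = x_new))
err <- sample(mod$residuals, length(x_new), replace = TRUE)
y_new <- pred + err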
You can use predict for this:
x <- runif(20, 0, 100)
y <- 5*x + rnorm(20, 0, 10)
df <- data.frame(x, y)
df
plot(df)
mod <- lm(y ~ x, data = df)
x_new <- 1:100
pred <- predict(mod, newdata=data.frame(x = x_new))
plot(df)
points(x_new, pred)