Is it possible to specify a lower bound on the response variable during smoothing with gam? - r

I am trying to fit a smoothed surface of z against x and y using formula z ~ s(x, y) with gam function
in mgcv package. My goal is to predict response z based on new values of x and y.
In my real situation, z should be positive; a negative z would be meaningless. However, the predicted values of z
are sometimes negative. It seems that for some regions there are not enough points in the training data to estimate z
accurately.
My question is: is there a way to specify a lower bound on z during smoothing in gam so that I won't later get negative values of z from predict?
Below is a minimal example that reproduces this issue.
library(mgcv)
x <- seq(0.1, 1, by = 0.01)
y <- seq(0.1, 1, by = 0.01)
dtt <- expand.grid(x = x, y = y)
set.seed(123)
dtt$xp <- dtt$x + rnorm(nrow(dtt)) / 100
dtt$yp <- dtt$y + rnorm(nrow(dtt)) / 100
dtt$z <- 1 / (dtt$xp^2 + dtt$yp^2)
m <- sample.int(nrow(dtt), 3000)
dtt.train <- dtt[m, ]
dtt.test <- dtt[!(1:nrow(dtt) %in% m), ]
fit <- gam(z ~ s(x, y), data = dtt.train)
p <- predict(fit, newdata = dtt.test)
plot(dtt.test$z, p, xlab = 'Real', ylab = 'Predicted', pch = 19, col = 1 + (p < 0))
abline(h = 0, v = 0)
As you can see, for the red points the real values are positive but the predicted values are negative.
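One approach that is often suggested (a sketch of my own, not part of the original question) is to enforce positivity through the model's family and link rather than a hard bound, for example a Gamma family with a log link, so that predictions on the response scale are positive by construction:
# Sketch, assuming a log-link Gamma family is acceptable for z
fit.pos <- gam(z ~ s(x, y), data = dtt.train, family = Gamma(link = "log"))
p.pos <- predict(fit.pos, newdata = dtt.test, type = "response")
range(p.pos)  # strictly positive by construction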

Related

How to fit Gaussian distribution with one-sided data?

x <- c(-3,-2.5,-2,-1.5,-1,-0.5)
y <- c(2,2.5,2.6,2.9,3.2,3.3)
The challenge is that all of the data come from the left slope; how can I generate a two-sided Gaussian distribution?
The question gives incomplete information, so several approaches are possible. Note that the data are insufficient; for example, trying to fit this with nls does not work.
Here is one way to tackle it:
# Sum of squared errors between y and a scaled normal density; par = (mean, sd, scale)
f <- function(par, x, y) sum((y - par[3] * dnorm(x, par[1], par[2]))^2)
a <- optim(c(0, 1, 1), f, x = x, y = y)$par
plot(x, y, xlim = c(-3, 3.5), ylim = c(2, 3.5))
curve(dnorm(x, a[1], a[2]) * a[3], add = TRUE, col = 2)
There is no way to fit a Gaussian distribution to these values, since they are not valid densities. If correct y-values had been provided, this would be one way of solving the problem:
# Define function to be optimized
f <- function(pars, x, y){
    mu <- pars[1]
    sigma <- pars[2]
    y_hat <- dnorm(x, mu, sigma)
    se <- (y - y_hat)^2
    sum(se)
}
# Define the data
x <- c(-3,-2.5,-2,-1.5,-1,-0.5)
y <- c(2,2.5,2.6,2.9,3.2,3.3)
# Find the best parameters (note: method must be passed by name, otherwise
# 'SANN' is matched to the gradient argument of optim)
opt <- optim(c(-.5, .1), f, method = 'SANN', x = x, y = y)
# Plot the fitted density over a fine grid and add the original points
plot(
    seq(-5, 5, length.out = 200),
    dnorm(seq(-5, 5, length.out = 200), opt$par[1], opt$par[2]),
    type = 'l', col = 'red'
)
points(x, y)
Use nls to get a least squares fit of y to .lin.a * dnorm(x, b, c) where .lin.a, b and c are parameters to be estimated.
fm <- nls(y ~ cbind(a = dnorm(x, b, c)),
start = list(b = mean(x), c = sd(x)), algorithm = "plinear")
fm
giving:
Nonlinear regression model
model: y ~ cbind(a = dnorm(x, b, c))
data: parent.frame()
b c .lin.a
0.2629 3.2513 27.7287
residual sum-of-squares: 0.02822
Number of iterations to convergence: 7
Achieved convergence tolerance: 2.582e-07
The dnorm model (black curve) seems to fit the points, although even a straight line (blue line), involving only two parameters (intercept and slope) instead of three, isn't bad.
plot(y ~ x)
lines(fitted(fm) ~ x)
fm.lin <- lm(y ~ x)
abline(fm.lin, col = "blue")

Force GAM model fit to be monotonic and go through a fixed point (x0, y0) with R mgcv

I am trying to fit a GAM model to data under two constraints simultaneously: (1) the fit is monotonic (increasing), and (2) the fit goes through a fixed point, say, (x0, y0).
So far, I managed to have these two constraints work separately:
For (1), based on mgcv::pcls() documentation examples, by using mgcv::mono.con() to get linear constraints sufficient for monotonicity, and estimate model coefs via mgcv::pcls(), using the constraints.
For (2), based on this post, by setting the value of the spline at knot location x0 to 0 and adding an offset term to the model formula.
However, I struggle to combine these two constraints simultaneously. I guess the way to go is mgcv::pcls(), but I could work out neither (a) the similar trick of setting the spline's value at knot location x0 to 0 plus an offset, nor (b) setting equality constraint(s) (which I think could encode my constraint (2)).
I also note that setting the value of the spline at knot location x0 to 0 for my constraint (2) yields a weirdly wiggly outcome (compared to the unconstrained GAM fit), as shown below.
Attempt so far: fit a smooth function to data under two constraints separately
Simulate some data
library(mgcv)
set.seed(1)
x <- sort(runif(100) * 4 - 1)
f <- exp(4*x)/(1+exp(4*x))
y <- f + rnorm(100) * 0.1
dat <- data.frame(x=x, y=y)
GAM unconstrained (for comparison)
k <- 13
fit0 <- gam(y ~ s(x, k = k, bs = "cr"), data = dat)
# predict from unconstrained GAM fit
newdata <- data.frame(x = seq(-1, 3, length.out = 1000))
newdata$y_pred_fit0 <- predict(fit0, newdata = newdata)
GAM constrained: (1) the fit is monotonic (increasing)
k <- 13
# Show regular spline fit (and save fitted object)
f.ug <- gam(y~s(x,k=k,bs="cr"))
# explicitly construct smooth term's design matrix
sm <- smoothCon(s(x,k=k,bs="cr"),dat,knots=NULL)[[1]]
# find linear constraints sufficient for monotonicity of a cubic regression spline
# it assumes "cr" is the basis and its knots are provided as input
F <- mono.con(sm$xp)
G <- list(
    X = sm$X,
    C = matrix(0, 0, 0),  # [0 x 0] matrix (no equality constraints)
    sp = f.ug$sp,         # smoothing parameter estimates (taken from unconstrained model)
    p = sm$xp,            # array of feasible initial parameter estimates
    y = y,
    w = dat$y * 0 + 1     # weights for data
)
G$Ain <- F$A # matrix for the inequality constraints
G$bin <- F$b # vector for the inequality constraints
G$S <- sm$S # list of penalty matrices; The first parameter it penalizes is given by off[i]+1
G$off <- 0 # Offset values locating the elements of M$S in the correct location within each penalty coefficient matrix. (Zero offset implies starting in first location)
p <- pcls(G); # fit spline (using smoothing parameter estimates from unconstrained fit)
# predict
newdata$y_pred_fit2 <- Predict.matrix(sm, data.frame(x = newdata$x)) %*% p
# plot
plot(y ~ x, data = dat)
lines(y_pred_fit0 ~ x, data = newdata, col = 2, lwd = 2)
lines(y_pred_fit2 ~ x, data = newdata, col = 4, lwd = 2)
Blue line: constrained; red line: unconstrained
GAM constrained: (2) the fit goes through (x0, y0) = (-1, -0.1)
k <- 13
## Create a spline basis and penalty
## Make sure there is a knot at the constraint point (here: -1)
knots <- data.frame(x = seq(-1,3,length=k))
# explicit construction of a smooth term in a GAM
sm <- smoothCon(s(x,k=k,bs="cr"), dat, knots=knots)[[1]]
## 1st parameter is value of spline at knot location -1, set it to 0 by dropping
knot_which <- which(knots$x == -1)
X <- sm$X[, -knot_which] ## spline basis
S <- sm$S[[1]][-knot_which, -knot_which] ## spline penalty
off <- dat$y * 0 + (-0.1) ## offset term to force curve through (x0, y0)
## fit spline constrained through (x0, y0)
gam_1 <- gam(y ~ X - 1 + offset(off), paraPen = list(X = list(S)))
# predict (add offset of -0.1)
newdata_tmp <- Predict.matrix(sm, data.frame(x = newdata$x))
newdata_tmp <- newdata_tmp[, -knot_which]
newdata$y_pred_fit1 <- (newdata_tmp %*% coef(gam_1))[, 1] + (-0.1)
# plot
plot(y ~ x, data = dat)
lines(y_pred_fit0 ~ x, data = newdata, col = 2, lwd = 2)
lines(y_pred_fit1 ~ x, data = newdata, col = 3, lwd = 2)
# lines whose intersection the fit should pass through
abline(v=-1, col = 3); abline(h=-0.1, col = 3)
Green line: constrained; red line: unconstrained
I think you could augment the data vectors x and y with (x0, y0) and then put a (really) high weight on the first observation (i.e. add a weight vector to your G list).
Alternatively to the simple weighting strategy, we can write out the quadratic programming problem starting from the results of the preliminary smoothing. This is illustrated in the second R code example below (in this case I used P-spline smoothers, see Eilers and Marx 1996).
Hope this helps a bit (a similar problem is discussed here).
R code example 1 (weight strategy)
set.seed(123)
N = 100
x <- sort(runif(N) * 4 - 1)
f <- exp(4*x)/(1+exp(4*x))
y <- f + rnorm(N) * 0.1
x = c(-1, x)
y = c(-0.1, y)
dat = data.frame(x = x, y= y)
k <- 13
fit0 <- gam(y ~ s(x, k = k, bs = "cr"), data = dat)
# predict from unconstrained GAM fit
newdata <- data.frame(x = seq(-1, 3, length.out = 1000))
newdata$y_pred_fit0 <- predict(fit0, newdata = newdata)
k <- 13
# Show regular spline fit (and save fitted object)
f.ug <- gam(y~s(x,k=k,bs="cr"))
# explicitly construct smooth term's design matrix
sm <- smoothCon(s(x,k=k,bs="cr"),dat,knots=NULL)[[1]]
# find linear constraints sufficient for monotonicity of a cubic regression spline
# it assumes "cr" is the basis and its knots are provided as input
F <- mono.con(sm$xp)
G <- list(
    X = sm$X,
    C = matrix(0, 0, 0),     # [0 x 0] matrix (no equality constraints)
    sp = f.ug$sp,            # smoothing parameter estimates (taken from unconstrained model)
    p = sm$xp,               # array of feasible initial parameter estimates
    y = y,
    w = c(1e8, 1:N * 0 + 1)  # weights: very high weight on the added (x0, y0) point, unit weights elsewhere
)
G$Ain <- F$A # matrix for the inequality constraints
G$bin <- F$b # vector for the inequality constraints
G$S <- sm$S # list of penalty matrices; The first parameter it penalizes is given by off[i]+1
G$off <- 0 # Offset values locating the elements of M$S in the correct location within each penalty coefficient matrix. (Zero offset implies starting in first location)
p <- pcls(G); # fit spline (using smoothing parameter estimates from unconstrained fit)
# predict
newdata$y_pred_fit2 <- Predict.matrix(sm, data.frame(x = newdata$x)) %*% p
# plot
plot(y ~ x, data = dat)
lines(y_pred_fit0 ~ x, data = newdata, col = 2, lwd = 2)
lines(y_pred_fit2 ~ x, data = newdata, col = 4, lwd = 2)
abline(v = -1)
abline(h = -0.1)
R code example 2 (quadratic programming strategy)
rm(list = ls())
library(mgcv)
library(pracma)    # for lsqlincon()
# library(colorout)  # optional console colouring; not needed for this example
set.seed(123)
N = 100
x = sort(runif(N) * 4 - 1)
f = exp(4*x)/(1+exp(4*x))
y = f + rnorm(N) * 0.1
x0 = -1
y0 = -0.1
dat = data.frame(x = x, y= y)
k = 50
# Show regular spline fit (and save fitted object)
f.ug = gam(y~s(x,k=k,bs="ps"))
# explicitly construct smooth term's design matrix
sm = smoothCon(s(x,k=k,bs="ps"), dat,knots=NULL)[[1]]
# Build quadprog to estimate the coefficients
scf = sapply(f.ug$smooth, '[[', 'S.scale')
lam = f.ug$sp / scf
Xp = rbind(sm$X, sqrt(lam) * f.ug$smooth[[1]]$D)
yp = c(dat$y, rep(0, k - 2))
X0 = Predict.matrix(sm, data.frame(x = x0))
sm$deriv = 1
X1 = Predict.matrix(sm, data.frame(x = dat$x))
coef_mono = pracma::lsqlincon(Xp, yp, Aeq = X0, beq = y0, A = -X1, b = rep(0, N))
# fitted values
fit = sm$X %*% coef_mono
sm$deriv = 0
xf = seq(-1, 3, len = 1000)
Xf = Predict.matrix(sm, data.frame(x = xf))
fine_fit = Xf %*% coef_mono
# plot
par(mfrow = c(2, 1), mar = c(3,3,3,3))
plot(dat$x, dat$y, pch = 1, main= 'Data and fit')
lines(dat$x, f.ug$fitted, lwd = 2, col = 2)
lines(dat$x, fit, col = 4, lty = 1, lwd = 2)
lines(xf, fine_fit, col = 3, lwd = 2, lty = 2)
abline(h = -0.1)
abline(v = -1)
plot(dat$x, X1 %*% coef_mono, type = 'l', main = 'Derivative of the fit', lwd = 2)
abline(h = 0.0)
The following package seems to implement what you are looking for:
The proposed shape constrained smoothing has been incorporated into generalized
additive models with a mixture of unconstrained and shape restricted smooth terms
(mono-GAM). [...]
The proposed modelling approach has been implemented in an R package monogam.
The model setup is the same as in mgcv(gam) with the addition of shape constrained
smooths. In order to be consistent with the unconstrained GAM, the package provides
key functions similar to those associated with mgcv(gam).
Additive models with shape constraints
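For reference, a minimal sketch of my own (assuming the shape-constrained smooths described above are the ones distributed in the scam package on CRAN): a monotone increasing fit to the simulated data from earlier; the fixed-point constraint would still need one of the tricks discussed above.
library(scam)
# assumes dat and newdata from the code above; bs = "mpi" requests a
# monotone increasing P-spline smooth
fit.mono <- scam(y ~ s(x, k = 13, bs = "mpi"), data = dat)
newdata$y_pred_scam <- predict(fit.mono, newdata = newdata)
lines(y_pred_scam ~ x, data = newdata, col = 6, lwd = 2)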

Difference between two geom_smooth() lines

I made a plot of my data, and now I would like to have the difference in y for every x that was estimated by geom_smooth(). There is a similar question which unfortunately has no answer. For example, how do I get the differences for the following plot (data below)?
EDIT
Two suggestions were made but I still don't know how to calculate the differences.
First suggestion was to access the data from the ggplot object. I did so with
pb <- ggplot_build(p)
pb[["data"]][[1]]
That approach kind of works, but the two groups don't share the same x values. For example, the first x value of the first group is -3.21318853, but there is no x of -3.21318853 for the second group; hence, I cannot calculate the difference in y at -3.21318853 between the two groups.
Second suggestion was to see what formula is used in geom_smooth(). The package description says that "loess() is used for less than 1,000 observations; otherwise mgcv::gam() is used with formula = y ~ s(x, bs = "cs")". My N is more than 60,000, hence gam is used by default. I am not familiar with gam; can anyone give a short answer on how to calculate the difference between the two lines, given what is described above?
R Code
library("ggplot2") # library ggplot
set.seed(1) # make example reproducible
n <- 5000 # set sample size
df <- data.frame(x= rnorm(n), g= factor(rep(c(0,1), n/2))) # generate data
df$y <- NA # include y in df
df$y[df$g== 0] <- df$x[df$g== 0]**2 + rnorm(sum(df$g== 0))*5 # y for group g= 0
df$y[df$g== 1] <-2 + df$x[df$g== 1]**2 + rnorm(sum(df$g== 1))*5 # y for g= 1 (with intercept 2)
ggplot(df, aes(x, y, col= g)) + geom_smooth() + geom_point(alpha= .1) # make a plot
Hi, and welcome to Stack Overflow.
The first suggestion is good. To make the x-sequences match, you can interpolate the values in between using the approx function (in stats).
library("ggplot2") # library ggplot
set.seed(1) # make example reproducible
n <- 5000 # set sample size
df <- data.frame(x= rnorm(n), g= factor(rep(c(0,1), n/2))) # generate data
df$y <- NA # include y in df
df$y[df$g== 0] <- df$x[df$g== 0]**2 + rnorm(sum(df$g== 0))*5 # y for group g= 0
df$y[df$g== 1] <-2 + df$x[df$g== 1]**2 + rnorm(sum(df$g== 1))*5 # y for g= 1 (with intercept 2)
p <- ggplot(df, aes(x, y, col= g)) + geom_smooth() + geom_point(alpha= .1) # make a plot
pb <- ggplot_build(p) # Get computed data
data.of.g1 <- pb[['data']][[1]][pb[['data']][[1]]$group == 1, ] # Extract info for group 1
data.of.g2 <- pb[['data']][[1]][pb[['data']][[1]]$group == 2, ] # Extract info for group 2
xlimit.inf <- max(min(data.of.g1$x), min(data.of.g2$x)) # Get the minimum X the two smoothed data have in common
xlimit.sup <- min(max(data.of.g1$x), max(data.of.g2$x)) # Get the maximum X
xseq <- seq(xlimit.inf, xlimit.sup, 0.01) # Sequence of X value (you can use bigger/smaller step size)
# Based on data from group 1 and group 2, interpolates linearly for all the values in `xseq`
y.g1 <- approx(x = data.of.g1$x, y = data.of.g1$y, xout = xseq)
y.g2 <- approx(x = data.of.g2$x, y = data.of.g2$y, xout = xseq)
difference <- data.frame(x = xseq, dy = abs(y.g1$y - y.g2$y)) # Compute the difference
ggplot(difference, aes(x = x, y = dy)) + geom_line() # Make the plot
Output: a line plot of the absolute difference dy against x.
As I mentioned in the comments above, you really are better off doing this outside of ggplot and instead do it with a full model of the two smooths from which you can compute uncertainties on the difference, etc.
This is basically a short version of a blog post that I wrote a year or so back.
OP's example data
set.seed(1) # make example reproducible
n <- 5000 # set sample size
df <- data.frame(x= rnorm(n), g= factor(rep(c(0,1), n/2))) # generate data
df$y <- NA # include y in df
df$y[df$g== 0] <- df$x[df$g== 0]**2 + rnorm(sum(df$g== 0))*5 # y for group g= 0
df$y[df$g== 1] <-2 + df$x[df$g== 1]**2 + rnorm(sum(df$g== 1))*5 # y for g= 1 (with intercept 2)
Start by fitting the model for the example data:
library("mgcv")
m <- gam(y ~ g + s(x, by = g), data = df, method = "REML")
Here I'm fitting a GAM with a factor-smooth interaction (the by bit). For this model we also need to include g as a parametric effect: the group-specific smooths are both centred about 0, so the group means have to enter through the parametric part of the model.
Next we need a grid of data along the x variable at which we will estimate the difference between the two estimated smooths:
pdat <- with(df, expand.grid(x = seq(min(x), max(x), length = 200),
g = c(0,1)))
pdat <- transform(pdat, g = factor(g))
then we use this prediction data to generate the Xp matrix, which is a matrix that maps values of the covariates to values of the basis expansion for the smooths; we can manipulate this matrix to get the difference smooth that we want:
xp <- predict(m, newdata = pdat, type = "lpmatrix")
Next, some code to identify which rows and columns in xp belong to the smooths for the respective levels of g. As there are only two levels and only a single smooth term in the model, this is entirely trivial, but for more complex models it is needed, and it is important to get the smooth component names right for the grepl() bits to work.
## which cols of xp relate to splines of interest?
c1 <- grepl('g0', colnames(xp))
c2 <- grepl('g1', colnames(xp))
## which rows of xp relate to sites of interest?
r1 <- with(pdat, g == 0)
r2 <- with(pdat, g == 1)
Now we can difference the rows of xp for the pair of levels we are comparing
## difference rows of xp for data from comparison
X <- xp[r1, ] - xp[r2, ]
As we focus on the difference, we need to zero out all the columns not associated with the selected pair of smooths, which includes any parametric terms.
## zero out cols of X related to splines for other levels
X[, ! (c1 | c2)] <- 0
## zero out the parametric cols
X[, !grepl('^s\\(', colnames(xp))] <- 0
(In this example, these two lines do exactly the same thing, but in more complex examples both are needed.)
Now we have a matrix X which contains the difference between the two basis expansions for the pair of smooths we're interested in, but to get this in terms of fitted values of the response y we need to multiply this matrix by the vector of coefficients:
## difference between smooths
dif <- X %*% coef(m)
Now dif contains the difference between the two smooths.
We can use X again, together with the covariance matrix of the model coefficients, to compute the standard error of this difference and thence a 95% (in this case) confidence interval for the estimated difference.
## se of difference
se <- sqrt(rowSums((X %*% vcov(m)) * X))
## confidence interval on difference
crit <- qt(.975, df.residual(m))
upr <- dif + (crit * se)
lwr <- dif - (crit * se)
Note that with the vcov() call here we're using the empirical Bayes covariance matrix, not the one corrected for having chosen the smoothness parameters. The function I show shortly allows you to account for this additional uncertainty via the argument unconditional = TRUE.
Finally we gather the results and plot:
res <- data.frame(x = with(df, seq(min(x), max(x), length = 200)),
dif = dif, upr = upr, lwr = lwr)
ggplot(res, aes(x = x, y = dif)) +
geom_ribbon(aes(ymin = lwr, ymax = upr, x = x), alpha = 0.2) +
geom_line()
This produces a plot of the estimated difference with its confidence ribbon, which is consistent with an assessment showing that the model with the group-level smooths doesn't provide a substantially better fit than a model with different group means but only a single common smoother in x:
r$> m0 <- gam(y ~ g + s(x), data = df, method = "REML")
r$> AIC(m0, m)
df AIC
m0 9.68355 30277.93
m 14.70675 30285.02
r$> anova(m0, m, test = 'F')
Analysis of Deviance Table
Model 1: y ~ g + s(x)
Model 2: y ~ g + s(x, by = g)
Resid. Df Resid. Dev Df Deviance F Pr(>F)
1 4990.1 124372
2 4983.9 124298 6.1762 73.591 0.4781 0.8301
Wrapping up
The blog post I mentioned has a function which wraps the steps above into a simple function, smooth_diff():
smooth_diff <- function(model, newdata, f1, f2, var, alpha = 0.05,
                        unconditional = FALSE) {
    xp <- predict(model, newdata = newdata, type = 'lpmatrix')
    c1 <- grepl(f1, colnames(xp))
    c2 <- grepl(f2, colnames(xp))
    r1 <- newdata[[var]] == f1
    r2 <- newdata[[var]] == f2
    ## difference rows of xp for data from comparison
    X <- xp[r1, ] - xp[r2, ]
    ## zero out cols of X related to splines for other levels
    X[, ! (c1 | c2)] <- 0
    ## zero out the parametric cols
    X[, !grepl('^s\\(', colnames(xp))] <- 0
    dif <- X %*% coef(model)
    se <- sqrt(rowSums((X %*% vcov(model, unconditional = unconditional)) * X))
    crit <- qt(alpha/2, df.residual(model), lower.tail = FALSE)
    upr <- dif + (crit * se)
    lwr <- dif - (crit * se)
    data.frame(pair = paste(f1, f2, sep = '-'),
               diff = dif,
               se = se,
               upper = upr,
               lower = lwr)
}
Using this function we can repeat the entire analysis and plot the difference with:
out <- smooth_diff(m, pdat, '0', '1', 'g')
out <- cbind(x = with(df, seq(min(x), max(x), length = 200)),
out)
ggplot(out, aes(x = x, y = diff)) +
geom_ribbon(aes(ymin = lower, ymax = upper, x = x), alpha = 0.2) +
geom_line()
I won't show the plot here as it is identical to that shown above except for the axis labels.

I would like to visualize the third order interaction fitted with thin-plate regression splines

I am a beginner with R, so this may be a simple question.
I am trying to fit 4-dimensional points using thin-plate regression splines. One variable is the target variable and the other three are explanatory variables.
I made a model with a third-order interaction and fitted it to the data.
library(mgcv)
dat <- read.csv('../data//data.csv')
model <- gam(Y ~ s(x1, x2, x3), data=dat)
For a given value of x3, I want to visualize a three-dimensional graph of the spline surface or a contour plot of the estimates, but how do I do it?
It will be very helpful if you can answer.
Thanks.
This is the sample data.
n = 100
x1 <- runif(n, min = 0, max = 100)
x2 <- runif(n, min = 0, max = 100)
x3 <- runif(n, min = 0, max = 100)
Y = numeric(n)
for(i in 1:n){
Y[i] <- x1[i]**0.5*x2[i]**2*x3[i]/10000
}
dat = data.frame(Y=Y, x1=x1, x2=x2, x3=x3)
I fit a thin-plate regression spline using this dat.
model <- gam(Y ~ s(x1, x2, x3, k= 50), data=dat)
Then, I would like to obtain the fitted three-dimensional thin-plate regression spline surface, or a contour plot of the spline estimate when x3 = 25, for example.
To make a contour plot, you can use contour(x, y, z, ...). Here z is your data matrix (in your case Y[x1, x2, ]), and x and y are index vectors from 0 to 1 with lengths nrow(Y[x1, x2, ]) and ncol(Y[x1, x2, ]), respectively.
You should be able to use it similarly to:
contour(x = seq(0, 1, length.out = length(x1)), y = seq(0, 1, length.out = length(x2)), z = Y[x1, x2, ])
I found a solution with reference to the answer of d0d0.
n <- 100
const <- 25   # fixed value of x3
x <- y <- seq(0, n, 1)
# predict from the fitted gam over a grid of (x1, x2), holding x3 at const
f <- function(x, y){
    dtmp <- data.frame(x1 = x, x2 = y, x3 = const)
    predict(model, dtmp)
}
z <- outer(x, y, f)
contour(x, y, z)
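As a possible alternative (my own suggestion, not part of the original answers), mgcv's built-in vis.gam() can plot the fitted smooth over two covariates while holding the third fixed via its cond argument:
# assumes `model` is the gam fit from above; hold x3 at 25
vis.gam(model, view = c("x1", "x2"), cond = list(x3 = 25),
        plot.type = "contour", color = "terrain")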

Drawing a regression surface with an interaction in a 3D figure in R

Using car::scatter3d(), I am trying to create a 3D figure with a regression surface indicating an interaction between a categorical and a continuous variable. Partly following the code here, I obtained the figure below.
The figure is obviously wrong in that the regression surface does not reach one of the values of the categorical variable. The problem perhaps lies in the use of rgl::persp3d() (the last block of the code below), but I have not been able to identify what exactly I am doing wrong. Could someone let me know what I'm missing and how to fix the problem?
library(rgl)
library(car)
n <- 100
set.seed(1)
x <- runif(n, 0, 10)
set.seed(1)
z <- sample(c(0, 1), n, replace = TRUE)
set.seed(1)
y <- 0.5 * x + 0.1 * z + 0.3 * x * z + rnorm(n, sd = 1.5)
d <- data.frame(x, z, y)
scatter3d(y ~ x + z, data = d,
xlab = "continuous", zlab = "categorical", ylab = "outcome",
residuals = FALSE, surface = FALSE
)
d2 <- d
d2$x <- d$x / (max(d$x) - min(d$x))
d2$y <- d$y / (max(d$y) - min(d$y))
mod <- lm(y ~ x * z, data = d2)
grd <- expand.grid(x = unique(d2$x), z = unique(d2$z))
grd$pred <- predict(mod, newdata = grd)
grd <- grd[order(grd$z, grd$x), ]
# The problem is likely to lie somewhere below.
persp3d(x = unique(grd$x), y = unique(grd$z),
z = matrix(grd$pred, length(unique(grd$z)), length(unique(grd$x))),
alpha = 0.5,
col = "blue",
add = TRUE,
xlab = "", ylab = "", zlab = ""
)
I prefer sticking to car::scatter3d() in drawing the original graph because I already made several figures with car::scatter3d() and want to make this figure consistent with them as well.
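One thing that may be worth checking (my own guess, not confirmed in this thread): rgl::persp3d(x, y, z) expects z to be a matrix with dim c(length(x), length(y)), whereas the code above builds a length(unique(grd$z)) by length(unique(grd$x)) matrix, i.e. the transpose. A hedged sketch of the corrected call:
# Sketch (my own diagnosis): fill z with nrow = number of x values so that
# dim(z) == c(length(x), length(y)); grd is already ordered by z then x
persp3d(x = unique(grd$x), y = unique(grd$z),
        z = matrix(grd$pred, length(unique(grd$x)), length(unique(grd$z))),
        alpha = 0.5, col = "blue", add = TRUE,
        xlab = "", ylab = "", zlab = "")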
