Drawing a 3D decision boundary of logistic regression - r

I have fitted a logistic regression model that takes 3 variables into account. I would like to make a 3D plot of the datapoints and draw the decision boundary (which I suppose would be a plane here).
I found an online example that matches this case (so you can load the data directly):
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
mylogit <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial")
I was thinking of using the scatterplot3d package, but I am not sure what equation I should use to draw the boundary. Any ideas?
Many thanks,

The decision boundary will be a 3-d plane, which you could plot with any 3-d plotting package in R. At the 50% cutoff the log-odds are zero, so the boundary is the set of points where b0 + b1*x + b2*y + b3*z = 0; solving for z gives the plane. I'll use persp, defining an x-y grid and then calculating the corresponding z values with the outer function:
# Use iris dataset for example logistic regression
data(iris)
iris$long <- as.numeric(iris$Sepal.Length > 6)
mod <- glm(long~Sepal.Width+Petal.Length+Petal.Width, data=iris, family="binomial")
# Plot 50% decision boundary; another cutoff can be achieved by changing the intercept term
x <- seq(2, 5, by=.1)
y <- seq(1, 7, by=.1)
z <- outer(x, y, function(x, y)
  (-coef(mod)[1] - coef(mod)[2]*x - coef(mod)[3]*y) / coef(mod)[4])
persp(x, y, z, col="lightblue")
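The same idea applies directly to the model in the question. Here is a minimal sketch, assuming mydata and mylogit from above load and fit successfully; note that rank is treated as numeric here, exactly as in the original glm call, and the gre/gpa grid ranges are my assumptions about the data:
# solve b0 + b1*gre + b2*gpa + b3*rank = 0 for rank
gre <- seq(200, 800, length.out = 30)
gpa <- seq(2, 4, length.out = 30)
rank <- outer(gre, gpa, function(gre, gpa)
  (-coef(mylogit)[1] - coef(mylogit)[2]*gre - coef(mylogit)[3]*gpa) /
    coef(mylogit)[4])
persp(gre, gpa, rank, col = "lightblue", theta = 30)
# for a cutoff p other than 0.5, replace -coef(mylogit)[1]
# with qlogis(p) - coef(mylogit)[1]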

Related

How to change slope of a curve in R

I have two vectors:
x <- c(0,5,10,15,20,24,30,35,40,45,49,54,59,64,69,74,79,85,90,94,100)
y <- c(0,0,0,0,1,3,5,8,11,16,23,29,37,44,52,59,68,76,84,91,100)
plot(x,y)
I would like to be able to change the slope of the curve (Curve 1) by increasing or decreasing it by a given amount, say 50%, so that the new curve (Curve 2) has a similar shape but a shallower or steeper slope. I can create a model, but how can I change the slope based on the model?
fit <- loess(y ~ x, control=loess.control(surface="direct"))
xn <- seq(0, 100)
fitline <- predict(fit, newdata=xn)
Is there a function in R I can use? Thank you.
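One simple approach (my suggestion; the question doesn't define "slope" precisely for a nonlinear curve): rescale the fitted values about their starting value, which multiplies the local slope everywhere by the same factor:
fit <- loess(y ~ x, control = loess.control(surface = "direct"))
xn <- seq(0, 100)
fitline <- predict(fit, newdata = xn)
k <- 1.5  # 1.5 = 50% steeper; 0.5 = 50% shallower
curve2 <- fitline[1] + k * (fitline - fitline[1])
plot(x, y)
lines(xn, fitline, col = "blue")
lines(xn, curve2, col = "red")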

How to extrapolate loess result? [duplicate]

I am struggling with "out-of-sample" prediction using loess: I get NA values for new x values outside the original sample. Can I get predictions for these?
x <- c(24,36,48,60,84,120,180)
y <- c(3.94,4.03,4.29,4.30,4.63,4.86,5.02)
lo <- loess(y~x)
x.all <- seq(3, 200, 3)
predict(object = lo, newdata = x.all)
I need to model the full yield curve, i.e. interest rates for different maturities.
From the manual page of predict.loess:
When the fit was made using surface = "interpolate" (the default), predict.loess will not extrapolate – so points outside an axis-aligned hypercube enclosing the original data will have missing (NA) predictions and standard errors.
If you change the surface parameter to "direct" you can extrapolate values.
For instance, this will work (on a side note: after plotting the prediction, my feeling is that you should increase the span parameter in the loess call a little bit):
lo <- loess(y~x, control=loess.control(surface="direct"))
predict(lo, newdata=x.all)
In addition to nico's answer: I would suggest fitting a GAM (which uses penalized regression splines) instead. However, extrapolation is not advisable unless you have a model grounded in subject-matter knowledge.
x <- c(24,36,48,60,84,120,180)
y <- c(3.94,4.03,4.29,4.30,4.63,4.86,5.02)
lo <- loess(y~x, control=loess.control(surface = "direct"))
plot(x.all <- seq(3, 200, 3),
     predict(object = lo, newdata = x.all),
     type = "l", col = "blue")
points(x, y)
library(mgcv)
fit <- gam(y ~ s(x, bs = "cr", k = 7, fx = FALSE), data = data.frame(x, y))
summary(fit)
lines(x.all, predict(fit, newdata = data.frame(x = x.all)), col="green")

`gam` package: extra shift spotted when sketching data on `plot.gam`

I am trying to fit a GAM using the gam package (I know mgcv is more flexible, but I need to use gam here). The problem is that the model looks good, but compared with the original data it appears to be offset along the y-axis by a constant value, and I cannot figure out where this comes from.
This code reproduces the problem:
library(gam)
data(gam.data)
x <- gam.data$x
y <- gam.data$y
fit <- gam(y ~ s(x,6))
fit$coefficients
#(Intercept) s(x, 6)
# 1.921819 -2.318771
plot(fit, ylim = range(y))
points(x, y)
points(x, y - 1.921819, col=2)
legend("topright", pch=1, col=1:2, legend=c("Original", "Minus intercept"))
Chambers, J. M. and Hastie, T. J. (1993), Statistical Models in S (Chapman & Hall), shows that there should not be an offset, and this also makes intuitive sense (the smooth should describe the data).
I noticed something comparable in mgcv, which can be solved by passing the model's intercept as the shift parameter (because the smooth is apparently centred). I thought the same might be true here, so I subtracted the intercept from the original data points. However, the plot above shows this idea is wrong, and I don't know where the extra shift comes from. I hope someone here can help.
(R version 3.3.1; gam version 1.12)
I think I should first explain the various outputs in the fitted GAM model:
library(gam)
data(gam.data)
x <- gam.data$x
y <- gam.data$y
fit <- gam(y ~ s(x, 6), model = FALSE)
## coefficients for parametric part
## this includes intercept and null space of spline
beta <- coef(fit)
## null space of spline smooth (a linear term, just `x`)
nullspace <- fit$smooth.frame[,1]
nullspace - x ## all 0
## smooth space that are penalized
## note, the backfitting procedure guarantees that this is centred
pensmooth <- fit$smooth[,1]
sum(pensmooth) ## centred
# [1] 5.89806e-17
## estimated smooth function (null space + penalized space)
smooth <- nullspace * beta[2] + pensmooth
## centred smooth function (this is what `plot.gam` is going to plot)
c0 <- mean(smooth)
censmooth <- smooth - c0
## additive predictors (this is just fitted values in Gaussian case)
addpred <- beta[1] + smooth
You can first verify that addpred is what fit$additive.predictors gives; since we are fitting an additive model with Gaussian response, it is also the same as fit$fitted.values.
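For example (my addition), both of these should be zero up to floating-point error:
range(addpred - fit$additive.predictors)
range(addpred - fit$fitted.values)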
What plot.gam does, is to plot censmooth:
plot.gam(fit, col = 4, ylim = c(-1.5,1.5))
points(x, censmooth, col = "gray")
Remember, there is
addpred = beta[1] + censmooth + c0
where beta[1] is the intercept. If you want to shift the original data y to match this plot, you need to subtract not only the intercept (beta[1]) but also c0 from y:
points(x, y - beta[1] - c0)

R Fitting polynomial to data through (a) fixed point(s)

I'm stuck at a very specific problem where I have to find a function describing the (normalized) leaf shape of a plant. The problem is not just to find the polynomial that best describes the data, but also to ensure it starts at (0,0), ends at (1,0), and passes through the point of maximum width (x_ymax, 1) without ever going wider.
An alternative option I tried is Hermite interpolation, using those 3 specific points as control points, but the function it produces is way off the actual shape of the leaf unless I provide more control points.
Is there a specific function for this or do I need to make some manual conversion? Or would there be better or alternate options to tackling this problem?
Thanks in advance!
I'm not sure if this would always work, but here is an example of a generalized additive model (GAM) that uses a cyclic spline. When you specify that the model should not have an intercept (i.e. include -1 in the formula), it should pass through y=0. You will have to scale your predictor variable to be between 0 and 1 in order for the ends to pass through the points you mentioned.
Example
# required model
library(mgcv)
# make data
n <- 200
tmp <- seq(0, 20*pi, length.out = n)
x <- tmp / (2*pi)
mon <- x%%1
err <- rnorm(n, sd=0.5)
y <- sin(tmp) + err + 1
plot(x, y, t="l")
df <- data.frame(x, y, mon)
# GAM with intercept
fit1 <- gam(y ~ s(mon, bs = "cc", k = 12), data=df)
summary(fit1)
plot(fit1)
# GAM without intercept
fit2 <- gam(y ~ s(mon, bs = "cc", k = 12) - 1, data=df) # note "-1" for no intercept
summary(fit2)
plot(fit2)
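As a quick sanity check (my addition): a cyclic spline is constrained to take the same value at its two boundary knots, which by default sit at the ends of the observed range, so these two predictions should be identical:
predict(fit2, newdata = data.frame(mon = range(df$mon)))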

Modifying a curve to prevent singular gradient matrix at initial parameter estimates

I want to use y=a^(b^x) to fit the data below,
y <- c(1.0385, 1.0195, 1.0176, 1.0100, 1.0090, 1.0079, 1.0068, 1.0099, 1.0038)
x <- c(3,4,5,6,7,8,9,10,11)
data <- data.frame(x,y)
When I use the non-linear least squares procedure,
f <- function(x,a,b) {a^(b^x)}
(m <- nls(y ~ f(x,a,b), data = data, start = c(a=1, b=0.5)))
it produces an error: singular gradient matrix at initial parameter estimates. The result is roughly a = 1.1466, b = 0.6415, so there shouldn't be a problem with the initial parameter estimates, as I have set them to a=1, b=0.5.
I have read in other threads that it can help to transform the model. I was thinking of something like log(y) = log(a) * b^x, but I don't know how to specify the function. Any ideas?
I will expand my comment into an answer.
If I use the following:
y <- c(1.0385, 1.0195, 1.0176, 1.0100, 1.0090, 1.0079, 1.0068, 1.0099, 1.0038)
x <- c(3,4,5,6,7,8,9,10,11)
data <- data.frame(x,y)
f <- function(x,a,b) {a^b^x}
(m <- nls(y ~ f(x,a,b), data = data, start = c(a=0.9, b=0.6)))
or
(m <- nls(y ~ f(x,a,b), data = data, start = c(a=1.2, b=0.4)))
I obtain:
Nonlinear regression model
  model: y ~ f(x, a, b)
   data: data
     a      b
1.0934 0.7242
 residual sum-of-squares: 0.0001006

Number of iterations to convergence: 10
Achieved convergence tolerance: 3.301e-06
I always obtain an error if I use 1 as a starting value for a, presumably because 1 raised to any power is 1: at a = 1 the model is constant in b, so the gradient matrix is singular.
As for automatically generating starting values, I am not familiar with a procedure to do that. One method I have read about is to simulate curves and use starting values that generate a curve that appears to approximate your data.
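One possibility (my suggestion, building on the log-transform idea in the question): since y = a^(b^x) implies log(y) = log(a) * b^x, and all y here exceed 1, taking a second log gives log(log(y)) = log(log(a)) + x*log(b), which is linear in x and can seed nls:
# linearize twice to get rough starting values (valid here since all y > 1)
lf <- lm(log(log(y)) ~ x, data = data)
a0 <- exp(exp(unname(coef(lf)[1])))  # intercept estimates log(log(a))
b0 <- exp(unname(coef(lf)[2]))       # slope estimates log(b)
(m <- nls(y ~ f(x, a, b), data = data, start = c(a = a0, b = b0)))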
Here is the plot generated from the parameter estimates above, using the following code. I admit that the lower-right portion of the line could perhaps fit a little better:
setwd('c:/users/mmiller21/simple R programs/')
jpeg(filename = "nlr.plot.jpeg")
plot(x,y)
curve(1.0934^(0.7242^x), from=0, to=11, add=TRUE)
dev.off()
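Alternatively (my variation), rather than hard-coding the estimates, you can draw the fitted curve directly from the model object:
plot(x, y)
xx <- seq(0, 11, length.out = 200)
lines(xx, predict(m, newdata = data.frame(x = xx)))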
