My question is about the typical feed-forward, single-hidden-layer, backpropagation neural network, as implemented in the nnet package and trained with train() from the caret package. This is related to this question, but in the context of the nnet and caret packages in R.
I demonstrate the problem with a simple regression example where Y = sin(X) + small error:
raw Y ~ raw X: predicted outputs are uniformly zero where raw Y < 0.
scaled Y (to 0-1) ~ raw X: solution looks great; see code below.
The code is as follows:
library(nnet)
X <- t(t(runif(200, -pi, pi)))
Y <- t(t(sin(X))) # Y ~ sin(X)
Y <- Y + rnorm(200, 0, .05) # Add a little noise
Y_01 <- (Y - min(Y))/diff(range(Y)) # Y linearly transformed to have range 0-1.
plot(X,Y)
plot(X, Y_01)
dat <- data.frame(cbind(X, Y, Y_01)); names(dat) <- c("X", "Y", "Y_01")
head(dat)
plot(dat)
nnfit1 <- nnet(formula = Y ~ X, data = dat, maxit = 2000, size = 8, decay = 1e-4)
nnpred1 <- predict(nnfit1, dat)
plot(X, nnpred1)
nnfit2 <- nnet(formula = Y_01 ~ X, data = dat, maxit = 2000, size = 8, decay = 1e-4)
nnpred2 <- predict(nnfit2, dat)
plot(X, nnpred2)
When using train() in caret, there is a preProcess option, but it only scales the inputs. train(..., method = "nnet", ...) appears to use the raw Y values; see code below.
library(caret)
ctrl <- trainControl(method = "cv", number = 10)
nnet_grid <- expand.grid(.decay = 10^seq(-4, -1, 1), .size = c(8))
nnfit3 <- train(Y ~ X, dat, method = "nnet", maxit = 2000,
trControl = ctrl, tuneGrid = nnet_grid, preProcess = "range")
nnfit3
nnpred3 <- predict(nnfit3, dat)
plot(X, nnpred3)
Of course, I could linearly transform the Y variable(s) to have a positive range, but then my predictions will be on the wrong scale. Though this is only a minor headache, I'm wondering if there is a better solution for training nnet or avNNet models with caret when the output has negative values.
This was answered on Cross Validated here by user topepo.
The relevant part of their answer is:
Since Y is roughly between -1 and 1 you should also use linout = TRUE in your nnet and train calls.
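Applied to the code above, that fix is just the following (a sketch: nnfit1b and nnfit3b are new object names, and linout is passed through train() to nnet):
# linout = TRUE switches the output unit from logistic to linear,
# so negative targets such as sin(X) are reachable
nnfit1b <- nnet(formula = Y ~ X, data = dat, maxit = 2000, size = 8,
                decay = 1e-4, linout = TRUE)
plot(X, predict(nnfit1b, dat))
nnfit3b <- train(Y ~ X, dat, method = "nnet", maxit = 2000, linout = TRUE,
                 trControl = ctrl, tuneGrid = nnet_grid, preProcess = "range")
plot(X, predict(nnfit3b, dat))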
I'm re-raising here the issue of predicting CIs for gamlss models using the newdata argument. A further complication is that I'm interested in partial effects as well.
A closely related issue (without partial effects) went unresolved in 2018: Error when predicting new fitted values from R gamlss object.
I'm wondering if there have been updates that also extend to partial effects. The example below reproduces the error (notice the `type = "terms"` argument, specifying that I'm interested in the effects of each model term).
library(gamlss)
library(tidyverse)
#example data
test_df <- tibble(x = rnorm(1e4),
x2 = rnorm(n = 1e4),
y = x2^2 + rnorm(1e4, sd = 0.5))
#fitting gamlss model
gam_test = gamlss(formula = y ~ pb(x2) + x,
sigma.fo= y ~ pb(x2) + x,
data = test_df)
#data I want predictions for
pred_df <- tibble(x = seq(-0.5, 0.5, length.out = 300),
x2 = seq(-0.5, 0.5, length.out = 300))
#returns error when se.fit = TRUE
pred <- predictAll(object = gam_test,
type = "terms",
se.fit = TRUE, #works if se.fit = FALSE
newdata = pred_df)
Many thanks in advance!
I talked to the main developer of the gamlss software (who is responsible for this function). He says that the option se.fit = TRUE with type = "terms" has not yet been implemented, and unfortunately he is too busy at present.
One idea is to bootstrap the original data, predict the terms for each bootstrap sample, and then use the results to obtain CIs.
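To make that idea concrete, here is a rough sketch of my own (not from the gamlss authors): the helper name boot_terms and the choice of B = 200 resamples are arbitrary, and it assumes that predict() with type = "terms" and newdata works for your model, as it does in the question when se.fit = FALSE.
# Refit the model on bootstrap resamples of the data and collect the
# term predictions for each resample (sketch only)
boot_terms <- function(data, newdata, B = 200) {
  lapply(seq_len(B), function(i) {
    d <- data[sample(nrow(data), replace = TRUE), ]
    fit <- gamlss(y ~ pb(x2) + x, sigma.fo = ~ pb(x2) + x, data = d,
                  control = gamlss.control(trace = FALSE))
    predict(fit, newdata = newdata, type = "terms", data = d)
  })
}
reps <- boot_terms(test_df, pred_df)
# e.g. a pointwise 95% interval for the first term of mu across resamples
term1 <- sapply(reps, function(m) m[, 1])
ci <- apply(term1, 1, quantile, probs = c(0.025, 0.975))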
I have the following function (an infinite series):
y = A * (1 - (6/pi^2) * sum_{n=1}^{Inf} (1/n^2) * exp(-B * n^2 * pi^2 * x / R^2))
In this function, the parameter R is a constant with a value of 22.5. I want to estimate the parameters A and B using nonlinear regression (the nls() function). I made a few attempts, but all were unsuccessful. I'm not very familiar with this type of operation in R, so I would like your help.
Additionally, if possible, I would also like to plot this function using ggplot2.
# Initial data
x <- c(0, 60, 90, 120, 180, 240)
y <- c(0, 0.967676, 1.290101, 1.327099, 1.272404, 1.354246)
R <- 22.5
df <- data.frame(x, y)
f <- function(x) (1/(n^2))*exp((-B*(n^2)*(pi^2)*x)/(R^2))
# First try
nls(formula = y ~ A*(1-(6/(pi^2))*sum(f, seq(1, Inf, 1))),
data = df,
start = list(A = 1,
B = 0.7))
Error in seq.default(1, Inf, 1) : 'to' must be a finite number
# Second try
nls(formula = y ~ A*(1-(6/(pi^2))*integrate(f, 1, Inf)),
data = df,
start = list(A = 1,
B = 0.7))
Error in f(x, ...) : object 'n' not found
You can use a finite sum approximation. Using 25 terms:
f <- function(x, B, n = 1:25) sum((1/(n^2))*exp((-B*(n^2)*(pi^2)*x)/(R^2)))
fm <- nls(formula = y ~ cbind(A = (1-(6/pi^2))* Vectorize(f)(x, B)),
data = df,
start = list(B = 0.7),
alg = "plinear")
fm
giving:
Nonlinear regression model
model: y ~ cbind(A = (1 - (6/pi^2)) * Vectorize(f)(x, B))
data: df
B .lin.A
-0.00169 1.39214
residual sum-of-squares: 1.054
Number of iterations to convergence: 12
Achieved convergence tolerance: 9.314e-06
The model does not seem to fit the data very well (solid line in graph below); however, a logistic model seems to work well (dashed line).
fm2 <- nls(y ~ SSlogis(x, Asym, xmid, scal), df)
plot(y ~ x, df)
lines(fitted(fm) ~ x, df)
lines(fitted(fm2) ~ x, df, lty = 2)
legend("bottomright", c("fm", "fm2"), lty = 1:2)
I've read a few Q&As about this, but I am still not sure I understand why the coefficients from glmnet and caret models, based on the same sample and the same hyper-parameters, are slightly different. I would greatly appreciate an explanation!
I am using caret to train a ridge regression:
library(ISLR)
Hitters = na.omit(Hitters)
x = model.matrix(Salary ~ ., Hitters)[, -1] #Dropping the intercept column.
y = Hitters$Salary
set.seed(0)
train = sample(1:nrow(x), 7*nrow(x)/10)
library(caret)
set.seed(0)
train_control = trainControl(method = 'cv', number = 10)
grid = 10 ^ seq(5, -2, length = 100)
tune.grid = expand.grid(lambda = grid, alpha = 0)
ridge.caret = train(x[train, ], y[train],
method = 'glmnet',
trControl = train_control,
tuneGrid = tune.grid)
ridge.caret$bestTune
# alpha is 0 and best lambda is 242.0128
Now, I use the lambda (and alpha) found above to train a ridge regression for the whole data set. At the end, I extract the coefficients:
ridge_full <- train(x, y,
method = 'glmnet',
trControl = trainControl(method = 'none'),
tuneGrid = expand.grid(
lambda = ridge.caret$bestTune$lambda, alpha = 0)
)
coef(ridge_full$finalModel, s = ridge.caret$bestTune$lambda)
Finally, using exactly the same alpha and lambda, I try to fit the same ridge regression using the glmnet package and extract the coefficients:
library(glmnet)
ridge_full2 = glmnet(x, y, alpha = 0, lambda = ridge.caret$bestTune$lambda)
coef(ridge_full2)
The reason is that the exact lambda you specified was not used by caret. You can check this with:
ridge_full$finalModel$lambda
The closest values are 261.28915 and 238.07694.
When you do
coef(ridge_full$finalModel, s = ridge.caret$bestTune$lambda)
where s is 242.0128, the coefficients are interpolated from the coefficients that were actually calculated.
Whereas when you provide lambda directly to the glmnet call, the model returns exact coefficients for that lambda, which differ only slightly from the interpolated ones caret returns.
Why this happens: when you specify one alpha and one lambda for a fit on all of the data, caret will actually fit:
fit = function(x, y, wts, param, lev, last, classProbs, ...) {
numLev <- if(is.character(y) | is.factor(y)) length(levels(y)) else NA
theDots <- list(...)
if(all(names(theDots) != "family")) {
if(!is.na(numLev)) {
fam <- ifelse(numLev > 2, "multinomial", "binomial")
} else fam <- "gaussian"
theDots$family <- fam
}
## pass in any model weights
if(!is.null(wts)) theDots$weights <- wts
if(!(class(x)[1] %in% c("matrix", "sparseMatrix")))
x <- Matrix::as.matrix(x)
modelArgs <- c(list(x = x,
y = y,
alpha = param$alpha),
theDots)
out <- do.call(glmnet::glmnet, modelArgs)
if(!is.na(param$lambda[1])) out$lambdaOpt <- param$lambda[1]
out
}
This was taken from here.
In your example this translates to:
fit <- glmnet::glmnet(x, y,
alpha = 0)
lambda <- unique(fit$lambda)
These lambda values correspond to ridge_full$finalModel$lambda:
all.equal(lambda, ridge_full$finalModel$lambda)
#output
TRUE
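To see the difference directly, here is a small sketch using the exact argument of coef.glmnet, which refits at the requested lambda instead of interpolating (newer glmnet versions require the original x and y to be supplied again):
lam <- ridge.caret$bestTune$lambda
fit_path <- glmnet(x, y, alpha = 0)   # same default lambda path caret used
coef(fit_path, s = lam)               # interpolated, like coef(ridge_full$finalModel, s = lam)
coef(fit_path, s = lam, exact = TRUE, x = x, y = y)  # refit exactly at lam, like coef(ridge_full2)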
I intended to fit a 2D coordinate cloud that resembles an incomplete ellipse. When I tried using the bs() function from the "splines" package in R (which generates the B-spline basis matrix for a polynomial spline) to get the fitted curve, it looked like this.
But what I intended was something like this.
To feed the parameters to bs() and plot it, I just followed this:
mod <- lm(y ~ bs(x, knots = seq(0.1, 0.9, by = 0.1)))
pdat <- data.frame(x = seq(min(x), max(x), length = 100))
## predict for new `x`
pdat <- transform(pdat, yhat = predict(mod, newdata = pdat))
# plotting
plot(y ~ x)
lines(yhat ~ x, data = pdat, lwd = 2, col = "red")
Is there any way to achieve what I intended?
Thanks!
I can create simple graphs. I would like to have observed and predicted values (from a linear regression) on the same graph. I am plotting, say, Yvariable vs Xvariable. There is only one predictor and only one response. How could I also add the linear regression line to the same graph?
So, to conclude, I need help with:
plotting both actual and predicted values
plotting the regression line
Here is one option for showing the observed and predicted values as points in a single plot. It is easier to get the regression line on just the observed points, which I illustrate second.
First, some dummy data:
set.seed(1)
x <- runif(50)
y <- 2.5 + (3 * x) + rnorm(50, mean = 2.5, sd = 2)
dat <- data.frame(x = x, y = y)
Fit our model
mod <- lm(y ~ x, data = dat)
Combine the model output and observed x into a single object for plotting:
res <- stack(data.frame(Observed = dat$y, Predicted = fitted(mod)))
res <- cbind(res, x = rep(dat$x, 2))
head(res)
Load lattice and plot
require("lattice")
xyplot(values ~ x, data = res, group = ind, auto.key = TRUE)
The resulting plot should look similar to this
To get just the regression line on the observed data, and if the regression model is a simple straight-line model like the one I show, then you can circumvent most of this and just plot using:
xyplot(y ~ x, data = dat, type = c("p","r"), col.line = "red")
(i.e. you don't even need to fit the model or make new data for plotting)
The resulting plot should look like this
An alternative to the first example, which can be used with anything that gives coefficients for the regression line, is to write your own panel function (not as scary as it seems):
xyplot(y ~ x, data = dat, col.line = "red",
panel = function(x, y, ...) {
panel.xyplot(x, y, ...)
panel.abline(coef = coef(mod), ...) ## using mod from earlier
}
)
That gives the plot from Figure 2 above, but drawn by hand.
Assuming you've done this with caret, then
mod <- train(y ~ x, data = dat, method = "lm",
trControl = trainControl(method = "cv"))
xyplot(y ~ x, data = dat, col.line = "red",
panel = function(x, y, ...) {
panel.xyplot(x, y, ...)
panel.abline(coef = coef(mod$finalModel), ...) ## using mod from caret
}
)
will produce a plot the same as Figure 2 above.
Another option is to use panel.lmlineq from latticeExtra.
library(latticeExtra)
set.seed(0)
xsim <- rnorm(50, mean = 3)
ysim <- (0 + 2 * xsim) * (1 + rnorm(50, sd = 0.3))
## basic use as a panel function
xyplot(ysim ~ xsim, panel = function(x, y, ...) {
panel.xyplot(x, y, ...)
panel.lmlineq(x, y, adj = c(1, 0), lty = 1, col.text = "red",
              col.line = "blue", digits = 1, r.squared = TRUE)
})