I fitted a lasso logistic model with interaction terms, and then wanted to visualize those interactions with an interaction plot.
I tried to find an R function that plots interactions for glmnet models, but I couldn't find any.
Is there any R package that will plot interactions for LASSO?
Since I couldn't find one, I tried to do it manually by plotting the predicted values, but I am getting some errors.
My code is as follows:
require(ISLR)
require(glmnet)

y <- Smarket$Direction
x <- model.matrix(Direction ~ Lag1 + Lag4 * Volume, Smarket)[, -1]

lasso.mod <- cv.glmnet(x, y, alpha = 1, family = "binomial", nfolds = 5,
                       type.measure = "class",
                       lambda = seq(0.001, 0.1, by = 0.001))
lasso.mod$lambda.min

pred = expand.grid(Lag1 = median(Smarket$Lag1),
                   Lag4 = c(-0.64, 0.0385, 0.596750),
                   Volume = seq(min(Smarket$Volume), max(Smarket$Volume), length = 100))

lasso.mod1 <- glmnet(x, y, alpha = 1, family = "binomial",
                     lambda = lasso.mod$lambda.min)

pred$Direction = predict(lasso.mod1, newx = pred,
                         type = "response", s = lasso.mod$lambda.min)
I am getting this error:
Error in cbind2(1, newx) %*% nbeta :
not-yet-implemented method for <data.frame> %*% <dgCMatrix>
Can anyone suggest anything to fix this issue?
Thank you
The help page for predict.glmnet says newx must be a matrix, and you need to supply the interaction column yourself:
library(dplyr)

pred = expand.grid(Lag1 = median(Smarket$Lag1),
                   Lag4 = c(-0.64, 0.0385, 0.596750),
                   Volume = seq(min(Smarket$Volume), max(Smarket$Volume), length = 100)) %>%
  mutate(`Lag4:Volume` = Lag4 * Volume)  # preparing the interaction values by hand

pred$Direction = predict(lasso.mod1, newx = as.matrix(pred),  # convert to matrix
                         type = "link", s = lasso.mod$lambda.min)
[EDITED]
Oh, I overlooked a more general, better way:
pred = expand.grid(Lag1 = median(Smarket$Lag1),
                   Lag4 = c(-0.64, 0.0385, 0.596750),
                   Volume = seq(min(Smarket$Volume), max(Smarket$Volume), length = 100))

pred$Direction = predict(lasso.mod1,
                         newx = model.matrix(~ Lag1 + Lag4 * Volume, pred)[, -1],
                         type = "response", s = lasso.mod$lambda.min)
I'm trying to figure out custom objective functions in LightGBM, and I figured a good place to start would be replicating the built-in ones. The equation LightGBM uses to calculate the Tweedie metric (https://github.com/microsoft/LightGBM/blob/1c27a15e42f0076492fcc966b9dbcf9da6042823/src/metric/regression_metric.hpp#L300-L318) seems to match the definitions of the Tweedie loss I've found online (https://towardsdatascience.com/tweedie-loss-function-for-right-skewed-data-2c5ca470678f), though they do an odd exp(ln(score)) step, I'm guessing for numerical stability. However, their equations for the gradient and Hessian appear to operate on the log of the score directly (https://github.com/microsoft/LightGBM/blob/1c27a15e42f0076492fcc966b9dbcf9da6042823/src/objective/regression_objective.hpp#L702-L732).
It seems like they are using the equation:
gradients[i] = -label_[i] * e^((1 - rho_) * score[i]) + e^((2 - rho_) * score[i]);
where I would expect the gradient to be:
gradients[i] = -label_[i] * score[i]^(- rho_) + score[i]^(1 - rho_);
My guess is that somewhere LightGBM is processing score as ln(score), something like what the reg_sqrt parameter does for square roots, but I can't find where in the documentation this is described.
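One way to test that guess: if the loss is written in terms of mu = exp(score), its derivative with respect to score matches LightGBM's gradient formula exactly. A minimal finite-difference check of my own (rho = 1.5; this is a sketch, not LightGBM code):

# Tweedie loss with power rho, written in terms of the raw score s,
# where the prediction on the original scale is mu = exp(s)
rho <- 1.5
loss <- function(y, s) -y * exp((1 - rho) * s) / (1 - rho) + exp((2 - rho) * s) / (2 - rho)

# LightGBM's gradient formula from regression_objective.hpp
grad_lgb <- function(y, s) -y * exp((1 - rho) * s) + exp((2 - rho) * s)

# finite-difference derivative of the loss with respect to s
y <- 3; s <- 0.7; eps <- 1e-6
fd <- (loss(y, s + eps) - loss(y, s - eps)) / (2 * eps)
c(finite_diff = fd, lightgbm = grad_lgb(y, s))  # the two agree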
Anyway, I've tried recreating both their formula and my own calculation as custom objective functions, and neither seems to work:
library(lightgbm)
library(data.table)

# Tweedie gradient with variance power rho = 1.5, according to my own math
CustomObj_t1 <- function(preds, dtrain) {
  labels <- dtrain$getinfo('label')
  grad <- -labels * preds^(-3/2) + preds^(-1/2)
  hess <- 1/2 * (3 * labels * preds^(-5/2) - preds^(-3/2))
  return(list(grad = grad, hess = hess))
}

# Tweedie gradient with variance power rho = 1.5, recreating the code from the LightGBM GitHub
CustomObj_t2 <- function(preds, dtrain) {
  labels <- dtrain$getinfo('label')
  grad <- -labels * exp(-1/2 * preds) + exp(1/2 * preds)
  hess <- -labels * (-1/2) * exp(-1/2 * preds) + 1/2 * exp(1/2 * preds)
  return(list(grad = grad, hess = hess))
}
params  = list(objective = "tweedie",    seed = 1, metric = "rmse")
params2 = list(objective = CustomObj_t1, seed = 1, metric = "rmse")
params3 = list(objective = CustomObj_t2, seed = 1, metric = "rmse")
# Create data
set.seed(321)
db_Custom = data.table(a = runif(2000), b = runif(2000))
db_Custom[, X := (a * 4 + exp(b))]

# Break into test and training sets
db_Test = db_Custom[1:10]
db_Custom = db_Custom[11:nrow(db_Custom), ]
FeatureCols = c("a", "b")

# Create dataset
ds_Custom <- lgb.Dataset(data.matrix(db_Custom[, FeatureCols, with = FALSE]),
                         label = db_Custom[["X"]])

# Train
fit  = lgb.train(params,  ds_Custom, verbose = -1)
fit2 = lgb.train(params2, ds_Custom, verbose = -1)
fit3 = lgb.train(params3, ds_Custom, verbose = -1)

# Predict
pred = predict(fit, data.matrix(db_Test[, FeatureCols, with = FALSE]))
db_Test[, prediction := pmax(0, pred)]

pred2 = predict(fit2, data.matrix(db_Test[, FeatureCols, with = FALSE]))
db_Test[, prediction2 := pmax(0, pred2)]

pred3 = predict(fit3, data.matrix(db_Test[, FeatureCols, with = FALSE]))
db_Test[, prediction3 := pmax(0, pred3)]

print(db_Test[, .(X, prediction, prediction2, prediction3)])
I get the following results (I would expect prediction2 or prediction3 to be very similar to prediction):
"X" "prediction" "prediction2" "prediction3"
4.8931646234958 4.89996556839721 0 1.59154656425556
6.07328897031702 6.12313647937047 0 1.81022588429474
2.05728566704078 2.06824004875244 0 0.740577102751491
2.54732526765174 2.50329903656292 0 0.932517774958986
4.07044099941395 4.07047912554207 0 1.39922723582939
2.74639568121359 2.74408567443232 0 1.01628212910587
3.47720295158928 3.49241414141969 0 1.23049599462599
2.92043718858535 2.90464303454649 0 1.0680618051659
4.44415913080697 4.43091665909845 0 1.48607456777287
4.96566318066753 4.97898586895233 0 1.60163901781479
Is there something I'm missing? Am I just doing the math or coding wrong?
It appears, per the linked GitHub code and your prediction3 column, that the model is fit on the log scale: if you exponentiate that column, it becomes very close to the X and prediction columns.
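A quick check with the objects from the question (a sketch, not a full fix):

# the custom objective is fit on the raw (log) score, so exponentiate
# its predictions before comparing with the built-in tweedie objective
db_Test[, prediction3_exp := exp(pred3)]
print(db_Test[, .(X, prediction, prediction3_exp)])

The prediction2 column collapses to zero, presumably because CustomObj_t1 treats the raw score as if it were the mean itself; negative scores raised to fractional powers give NaN gradients, so that objective never trains properly.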
I am using the quantreg package to predict new data based on a training set. However, I noticed a discrepancy between using predict.rq (or predict) and doing the prediction manually. Here is an example.
The quantile regression setting is:
library(quantreg)

N = 10000
tauList = seq(1:11/12)/12   # equivalent to (1:11)/12
y = rchisq(N, 2)
X = matrix(rnorm(3 * N), nrow = N, ncol = 3)
fit <- rq(y ~ X - 1, tau = tauList, method = "fn")
The new data set I want to predict is
newdata <- matrix(rbeta(3 * N, 2, 2), nrow = N, ncol = 3)
I use predict.rq (or predict) to predict newdata; both return the same result:
fit_use_predict <- predict.rq(fit, newdata = as.data.frame(newdata))
I also do the prediction manually, based on the coefficient matrix:
coef_mat <- coef(fit)
fit_use_multiplication <- newdata %*% coef_mat
I expected both to be numerically identical, but they are not:
diff <- fit_use_predict - fit_use_multiplication
print(diff)
Their difference is not negligible.
However, when predicting the original data set X, both approaches do return the same result:
all.equal(predict(fit, newdata = data.frame(X)), X %*% coef_mat, check.attributes = FALSE)  ## TRUE
Am I missing something when using the function? Thanks!
A more serious problem here, before we get to prediction, is that the model forces all of the fitted quantile functions through the origin of design space; since the covariates are centered at the origin, all of the quantile functions are forced to cross there. Even if the X's all lie in the positive orthant, it is quite a strong assumption to say that the distribution of the response is degenerate at the origin.
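For contrast, a fit that keeps the intercept (a quick sketch on the same simulated data) lets each quantile function have its own level instead of being pinned to the origin:

# with an intercept, and X being independent noise here, the fitted
# intercepts track the unconditional quantiles of y ~ chi-squared(2)
fit_int <- rq(y ~ X, tau = tauList, method = "fn")
rbind(fitted = coef(fit_int)[1, ], true = qchisq((1:11)/12, 2))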
I think you just have to retain the 'X' name in your data as it was in the training data: predict.rq rebuilds the design matrix from the stored formula, which refers to a single matrix variable named X, so the new data must supply its columns under that name (I() keeps the matrix intact).
library(quantreg)

N = 10000
tauList = seq(1:11/12)/12   # equivalent to (1:11)/12
y = rchisq(N, 2)
X = matrix(rnorm(3 * N), nrow = N, ncol = 3)
fit <- rq(y ~ X - 1, tau = tauList, method = "fn")

newdata <- matrix(rbeta(3 * N, 2, 2), nrow = N, ncol = 3)
fit_use_predict <- predict.rq(fit, newdata = data.frame(X = I(newdata)))

coef_mat <- coef(fit)
fit_use_multiplication <- newdata %*% coef_mat

diff <- fit_use_predict - fit_use_multiplication
max(abs(diff))
The output is 0.
I am trying to implement lasso linear regression. I train my model, but when I try to make predictions on unknown data it gives me the following error:
Error in cbind2(1, newx) %*% nbeta :
invalid class 'NA' to dup_mMatrix_as_dgeMatrix
I want to predict the unknown percent_gc. A summary of my data is below; I initially train my model on the rows for which percent_gc is known:
library(glmnet)
library(dplyr)   # for tibble(), %>% and select()

set.seed(1)

### training data
data.all <- tibble(
  description = c('Xylanimonas cellulosilytica XIL07, DSM 15894',
                  'Teredinibacter turnerae T7901',
                  'Desulfotignum phosphitoxidans FiPS-3, DSM 13687',
                  'Brucella melitensis bv. 1 16M'),
  phylum = c('Actinobacteria', 'Proteobacteria', 'Proteobacteria', 'Bacteroidetes'),
  genus = c('Acaryochloris', 'Acetohalobium', 'Acidimicrobium', 'Acidithiobacillus'),
  Latitude = c('63.93', '69.372', '3.493.11', '44.393.704'),
  Longitude = c('-22.1', '88.235', '134.082.527', '-0.130781'),
  genome_size = c(8361599, 2469596, 2158157, 3207552),
  percent_gc = c(34, 24, 55, 44),
  percent_psuedo = c(0.0032987747, 0.0291222313, 0.0353728489, 0.0590663703),
  percent_signalpeptide = c(0.02987198, 0.040607055, 0.048757170, 0.061606859))
### data for prediction
data.prediction <- tibble(
  description = c('Liberibacter crescens BT-1',
                  'Saprospira grandis Lewin',
                  'Sinorhizobium meliloti AK83',
                  'Bifidobacterium asteroides ATCC 25910'),
  phylum = c('Actinobacteria', 'Proteobacteria', 'Proteobacteria', 'Bacteroidetes'),
  genus = c('Acaryochloris', 'Acetohalobium', 'Acidimicrobium', 'Acidithiobacillus'),
  Latitude = c('39.53', '69.372', '5.493.12', '44.393.704'),
  Longitude = c('20.1', '-88.235', '134.082.527', '-0.130781'),
  genome_size = c(474832, 2469837, 2158157, 3207552),
  percent_gc = c(NA, NA, NA, NA),
  percent_psuedo = c(0.0074639239, 0.0291222313, 0.0353728489, 0.0590663703),
  percent_signalpeptide = c(0.02987198, 0.040607055, 0.048757170, 0.061606859))
x = model.matrix(percent_gc ~ ., data.all)
y = data.all$percent_gc

cv.out <- cv.glmnet(x, y, alpha = 1, family = "gaussian")
best.lambda = cv.out$lambda.min
fit <- glmnet(x, y, alpha = 1)
I then want to make predictions for the rows where percent_gc is not known:
newX = matrix(data = data.prediction %>% select(-percent_gc))
data.prediction$percent_gc <-
  predict(object = fit, type = "response", s = best.lambda, newx = newX)
And this generates the error I mentioned above.
I don't understand what format newX should have in order to get rid of this error. Insights would be appreciated.
I could not really figure out how to construct an appropriate matrix, but the glmnetUtils package provides functionality to fit a formula directly on a data frame and predict from it. With this I got it to predict values:
library(glmnetUtils)

fit <- glmnet(percent_gc ~ ., data.all, alpha = 1)
cv.out <- cv.glmnet(percent_gc ~ ., data.all, alpha = 1, family = "gaussian")
best.lambda = cv.out$lambda.min

predict(object = fit, data.prediction, s = best.lambda)
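For reference, plain glmnet also works once newx is a numeric matrix with exactly the same columns as the training matrix. A sketch restricted to the numeric predictors, since the character columns (description, Latitude, Longitude) take different values in the two data sets and would give dummy columns that don't line up (run this on the full data; the four-row samples above are too small for cross-validation):

library(glmnet)

num_vars <- c("genome_size", "percent_psuedo", "percent_signalpeptide")
x_train <- as.matrix(data.all[, num_vars])
y_train <- data.all$percent_gc

cv.out <- cv.glmnet(x_train, y_train, alpha = 1, family = "gaussian")
fit <- glmnet(x_train, y_train, alpha = 1)

# newx must have the same columns, in the same order, as x_train
x_new <- as.matrix(data.prediction[, num_vars])
predict(fit, newx = x_new, s = cv.out$lambda.min)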
I am using this code to fit a model using LASSO regression.
library(glmnet)

IV1 <- data.frame(IV1 = rnorm(100))
IV2 <- data.frame(IV2 = rnorm(100))
IV3 <- data.frame(IV3 = rnorm(100))
IV4 <- data.frame(IV4 = rnorm(100))
IV5 <- data.frame(IV5 = rnorm(100))
DV  <- data.frame(DV = rnorm(100))

data <- data.frame(IV1, IV2, IV3, IV4, IV5, DV)
x <- model.matrix(DV ~ . - IV5, data)[, -1]
y <- data$DV

AB <- glmnet(x = x, y = y, alpha = 1)
plot(AB, xvar = "lambda")

lambdas = NULL
for (i in 1:100) {
  fit <- cv.glmnet(x, y)
  errors = data.frame(fit$lambda, fit$cvm)
  lambdas <- rbind(lambdas, errors)
}

lambdas <- aggregate(lambdas[, 2], list(lambdas$fit.lambda), mean)
bestindex = which(lambdas[2] == min(lambdas[2]))
bestlambda = lambdas[bestindex, 1]

fit <- glmnet(x, y, lambda = bestlambda)
I would like to calculate some sort of R2 using the training data. I assume one way to do this is to use the cross-validation I performed when choosing lambda. Based on this post, it seems this can be done with:
r2 <- max(1 - fit$cvm / var(y))
However, when I run this, I get this error:
Warning message:
In max(1 - fit$cvm/var(y)) :
no non-missing arguments to max; returning -Inf
Can anyone point me in the right direction? Is this the best way to compute R2 based on the training data?
The function glmnet does not return cvm as part of its fit:
?glmnet
What you want to use is cv.glmnet:
?cv.glmnet
The following works (note that you must specify more than one lambda, or let it figure the sequence out itself):
fit <- cv.glmnet(x, y, lambda = lambdas[, 1])
r2 <- max(1 - fit$cvm / var(y))
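If instead you want an R2 computed on the training data itself rather than the cross-validated estimate, glmnet already reports the fraction of null deviance explained, which for family = "gaussian" is the usual R2. A minimal sketch:

fit_best <- glmnet(x, y, lambda = bestlambda)
fit_best$dev.ratio  # 1 - deviance/null deviance, i.e. training R-squared for gaussian fits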
I'm not sure I understand what you are trying to do. Maybe do this?
r2 <- numeric(100)  # initialise before the loop
lambdas = NULL
for (i in 1:100) {
  fit <- cv.glmnet(x, y)
  errors = data.frame(fit$lambda, fit$cvm)
  lambdas <- rbind(lambdas, errors)
  r2[i] <- max(1 - fit$cvm / var(y))
}

lambdas <- aggregate(lambdas[, 2], list(lambdas$fit.lambda), mean)
bestindex = which(lambdas[2] == min(lambdas[2]))
bestlambda = lambdas[bestindex, 1]
r2[bestindex]
I would appreciate any help to make segmented.lm (or any other function) find the obvious breakpoints in this example:
data = list(x = c(50, 60, 70, 80, 90),
            y = c(703.786, 705.857, 708.153, 711.056, 709.257))
plot(data, type = 'b')

require(segmented)
model.lm = segmented(lm(y ~ x, data = data), seg.Z = ~x, psi = NA)
It fails with the following error:
Error in solve.default(crossprod(x1), crossprod(x1, y1)) :
system is computationally singular: reciprocal condition number = 1.51417e-20
If I change K:
model.lm = segmented(lm(y ~ x, data = data), seg.Z = ~x, psi = NA,
                     control = seg.control(K = 1))
I get another error:
Error in segmented.lm(lm(y ~ x, data = data), seg.Z = ~x, psi = NA, control = seg.control(K = 1)) :
only 1 datum in an interval: breakpoint(s) at the boundary or too close each other
A nice objective method to determine the break point is described in Crawley (2007: 427).
First, define a vector, breaks, of potential break points:
breaks <- data$x[data$x >= 70 & data$x <= 90]
Then run a for loop fitting a piecewise regression for each potential break point, pulling the residual standard error (stored here as mse) out of each model's summary:
mse <- numeric(length(breaks))
for (i in 1:length(breaks)) {
  # regress y on x separately on each side of the candidate break point
  piecewise <- lm(data$y ~ data$x * (data$x < breaks[i]) +
                           data$x * (data$x >= breaks[i]))
  mse[i] <- summary(piecewise)$sigma   # residual standard error of this fit
}
Finally, identify the break point with the smallest residual standard error:
breaks[which(mse == min(mse))]
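You can then refit at the chosen break point and inspect or plot the final piecewise model; a short sketch following the same pattern:

best <- breaks[which.min(mse)]
final <- lm(data$y ~ data$x * (data$x < best) + data$x * (data$x >= best))
summary(final)

# overlay the fitted values on the raw data
plot(data, type = 'b')
lines(data$x, fitted(final), col = 'red')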
Hope this helps.