I trained a logistic regression model in R using the glm function:
model <- glm(df1$deny ~ df1$dir + df1$hir + df1$lvr + df1$ccs + df1$mcs +
             df1$pbcr + df1$dmi + df1$self + df1$single + df1$uria +
             df1$condominium + df1$black,
             data = df1, family = 'binomial')
Now I want to get the mean response for a data point:
test<-c(0.59,0.24,0.941177,3,2,0,1,0,0,10.6,1,1)
The entries of test are the values of the respective predictors in the model, i.e. dir = 0.59, hir = 0.24, and so on.
How do I obtain the mean response in this case?
model <- glm(deny~dir+hir+lvr+ccs+mcs+pbcr+dmi+
self+single+uria+condominium+black,
data=df1,family='binomial')
test <- c(0.59,0.24,0.941177,3,2,0,1,0,0,10.6,1,1)
You can either use the model definition:
X <- matrix(c(1, test), nrow = 1)  # row vector with a leading 1 for the intercept
beta <- coef(model)
drop(plogis(X %*% beta))  # inverse logit of the linear predictor
or
dftest <- as.data.frame(matrix(test, nrow = 1))  # predictors only, no intercept column
names(dftest) <- c("dir", "hir", "lvr", "ccs", ...)
(you need to complete the list of names yourself, I'm lazy)
or possibly
names(dftest) <- setdiff(names(df1), "deny")
if the model variables match the order etc. of the data frame
Then:
predict(model, newdata = dftest, type = "response")
Sorted. I did:
df.test <- df1[0, -13]  # zero-row copy of df1 without column 13 (deny)
head(df.test)
test <- c(0.59, 0.24, 0.941177, 3, 2, 0, 1, 0, 0, 10.6, 1, 1)
df.test[nrow(df.test) + 1, ] <- test
pred <- predict(model, df.test, type = 'response')
pred
I am fitting a model with a random site-level effect using a generalized additive model, implemented in the mgcv package for R. I had been doing this with the function gam(); however, to speed things up I need to shift to the bam() framework, which is basically the same as gam() but faster. I further sped up fitting by passing the options bam(nthreads = N, discrete = T), where nthreads is the number of cores on my machine. However, when I use the discretization option and then try to make predictions with my model on new data, while ignoring the random effect, I consistently get an error.
Here is code to generate example data and reproduce the error.
library(mgcv)
#generate data.
N <- 10000
x <- runif(N,0,1)
y <- (0.5*x / (x + 0.2)) + rnorm(N)*0.1 #non-linear relationship between x and y.
#uninformative random effect.
random.x <- as.factor(do.call(paste0, replicate(2, sample(LETTERS, N, TRUE), FALSE)))
#fit models.
fit1 <- gam(y ~ s(x) + s(random.x, bs = 're')) #this one takes ~1 minute to fit, rest faster.
fit2 <- bam(y ~ s(x) + s(random.x, bs = 're'))
fit3 <- bam(y ~ s(x) + s(random.x, bs = 're'), discrete = T, nthreads = 2)
#make predictions on new data.
newdat <- data.frame(runif(200, 0, 1))
colnames(newdat) <- 'x'
test1 <- predict(fit1, newdata=newdat, exclude = c("s(random.x)"), newdata.guaranteed = T)
test2 <- predict(fit2, newdata=newdat, exclude = c("s(random.x)"), newdata.guaranteed = T)
test3 <- predict(fit3, newdata=newdat, exclude = c("s(random.x)"), newdata.guaranteed = T)
Making predictions with the third model, which uses discretization, throws this error (the other two do not):
Error in model.frame.default(object$dinfo$gp$fake.formula[-2], newdata) :
variable lengths differ (found for 'random.x')
In addition: Warning message:
'newdata' had 200 rows but variables found have 10000 rows
How can I go about making predictions for a new dataset using the model fit with discretization?
newdata.guaranteed doesn't seem to be working for bam() models with discrete = TRUE. You could email the author and maintainer of mgcv and send him the reproducible example so he can take a look. See ?bug.reports.mgcv.
You probably want
names(newdat) <- "x"
as data frames have names.
But the workaround is just to pass in something for random.x
newdat <- data.frame(x = runif(200, 0, 1), random.x = random.x[[1]])
and then do your call to generate test3 and it will work.
The warning message and error are the result of you not specifying random.x in the newdata, so mgcv goes looking for random.x and finds it in the global environment. You should really gather those variables into a data frame and use the data argument when fitting your models, and try not to leave similarly named objects lying around in your global environment.
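For instance, a minimal sketch of that safer pattern, reusing the simulated variables and mgcv arguments from the question:

#keep the variables together and fit with the data argument.
dat <- data.frame(y = y, x = x, random.x = random.x)
fit3 <- bam(y ~ s(x) + s(random.x, bs = 're'),
            data = dat, discrete = TRUE, nthreads = 2)
#supply a placeholder level so the excluded term can still be constructed.
newdat <- data.frame(x = runif(200, 0, 1), random.x = dat$random.x[1])
test3 <- predict(fit3, newdata = newdat, exclude = "s(random.x)")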
I'm working on an optimization of a logistic regression model made with glm; the optimization is a lasso regression using glmnet. I want to compare both models using the output of a Hosmer-Lemeshow test, and I get the output below.
For the glm I get
> hl <- hoslem.test(trainingDatos$Exited, fitted(logit.Mod))
> hl
Hosmer and Lemeshow goodness of fit (GOF) test
data: trainingDatos$Exited, fitted(logit.Mod)
X-squared = 2.9161, df = 8, p-value = 0.9395
And when I try to run the test for the lasso regression I get
> hll <- hoslem.test(trainingDatos$Exited, fitted(lasso.model), g=10)
Error in cut.default(yhat, breaks = qq, include.lowest = TRUE) :
'x' must be numeric
I also tried to use the coefficients of the lasso regression to make it numeric and I get
> hll <- hoslem.test(trainingDatos$Exited, fitted(lasso.model$beta), g=10)
Error: $ operator not defined for this S4 class
But when I treat it as an S4
> hll <- hoslem.test(trainingDatos$Exited, fitted(lasso.model@beta), g=10)
Error in fitted(lasso.model@beta) :
trying to get slot "beta" from an object (class "lognet") that is not an S4 object
Any way to run the test for my lasso regression?
Here is my full code for the lasso regression; I can't share the database right now, sorry.
#Creation of Training Data Set
input_ones <- Datos[which(Datos$Exited == 1), ] #All 1s
input_zeros <- Datos[which(Datos$Exited == 0), ] #All 0s
set.seed(100)
#Training 1s
input_ones_training_rows <- sample(1:nrow(input_ones), 0.7*nrow(input_ones))
#Training 0s
input_zeros_training_rows <- sample(1:nrow(input_zeros), 0.7*nrow(input_ones))
training_ones <- input_ones[input_ones_training_rows, ]
training_zeros <- input_zeros[input_zeros_training_rows, ]
trainingDatos <- rbind(training_ones, training_zeros)
library(glmnet)
#Conversion of training data into matrix form
x <- model.matrix(Exited ~ CreditScore + Geography + Gender
+ Age + Tenure + Balance + IsActiveMember
+ EstimatedSalary, trainingDatos)[,-1]
#Defining numeric response variable
y <- trainingDatos$Exited
set.seed(100)
#Grid search to find best lambda
cv.lasso <- cv.glmnet(x, y, alpha = 1, family = "binomial")
#Creation of the model
lasso.model <- glmnet(x, y, alpha = 1, family = "binomial",
lambda = cv.lasso$lambda.1se)
coef(cv.lasso, cv.lasso$lambda.1se)
#Now trying to run the test
library(ResourceSelection)
set.seed(12657)
hll <- hoslem.test(trainingDatos$Exited, fitted(lasso.model), g=10)#numeric value error
hll <- hoslem.test(trainingDatos$Exited, fitted(lasso.model$beta), g=10)#$ not defined for S4
hll <- hoslem.test(trainingDatos$Exited, fitted(lasso.model@beta), g=10)#saying that beta is not S4
glmnet uses its own predict() method for obtaining fitted values, so the errors come from using fitted(), which does not work on glmnet objects. Meanwhile, running such tests can be easier with the gofcat package: supported objects are passed directly to its functions, so your glm model, for instance, goes hosmerlem(logit.Mod).
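A minimal sketch of the predict()-based route, reusing x and trainingDatos from the code above:

#fitted probabilities from the glmnet fit; predict() returns a one-column
#matrix here because the model was fit at a single lambda.
probs <- predict(lasso.model, newx = x, type = "response")
hll <- hoslem.test(trainingDatos$Exited, as.vector(probs), g = 10)
hll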
I prefer to use caret when fitting models because of its relative speed and preprocessing capabilities. However, I'm slightly confused about how it makes predictions. When comparing predictions made directly from the train object with predictions made from the extracted final model, I see very different numbers. The predictions from the train object appear to be more accurate.
library(caret)
library(ranger)
x1 <- rnorm(100)
x2 <- rbeta(100, 1, 1)
y <- 2*x1 + x2 + 5*x1*x2
data <- data.frame(x1, x2, y)
fitRanger <- train(y ~ x1 + x2, data = data,
method = 'ranger',
tuneLength = 1,
preProcess = c('knnImpute', 'center', 'scale'))
predict.data <- data.frame(x1 = rnorm(10), x2 = rbeta(10, 1, 1))
prediction1 <- predict(fitRanger, newdata = predict.data)
prediction2 <- predict(fitRanger$finalModel, data = predict.data)$prediction
results <- data.frame(prediction1, prediction2)
results
I'm positive it has something to do with how I preprocess the data in the train object, but even when I preprocess the test data and use the ranger model to make predictions, the values are different:
predict.data.processed <- predict.data %>%
preProcess(method = c('knnImpute',
'center',
'scale')) %>% .$data
results3 <- predict(fitRanger$finalModel, data = predict.data.processed)$prediction
results <- cbind(results, results3)
results
I want to extract the predictions from each individual tree in the ranger model, which I can't do in caret. Any thoughts?
In order to get the same predictions from the final model as with caret train you should pre-process the data in the same way. Using your example with set.seed(1):
caret predict:
prediction1 <- predict(fitRanger,
newdata = predict.data)
ranger predict on the final model; caret's preProcess object is applied to predict.data first:
prediction2 <- predict(fitRanger$finalModel,
data = predict(fitRanger$preProcess,
predict.data))$prediction
all.equal(prediction1,
prediction2)
#output
TRUE
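For the per-tree predictions asked about at the end of the question, ranger's predict() accepts predict.all = TRUE; a sketch, reusing the same pre-processed data:

#one column per tree instead of the aggregated prediction.
per.tree <- predict(fitRanger$finalModel,
                    data = predict(fitRanger$preProcess, predict.data),
                    predict.all = TRUE)$predictions
dim(per.tree) #rows = observations, columns = trees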
The working data looks like:
set.seed(1234)
df <- data.frame(y = rnorm(1:30),
fac1 = as.factor(sample(c("A","B","C","D","E"),30, replace = T)),
fac2 = as.factor(sample(c("NY","NC","CA"),30,replace = T)),
x = rnorm(1:30))
The lmer model is fitted as:
library(lme4)
mixed <- lmer(y ~ x + (1|fac1) + (1|fac2), data = df)
I used bootMer to run the parametric bootstrap, and I can successfully obtain the coefficients (intercepts) and SEs for the fixed & random effects:
mixed_boot_sum <- function(data){
  s <- sigma(data)
  c(beta = getME(data, "fixef"), theta = getME(data, "theta"), sigma = s)
}
mixed_boot <- bootMer(mixed, FUN = mixed_boot_sum, nsim = 100, type = "parametric", use.u = FALSE)
My first question is how to obtain the coefficients (slopes) of each individual level of the two random effects from the bootstrapping results mixed_boot?
I have no problem extracting the coefficients (slopes) from the mixed model by using the augment function from the broom package; see below:
library(broom)
mixed.coef <- augment(mixed, df)
However, it seems that broom can't deal with objects of class boot, so I can't use the above functions directly on mixed_boot.
I also tried to modify mixed_boot_sum by adding mmList (I thought this would be what I am looking for), but R complains:
Error in bootMer(mixed, FUN = mixed_boot_sum, nsim = 100, type = "parametric", :
bootMer currently only handles functions that return numeric vectors
Furthermore, is it possible to obtain CIs for both the fixed & random effects by specifying FUN as well?
Now I am very confused about the correct specification of FUN to achieve what I need. Any help regarding my question would be greatly appreciated!
My first question is how to obtain the coefficients (slopes) of each individual level of the two random effects from the bootstrapping results mixed_boot?
I'm not sure what you mean by "coefficients (slopes) of each individual level". broom::augment(mixed, df) gives the predictions (residuals, etc.) for every observation. If you want the predicted coefficients at each level, I would try
mixed_boot_coefs <- function(fit){
unlist(coef(fit))
}
which for the original model gives
mixed_boot_coefs(mixed)
## fac1.(Intercept)1 fac1.(Intercept)2 fac1.(Intercept)3 fac1.(Intercept)4
## -0.4973925 -0.1210432 -0.3260958 0.2645979
## fac1.(Intercept)5 fac1.x1 fac1.x2 fac1.x3
## -0.6288728 0.2187408 0.2187408 0.2187408
## fac1.x4 fac1.x5 fac2.(Intercept)1 fac2.(Intercept)2
## 0.2187408 0.2187408 -0.2617613 -0.2617613
## ...
If you want the resulting object to be more clearly named you can use:
flatten <- function(cc) setNames(unlist(cc),
outer(rownames(cc),colnames(cc),
function(x,y) paste0(y,x)))
mixed_boot_coefs <- function(fit){
unlist(lapply(coef(fit),flatten))
}
When run through bootMer/confint/boot::boot.ci these functions will give confidence intervals for each of these values (note that all of the slopes facW.xZ are identical across groups because the model assumes random variation in the intercept only). In other words, whatever information you know how to extract from a fitted model (conditional modes/BLUPs [ranef], predicted intercepts and slopes for each level of the grouping variable [coef], parameter estimates [fixef, getME], random-effects variances [VarCorr], predictions under specific conditions [predict] ...) can be used in bootMer's FUN argument, as long as you can flatten its structure into a simple numeric vector.
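For instance, a sketch of the full bootstrap-to-CI pipeline under the same setup (boot.ci's index picks which flattened coefficient to summarize):

mixed_boot2 <- bootMer(mixed, FUN = mixed_boot_coefs, nsim = 100,
                       type = "parametric", use.u = FALSE)
#percentile CI for the first flattened element (fac1 level A's intercept here).
boot::boot.ci(mixed_boot2, index = 1, type = "perc")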
I would like to perform a likelihood ratio test to determine the power of a model term in a DOE. Until now I have been using the p-value from the glm fit to do this, and things have been fine. As I started to use the anova function, I realized that there does not seem to be an anova method designed to accept the output of glm.fit, only that of glm. Here is an example of what I would like to do:
X # This is a model matrix from model.matrix
y # These are the y values for the fit
tfit = glm.fit(X, y, family = poisson())
anova(tfit, test = 'LRT')
Typically I would assume that the anova function call would just need to be altered to anova.glm, but that is not the case. How can I get the glm.fit function output to be compatible with an anova function input?
The problem is that glm.fit does not return an object of class glm, but a raw list with all kinds of data about the model. This cannot be fed to anova.glm, since that function expects an object of class glm as produced by the glm function. If you have the raw data available (that is, not yet turned into a model matrix), you can apply the glm function to it to produce the desired outcome.
X <- matrix(c(runif(10), rnorm(10)), ncol = 2)
y <- round(runif(10, 1, 5))
X.mm <- model.matrix(y ~ X)
model.fit.1 <- glm.fit(X.mm, y, family = poisson())
class(model.fit.1) # a plain list, not class "glm"
model.fit.2 <- glm(y ~ X, family = "poisson")
class(model.fit.2) # "glm" "lm"
anova(model.fit.2, test = "LRT")
If you can't use the glm function and must use glm.fit, you can construct the LRT yourself from the glm.fit output. For a start, take the following function:
LRT.glm.fit <- function(glm.fit.mod){
  df.null <- glm.fit.mod$df.null
  df.mod <- glm.fit.mod$df.residual
  dev.null <- glm.fit.mod$null.deviance
  dev.mod <- glm.fit.mod$deviance
  dev.diff <- dev.null - dev.mod
  p.value <- 1 - pchisq(dev.diff, df.null - df.mod)
  output <- c(round(df.null), round(df.mod), dev.null, dev.mod, p.value)
  names(output) <- c("df.null", "df.mod", "dev.null", "dev.mod", "p.value")
  output
}
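Applied to the glm.fit output from the example above, this reproduces the overall null-vs-fitted LRT (the numbers will vary with the random draws):

LRT.glm.fit(model.fit.1)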