I am using xgboost as in the following example, where I "recode" a numeric value to a (numeric) value in {0, 1, 2} indicating class labels. Note that I did not convert this to a factor variable.
Then I use xgboost to fit a model and produce predictions.
library(xgboost)

iris$Species <- runif(nrow(iris))

recode <- function(x){
  if(x >= 0 & x <= 0.33){
    x <- 0
  } else if(x > 0.33 & x <= 0.66){
    x <- 1
  } else if(x > 0.66){
    x <- 2
  }
}

train <- xgb.DMatrix(data = as.matrix(iris[, -5]),
                     label = sapply(iris$Species, FUN = recode))

bst <- xgboost(data = train,
               max_depth = 4, eta = 0.5, nrounds = 10,
               objective = "multi:softprob",
               num_class = 3)

pred <- predict(bst, as.matrix(iris[, -5]), reshape = TRUE)
str(pred)
Is there a way to obtain the column labels of the matrix of predictions? Or can I be sure that they are ordered according to the numeric values to which I recoded the input?
The columns follow the same order as your labels, so they are 0, 1, and 2. To be sure, you can build a confusion matrix to check whether you are predicting correctly:
library(xgboost)

set.seed(100)
iris$Species <- runif(nrow(iris))

train <- xgb.DMatrix(data = as.matrix(iris[, -5]),
                     label = sapply(iris$Species, FUN = recode))

bst <- xgboost(data = train,
               max_depth = 4, eta = 0.5, nrounds = 10,
               objective = "multi:softprob",
               num_class = 3)

pred <- predict(bst, as.matrix(iris[, -5]), reshape = TRUE)

# which.max tells you which column is most probable;
# we convert back to 0-2, assuming column 1 corresponds to label 0
predicted <- apply(pred, 1, which.max) - 1
actual <- sapply(iris$Species, recode)
table(predicted, actual)
The results are:
         actual
predicted  0  1  2
        0 36  2  2
        1  4 48  4
        2  6  3 45
So for most observations the predicted class (0, 1, or 2) matches the actual recoded label: the bulk of the counts lie on the diagonal.
Or if you use caret:
caret::confusionMatrix(factor(predicted, levels = 0:2), factor(actual, levels = 0:2))
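If you want the prediction matrix itself to carry labels, a minimal sketch is to set the column names yourself (the names below are made up; the column order follows the integer labels 0, 1, 2):
colnames(pred) <- c("class0", "class1", "class2")  # column j holds the probability of label j - 1
head(pred)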
Related
I want to generate the dependent variable Y so that it is highly correlated with the first 5 variables of the independent variable matrix X. I also want class 1 to make up 60% of the observations and class 0 the remaining 40%. How can I do this? (The correlation structure is more important to me than the exact 60%-40% split.)
install.packages("MASS")
library(MASS)
# Data gen
p=30
n=50
pr <- seq(0.7, 0.4, length.out = p)
pr[1] <- 1
covmat <- toeplitz(pr)
mu= rep(0,p)
X_ <- data.frame(mvrnorm(n, mu = mu, Sigma = covmat))
X <- unname(as.matrix(X_))
vCoef = rnorm(ncol(X))
vProb =exp(X%*%vCoef)/(1+exp(X%*%vCoef))
Y <- rbinom(nrow(X), 1, vProb)
mydata= data.frame(cbind(X,Y))
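One way to get both properties is a sketch along these lines (not a definitive recipe): give nonzero coefficients only to the first 5 columns of X, so that Y depends mainly on them, and add an intercept that you tune until the simulated prevalence of class 1 is near 0.6. The coefficient value 1.5 and the intercept 0.5 below are hypothetical starting points:
set.seed(1)
vCoef <- c(rep(1.5, 5), rep(0, p - 5))  # only X1-X5 drive Y
intercept <- 0.5                        # tune this up or down to move P(Y = 1)
vProb <- plogis(intercept + X %*% vCoef)  # logistic link
Y <- rbinom(nrow(X), 1, vProb)
mean(Y)           # proportion of class 1; adjust 'intercept' until this is near 0.6
cor(X[, 1:5], Y)  # correlation of Y with the first 5 predictors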
I'm using an R package called logistf to fit a logistic regression, and I saw that there is no predict function for new data in this package (the generic predict() does not work with it), so I found code that shows how to do this with new data:
fit <- logistf(Tax ~ L20 + L24 + L28 + L29 + L31 + L32 + L33 + L36 + S10 + S15 + S16 + S17 + S20,
               data = trainData)
betas <- coef(fit)
X <- model.matrix(fit, data = testData)
probs <- 1 / (1 + exp(-X %*% betas))
I want to build a cross-validated version of this, using fit$predict and the probabilities that probs generates for me. Has anyone done something like this before?
Another thing I want to know concerns fit$predict: I'm fitting a binary logistic regression and this component returns many values. Are these probabilities for class 0 or for class 1, and how can I tell? Thanks
While the code that you wrote works, there is seemingly a more concise way of getting the same results with the brglm package:
library(brglm)

brglm_model <- brglm(formula = response ~ predictor, family = "binomial", data = train)
brglm_pred <- predict(object = brglm_model, newdata = test, type = "response")
About the CV, you have to write a few lines of code I guess:
library(dplyr)    # for filter()
library(ggplot2)  # for the plot at the end

# Set the number of folds and the number of instances in each fold
n_folds <- 5
fold_size <- nrow(dataset) %/% n_folds
residual <- nrow(dataset) %% n_folds

# Label the instances based on the number of folds
cv_labels <- c(rep(1:n_folds, each = fold_size), rep(n_folds, residual))

# The error term differs for each threshold value
t_seq <- seq(0.1, 0.9, by = 0.1)
index_mat <- matrix(ncol = (n_folds + 1), nrow = length(t_seq))
index_mat[, 1] <- t_seq

# The main loop: calculate the CV error on each fold
for (i in 1:n_folds) {
  train <- dataset %>% filter(cv_labels != i)
  test  <- dataset %>% filter(cv_labels == i)
  brglm_cv_model <- brglm(formula = response_var ~ ., family = "binomial", data = train)
  # note: predict from the fold model, not the model fitted on all of the data
  brglm_cv_pred <- predict(object = brglm_cv_model, newdata = test, type = "response")

  # error formula that you want, e.g. misclassification
  counter <- 0
  for (threshold in t_seq) {
    counter <- counter + 1
    conf_mat <- table(factor(test$response_var),
                      factor(brglm_cv_pred > threshold, levels = c("FALSE", "TRUE")))
    sen <- conf_mat[2, 2] / sum(conf_mat[2, ])
    # other indices can be computed as follows:
    # spec     <- conf_mat[1, 1] / sum(conf_mat[1, ])
    # prec     <- conf_mat[2, 2] / sum(conf_mat[, 2])
    # F1       <- (2 * prec * sen) / (prec + sen)
    # accuracy <- (conf_mat[1, 1] + conf_mat[2, 2]) / sum(conf_mat)
    # here I am only interested in sensitivity
    index_mat[counter, (i + 1)] <- sen
  }
}

# The final data frame holds the mean sensitivity for each threshold value
final_mat <- matrix(nrow = length(t_seq), ncol = 2)
final_mat[, 1] <- t_seq
final_mat[, 2] <- apply(X = index_mat[, -1], MARGIN = 1, FUN = mean)
final_mat <- data.frame(final_mat)
colnames(final_mat) <- c("threshold", "sensitivity")

# Why not have a look at the CV sensitivity of the model over the threshold values?
ggplot(data = final_mat) +
  geom_line(aes(x = threshold, y = sensitivity), color = "blue")
When I use the mice package to impute data, I have the following issue:
I can't seem to find a way to replace NA values in new observations, given that I have already imputed the missing data in the training set.
Example 1
I have trained an algorithm on a data frame with 10 features and 1000 observations.
How can I predict a new observation that has missing data using this algorithm?
Example 2
Suppose we have a data frame with NA values:
  V1  V2  V3  R1
   1   2  NA   1
 1.4  -1   0   0
 1.2  NA   0   1
 1.6  NA   1   1
 1.2   3   1   0
I impute the missing values using the mice package:
imp <- mice(df, m = 2, maxit = 100, meth = 'pmm', seed = 12345)
The object imp now contains 2 sets of imputed values.
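A completed data frame is extracted with complete(), e.g.:
dfImp1 <- complete(imp, 1)  # the first of the m = 2 completed data sets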
(dfImp1)
  V1   V2  V3  R1
   1    2 0.5   1
 1.4   -1   0   0
 1.2  1.5   0   1
 1.6  1.5   1   1
 1.2    3   1   0
Now with this data frame, I can train an algorithm:
modl <- glm(R1 ~ ., data = dfImp1, family = binomial)
I want to predict the response of a new observation, e.g:
obs1 <- data.frame(V1 = 1, V2 = 1.4, V3 = NA)
How do I impute the missing data of a new individual observation?
It seems that the mice package has no built-in solution for this, but we can write one.
The idea is to:
(1) use the same mice algorithm to fill NAs both in the dataset used to train the GLM and in the new observation(s);
(2) predict only the new observation(s), now without NAs.
I'm going to use iris as a data example.
library(R6)
library(mice)
library(dplyr)  # for %>%, filter() and bind_rows()

# Binary output to use binomial
df <- iris %>% filter(Species != "virginica")
# The new observation
new_data <- tail(df, 1)

# The dataset used to train the model
df <- head(df, -1)

# Now, let's insert some NAs
insert_nas <- function(x) {
  set.seed(123)
  len <- length(x)
  n <- sample(1:floor(0.2 * len), 1)
  i <- sample(1:len, n)
  x[i] <- NA
  x
}

df$Sepal.Length <- insert_nas(df$Sepal.Length)
df$Petal.Width <- insert_nas(df$Petal.Width)
new_data$Sepal.Width <- NA

summary(df)
In the fit method we apply mice to fill NAs, fit a GLM model, and store it to be used in the predict method.
In the predict method we (1) add the new observation(s) to the dataset (with NAs), (2) impute again using mice, (3) take back the row(s) of the new observation(s), now without NAs, and (4) apply the GLM to predict the new observation(s).
# R6 class generator
GLMWithMice <- R6Class("GLMWithMice", list(
  model = NULL,
  df = NULL,
  fitted = FALSE,
  initialize = function(df) {
    self$df <- df
  },
  fit = function(formula = "Species ~ .", family = binomial) {
    imp <- mice(self$df, m = 2, maxit = 100, meth = 'pmm', seed = 12345, print = FALSE)
    df_cleaned <- complete(imp, 1)
    self$model <- glm(formula, df_cleaned, family = family, maxit = 100)
    self$fitted <- TRUE
    return(cat("\n model fitted!"))
  },
  predict = function(new_data, type = "response") {
    n_rows <- nrow(self$df)
    df_new <- bind_rows(self$df, new_data)
    imp <- mice(df_new, m = 2, maxit = 100, meth = 'pmm', seed = 12345, print = FALSE)
    df_cleaned <- complete(imp, 1)
    new_data_cleaned <- tail(df_cleaned, nrow(df_new) - n_rows)
    return(predict(self$model, new_data_cleaned, type = type))
  }
))
# Let's create a new instance of the "GLMWithMice" class
model <- GLMWithMice$new(df = df)
class(model)

model$fit(formula = Species ~ ., family = binomial)
model$predict(new_data = new_data)
Does anyone know how gbm in R handles missing values? I can't seem to find any explanation using Google.
To explain what gbm does with missing predictors, let's first visualize a single tree of a gbm object.
Suppose you have a gbm object mygbm. Using pretty.gbm.tree(mygbm, i.tree = 1) you can visualize the first tree of mygbm, e.g.:
  SplitVar SplitCodePred LeftNode RightNode MissingNode ErrorReduction Weight    Prediction
0       46  1.629728e+01        1         5           9      26.462908   1585 -4.396393e-06
1       45  1.850000e+01        2         3           4      11.363868    939 -4.370936e-04
2       -1  2.602236e-04       -1        -1          -1       0.000000    271  2.602236e-04
3       -1 -7.199873e-04       -1        -1          -1       0.000000    668 -7.199873e-04
4       -1 -4.370936e-04       -1        -1          -1       0.000000    939 -4.370936e-04
5       20  0.000000e+00        6         7           8       8.638042    646  6.245552e-04
6       -1  3.533436e-04       -1        -1          -1       0.000000    483  3.533436e-04
7       -1  1.428207e-03       -1        -1          -1       0.000000    163  1.428207e-03
8       -1  6.245552e-04       -1        -1          -1       0.000000    646  6.245552e-04
9       -1 -4.396393e-06       -1        -1          -1       0.000000   1585 -4.396393e-06
See the gbm documentation for details. Each row corresponds to a node, and the first (unnamed) column is the node number. We see that each node has a left and a right node (set to -1 when the node is a leaf), and that each node also has an associated MissingNode.
To run an observation down the tree, we start at node 0. If the observation has a missing value on SplitVar = 46, then it is sent to the node MissingNode = 9. The tree's prediction for such an observation is SplitCodePred = -4.396393e-06, which is the same prediction the tree had before any split was made at node zero (Prediction = -4.396393e-06 for node zero).
The procedure is similar for other nodes and split variables.
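If you want to check this on your own model, a small sketch (assuming a fitted gbm object mygbm as above):
library(gbm)
tree1 <- pretty.gbm.tree(mygbm, i.tree = 1)
# rows with SplitVar != -1 are internal splits; the MissingNode column
# gives the node an observation with NA on that variable is routed to
tree1[tree1$SplitVar != -1, c("SplitVar", "LeftNode", "RightNode", "MissingNode")]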
It appears to send missing values to a separate node within each tree. If you have a gbm object called "mygbm", then by typing "pretty.gbm.tree(mygbm, i.tree = 1)" you'll see that for each split in the tree there is a LeftNode, a RightNode, and a MissingNode. This implies that (assuming you have interaction.depth = 1) each tree will have 3 terminal nodes (one for each side of the split and one for where the predictor is missing).
The official guide to gbm introduces missing values into the test data, so I would assume that it is coded to handle missing values.
The gbm package in particular deals with NAs (missing values) as follows. The algorithm works by building and serially combining classification or regression trees. These so-called base learner trees are built by divvying observations up into Left and Right splits (#user2332165 is right). There is also a separate node type of Missing in gbm. If a row or observation does not have a value for that variable, the algorithm will apply a surrogate split method.
If you want to understand surrogate splitting better, I recommend reading the package rpart vignette.
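For contrast, a quick sketch of surrogate splits in rpart (this uses the cu.summary data set shipped with rpart, in which some predictors contain NAs):
library(rpart)
fit <- rpart(Price ~ Mileage + Type + Country, data = cu.summary)
summary(fit)  # the "Surrogate splits" sections show the backup variables
              # rpart uses when the primary split variable is NA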
Start with the source code then. Just typing gbm at the console shows you the source code:
function (formula = formula(data), distribution = "bernoulli",
    data = list(), weights, var.monotone = NULL, n.trees = 100,
    interaction.depth = 1, n.minobsinnode = 10, shrinkage = 0.001,
    bag.fraction = 0.5, train.fraction = 1, cv.folds = 0, keep.data = TRUE,
    verbose = TRUE)
{
    mf <- match.call(expand.dots = FALSE)
    m <- match(c("formula", "data", "weights", "offset"), names(mf), 0)
    mf <- mf[c(1, m)]
    mf$drop.unused.levels <- TRUE
    mf$na.action <- na.pass
    mf[[1]] <- as.name("model.frame")
    mf <- eval(mf, parent.frame())
    Terms <- attr(mf, "terms")
    y <- model.response(mf, "numeric")
    w <- model.weights(mf)
    offset <- model.offset(mf)
    var.names <- attributes(Terms)$term.labels
    x <- model.frame(terms(reformulate(var.names)), data, na.action = na.pass)
    response.name <- as.character(formula[[2]])
    if (is.character(distribution))
        distribution <- list(name = distribution)
    cv.error <- NULL
    if (cv.folds > 1) {
        if (distribution$name == "coxph")
            i.train <- 1:floor(train.fraction * nrow(y))
        else i.train <- 1:floor(train.fraction * length(y))
        cv.group <- sample(rep(1:cv.folds, length = length(i.train)))
        cv.error <- rep(0, n.trees)
        for (i.cv in 1:cv.folds) {
            if (verbose)
                cat("CV:", i.cv, "\n")
            i <- order(cv.group == i.cv)
            gbm.obj <- gbm.fit(x[i.train, , drop = FALSE][i, , drop = FALSE],
                y[i.train][i], offset = offset[i.train][i],
                distribution = distribution,
                w = ifelse(w == NULL, NULL, w[i.train][i]),
                var.monotone = var.monotone, n.trees = n.trees,
                interaction.depth = interaction.depth,
                n.minobsinnode = n.minobsinnode, shrinkage = shrinkage,
                bag.fraction = bag.fraction,
                train.fraction = mean(cv.group != i.cv),
                keep.data = FALSE, verbose = verbose,
                var.names = var.names, response.name = response.name)
            cv.error <- cv.error + gbm.obj$valid.error * sum(cv.group == i.cv)
        }
        cv.error <- cv.error/length(i.train)
    }
    gbm.obj <- gbm.fit(x, y, offset = offset, distribution = distribution,
        w = w, var.monotone = var.monotone, n.trees = n.trees,
        interaction.depth = interaction.depth, n.minobsinnode = n.minobsinnode,
        shrinkage = shrinkage, bag.fraction = bag.fraction, train.fraction = train.fraction,
        keep.data = keep.data, verbose = verbose, var.names = var.names,
        response.name = response.name)
    gbm.obj$Terms <- Terms
    gbm.obj$cv.error <- cv.error
    gbm.obj$cv.folds <- cv.folds
    return(gbm.obj)
}
<environment: namespace:gbm>
A quick read suggests that the data is put into a model frame and that NAs are handled with na.pass; see ?na.pass. Reading that, it looks like nothing special is done with them, but you'd probably have to read up on the whole fitting process to see what that means in the long run. It looks like you might also need to look at the code of gbm.fit and so on.
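A minimal sketch of the difference na.pass makes (toy data, not from gbm):
d <- data.frame(y = c(1, 0, 1), x = c(2.5, NA, 3.1))
nrow(model.frame(y ~ x, data = d))                       # default na.omit drops the NA row -> 2
nrow(model.frame(y ~ x, data = d, na.action = na.pass))  # na.pass keeps all 3 rows, NAs included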
Unfortunately I have problems using predict() in the following simple example:
library(e1071)

x <- 1:10
y <- c(0, 0, 0, 0, 1, 0, 1, 1, 1, 1)
test <- 11:15

mod <- svm(y ~ x, kernel = "linear", gamma = 1, cost = 2, type = "C-classification")
predict(mod, newdata = test)
The result is as follows:
> predict(mod, newdata = test)
   1    2    3    4 <NA> <NA> <NA> <NA> <NA> <NA>
   0    0    0    0    0    1    1    1    1    1
Can anybody explain why predict() only gives the fitted values of the training sample (x, y) and ignores the test data?
Thank you very much for your help!
Richard
It looks like this is because you misuse the formula interface to svm(). Normally, one supplies a data frame or similar object in which the variables in the formula are searched for. It usually doesn't matter if you don't do this, even though it is not best practice, but when you want to predict, not putting the variables in a data frame gets you in a right mess. The reason it returns the fitted values here is that you don't give newdata an object with a component named x in it. Hence it can't find the new data x, so it returns the fitted values. This is common to most predict methods in R that I know of.
The solution then is to (i) put your training data in a data frame and pass it to svm() as the data argument, and (ii) supply a new data frame containing x (from test) to predict(). E.g.:
> DF <- data.frame(x = x, y = y)
> mod <- svm(y ~ x, data = DF, kernel = "linear", gamma = 1, cost = 2,
+            type = "C-classification")
> predict(mod, newdata = data.frame(x = test))
1 2 3 4 5
1 1 1 1 1
Levels: 0 1
You need newdata to be of the same form, i.e. using a data.frame helps:
R> library(e1071)
Loading required package: class
R> df <- data.frame(x=1:10, y=sample(c(0,1), 10, rep=TRUE))
R> mod <- svm(y ~ x, kernel = "linear", gamma = 1,
+ cost = 2, type="C-classification", data=df)
R> newdf <- data.frame(x=11:15)
R> predict(mod, newdata=newdf)
1 2 3 4 5
0 0 0 0 0
Levels: 0 1
R>
By the way, this is also shown in the help page for svm():
## density-estimation
# create 2-dim. normal with rho=0:
X <- data.frame(a = rnorm(1000), b = rnorm(1000))
attach(X)
# traditional way:
m <- svm(X, gamma = 0.1)
# formula interface:
m <- svm(~., data = X, gamma = 0.1)
# or:
m <- svm(~ a + b, gamma = 0.1)
# test:
newdata <- data.frame(a = c(0, 4), b = c(0, 4))
predict(m, newdata)
So in sum, use the formula interface and supply a data.frame --- that is how essentially all modeling functions in R work.