How is xgboost Cover calculated in R?

Could someone explain how the Cover column in the xgboost R package is calculated in the xgb.model.dt.tree function?
In the documentation it says that Cover "is a metric to measure the number of observations affected by the split".
When you run the following code, given in the xgboost documentation for this function, Cover for node 0 of tree 0 is 1628.2500.
library(xgboost)
data(agaricus.train, package = 'xgboost')
# The dataset is a list with two items: a sparse matrix and labels
# (labels = outcome column which will be learned).
# Each column of the sparse matrix is a feature in one-hot encoding format.
train <- agaricus.train
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
# agaricus.train$data@Dimnames[[2]] holds the column names of the sparse matrix.
xgb.model.dt.tree(agaricus.train$data@Dimnames[[2]], model = bst)
There are 6513 observations in the train dataset, so can anyone explain why Cover for node 0 of tree 0 is a quarter of this number (1628.25)?
Also, Cover for node 1 of tree 1 is 788.852 - how is this number calculated?
Any help would be much appreciated. Thanks.

Cover is defined in xgboost as:
the sum of second order gradient of training data classified to the
leaf, if it is square loss, this simply corresponds to the number of
instances in that branch. Deeper in the tree a node is, lower this
metric will be
https://github.com/dmlc/xgboost/blob/f5659e17d5200bd7471a2e735177a81cb8d3012b/R-package/man/xgb.plot.tree.Rd
Not particularly well documented....
In order to calculate the cover, we need to know the predictions at that point in the tree, and the 2nd derivative with respect to the loss function.
Lucky for us, the prediction for every data point (6513 of them) in the 0-0 node in your example is .5. This is a global default setting whereby your first prediction at t=0 is .5.
base_score [ default=0.5 ] the initial prediction score of all
instances, global bias
http://xgboost.readthedocs.org/en/latest/parameter.html
The gradient of binary logistic (your objective function) is p - y, where p = your prediction and y = the true label.
Thus, the hessian (which we need for this) is p * (1 - p). Note: the hessian can be determined without y, the true labels.
So, bringing it home:
6513 * 0.5 * (1 - 0.5) = 1628.25
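You can check this directly with the objects defined above (a quick sanity check; nrow(train$data) is the 6513 training rows and 0.5 is the default base_score):
# Root-node cover of the first tree: the sum over all training rows of the
# hessian p * (1 - p), evaluated at the initial prediction base_score = 0.5.
p0 <- 0.5
nrow(train$data) * p0 * (1 - p0)   # 6513 * 0.25 = 1628.25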
In the second tree, the predictions at that point are no longer all 0.5, so let's get the predictions after one tree:
p <- predict(bst, newdata = train$data, ntreelimit = 1)
head(p)
[1] 0.8471184 0.1544077 0.1544077 0.8471184 0.1255700 0.1544077
sum(p*(1-p)) # sum of the hessians in that node,(root node has all data)
[1] 788.8521
Note: for linear (squared error) regression the hessian is always one, so the cover simply indicates how many examples are in that leaf.
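If you want to convince yourself of that, here is a small check of my own (not from the original post) that refits the same data with a squared-error objective; the root-node Cover should then equal the number of training rows:
# With squared-error loss the hessian is 1 for every row, so Cover = row count.
# "reg:linear" is the old name of the squared-error objective; newer xgboost
# versions call it "reg:squarederror".
bst_reg <- xgboost(data = train$data, label = train$label, max.depth = 2,
                   eta = 1, nthread = 2, nround = 1, objective = "reg:linear")
dt <- xgb.model.dt.tree(colnames(train$data), model = bst_reg)
dt$Cover[1]   # root node of the first tree; expect 6513, i.e. nrow(train$data)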
The big takeaway is that cover is defined by the hessian of the objective function. There is plenty of information out there on deriving the gradient and hessian of the binary logistic function.
These slides are helpful in seeing why hessians are used as weights, and also explain how xgboost splits differently from standard trees: https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf

Related

R: How to check which model of an ensemble algorithm has been selected to perform regression?

I am using the R package machisplin (it's not on CRAN) to downscale a satellite image. According to the description of the package:
The machisplin.mltps function simultaneously evaluates different combinations of the six algorithms to predict the input data. During model tuning, each algorithm is systematically weighted from 0-1 and the fit of the ensembled model is evaluated. The best performing model is determined through k-fold cross validation (k=10) and the model that has the lowest residual sum of squares of test data is chosen. After determining the best model algorithms and weights, a final model is created using the full training dataset.
My question is how can I check which model out of the 6 has been selected for the downscaling? To put it differently, when I export the downscaled image, I would like to know which algorithm (out of the 6) has been used to perform the downscaling.
Here is the code:
library(MACHISPLIN)
library(raster)
library(gbm)
evi <- raster("path/evi.tif")      # covariate
ntl <- raster("path/ntl_1600.tif") # raster to be downscaled
## convert one of the rasters to a point data frame to sample; use any raster input
ntl.points <- rasterToPoints(ntl, fun = NULL, spatial = FALSE)
## subset only the x and y data
ntl.points <- ntl.points[, 1:2]
## extract values to points from rasters
RAST_VAL <- data.frame(extract(ntl, ntl.points))
## merge sampled data to input
InInterp <- cbind(ntl.points, RAST_VAL)
# run an ensemble machine learning thin plate spline
interp.rast <- machisplin.mltps(int.values = InInterp,
                                covar.ras = evi,
                                smooth.outputs.only = TRUE,
                                tps = TRUE,
                                n.cores = 4)
# set negative values to 0
interp.rast[[1]]$final[interp.rast[[1]]$final <= 0] <- 0
writeRaster(interp.rast[[1]]$final,
            filename = "path/ntl_splines.tif")
I viewed all the output parameters (please refer to Example 2 in the package description) but I couldn't find anything relevant to my question.
I have posted a question on GitHub as well. From here you can download my images.
I think this is a misunderstanding: machisplin isn't testing the six algorithms and handing you one of them, it's trying many ensembles of the six and giving you one ensemble. In other words,
what you get is the best 'combination of the six algorithms', not one of the six algorithms chosen on its own.
It will be something like "a model which is 20% algo1, 10% algo2, etc." and not "algo1 is the best and was chosen".
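For intuition only, here is a purely hypothetical sketch of what such a weighted ensemble prediction looks like (the weights, model names and values below are made up for illustration and are not taken from the machisplin output):
# Hypothetical illustration: each fitted algorithm produces its own prediction
# for a raster cell, and the ensemble is a weighted sum of those predictions,
# with the weights chosen during model tuning.
weights <- c(gbm = 0.20, tps = 0.10, rf = 0.45, svm = 0.25)   # made-up weights
preds   <- c(gbm = 10.2, tps = 9.8, rf = 10.5, svm = 10.1)    # made-up predictions
sum(weights * preds)   # the ensemble's downscaled value for that cell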

Find the nearest neighbor using caret

I'm fitting a k-nearest neighbor model using R's caret package.
library(caret)
set.seed(0)
y = rnorm(20, 100, 15)
predictors = matrix(rnorm(80, 10, 5), ncol=4)
data = data.frame(cbind(y, predictors))
colnames(data)=c('Price', 'Distance', 'Cost', 'Tax', 'Transport')
I left one observation as the test data and fit the model using the training data.
id    <- sample(nrow(data) - 1)
train <- data[id, ]
test  <- data[-id, ]
knn.model <- train(Price ~ ., method = 'knn', data = train)
predict(knn.model, test)
When I display knn.model, it tells me it uses k=9. I would love to know which 9 observations are actually the "nearest" to the test observation. Besides manually calculating the distances, is there an easier way to display the nearest neighbors?
Thanks!
When you use knn you are grouping points that are close to each other in the space of the independent variables. Normally this is done with train(Price ~ ., method = 'knn', data = train), so that the model chooses the best k according to some criterion (taking the dependent variable into account as well). Since I have not checked whether the caret object stores the fitted price for each training observation, I simply used the trained model to predict the expected price for the training data (i.e., where each observation sits in that space).
In the end, the predicted value is a representation of all the other variables in a common space, and prices are assumed to be similar for points that are close together in it.
As a summary of steps, you need to do the following:
Get a prediction for each of the training data points (done by predicting over them).
Calculate the distance between the training predictions and the prediction for your observation of interest (in absolute value, since you do not care about the sign, only the size of the difference).
Take the indexes of the N smallest distances (e.g. N = 9); those indexes give you the observations with the lowest distances.
TestPred  <- predict(knn.model, newdata = test)
TrainPred <- predict(knn.model, newdata = train)
Nearest9neighbors <- order(abs(TestPred - TrainPred))[1:9]
train[Nearest9neighbors, ]
Price Distance Cost Tax Transport
15 95.51177 13.633754 9.725613 13.320678 12.981295
7 86.07149 15.428847 2.181090 2.874508 14.984934
19 106.53525 16.191521 -1.119501 5.439658 11.145098
2 95.10650 11.886978 12.803730 9.944773 16.270416
4 119.08644 14.020948 5.839784 9.420873 8.902422
9 99.91349 3.577003 14.160236 11.242063 16.280094
18 86.62118 7.852434 9.136882 9.411232 17.279942
11 111.45390 8.821467 11.330687 10.095782 16.496562
17 103.78335 14.960802 13.091216 10.718857 8.589131
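If you also want the neighbours in terms of actual distances in predictor space (rather than via the fitted values), a small sketch using the FNN package is below. Whether to centre and scale the predictors first is an assumption you should match to your training call, since caret only does that when you request it via preProcess:
# Sketch: the 9 training rows closest to the test row by Euclidean distance on
# the raw predictors. If the model was trained with preProcess = c("center", "scale"),
# apply the same scaling to both sets before computing distances.
library(FNN)
predictors_train <- train[, c('Distance', 'Cost', 'Tax', 'Transport')]
predictors_test  <- test[,  c('Distance', 'Cost', 'Tax', 'Transport')]
nn <- get.knnx(data = predictors_train, query = predictors_test, k = 9)
nn$nn.index          # positions (within 'train') of the 9 nearest neighbours
train[nn$nn.index, ] # the neighbours themselves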

Weights in Principal Component Analysis (PCA) using Psych::principal

I am computing a Principal Component Analysis with this matrix as input, using the function psych::principal. Each column of the input data contains the monthly correlations between crop yields and a climatic variable for a region (30 regions), so what I want to get out of the PCA is a reduction of the information and similar patterns of response between regions.
pc <- principal(dat, nfactors = 9, residuals = FALSE, rotate = "varimax",
                n.obs = NA, covar = TRUE, scores = TRUE, missing = FALSE,
                impute = "median", oblique.scores = TRUE, method = "regression")
The matrix has dimensions 10*30, and the first message I get is:
The determinant of the smoothed correlation was zero. This means the
objective function is not defined. Chi square is based upon observed
residuals. The determinant of the smoothed correlation was zero. This
means the objective function is not defined for the null model either.
The Chi square is thus based upon observed correlations. Warning
messages: 1: In cor.smooth(r) : Matrix was not positive definite,
smoothing was done 2: In principal(dat, nfactors = 3, residuals = F,
rotate = "none", : The matrix is not positive semi-definite, scores
found from Structure loadings
Nonetheless, the function seems to work; the main problem appears when you check pc$weights and realize that it is equal to pc$loadings.
When the number of columns is less than or equal to the number of rows the results are coherent, but that is not the case here.
I need the weights in order to bring the score values back to the same magnitude as the input data (correlation values).
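For reference, a minimal sketch of the situation using simulated data in place of the real correlation matrix (the dimensions and the principal() call mirror the ones above; this is only meant to reproduce the symptom described, not to fix it):
# Stand-in data: 10 "observations" (months) x 30 variables (regions).
library(psych)
set.seed(1)
dat <- matrix(rnorm(10 * 30), nrow = 10, ncol = 30)
pc  <- principal(dat, nfactors = 9, rotate = "varimax", covar = TRUE, scores = TRUE)
# With more columns than rows the correlation matrix is singular, and the
# reported weights should coincide with the loadings, as described above:
all.equal(as.vector(unclass(pc$weights)), as.vector(unclass(pc$loadings)))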
I would really appreciate any help.
Thank you.

ROCR cutoff value and accuracy plots

I have a continuous independent variable (let's say 'height') and a binary dependent variable (let's say 'gets a job'). I want to find the cutoff value of height that best predicts one's ability to get a job. I also want to see how accurate this model is. I assumed a multinomial logistic model. I wanted a ROC curve, so I used the ROCR package in R. This was my code:
library(nnet)   # multinom()
library(ROCR)   # prediction(), performance()
mymodel  <- multinom(job ~ height, data = dataset)
pred     <- predict(mymodel, dataset, type = 'prob')
roc_pred <- prediction(pred, dataset$job)
roc      <- performance(roc_pred, "tpr", "fpr")
plot(roc, colorize = TRUE)
Now, this is my question. When I colorize the plot, it gives me the range of cut-off values used to make the plot. I'm a little confused as to what the cutoff values actually are though. Are the cutoff values the heights? Or the probability that a certain data point (person) with a certain height is able to get a job? I have a feeling it's the latter, but I am interested in the former. If it is the latter, how do I obtain the cutoff value for the height??
I found a video that explains the cutoffs you see: https://www.youtube.com/watch?v=YdNhNfJ4Vl8
There are many different ways to estimate an optimal cutoff: the Youden index, Sensitivity + Specificity, Distance to Corner and many others (see this article).
I suggest you use the pROC library to do so:
library(pROC)
# 'obs' is the observed binary outcome, 'fit' is the fitted probability from your model
roc.obj <- roc(response = obs, predictor = fit, percent = TRUE)
roc.out <- coords(roc.obj, "best", ret = c("threshold", "sens", "spec"), transpose = TRUE)
The method "best" uses the Youden index (J-index). The maximum value of the Youden index is 1 (a perfect test) and the minimum is 0, when the test has no diagnostic value. The minimum occurs when sensitivity = 1 - specificity, i.e. on the diagonal (the chance line) of the ROC diagram. The vertical distance between that diagonal and the ROC curve is the J-index for a particular cutoff, and "best" picks the threshold where this distance is largest.
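As for getting back from a probability cutoff to a height cutoff (the question asks about the former vs. the latter): the cutoffs ROCR and pROC report are thresholds on the predicted probability, but because the model is a logistic fit of job on a single predictor you can invert the link function. This is a sketch under the assumptions that the outcome is binary (so the multinom fit reduces to ordinary logistic regression) and that coef(mymodel) returns a named vector with an "(Intercept)" and a "height" coefficient:
# Probability threshold chosen above, e.g. from coords(..., "best"); the
# threshold is on the scale of the fitted probabilities.
thr <- roc.out["threshold"]
# For a logistic model, P(job = 1 | height) = plogis(b0 + b1 * height), so the
# height at which the predicted probability crosses 'thr' is:
b <- coef(mymodel)   # assumed names: "(Intercept)" and "height"
height_cutoff <- (qlogis(thr) - b["(Intercept)"]) / b["height"]
height_cutoff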

How to reproduce the H2o GBM class probability calculation

I've been using h2o.gbm for a classification problem, and wanted to understand a bit more about how it calculates the class probabilities. As a starting point, I tried to recalculate the class probability of a gbm with only 1 tree (by looking at the observations in the leafs), but the results are very confusing.
Let's assume my positive class variable is "buy" and negative class variable "not_buy" and I have a training set called "dt.train" and a separate test-set called "dt.test".
In a normal decision tree, the class probability for "buy" P(has_bought="buy") for a new data row (test-data) is calculated by dividing all observations in the leaf with class "buy" by the total number of observations in the leaf (based on the training data used to grow the tree).
However, h2o.gbm seems to do something different, even when I simulate a 'normal' decision tree (setting ntrees to 1, and all the sample_rate parameters to 1). I think the best way to illustrate this confusion is by telling what I did in a step-wise fashion.
Step 1: Training the model
I do not care about overfitting or model performance. I want to make my life as easy as possible, so I've set ntrees to 1, and made sure all training data (rows and columns) are used for each tree and split by setting all sample_rate parameters to 1. Below is the code to train the model.
base.gbm.model <- h2o.gbm(
  x = predictors,
  y = "has_bought",
  training_frame = dt.train,
  model_id = "2",
  nfolds = 0,
  ntrees = 1,
  learn_rate = 0.001,
  max_depth = 15,
  sample_rate = 1,
  col_sample_rate = 1,
  col_sample_rate_per_tree = 1,
  seed = 123456,
  keep_cross_validation_predictions = TRUE,
  stopping_rounds = 10,
  stopping_tolerance = 0,
  stopping_metric = "AUC",
  score_tree_interval = 0
)
Step 2: Getting the leaf assignments of the training set
What I want to do, is use the same data that is used to train the model, and understand in which leaf they ended up in. H2o offers a function for this, which is shown below.
train.leafs <- h2o.predict_leaf_node_assignment(base.gbm.model, dt.train)
This will return the leaf node assignment (e.g. "LLRRLL") for each row in the training data. As we only have 1 tree, this column is called "T1.C1" which I renamed to "leaf_node", which I cbind with the target variable "has_bought" of the training data. This results in the output below (from here on referred to as "train.leafs").
Step 3: Making predictions on the test set
For the test set, I want to predict two things:
The prediction of the model itself P(has_bought="buy")
The leaf node assignment according to the model.
test.leafs <- h2o.predict_leaf_node_assignment(base.gbm.model, dt.test)
test.pred <- h2o.predict(base.gbm.model, dt.test)
After finding this, I've used cbind to combine these two predictions with the target variable of the test-set.
test.total <- h2o.cbind(dt.test[, c("has_bought")], test.pred, test.leafs)
The result of this, is the table below, from here on referred to as "test.total"
Unfortunately, I do not have enough reputation points to post more than two links. But if you click on "table "test.total" combined with manual probability calculation" in step 5, it's basically the same table without the column "manual_prob_buy".
Step 4: Manually predicting probabilities
Theoretically, I should be able to predict the probabilities now myself. I did this by writing a loop, that loops over each row in "test.total". For each row, I take the leaf node assignment.
I then use that leaf-node assignment to filter the table "train.leafs", and check how many observations have a positive class (has_bought == 1) (posN) and how many observations are there in total (totalN) within the leaf associated with the test-row.
I perform the (standard) calculation posN / totalN, and store this in the test-row as a new column called "manual_prob_buy", which should be the probability of P(has_bought="buy") for that leaf. Thus, each test-row that falls in this leaf should get this probability.
This for-loop is shown below.
# Assumes train.leafs and test.total have been converted to data.tables in R,
# e.g. via as.data.table(), so that the := assignment below works.
for (i in 1:nrow(dt.test)) {
  leaf <- test.total[i, leaf_node]
  # all training rows that ended up in the same leaf
  totalN <- nrow(train.leafs[train.leafs$leaf_node == leaf, ])
  # training rows in that leaf with a positive label
  posN <- nrow(train.leafs[train.leafs$leaf_node == leaf & train.leafs$has_bought == "buy", ])
  test.total[i, manual_prob_buy := posN / totalN]
}
Step 5: Comparing the probabilities
This is where I get confused. Below is the updated "test.total" table, in which "buy" represents the probability P(has_bought="buy") according to the model and "manual_prob_buy" represents the manually calculated probability from step 4. As far as I know, these probabilities should be identical, given that I only used 1 tree and set the sample_rate parameters to 1.
Table "test.total" combined with manual probability calculation
The Question
I just don't understand why these two probabilities are not the same. As far as I know, I've set the parameters in such a way that it should just be like a 'normal' classification tree.
So the question: does anyone know why I find differences in these probabilities?
I hope someone could point me to where I might have made wrong assumptions. I just really hope I did something stupid, as this is driving me crazy.
Thanks!
Rather than compare the results from R's h2o.predict() with your own handwritten code, I recommend you compare with an H2O MOJO, which should match.
See an example here:
http://docs.h2o.ai/h2o/latest-stable/h2o-genmodel/javadoc/overview-summary.html#quickstartmojo
You can run that simple example yourself, and then modify it according to your own model and new row of data to predict on.
Once you can do that, you can look at the code and debug/single-step it in a java environment to see exactly how the prediction gets calculated.
You can find the MOJO prediction code on github here:
https://github.com/h2oai/h2o-3/blob/master/h2o-genmodel/src/main/java/hex/genmodel/easy/EasyPredictModelWrapper.java
The main cause of the large difference between your observed probabilities and the predictions of h2o is your learning rate. With learn_rate = 0.001 the GBM only adjusts the probabilities by a relatively small amount away from the overall rate. If you set learn_rate = 1 you will have something much closer to a decision tree, and h2o's predicted probabilities will come much closer to the rates in each leaf node.
There is a secondary difference which will then become apparent, as your probabilities will still not match exactly. This is because the leaf values come from gradient descent (the G in GBM) on the logistic loss function, rather than from the raw counts of observations in each leaf node.
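To see why a tiny learning rate keeps the predictions pinned near the overall rate, here is a rough, illustrative sketch (this is the textbook logistic-loss GBM update, not H2O's exact implementation, and all the numbers are made up):
# For logistic-loss GBM, a single tree's leaf value is roughly
#   sum(y - p0) / sum(p0 * (1 - p0))
# over the training rows in that leaf, and the predicted probability is
#   plogis(qlogis(p0) + learn_rate * leaf_value).
p0 <- 0.3                 # hypothetical overall "buy" rate in the training data
y  <- c(1, 1, 0, 1)       # hypothetical labels of the rows falling into one leaf
leaf_value <- sum(y - p0) / (length(y) * p0 * (1 - p0))
plogis(qlogis(p0) + 0.001 * leaf_value)  # learn_rate = 0.001: barely moves from p0
plogis(qlogis(p0) + 1     * leaf_value)  # learn_rate = 1: much closer to mean(y) = posN / totalN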
