I am using H2O's k-means in R to segment my population. The method needs to be audited, so I would like to be able to explain the threshold used in H2O's k-means.
The documentation of H2O k-means (http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/k-means.html) says:
H2O uses proportional reduction in error (PRE) to determine when to
stop splitting. The PRE value is calculated based on the sum of
squares within (SSW).
PRE=(SSW[before split]−SSW[after split])/SSW[before split]
H2O stops splitting when PRE falls below a threshold, which is a
function of the number of variables and the number of cases as
described below:
threshold takes the smaller of these two values:
either 0.8 or [0.02 + 10/number_of_training_rows +
2.5/(number_of_model_features)^2]
The source code (https://github.com/h2oai/h2o-3/blob/master/h2o-algos/src/main/java/hex/kmeans/KMeans.java) gives it as:
final double rel_improvement_cutoff =
    Math.min(0.02 + 10. / _train.numRows() + 2.5 / Math.pow(model._output.nfeatures(), 2), 0.8);
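For reference, the cutoff is easy to reproduce outside of Java; here is a minimal R sketch of the same formula (the function and argument names are my own, not part of H2O's API):

```r
# Recompute H2O's PRE stopping threshold for a given training-set size
# and feature count, following the documented formula above.
pre_cutoff <- function(n_rows, n_features) {
  min(0.02 + 10 / n_rows + 2.5 / n_features^2, 0.8)
}

pre_cutoff(n_rows = 150, n_features = 4)    # small data: ~0.243
pre_cutoff(n_rows = 1e6, n_features = 50)   # large data: ~0.021
```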
Where does this threshold come from? Are there scientific papers about it?
I am responsible for that threshold. I developed it by running numerous datasets -- artificial and real -- through the k-means algorithm. I began some years ago working with SSW improvement and testing it as a chi-square variable, as recommended by John Hartigan. This criterion failed in a number of instances, so I switched to PRE. The equation above is the result of fitting a nonlinear model to results on datasets with a known number of clusters. When I wrote the k-means program for Tableau, I used this same PRE criterion. After I left Tableau for H2O, they substituted the Calinski-Harabasz index for my PRE rule, producing similar results.
Leland Wilkinson, Chief Scientist, H2O.
I have a large dataset (3.5+ million observations) with a binary response variable, for which I am trying to fit a hierarchical GAM with a global smoother plus group-level smoothers that share a common penalty (the 'GS' model in Pedersen et al. 2019). Specifically, I am trying to estimate the following nesting structure: Global > Geographic Zone (N=2) > Bioregion (N=20) > Season (N varies by bioregion). In total, I am trying to estimate 36 different nested parameters.
Here is the code I am currently using:
library(mgcv)

modGS <- bam(
  outbreak ~
    te(days_diff, NDVI_mean, bs = c("tp", "tp"), k = c(5, 5)) +
    t2(days_diff, NDVI_mean, Zone, Bioregion, Season,
       bs = c("tp", "tp", "re", "re", "re"), k = c(5, 5), m = 2, full = TRUE) +
    s(Latitude, Longitude, k = 50),
  family = binomial(), select = TRUE, data = dat)
My main issue is that it is taking a long time (5+ days) to fit the model. This nesting structure cannot be discretized, so I cannot compute it in parallel. I have also tried gamm4, but I ran into memory-limit issues. Here is the gamm4 code:
library(gamm4)

modGS <- gamm4(
  outbreak ~
    t2(days_diff, NDVI_mean, bs = c("tp", "tp"), k = c(5, 5)) +
    t2(days_diff, NDVI_mean, Zone, Bioregion, Season,
       bs = c("tp", "tp", "re", "re", "re"), k = c(5, 5), m = 2, full = TRUE) +
    s(Latitude, Longitude, k = 50),
  family = binomial(), select = TRUE, data = dat)
What is the best/most computationally feasible way to run this model?
I cut down the computational time by reducing the number of bioregion levels and randomly sampling ca. 60% of the data. This actually allowed me to calculate OOB error for the model.
There is an article I read recently that has a specific section on decreasing computational time. The main things they highlight are:
Use the bam function with its fast REML (fREML) estimation, which speeds up the model-matrix computations. Here it seems you have already done that.
Adding the discrete = TRUE argument, which discretizes the covariates so that only a smaller finite number of unique values is used in estimation.
Setting nthreads in this function so it runs on more than one core of your computer in parallel.
As the authors caution, the second option can reduce the accuracy of your estimates. I fit some large models recently doing this and found that the results were not always the same as with the default bam call, so it's best to use this as a quick inspection rather than as the full result you are looking for.
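As a rough illustration (not your exact model, and whether discrete = TRUE is usable depends on the smooth constructors in your formula, since you note your t2() nesting cannot be discretized), a generic bam() call with those options might look like this; the data frame and variable names here are placeholders:

```r
# Minimal sketch of the speed-ups listed above on a generic bam() model.
# `dat`, `y`, `x1`, `x2`, `x3` are placeholders, not the poster's data.
library(mgcv)

fit <- bam(
  y ~ te(x1, x2, bs = c("tp", "tp"), k = c(5, 5)) + s(x3, k = 20),
  data     = dat,
  family   = binomial(),
  method   = "fREML",   # fast REML smoothness selection (bam's default)
  discrete = TRUE,      # bin covariates to fewer unique values
  nthreads = 4          # spread the work across several cores
)
```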
I've been doing a head-to-head comparison of Spark 1.6.2 ML's LogisticRegression with R's glmnet package (the closest analog I could find based on other forum posts).
I'm specifically looking at these two fitting packages when using categorical features. When using continuous features, results for the two packages are comparable.
For my first attempt with Spark, I used the ML Pipeline API to transform my single 21-level categorical variable (called FAC for faculty) with StringIndexer followed by OneHotEncoder to get a binary vector representation.
When I fit my models in Spark and R, I get the following sets of results (that aren't even close):
SPARK 1.6.2 ML
lrModel.intercept
-3.1453838659926427
lrModel.weights
[0.37664264958084287,0.697784342445422,0.4269429071484017,0.3521764371898419,0.19233585960734872,0.6708049751689226,0.49342372792676115,0.5471489576300356,0.37650628365008465,1.0447861554914701,0.5371820187662734,0.4556833133252492,0.2873530144304645,0.09916227313130375,0.1378469333986134,0.20412095883272838,0.4494641670133712,0.4499625784826652,0.489912016708041,0.5433020878341336]
R (glmnet)
(Intercept) -2.79255253
facG -0.35292166
facU -0.16058275
facN 0.69187146
facY -0.06555711
facA 0.09655696
facI 0.02374558
facK -0.25373146
facX 0.31791765
facM 0.14054251
facC 0.02362977
facT 0.07407357
facP 0.09709607
facE 0.10282076
facH -0.21501281
facQ 0.19044412
facW 0.18432837
facF 0.34494177
facO 0.13707197
facV -0.14871580
facS 0.19431703
I've manually checked the glmnet results and they're correct (calculating the proportion of training samples with a particular level of the categorical feature and comparing that to the softmax prob. under the estimated model). These results do not change even when the max. no. of iterations in the optimization is set to 1000000 and the convergence tolerance is set to 1E-15. These results also do not change when the Spark LogisticRegression weights are initialized to the glmnet-estimated weights (Spark's optimizing a different cost function?).
I should say that the optimization problem is not different between these two approaches: both should be minimizing the logistic loss (a convex surface) and thereby arriving at nearly the exact same answer.
Now, when I manually recode the FAC feature as a binary vector in the data file and read those binary columns as "DoubleType" (using Spark's DataFrame schema), I get very comparable results. (The order of the coefficients for the following results is different from the above results. Also the reference levels are different--"B" in the first case, "A" in the second--so the coefficients for this test should not match those from the above test.)
SPARK 1.6.2 ML
lrModel.intercept
-2.9530485080391378
lrModel.weights
[-0.19233467682265934,0.8524505857034615,0.09501714540028124,0.25712829253044844,0.18430675058702053,0.09317325898819705,0.4784688407322236,0.3010877381053835,0.18417033887042242,0.2346069926274015,0.2576267066227656,0.2633474197307803,0.05448893119304087,0.35096612444193326,0.3448460751810199,0.505448794876487,0.29757609104571175,0.011785058030487976,0.3548130904832268,0.15984047288368383]
R (glmnet)
s0
(Intercept) -2.9419468179
FAC_B -0.2045928975
FAC_C 0.8402716731
FAC_E 0.0828962518
FAC_F 0.2450427806
FAC_G 0.1723424956
FAC_H -0.1051037449
FAC_I 0.4666239456
FAC_K 0.2893153021
FAC_M 0.1724536240
FAC_N 0.2229762780
FAC_O 0.2460295934
FAC_P 0.2517981380
FAC_Q -0.0660069035
FAC_S 0.3394729194
FAC_T 0.3334048723
FAC_U 0.4941379563
FAC_V 0.2863010635
FAC_W 0.0005482422
FAC_X 0.3436361348
FAC_Y 0.1487405173
Standardization is set to FALSE for both and no regularization is performed (you shouldn't perform it here since you're really just learning the incidence rate of each level of the feature, and the binary feature columns are completely uncorrelated from one another). Also, I should mention that the 21 levels of the categorical feature range in incidence counts from ~800 to ~3500, so this is not a case of too little data causing large estimation error.
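For context, here is a minimal sketch of the kind of glmnet fit being described, assuming a data frame `dat` with a binary `outcome` column and a 21-level factor `fac` (those names are placeholders for my actual data):

```r
# One-hot encode the factor (reference level dropped) and fit an
# unregularized, unstandardized logistic regression with glmnet.
library(glmnet)

x <- model.matrix(~ fac, data = dat)[, -1]   # dummy columns for the factor
y <- dat$outcome

fit <- glmnet(x, y, family = "binomial",
              lambda = 0,             # no regularization
              standardize = FALSE)    # match the Spark setting
coef(fit)
```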
Has anyone experienced this? I'm one step away from reporting it to the Spark developers.
Thanks as always for the help.
I am trying to train a neural network using neuralnet in R, but am getting very high error terms (in the region of 1850). The input variables are responses to a set of six Likert scales (all on a 1-7 range) and the output is whether there was a top-box response to another variable (coded 0/1). The input variables have been scaled to a 0-1 range (I've also tried normalizing to a mean of 0).

I've tried a range of hidden nodes (from 1 to 10), and the network converges to a threshold of 0.1 fairly reliably in 200,000-400,000 iterations, but with a consistent error term around 1800-1900. There are 30,000 cases in total, about 22,000 in the training set.

I appreciate that this type of problem doesn't necessarily need a neural network; this is a proof of concept (going excellently...) on a familiar dataset before applying the approach to other questions. Any suggestions on how to reduce the error/improve the training net are appreciated.
As I said, I have tried both normalising and scaling, and now also the PCA preprocessing provided in caret. It must be something in the data, but I'm at a bit of a loss...
Code:
library(neuralnet)

# Min-max scale every column to the [0, 1] range
maxs <- apply(final, 2, max)
mins <- apply(final, 2, min)
scaled <- as.data.frame(scale(final, center = mins, scale = maxs - mins))

# 75/25 train/test split
index <- sample(1:nrow(scaled), round(0.75 * nrow(scaled)))
train_ <- scaled[index, ]
test_  <- scaled[-index, ]
nn <- neuralnet(Q11 ~ A1 + B1 + C1 + D1 + E1 + Q12, data = train_,
                hidden = 5, rep = 1, threshold = 0.01, stepmax = 6e+05,
                linear.output = FALSE, lifesign = "full")
I am using the nnet function from the nnet package in R. I am trying to set the MaxNWts parameter and was wondering if there is any disadvantage to setting it to a large value like 10^8. The documentation says:
"The maximum allowable number of weights. There is no intrinsic limit
in the code, but increasing MaxNWts will probably allow fits that are
very slow and time-consuming."
I also calculate the size parameter with the following formula:
size = sqrt(number_of_input_nodes * number_of_output_nodes)
The problem is that if I set MaxNWts to a value like 10000, it sometimes fails because the number of weights is greater than 10000 when working with huge data sets.
EDIT
Is there a way to calculate the number of weights (to get the same number that R's nnet calculates) so that I can explicitly set it every time without worrying about failures?
Suggestions?
This is what I have seen in Applied Predictive Modeling:
Kuhn M, Johnson K. Applied Predictive Modeling. 1st ed. New York: Springer; 2013.
For the MaxNWts argument you can pass one of:
5 * (ncol(predictors) + 1) + 5 + 1
or
10 * (ncol(predictors) + 1) + 10 + 1
or
13 * (ncol(predictors) + 1) + 13 + 1
where predictors is the matrix of your predictors.
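Those expressions follow the general weight count for nnet's single-hidden-layer architecture: with p inputs, size hidden units, and one output, the network has size * (p + 1) + (size + 1) weights (each layer adds a bias unit), so the 5, 10, and 13 above are just candidate size values. A small sketch of turning that into an explicit MaxNWts setting (the helper name is made up):

```r
# Number of weights in a single-hidden-layer nnet with one output unit and
# no skip-layer connections, matching the book's expressions above.
nnet_weights <- function(p, size, n_out = 1) {
  size * (p + 1) + n_out * (size + 1)
}

nnet_weights(p = ncol(predictors), size = 5)   # the first expression above

# library(nnet)
# fit <- nnet(x = predictors, y = response, size = 13,
#             MaxNWts = nnet_weights(ncol(predictors), 13))
```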
I think it is empirical and depends on your data. It plays a role similar to the idea behind shrinkage methods such as the ridge regression (L2) penalty: its main goal is to keep the model from overfitting, which often happens with neural networks because of their over-parameterization.
I have a matrix of 62 columns and 181,408 rows that I am going to cluster using k-means. What I would ideally like is a method of identifying the optimum number of clusters. I have tried implementing the gap statistic technique using clusGap from the cluster package (reproducible code below), but this produces several error messages relating to the size of the vector (122 GB) and memory.limit problems on Windows, and an "Error in dist(xs) : negative length vectors are not allowed" on OS X. Does anyone have suggestions for techniques that will work for determining the optimum number of clusters with a large dataset? Or, alternatively, how to make my code work (and not take several days to complete)? Thanks.
library(cluster)
inputdata<-matrix(rexp(11247296, rate=.1), ncol=62)
clustergap <- clusGap(inputdata, FUN=kmeans, K.max=12, B=10)
At 62 dimensions, the result will likely be meaningless due to the curse of dimensionality.
k-means does a minimum-SSQ assignment, which is technically the same as minimizing squared Euclidean distances. However, Euclidean distance is known not to work well for high-dimensional data.
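A quick numerical check of that equivalence (on simulated data, not your matrix):

```r
# kmeans' total within-cluster sum of squares equals the sum of squared
# Euclidean distances from each point to its assigned center.
set.seed(1)
x  <- matrix(rnorm(200 * 5), ncol = 5)
km <- kmeans(x, centers = 3, nstart = 10)

ssq <- sum((x - km$centers[km$cluster, ])^2)
all.equal(ssq, km$tot.withinss)   # TRUE
```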
If you don't know the number of clusters k to provide as a parameter to k-means, there are three ways to find it automatically:
G-means algorithm: it discovers the number of clusters automatically using a statistical test to decide whether to split a k-means center into two. This algorithm takes a hierarchical approach to detecting the number of clusters, based on a statistical test of the hypothesis that a subset of the data follows a Gaussian distribution (a continuous function which approximates the exact binomial distribution of events), and if not it splits the cluster. It starts with a small number of centers, say one cluster only (k=1), then the algorithm splits it into two centers (k=2) and splits each of these two centers again (k=4), giving four centers in total. If G-means does not accept these four centers, then the answer is the previous step: two centers in this case (k=2). This is the number of clusters your dataset will be divided into. G-means is very useful when you do not have an estimate of the number of clusters you will get after grouping your instances; note that a poor choice of the "k" parameter might give you wrong results. The parallel version of G-means is called p-means. A rough sketch of the splitting idea appears after this list. G-means sources:
source 1
source 2
source 3
x-means: an algorithm that efficiently searches the space of cluster locations and numbers of clusters to optimize the Bayesian Information Criterion (BIC) or the Akaike Information Criterion (AIC). This version of k-means finds the number k and also accelerates k-means.
Online k-means or streaming k-means: it executes k-means by scanning the whole dataset once and finds the number k automatically. Spark implements it.
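As referenced in the G-means item above, here is a very rough R sketch of the split-and-test idea, not a faithful implementation of the published algorithm; it substitutes shapiro.test() on a principal-component projection for the test used in the original method, and all names are made up:

```r
# G-means-style sketch: keep splitting clusters whose 1-D projection
# onto the first principal component does not look Gaussian.
gmeans_sketch <- function(x, alpha = 1e-4, max_k = 20) {
  x  <- as.matrix(x)
  cl <- rep(1L, nrow(x))                      # start with a single cluster
  repeat {
    changed <- FALSE
    for (c in unique(cl)) {
      if (length(unique(cl)) >= max_k) break
      pts <- x[cl == c, , drop = FALSE]
      if (nrow(pts) < 10) next
      proj <- prcomp(pts)$x[, 1]              # project onto first PC
      pval <- shapiro.test(sample(proj, min(length(proj), 5000)))$p.value
      if (pval < alpha) {                     # not Gaussian: split in two
        sub <- kmeans(pts, centers = 2)$cluster
        cl[cl == c][sub == 2] <- max(cl) + 1L
        changed <- TRUE
      }
    }
    if (!changed) break
  }
  cl
}

# Example on simulated data with three well-separated groups
set.seed(2)
x <- rbind(matrix(rnorm(300), ncol = 3),
           matrix(rnorm(300, mean = 6), ncol = 3),
           matrix(rnorm(300, mean = -6), ncol = 3))
table(gmeans_sketch(x))   # expect roughly three clusters
```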
This is from RBloggers.
https://www.r-bloggers.com/k-means-clustering-from-r-in-action/
You could do the following:
data(wine, package = "rattle")
head(wine)
df <- scale(wine[-1])   # drop the Type column and standardize

wssplot <- function(data, nc = 15, seed = 1234) {
  # Within-groups sum of squares for k = 1 is just the total variance
  wss <- (nrow(data) - 1) * sum(apply(data, 2, var))
  for (i in 2:nc) {
    set.seed(seed)
    wss[i] <- sum(kmeans(data, centers = i)$withinss)
  }
  plot(1:nc, wss, type = "b", xlab = "Number of Clusters",
       ylab = "Within groups sum of squares")
}

wssplot(df)
This will create an elbow plot of the within-groups sum of squares against the number of clusters.
From this you can choose the value of k to be either 3 or 4, i.e. there is a clear fall in the within-groups sum of squares when moving from 1 to 3 clusters. After three clusters, the decrease levels off, suggesting that a 3-cluster solution may be a good fit to the data.
But as Anony-Mouse pointed out, the curse of dimensionality is a problem here because k-means relies on Euclidean distance.
I hope this answer helps you to a certain extent.