R: kmeans (stats) vs Kmeans (amap)

Hello stackoverflow community,
I'm running kmeans (stats package) and Kmeans (amap package) on the iris dataset. In both cases I use the same algorithm (Lloyd/Forgy), the same distance (Euclidean), the same number of initial random sets (50), the same maximum number of iterations (1000), and I test the same set of k values (from 2 to 15). I also use the same seed in both cases (4358).
I don't understand why, under these conditions, I'm getting different wss curves; in particular, the "elbow" with the stats package is much less pronounced than with the amap package.
Could you please help me understand why? Thanks much!
Here is the code:
# data load and scaling
newiris <- iris
newiris$Species <- NULL
newiris <- scale(newiris)
# using kmeans (stats)
wss1 <- (nrow(newiris)-1)*sum(apply(newiris,2,var))
for (i in 2:15) {
  set.seed(4358)
  wss1[i] <- sum(kmeans(newiris, centers=i, iter.max=1000, nstart=50,
                        algorithm="Lloyd")$withinss)
}
# using Kmeans (amap)
library(amap)
wss2 <- (nrow(newiris)-1)*sum(apply(newiris,2,var))
for (i in 2:15) {
  set.seed(4358)
  wss2[i] <- sum(Kmeans(newiris, centers=i, iter.max=1000, nstart=50,
                        method="euclidean")$withinss)
}
# plots
plot(1:15, wss1, type="b", xlab="Number of Clusters",
     ylab="Within groups sum of squares", main="kmeans (stats package)")
plot(1:15, wss2, type="b", xlab="Number of Clusters",
     ylab="Within groups sum of squares", main="Kmeans (amap package)")
EDIT:
I've emailed the author of the amap package and will post the reply when/if I get any.
https://cran.r-project.org/web/packages/amap/index.html

The author of the amap package changed the code: the withinss value returned by Kmeans is now the within-cluster sum of distances computed with the chosen method (e.g. Euclidean distance), not the sum of squared distances.
One way to solve this, given what the Kmeans function (amap) returns, is to recalculate withinss as the error sum of squares (SSE) yourself.
Here is my suggestion:
# using Kmeans (amap)
library(amap)
wss2 <- (nrow(newiris)-1)*sum(apply(newiris,2,var))
for (i in 2:15) {
  set.seed(4358)
  ans.Kmeans <- Kmeans(newiris, centers=i, iter.max=1000, nstart=50,
                       method="euclidean")
  wss <- vector(mode = "numeric", length = i)
  for (j in 1:i) {
    km <- as.matrix(newiris[which(ans.Kmeans$cluster %in% j), ])
    ## either compute the SSE around the cluster mean directly:
    ## average <- as.matrix(t(apply(km, 2, mean)))
    ## wss[j]  <- sum(apply(km, 1, function(x) sum((x - average)^2)))
    ## or, equivalently, via the per-column variances:
    wss[j] <- (nrow(km) - 1) * sum(apply(km, 2, var))
  }
  wss2[i] <- sum(wss)
}
Note: the Pearson method in this package is wrong (be careful!) in version 0.8-14. See line 325 of the code at this link:
https://github.com/cran/amap/blob/master/src/distance_T.inl

Related

How to get gap statistic for hierarchical average clustering

I perform a hierarchical cluster analysis based on 'average' linkage. In base R, I use:
dist_mat <- dist(cdata, method = "euclidean")
hclust_avg <- hclust(dist_mat, method = "average")
I want to calculate the gap statistic to decide the optimal number of clusters. I use the 'cluster' library and the clusGap function. Since I can neither pass the hclust solution nor specify average hierarchical clustering in the clusGap function, I use these lines:
cluster_fun <- function(x, k) list(cluster = cutree(hclust(dist(x, method = "euclidean"), method="average"), k = k))
gap_stat <- clusGap(cdata, FUN=cluster_fun, K.max=10, B=50)
print(gap_stat)
However, here I can't check the cluster solution. So, my question is - can I be sure that the gap statistic is calculated on the same solution as hclust_avg?
Is there a better way of doing this?
Yes, it should be the same. The clusGap function calls your cluster_fun for each k you provide, then calculates the pooled within-cluster sum of squares around the cluster means, as described in the paper.
This is the bit of code called inside clusGap that calls your custom function:
W.k <- function(X, kk) {
  clus <- if (kk > 1)
    FUNcluster(X, kk, ...)$cluster
  else rep.int(1L, nrow(X))
  # ii and d.power come from the enclosing clusGap call:
  # ii is seq_len(nrow(X)) and d.power defaults to 1
  0.5 * sum(vapply(split(ii, clus), function(I) {
    xs <- X[I, , drop = FALSE]
    sum(dist(xs)^d.power / nrow(xs))
  }, 0))
}
From there, the gap statistic is calculated.
You can calculate the gap statistic with your own custom code, but for the sake of reproducibility it might be easier to use this.
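If you want to double-check, here is a minimal sketch (assuming cdata, cluster_fun and hclust_avg from the question) that verifies the labels clusGap receives match the cutree solution:
# Sanity check: cluster_fun should reproduce exactly the labels
# obtained by cutting the hclust_avg tree at each k.
dist_mat <- dist(cdata, method = "euclidean")
hclust_avg <- hclust(dist_mat, method = "average")
same_labels <- vapply(1:10, function(k) {
  all(cluster_fun(cdata, k)$cluster == cutree(hclust_avg, k = k))
}, logical(1))
all(same_labels)  # TRUE => the gap statistic used the same solution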
Thanks for solving it. I must say this is a good enough solution, but you can try the code below as well.
# Gap Statistic for K-Means
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def optimalK(data, nrefs=3, maxClusters=15):
    """
    Calculates KMeans optimal K using Gap Statistic
    Params:
        data: ndarray of shape (n_samples, n_features)
        nrefs: number of sample reference datasets to create
        maxClusters: Maximum number of clusters to test for
    Returns: (optimalK, resultsdf)
    """
    gaps = np.zeros((len(range(1, maxClusters)),))
    resultsdf = pd.DataFrame({'clusterCount': [], 'gap': []})
    for gap_index, k in enumerate(range(1, maxClusters)):
        # Holder for reference dispersion results
        refDisps = np.zeros(nrefs)
        # For n references, generate a random sample and perform kmeans,
        # recording the resulting dispersion of each loop
        for i in range(nrefs):
            # Create new random reference set
            randomReference = np.random.random_sample(size=data.shape)
            # Fit to it
            km = KMeans(k)
            km.fit(randomReference)
            refDisps[i] = km.inertia_
        # Fit cluster to original data and compute dispersion
        km = KMeans(k)
        km.fit(data)
        origDisp = km.inertia_
        # Calculate gap statistic
        gap = np.log(np.mean(refDisps)) - np.log(origDisp)
        # Assign this loop's gap statistic to gaps
        gaps[gap_index] = gap
        resultsdf = pd.concat(
            [resultsdf, pd.DataFrame({'clusterCount': [k], 'gap': [gap]})],
            ignore_index=True)
    return (gaps.argmax() + 1, resultsdf)
import matplotlib.pyplot as plt

score_g, df = optimalK(cluster_df, nrefs=5, maxClusters=30)
plt.plot(df['clusterCount'], df['gap'], linestyle='--', marker='o', color='b')
plt.xlabel('K')
plt.ylabel('Gap Statistic')
plt.title('Gap Statistic vs. K')
plt.show()

EM clustering instead of Kmeans

I have the following script that I can use to find the best number of clusters using kmeans. How can I change the following script to use the EM clustering technique rather than kmeans?
reproducible example:
ourdata <- scale(USArrests)
wss <- (nrow(ourdata)-1)*sum(apply(ourdata,2,var))
for (i in 2:10) wss[i] <- sum(kmeans(ourdata, centers=i)$withinss)
plot(1:10, wss, type="b", xlab="Number of Clusters",
     ylab="Within groups sum of squares")
Appreciate!
The EMCluster package offers a variety of functions for running EM model-based clustering. An example of finding a solution with k = 3 clusters:
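For reference, a minimal sketch of such a call (assuming the same em.EM interface used in the update below):
library(EMCluster)
ourdata <- scale(USArrests)
set.seed(1)
em_fit <- em.EM(ourdata, nclass = 3)  # fit a 3-component model by EM
em_fit$class                          # hard cluster assignments per observation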
Update per OP's comment:
You can calculate the within sum of squares, along with other metrics of interest, using fpc::cluster.stats(). These can be extracted and plotted akin to your original post. As a reminder, "the elbow technique" as you described it is an inaccurate description: the elbow technique is a general technique that can be, and is, used with any metric of choice, not only the within sum of squares as in your original post.
library(EMCluster)
library(fpc)
ourdata<- scale(USArrests)
dist_fit <- dist(ourdata)
num_clusters <- 2:4
set.seed(1)
wss <- vapply(num_clusters, function(i_k) {
  em_fit <- em.EM(ourdata, nclass = i_k, lab = NULL, EMC = .EMC,
                  stable.solution = TRUE, min.n = NULL, min.n.iter = 10)
  cluster_stats_fit <- fpc::cluster.stats(dist_fit, em_fit$class)
  cluster_stats_fit$within.cluster.ss
}, numeric(1))
plot(num_clusters, wss, type="b", xlab="Number of Clusters", ylab="Within groups sum of squares")

R: Cluster analysis with hclust(). How to get the cluster representatives?

I am doing some cluster analysis with R. I am using the hclust() function and I would like to get, after I perform the cluster analysis, the cluster representative of each cluster.
I define a cluster representative as the instances which are closest to the centroid of the cluster.
So the steps are:
Finding the centroid of the clusters
Finding the cluster representatives
I have already asked a similar question but using K-means: https://stats.stackexchange.com/questions/251987/cluster-analysis-with-k-means-how-to-get-the-cluster-representatives
The problem, in this case, is that hclust doesn't give the centroids!
For example, with d being my distance matrix, what I have done so far is:
hclust.fit1 <- hclust(d, method="single")
groups1 <- cutree(hclust.fit1, k=3) # cut tree into 3 clusters
## getting centroids ##
mycentroid <- colMeans(CV)
clust.centroid <- function(i, dat, groups1) {
  ind <- (groups1 == i)
  colMeans(dat[ind, ])
}
centroids <- sapply(unique(groups1), clust.centroid, data, groups1)
But now, I was trying to get the cluster representatives with this code (I got it in the other question I asked, for k-means):
index <- c()
for (i in 1:3){
  rowsum <- rowSums(abs(CV[which(centroids==i),1:3] - centroids[i,]))
  index[i] <- as.numeric(names(which.min(rowsum)))
}
And it says:
"Error in e2[[j]] : index out of the limit"
I would be grateful if any of you could give me a little help. Thanks.
-- (not) Working example of the code --
example_data.txt
A,B,C
10.761719,5.452188,7.575762
10.830457,5.158822,7.661588
10.75391,5.500170,7.740330
10.686719,5.286823,7.748297
10.864527,4.883244,7.628730
10.701415,5.345650,7.576218
10.820583,5.151544,7.707404
10.877528,4.786888,7.858234
10.712337,4.744053,7.796390
As for the code:
# Install R packages
#install.packages("fpc")
#install.packages("cluster")
#install.packages("rgl")
library(fpc)
library(cluster)
library(rgl)
CV <- read.csv("example_data.txt")
str(CV)
data <- scale(CV)
d <- dist(data,method = "euclidean")
hclust.fit1 <- hclust(d, method="single")
groups1 <- cutree(hclust.fit1, k=3) # cut tree into 3 clusters
mycentroid <- colMeans(CV)
clust.centroid <- function(i, dat, groups1) {
  ind <- (groups1 == i)
  colMeans(dat[ind, ])
}
centroids <- sapply(unique(groups1), clust.centroid, CV, groups1)
index <- c()
for (i in 1:3){
  rowsum <- rowSums(abs(CV[which(centroids==i),1:3] - centroids[i,]))
  index[i] <- as.numeric(names(which.min(rowsum)))
}
Hierarchical clustering does not use (or compute) representatives.
In particular for single link (but it can also happen for other linkages), the "center" can be in a different cluster. Just consider the top two data sets in this example: [figure omitted]
Furthermore, the centroid (mean) is tied to Euclidean distance. With other distances, it may be a very bad representative of a cluster.
So use it with care!
Either way, hierarchical clustering does not define or compute a representative. You will have to do this yourself.
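One way to do it yourself, as a minimal sketch (assuming the scaled data and the groups1 from the question): for each cluster, pick the observation closest to that cluster's centroid.
data <- scale(CV)
groups1 <- cutree(hclust(dist(data), method = "single"), k = 3)
# For each cluster, the representative is the row whose squared
# Euclidean distance to the cluster centroid is smallest.
representatives <- sapply(sort(unique(groups1)), function(g) {
  members  <- which(groups1 == g)
  centroid <- colMeans(data[members, , drop = FALSE])
  d2 <- rowSums(sweep(data[members, , drop = FALSE], 2, centroid)^2)
  members[which.min(d2)]
})
representatives  # row indices of the cluster representatives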

An error in R code written for finding clusters

I am relatively new to R. I am just trying to find the optimal number of clusters for the iris data using the following methods:
library(datasets)
head(iris)
# method1:
wss <- (nrow(iris)-1)*sum(apply(iris,2,var))
for (i in 2:3) wss[i] <- sum(kmeans(iris, centers=i)$withinss)
plot(1:3, wss, type="b", xlab="Number of Clusters",ylab="Within groups sum of squares")
# method2:
library(fpc)
pamk.best <- pamk(iris)
cat("number of clusters estimated by optimum average silhouette width:", pamk.best$nc, "\n")
plot(pam(iris, pamk.best$nc))
Both methods throw an error, so could someone please shed light on this? Many thanks in advance.
apply(iris,2,var)
gives you an error because the 5th column (Species) is not numeric.
Try
apply(iris[,1:4],2,var)
The same goes for the second method.
Error in pam(sdata, k, diss = diss, ...) :
x is not a numeric dataframe or matrix.
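Putting the two fixes together, a minimal sketch (keeping only the four numeric columns):
library(datasets)
library(fpc)
library(cluster)
iris_num <- iris[, 1:4]  # drop the non-numeric Species column
# method 1: elbow plot on the numeric columns
wss <- (nrow(iris_num) - 1) * sum(apply(iris_num, 2, var))
for (i in 2:3) wss[i] <- sum(kmeans(iris_num, centers = i)$withinss)
plot(1:3, wss, type = "b", xlab = "Number of Clusters",
     ylab = "Within groups sum of squares")
# method 2: pamk on the numeric columns
pamk.best <- pamk(iris_num)
cat("number of clusters estimated by optimum average silhouette width:",
    pamk.best$nc, "\n")
plot(pam(iris_num, pamk.best$nc))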

How to find the optimal number of clusters?

I know this question has already been asked, but I am failing to implement a decent plot for the following code:
options(digits=1)
set.seed(2014)
mydata <- matrix(seq(1,360),nrow=10,ncol=36)
wss <- c()
for (i in 1:19) wss[i] <- sum(kmeans(x=mydata,centers=seq(1,360,length.out=20)[i])$withinss)
plot(1:9, wss, type="b", xlab="Number of Clusters",
ylab="Within groups sum of squares")
It produces the following error
Error in sample.int(m, k) :
cannot take a sample larger than the population when 'replace = FALSE'
kmeans assumes that each row in your data is an observation. So if you have n rows in x, the $cluster result will be of length n. Here your test data has 10 rows, yet you are specifying roughly 20 centers when i = 2. There is no way that 10 observations can form 20 different clusters.
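A tiny illustration of that constraint on a hypothetical toy matrix (m here is made up for the demo):
set.seed(1)
m <- matrix(rnorm(40), nrow = 10)  # 10 observations, 4 features
kmeans(m, centers = 5)$cluster     # fine: 5 clusters from 10 rows
## kmeans(m, centers = 20)         # errors: more clusters than observations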
Just a little spark in the dark!
options(digits=1)
set.seed(2014)
mydata <- seq(from=1,to=365)
wss <- c()
for (i in 5:15){
  # the i-1 supplied starting centres give i-1 clusters
  wss[i-4] <- sum(kmeans(mydata, centers=floor(seq(from=1,to=365,length.out=i)[-i]))$withinss)
}
# wss holds results for 4 to 14 clusters, so match the x-axis to that
plot(4:14, wss, type="b", xlab="Number of Clusters",
     ylab="Within groups sum of squares")
Does that make sense? @jlhoward @jbaums
