How to visualize k-means centroids for each iteration?

I would like to graphically demonstrate the behavior of k-means by plotting iterations of the algorithm, starting from initial centroids at (3,5), (6,2), (8,3) and following them until they converge to the final cluster centers.
Each iteration should correspond to a single plot showing the centroids and the cluster assignments.
Given:
x <- c(3, 6, 8, 1, 2, 2, 6, 6, 7, 7, 8, 8)
y <- c(5, 2, 3, 5, 4, 6, 1, 8, 3, 6, 1, 7)
df <- data.frame(x, y)
dfCluster <- kmeans(df, centers = 3)  # with 3 centroids
I would like to use the first three tuples as my initial centroids and track the movement of the centroids.
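kmeans() accepts a matrix of starting centers, so the first three rows can be passed directly. A minimal sketch of a single iteration under that assumption (the full loop over iterations is developed in the answer below):
# use the first three points, (3,5), (6,2), (8,3), as starting centroids
init <- df[1:3, ]
# one iteration; kmeans() may warn that it did not converge in 1 iteration
step1 <- kmeans(df, centers = init, iter.max = 1)
plot(df$x, df$y, col = step1$cluster, pch = 19)
points(step1$centers, col = 1:3, pch = 3, cex = 2, lwd = 2)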

Use a warning handler to automate stopping once convergence is reached: kmeans() warns when it has not converged within iter.max, and withCallingHandlers catches that warning while still keeping the result of the call (a plain tryCatch would discard it).
I use the iris data set because kmeans needs two iterations on it (the point at (6, 3.5) switches clusters):
set.seed(1337)
df <- iris[, 1:2]

# first iteration, starting from three random centers
dfCluster <- kmeans(df, centers = 3, iter.max = 1)
plot(df[, 1], df[, 2], col = dfCluster$cluster, pch = 19, cex = 2, main = "iter 1")
points(dfCluster$centers, col = 1:3, pch = 3, cex = 3, lwd = 3)

max_iter <- 10
for (i in 2:max_iter) {
  done <- TRUE
  # continue from the previous centers; the "did not converge in 1 iteration"
  # warning tells us that another pass is needed
  withCallingHandlers(
    dfCluster <- kmeans(df, centers = dfCluster$centers, iter.max = 1),
    warning = function(w) {
      done <<- FALSE
      invokeRestart("muffleWarning")
    }
  )
  plot(df[, 1], df[, 2], col = dfCluster$cluster, pch = 19, cex = 2,
       main = paste("iter", i))
  points(dfCluster$centers, col = 1:3, pch = 3, cex = 3, lwd = 3)
  if (done) break
}
Result: one plot per iteration, showing the cluster assignment colours and the centroid positions.
If you want the coordinates of every observation at each iteration step, see: Getting the coordinates of every observation at each iteration of kmeans in R

Related

Density based clustering that allows user to specify number of clusters

I have data that consists of roughly 100,000 points on a 2-d graph. Each point has X and Y coordinates. I'm looking for an algorithm that will cluster these points based on density but I want to specify the number of clusters.
I originally tried K-Means since this would allow me to specify the number of clusters. However, my data naturally "clumps" into ridges. K-Means would inevitably bisect some of these ridges. DBSCAN seems like a better fit simply due to the shape of my data, but with DBSCAN I can't specify the number of clusters I'd like.
Essentially, what I'm trying to find is an algorithm that will optimally cluster the graph into N groups based on density, where N is supplied by me. At this point I don't care where it's implemented (R, Python, FORTRAN...).
Any direction you can provide would be much appreciated.
In an area of high density the points tend to be close together, so clustering on (Euclidean) distance may give similar results, though not always.
For example, with these three normals in 2 dimensions:
x1 <- mnormt::rmnorm(200, c(10,10), matrix(c(20,0,0,.1), 2, 2))
x2 <- mnormt::rmnorm(100, c(10,20), matrix(c(20,0,0,.1), 2, 2))
x3 <- mnormt::rmnorm(300, c(23, 15), matrix(c(.1,0,0,35), 2, 2))
xx <- rbind(x1, x2, x3)
plot(xx, col=rep(c("grey10","pink2", "green4"), times=c(200,100,300)))
We can apply different clustering algorithms:
# hierarchical
clustering <- hclust(dist(xx, method = "euclidean"), method = "ward.D")
h.cl <- cutree(clustering, k=3)
# K-means and dbscan
k.cl <- kmeans(xx, centers = 3L)
d.cl <- dbscan::dbscan(xx, eps = 1)
And we see that, on this particular example, hierarchical clustering and DBSCAN produce similar results, whereas K-means cuts one of the clusters in the wrong place.
opar <- par(mfrow=c(3,1), mar = c(1,1,1,1))
plot(xx, col = k.cl$cluster, main="K-means")
plot(xx, col = d.cl$cluster, main="DBSCAN")
plot(xx, col = h.cl, main="Hierarchical")
par(opar)
Of course, there is no guarantee this will work on your particular data.

R, SOM, Kohonen Package, Outlier Detection

With SOM I experimented a little. First I used MiniSOM in Python, but I was not impressed and changed to the kohonen package in R, which offers more features. Basically, I applied SOM to three use cases: (1) clustering in 2D with generated data, (2) clustering of higher-dimensional data (the built-in wine data set), and (3) outlier detection. I solved all three use cases, but I would like to raise a question about the outlier detection I applied. For this purpose I used the vector som$distances, which contains a distance for each row of the input data set; the values with unusually large distances can be outliers. However, I do not know how this distance is computed. The package description (https://cran.r-project.org/web/packages/kohonen/kohonen.pdf) states for this metric: "distance to the closest unit".
Could you please tell how this distance is computed?
Could you please comment on the outlier detection I used? How would you have done it? (In the generated data set it really finds the outliers. In the real wine data set there are four notably large values among the 177 wines; see the charts below. I really like the idea of using bar charts to depict this.)
Charts:
Generated data: 100 points in 2D in 5 distinct clusters, plus 2 outliers (category 6 marks the outliers).
Distances for all 102 data points; the last two are the outliers, which were correctly identified. I repeated the test with 500 and 1000 data points, adding only 2 outliers each time, and the outliers were found in those cases as well.
Distances for the real wine data set, with potential outliers.
The row ids of the potential outliers:
# print the row id of the outliers
# the threshold 10 can be taken from the bar chart,
# below which the vast majority of the values fall
df_wine[df_wine$value > 10, ]
It produces the following output:
index value
59 59 12.22916
110 110 13.41211
121 121 15.86576
158 158 11.50079
My annotated code snippet:
library(kohonen)
library(ggplot2)

data(wines)
scaled_wines <- scale(wines)
# creating and training the SOM
som.wines <- som(scaled_wines, grid = somgrid(5, 5, "hexagonal"))
summary(som.wines)
# looking for outliers; each entry is the distance to the closest unit
som.wines$distances
len <- length(som.wines$distances)
index_in_vector <- 1:len
df_wine <- data.frame(cbind(index_in_vector, som.wines$distances))
colnames(df_wine) <- c("index", "value")
po <- ggplot(df_wine, aes(index, value)) + geom_bar(stat = "identity")
po <- po + ggtitle("Outliers?") + theme(plot.title = element_text(hjust = 0.5)) +
  ylab("Distances in som.wines$distances") + xlab("Number of Rows in the Data Set")
plot(po)
# print the row id of the outliers
# the threshold 10 can be taken from the bar chart,
# below which the vast majority of the values fall
df_wine[df_wine$value > 10, ]
Further Code Samples
With regard to the discussion in the comments, I also post the code snippets asked for. As far as I remember, I constructed the code lines responsible for clustering based on samples I found in the description of the kohonen package (https://cran.r-project.org/web/packages/kohonen/kohonen.pdf). However, I am not completely sure; it was more than a year ago. The code is provided as is, without any warranty :-). Please bear in mind that a particular clustering approach may perform with different accuracy on different data. I would also recommend comparing it with t-SNE on the wine data set (data(wines), available in R), and implementing heat maps to demonstrate how the data are distributed with regard to the individual variables. (In the case of the above example with 2 variables this is not important, but it would be nice for the wine data set.)
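For the t-SNE comparison, a minimal sketch could look like this; it assumes the Rtsne package, which is not part of the code below:
# a sketch only: Rtsne is an assumption, not used elsewhere in this post
library(Rtsne)
library(kohonen)
data(wines)
scaled_wines <- scale(wines)
set.seed(7)
# perplexity must satisfy 3 * perplexity < nrow(scaled_wines) - 1
tsne_out <- Rtsne(scaled_wines, perplexity = 30)
plot(tsne_out$Y, pch = 19, xlab = "t-SNE 1", ylab = "t-SNE 2",
     main = "t-SNE embedding of the wine data")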
Data Generation with Five Clusters and 2 Outliers and Plotting
library(stats)
library(ggplot2)
library(kohonen)
generate_data <- function(num_of_points, num_of_clusters, outliers = TRUE){
  num_of_points_per_cluster <- num_of_points / num_of_clusters
  cat(sprintf("#### num_of_points_per_cluster = %s, num_of_clusters = %s \n",
              num_of_points_per_cluster, num_of_clusters))
  arr <- array()
  standard_dev_y <- 6000
  standard_dev_x <- 2
  # for reproducibility, seed the random generator
  set.seed(10)
  for (i in 1:num_of_clusters){
    centroid_y <- runif(1, min = 10000, max = 200000)
    centroid_x <- runif(1, min = 20, max = 70)
    cat(sprintf("centroid_x = %s \n, centroid_y = %s", centroid_x, centroid_y))
    vector_y <- rnorm(num_of_points_per_cluster, mean = centroid_y, sd = standard_dev_y)
    vector_x <- rnorm(num_of_points_per_cluster, mean = centroid_x, sd = standard_dev_x)
    cluster <- array(c(vector_y, vector_x), dim = c(num_of_points_per_cluster, 2))
    cluster <- cbind(cluster, i)
    arr <- rbind(arr, cluster)
  }
  if (outliers){
    # adding two outliers
    arr <- rbind(arr, c(10000, 30, 6))
    arr <- rbind(arr, c(150000, 70, 6))
  }
  colnames(arr) <- c("y", "x", "Cluster")
  # workaround to remove the first NA row created by array()
  arr <- na.omit(arr)
  return(arr)
}
scatter_plot_data <- function(data_in, coloring_base_indx, main_label){
  df <- data.frame(data_in)
  # name only the first three columns, so extra columns (e.g. predicted
  # clusters appended later) keep their own names
  colnames(df)[1:3] <- c("y", "x", "Cluster")
  pl <- ggplot(data = df, aes(x = x, y = y)) +
    geom_point(aes(color = factor(df[, coloring_base_indx])))
  pl <- pl + ggtitle(main_label) + theme(plot.title = element_text(hjust = 0.5))
  print(pl)
}
##################
# generating data
data <- generate_data(100, 5, TRUE)
print(data)
scatter_plot_data(data, coloring_base_indx = 3,
                  "Original Clusters with Outliers \n 102 Points")
Preparation, Clustering and Plotting
I used the hierarchical clustering approach with the Kohonen Map (SOM).
normalising_data <- function(data){
  # normalise the data points, not the cluster identifiers
  mtrx <- data.matrix(data)
  umtrx <- scale(mtrx[, 1:2])
  umtrx <- cbind(umtrx, factor(mtrx[, 3]))
  colnames(umtrx) <- c("y", "x", "Cluster")
  return(umtrx)
}
train_som <- function(umtrx){
  # unsupervised learning
  set.seed(7)
  g <- somgrid(xdim = 5, ydim = 5, topo = "hexagonal")
  # map <- som(umtrx[, 1:2], grid = g, alpha = c(0.005, 0.01), radius = 1, rlen = 1000)
  map <- som(umtrx[, 1:2], grid = g)
  summary(map)
  return(map)
}
plot_som_data <- function(map){
  par(mfrow = c(3, 2))
  # plot some characteristics of the SOM map
  plot(map, type = 'changes')
  plot(map, type = 'codes', main = "Mapping Data")
  plot(map, type = 'count')
  plot(map, type = 'mapping') # how many data points are held by each neuron
  plot(map, type = 'dist.neighbours') # the darker the colours, the closer the points; the lighter the colours, the more distant the points
  # switch the plot config back to normal
  par(mfrow = c(1, 1))
}
plot_distances_to_the_closest_point <- function(map){
  # show which neuron is assigned to which value
  print(map$unit.classif)
  # looking for outliers; map$distances = distance to the closest unit
  print(max(map$distances))
  len <- length(map$distances)
  index_in_vector <- 1:len
  df <- data.frame(cbind(index_in_vector, map$distances))
  colnames(df) <- c("index", "value")
  po <- ggplot(df, aes(index, value)) + geom_bar(stat = "identity")
  po <- po + ggtitle("Outliers?") + theme(plot.title = element_text(hjust = 0.5)) +
    ylab("Distances in som$distances") + xlab("Number of Rows in the Data Set")
  plot(po)
  return(df)
}
###################
# unsupervised learning
umtrx <- normalising_data(data)
map <- train_som(umtrx)
plot_som_data(map)
#####################
# creating the dendrogram and then the clusters for the neurons
dendrogram <- hclust(object.distances(map, "codes"), method = 'ward.D')
plot(dendrogram)
clusters <- cutree(dendrogram, 7)
clusters
length(clusters)
# visualising the clusters on the map
par(mfrow = c(1, 1))
plot(map, type = 'dist.neighbours', main = "Mapping Data")
add.cluster.boundaries(map, clusters)
Plots with the Clusters
You can also create nice heat maps for selected variables, but I have not implemented them here; for clustering with 2 variables it does not really make sense. If you implement them for the wine data set, please add the code and the charts to this post.
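As a starting point, a minimal sketch of such a heat map (component plane) for a single wine variable, assuming the som.wines object trained earlier in this post and kohonen 3.x, where the codebook vectors live in som.wines$codes[[1]]:
# sketch only: colour each map unit by its codebook value for one variable
# ("alcohol" is one of the columns of the wines data set)
plot(som.wines, type = "property",
     property = som.wines$codes[[1]][, "alcohol"],
     main = "Component plane: alcohol")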
# see the predicted clusters with the data set
# 1. add the vector of the neuron ids to the data
mapped_neurons <- map$unit.classif
umtrx <- cbind(umtrx, mapped_neurons)
# 2. take the predicted clusters and add them to the original matrix
# very good description of the apply functions:
# https://www.guru99.com/r-apply-sapply-tapply.html
get_cluster_for_the_row <- function(x, cltrs){
  return(cltrs[x])
}
predicted_clusters <- sapply(umtrx[, 4], get_cluster_for_the_row, cltrs = clusters)
# the original, unscaled data with the predicted cluster ids appended
mtrx <- cbind(data.matrix(data), predicted_clusters)
scatter_plot_data(mtrx, coloring_base_indx = 4,
                  "Predicted Clusters with Outliers \n 102 Points")
See the predicted clusters below, both for the case with outliers and the case without them.
I am not quite sure, but I often find that the distance between two objects in a given dimensional space is measured with the Euclidean distance. For example, two points A and B in a two-dimensional space, located at A(x=3, y=4) and B(x=6, y=8), are 5 distance units apart, the result of computing sqrt((3-6)^2 + (4-8)^2). The same applies to data of higher dimension, by adding the squared difference of the two points' values in each further dimension: if A(x=3, y=4, z=5) and B(x=6, y=8, z=7), then the distance is sqrt((3-6)^2 + (4-8)^2 + (5-7)^2), and so on.
In kohonen, I think that after the model has finished the training phase, the algorithm calculates the distances of each datum to all nodes and then assigns it to the nearest node (the node which has the smallest distance to it). Eventually, the values inside the variable 'distances' returned by the model are the distances of every datum to its nearest node. One thing to note from your script is that the algorithm does not measure the distance directly on the original property values of the data, because they were scaled prior to feeding them to the model; the distance measurement is applied to the scaled version of the data. The scaling is a standard procedure to eliminate the dominance of one variable over another.
I believe that your method is acceptable, because the values inside the 'distances' variable are the distances of each datum to its nearest node. So if the distance between a datum and its nearest node is large, this also means that its distances to all the other nodes are obviously much, much larger.
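This interpretation can be checked numerically. A minimal sketch, under two assumptions of mine: kohonen 3.x, where the codebook vectors live in som$codes[[1]], and the default distance function "sumofsquares", i.e. the squared Euclidean distance without the square root:
library(kohonen)
data(wines)
scaled_wines <- scale(wines)
set.seed(7) # a seed of my choosing, only to make the sketch reproducible
som.wines <- som(scaled_wines, grid = somgrid(5, 5, "hexagonal"))
# recompute by hand the distance of each (scaled) observation to its winning unit
winners <- som.wines$unit.classif
codes <- som.wines$codes[[1]]
d_manual <- rowSums((scaled_wines - codes[winners, ])^2)
# if the assumptions above hold, this should print TRUE
all.equal(unname(d_manual), unname(som.wines$distances))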

Why does cluster_infomap in igraph R give different communities each time?

I am using the cluster_infomap function from igraph in R to detect communities in an undirected, unweighted network with ~19,000 edges, but I get a different number of communities each time I run the function. This is the code I am using:
clusters <- list()
clusters[["im"]] <- cluster_infomap(graph)
membership_local_method <- membership(clusters[["im"]])
length(unique(membership_local_method))
The result of the last line of code ranges from 805-837 in the tests I have performed. I tried using set.seed() in case it was an issue of random number generation, but this does not solve the problem.
My questions are (1) why do I get different communities each time, and (2) is there a way to make it stable?
Thanks!
According to the documentation (?igraph::cluster_infomap), cluster_infomap "finds a community structure that minimizes the expected description length of a random walker trajectory".
Whenever you deal with random number generation, you get different results on each run. Most of the time you can override this by setting a seed with set.seed (see ?Random for help) beforehand:
identical(cluster_infomap(g), cluster_infomap(g))
# [1] FALSE
identical({set.seed(1);cluster_infomap(g)},{set.seed(1);cluster_infomap(g)})
# [1] TRUE
or graphically:
library(igraph)
set.seed(2)
g <- ba.game(150)
coords <- layout.auto(g)
par(mfrow=c(2,2))
# without seed: different results
for (x in 1:2) {
  plot(
    cluster_infomap(g),
    as.undirected(g),
    layout = coords,
    vertex.label = NA,
    vertex.size = 5
  )
}
# with seed: equal results
for (x in 1:2) {
  set.seed(1)
  plot(
    cluster_infomap(g),
    as.undirected(g),
    layout = coords,
    vertex.label = NA,
    vertex.size = 5
  )
}

Clustering based on connectivity of points

I have 1 million records of lat/long (5-digit precision) and route. I want to cluster those data points.
I don't want to use standard k-means clustering, as I am not sure how many clusters there are (I tried the elbow method but was not convinced).
Here is my logic:
1) I want to reduce the precision of the lat/long values from 5 digits to 3 digits.
2) Lat/longs which are within a range of +/- 0.001 of each other are to be clustered into one cluster, and the centroid of each cluster calculated.
But in doing so I am unable to find a good algorithm and an R script to execute my idea.
Can anyone please help me with the above problem?
Thanks,
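As a side note, the binning step described in the question can be sketched directly in base R: rounding to 3 decimals assigns each point to a 0.001-degree cell, and points in the same cell are then averaged. A minimal sketch with made-up data (note this is not quite the +/- 0.001 criterion, since two close points just across a cell boundary land in different cells; the answer below handles that case via triangulation):
# grid-binning sketch: round to 3 decimals, average the points per cell
coords <- data.frame(lat = runif(1000), long = runif(1000))
cell <- interaction(round(coords$lat, 3), round(coords$long, 3), drop = TRUE)
centroids <- aggregate(coords, by = list(cell = cell), FUN = mean)
head(centroids)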
Clustering can be done based on connected components.
All points that are within +/- 0.001 distance of each other can be connected, so we get a graph whose subgraphs are either single points or series of connected points (connected components). The connected components can then be found and their centroids calculated.
Two packages are required for this task:
1. deldir, to form a triangulation of the points, determine which points are adjacent to each other, and calculate the distances between them.
2. igraph, to find the connected components.
library(deldir)
library(igraph)

coords <- data.frame(lat = runif(1000000), long = runif(1000000))
# round to 3 digits
coords.r <- round(coords, 3)
# remove duplicates
coords.u <- unique(coords.r)
# create a triangulation of the points; depending on the data this
# may take a while and consume more memory
triangulation <- deldir(coords.u$long, coords.u$lat)
# compute the (Manhattan) distance between adjacent points
distances <- abs(triangulation$delsgs$x1 - triangulation$delsgs$x2) +
  abs(triangulation$delsgs$y1 - triangulation$delsgs$y2)
# keep only the edges shorter than .0011
edge.list <- as.matrix(triangulation$delsgs[distances < .0011, 5:6])
if (length(edge.list) == 0) { # there is no edge shorter than .0011
  coords.clustered <- coords.u
} else { # find connected components
  # reformat the list of edges so that the vertex ids become consecutive,
  # e.g. the edge list
  #   9 5
  #   5 7
  # is reformatted to
  #   3 1
  #   1 2
  sorted <- sort(c(edge.list), index.return = TRUE)
  run.length <- rle(sorted$x)
  indices <- rep(1:length(run.length$lengths), times = run.length$lengths)
  edge.list.reformatted <- edge.list
  edge.list.reformatted[sorted$ix] <- indices
  # create a graph from the list of edges
  graph.struct <- graph_from_edgelist(edge.list.reformatted, directed = FALSE)
  # cluster based on connected components
  clust <- components(graph.struct)
  # computation of centroids
  coords.connected <- coords.u[run.length$values, ]
  centroids <- data.frame(
    lat = tapply(coords.connected$lat, factor(clust$membership), mean),
    long = tapply(coords.connected$long, factor(clust$membership), mean))
  # combine clustered points with unclustered points
  coords.clustered <- rbind(coords.u[-run.length$values, ], centroids)
  # round the data and remove possible duplicates
  coords.clustered <- round(coords.clustered, 3)
  coords.clustered <- unique(coords.clustered)
}

spatial distribution of points, R

What would be an easy way to generate 3 different spatial distributions of points (N = 20 points) using R, for example 1) random, 2) uniform, and 3) clustered, on the same space (a 50 x 50 grid)?
1) Here's one way to get a very even spacing of 5 points in a 25 by 25 grid numbered from 1 in each direction: put points at (3,18), (8,3), (13,13), (18,23), (23,8); you should be able to generalize from there.
2) As you suggest, you could use runif ... but I'd have assumed from your question that you actually wanted points on the lattice (i.e. integers), in which case you might use sample. Are you sure you want continuous rather than discrete random variables?
3) This one is "underdetermined" - depending on how you want to define things, there are many ways you might do it. E.g. if it's on a grid, you could sample points in such a way that points close to (but not exactly on) already-sampled points have a much higher probability than ones further away; a similar setup works for continuous variables. Or you could generate more points than you need and eliminate the loneliest ones. Or you could start with random uniform points and then make them gravitate toward their neighbours. Or you could generate a few cluster centres (4-10, say) and then scatter points about those centres, as in the sketch after this list. Or you could do any of a hundred other things.
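A minimal base-R sketch of two of these suggestions, with my own (arbitrary) parameter choices: sample for lattice points as in 2), and a few random centres with Gaussian jitter as in 3):
# 2) 20 points on the integer lattice of a 50 x 50 grid
set.seed(1)
lattice.pts <- data.frame(x = sample(1:50, 20, replace = TRUE),
                          y = sample(1:50, 20, replace = TRUE))
# 3) a few cluster centres, then points scattered around them
n.centers <- 4
centers <- data.frame(x = runif(n.centers, 0, 50),
                      y = runif(n.centers, 0, 50))
idx <- sample(n.centers, 20, replace = TRUE) # assign each point to a centre
cluster.pts <- data.frame(x = rnorm(20, centers$x[idx], sd = 2),
                          y = rnorm(20, centers$y[idx], sd = 2))
plot(lattice.pts, xlim = c(0, 50), ylim = c(0, 50), pch = 19)
points(cluster.pts, pch = 17, col = "red")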
A bit late, but the answers above do not really address the problem. Here is what you are looking for:
library(sp)
# make a grid of size 50 x 50
x1 <- (1:50) - 0.5
x2 <- x1
grid <- expand.grid(x1, x2)
names(grid) <- c("x1", "x2")
# make the grid a spatial object
coordinates(grid) <- ~ x1 + x2
gridded(grid) <- TRUE
First: random sampling
# random sampling
random.pt <- spsample(x = grid, n= 20, type = 'random')
Second: regular sampling
# regular sampling
regular.pt <- spsample(x = grid, n= 20, type = 'regular')
Third: clustered within a distance of 2 of a random location (points can fall outside the area)
# random sampling of one location
ori <- data.frame(spsample(x = grid, n = 1, type = 'random'))
# randomly select 20 distances between 0 and 2
n.point <- 20
h <- runif(n.point, min = 0, max = 2)
# empty dataframe
dxy <- data.frame(matrix(nrow = n.point, ncol = 2))
# take a random angle from the selected location and fill the dataframe
# with the offsets from the original sampling point, in random directions
angle <- runif(n = n.point, min = 0, max = 2 * pi)
dxy[, 1] <- h * sin(angle)
dxy[, 2] <- h * cos(angle)
cluster <- data.frame(x = rep(NA, 20), y = rep(NA, 20))
cluster$x <- ori$coords.x1 + dxy$X1
cluster$y <- ori$coords.x2 + dxy$X2
# make a spatial object and plot
coordinates(cluster) <- ~ x + y
plot(grid)
plot(cluster, add = TRUE, col = 'green')
plot(random.pt, add = TRUE, col = 'red')
plot(regular.pt, add = TRUE, col = 'blue')
