I am trying to perform DBSCAN clustering on the data set at https://www.kaggle.com/arjunbhasin2013/ccdata. I have cleaned the data and applied the algorithm.
data1 <- read.csv('C:\\Users\\write\\Documents\\R\\data\\Project\\Clustering\\CC GENERAL.csv')
head(data1)
data1 <- data1[, 2:18]   # drop the CUST_ID column; keep the 17 numeric attributes
dim(data1)
colnames(data1)
head(data1,2)
# check whether the data object is empty (is_empty checks length, not individual rows/columns)
library(purrr)
is_empty(data1)
# check whether the data has duplicate rows
library(dplyr)
any(duplicated(data1))
# check whether the data has NA values
any(is.na(data1))
data1 <- na.omit(data1)
any(is.na(data1))
dim(data1)
The algorithm was applied as follows.
#DBSCAN
data1 <- scale(data1)
library(fpc)
library(dbscan)
set.seed(500)
#to find optimal eps
kNNdistplot(data1, k = 34)
abline(h = 4, lty = 3)
The figure shows the 'knee' used to identify the eps value. Since there are 17 attributes to be considered for clustering, I have taken k = 17 * 2 = 34.
db <- dbscan(data1,eps = 4,minPts = 34)
db
The result I obtained is "The clustering contains 1 cluster(s) and 147 noise points."
No matter what values I try for eps and minPts, the result stays the same.
Can anyone tell me where I have gone wrong?
Thanks in advance.
You have two options:
Increase the radius of your center points (given by the epsilon parameter)
Decrease the minimum number of points (minPts) to define a center point.
I would start by decreasing the minPts parameter: I think it is very high, and if the algorithm does not find that many points within the radius, it will not group more points into a cluster.
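A quick way to see how sensitive the result is to these two parameters is to scan a small grid of values (a rough sketch, assuming data1 is the scaled data from the question; the specific eps and minPts values are only illustrative):
library(dbscan)
for (eps in c(2, 3, 4)) {
  for (minPts in c(10, 17, 34)) {
    db <- dbscan(data1, eps = eps, minPts = minPts)
    cat("eps =", eps, "| minPts =", minPts,
        "| clusters =", max(db$cluster),        # cluster 0 is noise
        "| noise points =", sum(db$cluster == 0), "\n")
  }
}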
A typical problem with using DBSCAN (and clustering in general) is that real data typically does not fall into nice clusters, but forms one connected point cloud. In this case, DBSCAN will always find only a single cluster. You can check this with several methods. The most direct method would be to use a pairs plot (a scatterplot matrix):
plot(as.data.frame(data1))
Since you have many variables, the scatterplot panels are very small, but you can see that the points are very close together in almost all panels. DBSCAN will connect all points in these dense areas into a single cluster. k-means will just partition the dense area.
Another option is to check for clusterability with methods like VAT or iVAT (https://link.springer.com/chapter/10.1007/978-3-642-13657-3_5).
library("seriation")
## calculate distances for a small sample
d <- dist(data1[sample(seq(nrow(data1)), size = 1000), ])
iVAT(d)
You will see that the plot shows no block structure around the diagonal, indicating that clustering will not find much.
To improve the clustering, you need to work on the data. You can remove irrelevant variables, and you may have very skewed variables that should be transformed first. You could also try a non-linear embedding before clustering.
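For example, here is a minimal sketch of the "transform skewed variables" idea. It assumes data1 is the cleaned but not yet scaled, non-negative data from the question (log1p would fail on the negative values produced by scale()), and the skewness check is only a crude heuristic:
data1_df <- as.data.frame(data1)
# crude skewness check: mean far from the median relative to the spread
skewed <- sapply(data1_df, function(x) abs(mean(x) - median(x)) > sd(x) / 2)
data1_df[skewed] <- lapply(data1_df[skewed], log1p)
data1_t <- scale(data1_df)
# re-inspect the kNN distance plot (and the dbscan result) on the transformed data
kNNdistplot(data1_t, k = 34)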
I have 20 elements with labels. I'd like to cluster these elements with some technique that does not use the labels, for example hierarchical clustering.
Now for each of my elements I have the original labels, for example:
c(rep("a",7),rep("b","8"),rep("c",5)) ## my labels
and the labels obtained through the hierarchical clustering
c(1,1,1,1,2,3,2,2,2,2,2,2,1,3,3,1,2,3,3,3) ## labels through HC
Now, how can I use normalised mutual information with these different labels?
If I understood correctly, this shouldn't be a problem. Just remember that NMI takes data frames or matrices as input.
If you take your element IDs as 1...20, this should work:
NMI(cbind(1:20, original.labels), cbind(1:20, new.labels))
NMI compares every label in one clustering with every label in the other, so it does not matter that the two label sets are different. All that matters is how they intersect.
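Putting it together with the labels from the question, a sketch assuming the NMI package (which, as far as I know, expects two data frames of (element id, label) pairs; adjust if you use a different implementation):
library(NMI)
original.labels <- c(rep("a", 7), rep("b", 8), rep("c", 5))
new.labels <- c(1, 1, 1, 1, 2, 3, 2, 2, 2, 2, 2, 2, 1, 3, 3, 1, 2, 3, 3, 3)
NMI(data.frame(id = 1:20, label = original.labels),
    data.frame(id = 1:20, label = new.labels))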
I have a dataset containing 1599 observations and 10 attributes on which I need to do k-means clustering. I have done the k-means with 6 clusters, and I can see the cluster centers, sizes, etc. and which observation lies in which cluster. Now, I need to plot these results so that a single plot shows the following information: on the x-axis, one of the 10 attributes of my original data; on the y-axis, another attribute; and in the plot, all 1599 observations, colored by the cluster they belong to. So I will have 10C2 = 45 plots. Basically, this should tell me that cluster 1 is high/medium/low in terms of a particular attribute while cluster 2 is so and so... for all 6 clusters.
I tried the function plotcluster from the fpc package, but from what I understood, it maps the data into 2D using PCA and then plots the clusters in terms of two dimensions that are different from the original attributes. So when I say cluster 1 is low in dim1, it wouldn't really make much sense.
Is there a function to do what I want, or should I just append the '$cluster' information from the kmeans output with my original data and try to plot taking 2 columns from my data at a time using the basic function plot()?
I suggest one solution, probably not the simplest (it uses a for loop), but it seems to do what you need:
df <- mtcars
df$cluster <- factor(kmeans(df, centers = 6)$cluster)
# all pairs of the original attribute columns (the last column is the cluster label)
mycomb <- combn(seq_len(ncol(df) - 1), 2)
for (xy in seq_len(ncol(mycomb))) {
  plot(x = df[, mycomb[1, xy]],
       y = df[, mycomb[2, xy]],
       col = as.numeric(df$cluster),
       xlab = names(df)[mycomb[1, xy]],
       ylab = names(df)[mycomb[2, xy]])
}
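If a single picture is enough, a scatterplot matrix colored by cluster shows all pairwise panels at once (a quick sketch on the same example data):
# all pairwise panels in one plot, points colored by cluster membership
pairs(df[, names(df) != "cluster"], col = as.integer(df$cluster), pch = 19)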
I am trying to reproduce the first figure of this paper on graph clustering:
Here is a sample of my adjacency matrix:
data=cbind(c(48,0,0,0,0,1,3,0,1,0),c(0,75,0,0,3,2,1,0,0,1),c(0,0,34,1,16,0,3,0,1,1),c(0,0,1,58,0,1,3,1,0,0),c(0,3,16,0,181,6,6,0,2,2),c(1,2,0,1,6,56,2,1,0,1),c(3,1,3,3,6,2,129,0,0,1),c(0,0,0,1,0,1,0,13,0,1),c(1,0,1,0,2,0,0,0,70,0),c(0,1,1,0,2,1,1,1,0,85))
colnames(data)=letters[1:nrow(data)]
rownames(data)=colnames(data)
And with these commands I obtain the following heatmap:
library(reshape)
library(ggplot2)
library(scales)   # rescale() below comes from the scales package
data.m=melt(data)
data.m[,"rescale"]=round(rescale(data.m[,"value"]),3)
p=ggplot(data.m,aes(X1, X2))+geom_tile(aes(fill=rescale),colour="white")
p=p+scale_fill_gradient(low="white",high="black")
p+theme(text=element_text(size=10),axis.text.x=element_text(angle=90,vjust=0))
This is very similar to the plot on the left of Figure 1 above. The only differences are that (1) the nodes are not ordered randomly but alphabetically, and (2) instead of just having binary black/white pixels, I am using a "shades of grey" palette to be able to show the strength of the co-occurrence between nodes.
But the point is that it is very hard to distinguish any cluster structure (and this would be even more true with the full set of 100 nodes). So, I want to order my vertices by clusters on the heatmap. I have this membership vector from a community detection algorithm:
membership=c(1,2,4,2,5,3,1,2,2,3)
Now, how can I obtain a heatmap similar to the plot on the right of Figure 1 above?
Thanks a lot in advance for any help
PS: I have experimented with "R draw kmeans clustering with heatmap" and "R: How do I display clustered matrix heatmap (similar color patterns are grouped)" but could not get what I want.
It turned out this was extremely easy. I am still posting the solution so that others in my situation don't waste time on it like I did.
The first part is exactly the same as before:
data.m=melt(data)
data.m[,"rescale"]=round(rescale(data.m[,"value"]),3)
Now, the trick is that the levels of the factors of the melted data.frame have to be ordered by membership:
data.m[,"X1"]=factor(data.m[,"X1"],levels=levels(data.m[,"X1"])[order(membership)])
data.m[,"X2"]=factor(data.m[,"X2"],levels=levels(data.m[,"X2"])[order(membership)])
Then, plot the heat map (same as before):
p=ggplot(data.m,aes(X1, X2))+geom_tile(aes(fill=rescale),colour="white")
p=p+scale_fill_gradient(low="white",high="black")
p+theme(text=element_text(size=10),axis.text.x=element_text(angle=90,vjust=0))
This time, the cluster is clearly visible.
Hi, I am using the partitioning around medoids algorithm for clustering, via the pam function in the cluster package. I have 4 attributes in the dataset that I clustered, and they seem to give me around 6 clusters. I want to generate a plot of these clusters across those 4 attributes, like this one: http://www.flickr.com/photos/52099123#N06/7036003411/in/photostream/lightbox/ ("Centroid plot").
But the only way I can draw the clustering result is either using a dendrogram or using
plot(data, col = result$clustering)
which seems to generate a plot similar to this: http://www.flickr.com/photos/52099123#N06/7036003777/in/photostream ("pam results").
Although the first image is a centroid plot, I am wondering if there are any tools available in R to do the same with a medoid plot. Note that the first plot also prints the size of each cluster. It would be great to know if there are any packages/solutions in R that make this possible or, if not, what would be a good starting point for achieving plots similar to the one in image 1.
Thanks
Hi all, I was trying to work out the problem the way Joran suggested, but I think I did not understand it correctly and have not done it the way it is supposed to be done. Anyway, this is what I have done so far. The following shows what the file I tried to cluster looks like:
geneID RPKM-base RPKM-1cm RPKM+4cm RPKMtip
GRMZM2G181227 3.412444267 3.16437442 1.287909035 0.037320722
GRMZM2G146885 14.17287135 11.3577013 2.778514642 2.226818648
GRMZM2G139463 6.866752401 5.373925806 1.388843962 1.062745344
GRMZM2G015295 1349.446347 447.4635291 29.43627879 29.2643755
GRMZM2G111909 47.95903081 27.5256729 1.656555758 0.949824883
GRMZM2G078097 4.433627458 0.928492841 0.063329249 0.034255945
GRMZM2G450498 36.15941083 9.45235616 0.700105077 0.194759794
GRMZM2G413652 25.06985426 15.91342458 5.372151214 3.618914949
GRMZM2G090087 21.00891969 18.02318412 17.49531186 10.74302155
The following is the pam clustering output:
GRMZM2G181227     1
GRMZM2G146885     2
GRMZM2G139463     2
GRMZM2G015295     2
GRMZM2G111909     2
GRMZM2G078097     3
GRMZM2G450498     3
GRMZM2G413652     2
GRMZM2G090087     2
AC217811.3_FG003  2
Using the above two files, I generated a third file that looks like the one below, with the cluster information given as a cluster type (K1, K2, etc.):
geneID RPKM-base RPKM-1cm RPKM+4cm RPKMtip Cluster_type
GRMZM2G181227 3.412444267 3.16437442 1.287909035 0.037320722 K1
GRMZM2G146885 14.17287135 11.3577013 2.778514642 2.226818648 K2
GRMZM2G139463 6.866752401 5.373925806 1.388843962 1.062745344 K2
GRMZM2G015295 1349.446347 447.4635291 29.43627879 29.2643755 K2
GRMZM2G111909 47.95903081 27.5256729 1.656555758 0.949824883 K2
GRMZM2G078097 4.433627458 0.928492841 0.063329249 0.034255945 K3
GRMZM2G450498 36.15941083 9.45235616 0.700105077 0.194759794 K3
GRMZM2G413652 25.06985426 15.91342458 5.372151214 3.618914949 K2
GRMZM2G090087 21.00891969 18.02318412 17.49531186 10.74302155 K2
I certainly don't think this is the file Joran would have wanted me to create, but I could not think of anything else, so I ran lattice on the above file using the following code:
library(lattice)
clusres <- read.table("clusinput.txt", header = TRUE, sep = "\t")
jpeg(filename = "clusplot.jpeg", width = 800, height = 1078,
     pointsize = 12, quality = 100, bg = "white", res = 100)
parallel(~clusres[2:5] | Cluster_type, clusres, horizontal.axis = FALSE)
dev.off()
and I get a picture like this
Since I want one single line as the representative of the whole cluster at the four different points, this output is wrong. Moreover, I tried playing with lattice but I cannot figure out how to make it accept the RPKM values as the x coordinate; it always seems to plot many lines against a maximum or minimum value on the y coordinate, which I don't understand.
It would be great if anybody could help me out. Sorry if my question still seems absurd to you.
I do not know of any pre-built functions that generate the plot you indicate, which looks to me like a sort of parallel coordinates plot.
But generating such a plot would be a fairly trivial exercise.
1. Add a column of cluster labels (K1, K2, etc.) to your original data set, based on your clustering algorithm's output.
2. Use one of the many, many tools in R for aggregating data (plyr, aggregate, etc.) to calculate the relevant summary statistics by cluster on each of the four variables. (You haven't said what the first graph is actually plotting. Mean and sd? Median and MAD?)
3. Since you want the plots split into six separate panels, or facets, you will probably want to plot the data using either ggplot or lattice, both of which provide excellent support for creating the same plot, split across a single grouping vector (i.e. the clusters in your case).
But that's about as specific as anyone can get, given that you've provided so little information (i.e. no minimal runnable example, as recommended here).
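That said, here is a rough sketch of those three steps, assuming the clusres data frame shown in the question (geneID, four RPKM columns, Cluster_type) and using the cluster means as the summary statistic (that choice is an assumption; the original figure might use medians or something else):
library(lattice)
# step 2: one summary row per cluster, here the mean of each RPKM column
centers <- aggregate(clusres[, 2:5],
                     by = list(Cluster_type = clusres$Cluster_type),
                     FUN = mean)
# step 3: one line per cluster across the four measurement points
parallelplot(~ centers[, 2:5], data = centers, groups = Cluster_type,
             horizontal.axis = FALSE, auto.key = list(space = "right"))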
How about using clusplot from the cluster package with partitioning around medoids? Here is a simple example (adapted from the example section of its help page):
require(cluster)
# generate 25 objects, divided into 2 clusters
x <- rbind(cbind(rnorm(10, 0, 0.5), rnorm(10, 0, 0.5)),
           cbind(rnorm(15, 5, 0.5), rnorm(15, 5, 0.5)))
clusplot(pam(x, 2))  # pam does the partitioning around medoids
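A side note on the cluster sizes mentioned in the question: a pam object already stores them, so they can be pulled out and added to a plot title or legend. A minimal sketch:
fit <- pam(x, 2)
fit$clusinfo[, "size"]   # number of observations in each cluster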