ANOSIM with cutree groupings in R

What I would like to do is run an ANOSIM on defined groupings in some assemblage data to see whether the groupings are significantly different from one another, along the lines of this example code:
library(vegan)

data(dune)
data(dune.env)
dune.dist <- vegdist(dune)
attach(dune.env)
dune.ano <- anosim(dune.dist, Management)
summary(dune.ano)
However, in my own data I have the species abundances in a Bray-Curtis dissimilarity matrix. After creating hclust() dendrograms, I define my own groupings visually by looking at the dendrogram and choosing a cut height. With cutree() I can then extract these groupings, which can be superimposed on MDS plots etc., but I would like to test the groupings I have created - i.e. are they significantly different from one another, or just arbitrary?
e.g.
data("dune")
dune.dist <- vegdist(dune)
clua <- hclust(dune.dist, "average")
plot(clua)
rect.hclust(clua, h =0.65)
c1 <- cutree(clua, h=0.65)
I then want to use the categories defined by c1 as the groupings (the role played by the Management factor in the example code) and test via anosim() whether they are actually different.
I am pretty sure this is just a matter of my inept coding.... any advice would be appreciated.

cutree() returns the groups as integers; you must convert these to a factor if you want to use them in anosim(). Try anosim(vegdist(dune), factor(c1)). You would also be well advised to consult a local statistician before using anosim to analyse dissimilarities with clusters that were created from these very same dissimilarities.
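Putting the two together, a minimal sketch of the full workflow (using the built-in dune data in place of your own Bray-Curtis matrix) might be:
library(vegan)

data(dune)
dune.dist <- vegdist(dune)                            # Bray-Curtis dissimilarities
clua <- hclust(dune.dist, method = "average")
c1 <- cutree(clua, h = 0.65)                          # integer group memberships from the cut

dune.ano <- anosim(dune.dist, grouping = factor(c1))  # convert the groups to a factor
summary(dune.ano)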

Related

Plotting different mixture model clusters in the same curve

I have two sets of data: one representing a healthy data set with 4 variables and 11,000 points, and another representing a faulty set with 4 variables and 600 points. I have used R's mclust package to obtain a GMM clustering for each data set separately. What I want to do is plot both clusterings in the same frame so as to study them at the same time. How can that be done?
I have tried joining both datasets, but the result I am obtaining is not what I want.
The code in use is:
library(mclust)
Dat4M <- Mclust(Dat3, G = 3)
Dat3 is where I store my dataset and Dat4M is where I store the result of Mclust; G = 3 is the number of Gaussian mixture components I want, in this case three. To plot the result, the following code is used:
plot(Dat4M)
When I apply the above code to my healthy dataset I obtain one set of density plots, and when I apply it to the faulty dataset I obtain another (plots not shown). Notice that in the faulty data density curve, for the mixture of CCD and CCA, two density peaks are obtained. I want to place these in the same panel as the healthy data and study the differences.
Any help on how to do this will be appreciated.
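For reference, a self-contained sketch of the workflow described above might look like the following, using subsets of iris as hypothetical stand-ins for the healthy and faulty data (Dat3 itself is not shown in the question):
library(mclust)

healthy <- iris[1:100, 1:4]     # stand-in for the large healthy dataset
faulty  <- iris[101:150, 1:4]   # stand-in for the smaller faulty dataset

fit_healthy <- Mclust(healthy, G = 3)   # three-component Gaussian mixture model
fit_faulty  <- Mclust(faulty, G = 3)

plot(fit_healthy, what = "classification")   # pairwise projections, coloured by component
plot(fit_faulty, what = "classification")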

Clustering as a dimension reduction technique, and how to pick representative elements for each cluster?

I have a dataset in which some observations are highly correlated. I am doing a clustering analysis on a distance matrix obtained from the correlation matrix. Some elements in this dataset are redundant, and I want to select representative elements with minimal mutual correlation. I think a brute-force method would be to simply choose one element from each cluster, but I would like to know whether there are more formal methods for this kind of dimensionality reduction in R.
For instance, suppose we cluster the mtcars dataset in the following manner:
m <- cor(t(mtcars))
hc <- hclust(as.dist(1 - m), method = "average")  # convert correlation to a dissimilarity
plot(hc)
This produces a dendrogram of the cars (not shown here).
How can I extract the essential elements from this dendrogram, that is, elements which are minimally correlated with one another?
One option would be to use some of the pre-processing functions in the caret package.
Using your example, the code below will remove all columns that have an absolute pairwise correlation above 0.95 with another column.
library(caret)

m <- cor(t(mtcars))
highlyCor <- findCorrelation(m, cutoff = 0.95)  # indices of highly correlated columns to drop
t(mtcars)[, -highlyCor]                         # keep only the remaining columns (cars)
The above code is adapted from Max Kuhn's excellent book. Refer to it and to the caret documentation for more background and information.
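As for the "one representative per cluster" idea mentioned in the question, a minimal sketch (with an arbitrary number of clusters and a simple "most central member" rule; these choices are illustrative, not prescriptive) could be:
m  <- cor(t(mtcars))
hc <- hclust(as.dist(1 - m), method = "average")   # 1 - correlation as a dissimilarity

k      <- 5                     # number of clusters to keep (arbitrary illustrative choice)
groups <- cutree(hc, k = k)

# for each cluster, keep the member most correlated on average with the
# rest of its own cluster (a simple "most central member" rule)
representatives <- sapply(split(names(groups), groups), function(members) {
  if (length(members) == 1) return(members)
  members[which.max(rowMeans(m[members, members, drop = FALSE]))]
})
representatives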

Hierarchical clustering on continuous heterogeneous variables with different range/scales in R

I would like to use R to perform hierarchical clustering with two groups of variables describing the same samples. One group is microarray gene expression data (for specific genes) that has been normalized and batch-effect corrected. The other group consists of quantitative clinical parameters describing the same samples; however, these clinical variables have not been normalized or subjected to any transformation (i.e. they are raw continuous values).
For example, one of these variables could have values ranging from 2 to 35, whereas another ranges from 0.1 to 0.9, and so on.
Thus, as my ultimate goal is to implement hierarchical clustering using both groups simultaneously (merged into a single matrix/data frame), in order to inspect which of these clinical variables cluster with specific genes, etc.:
1) Is an initial transformation of the clinical variables necessary before merging them with the genes and performing the clustering? For example, a log2 transformation, which has also been applied to part of my gene expression data?
2) Or would scaling by row (that is, across all features in the input data) account for this discrepancy?
3) For a similar analysis/approach, such as constructing a correlation plot of all of the above variables, would simple scaling be sufficient?
Without having seen your gene expression data, I can only give you some general suggestions based on your description, in the context of the three questions you asked:
1) You should definitely check the distribution of each group. In R, you may use one or more of the following functions to visualize a distribution:
hist(expression_data)                             # histogram
plot(density(expression_data))                    # density plot; alternative to a histogram
qqnorm(expression_data); qqline(expression_data)  # QQ plot
Since my understanding is that one of your data groups is log2 transformed, that particular group should have an approximately normal distribution (i.e. a bell-curve shape in the histogram and a straight line in the QQ plot). Whether to transform the group that has not yet been transformed depends on what you want to do with the data. For instance, if you want to use a t-test to compare the two groups, then you definitely need a transformation, as a t-test carries a normality assumption. With regard to hierarchical clustering, if you decide to use both groups in a single clustering analysis, why would you keep one transformed and the other not?
2) Scaling by features is a reasonable approach. Here is a clustering lecture from a Utah State Univ. stats course, with an example. If you decide to use the heatmap function in R, its scale argument (e.g. scale = "row" or scale = "column") will do this for you; a short sketch of feature scaling before clustering is also given after point 3 below.
3) I don't think there is a definitive answer to your third question; it depends on how many features you have available and what analyses you will be doing downstream. As with question 1, I would argue that simple scaling may be sufficient for visualizing your data with hierarchical clustering. However, keep in mind that if you later decide to fit a linear model (which is very common with microarray gene expression data), you might want to consider more sophisticated data scaling.
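For question 2, here is a minimal sketch of scaling features before clustering, assuming gene_expr and clinical are numeric data frames with samples in rows and variables in columns (these object names are placeholders):
# gene_expr and clinical: numeric data, samples in rows, variables in columns
combined <- cbind(gene_expr, clinical)

# scale each variable to mean 0 and unit variance, so that variables measured
# on very different ranges contribute comparably to the distances
combined_scaled <- scale(combined)

# cluster the variables (genes together with clinical parameters), not the samples
d  <- dist(t(combined_scaled))
hc <- hclust(d, method = "average")
plot(hc)

# heatmap() can also do the scaling for you, e.g. heatmap(as.matrix(combined), scale = "column")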

Can PCA or Principal component regression reveal information not seen in the univariate case?

I am wondering whether there is a case where you see something in the principal components (PCs) that you do not see by looking univariately at the variables the PCA is based on. For instance, in the case of group differences: that you see a separation of two groups in one of the PCs, but not in any single variable (univariately).
I will use an example in the two-dimensional setting to better illustrate my question. Let's suppose we have two groups, A and B, and for each observation we have two covariates drawn from a multivariate normal distribution.
# First setting:
library(MASS)  # for mvrnorm

group_A <- mvrnorm(n = 1000, mu = c(0, 0), Sigma = matrix(c(10, 3, 3, 2), 2, 2))
group_B <- mvrnorm(n = 1000, mu = c(10, 3), Sigma = matrix(c(10, 3, 3, 2), 2, 2))
dat <- rbind(cbind.data.frame(group_A, group = "A"),
             cbind.data.frame(group_B, group = "B"))
plot(dat[, 1:2], xlab = "x", ylab = "y", col = factor(dat$group))
In this first setting you see a group separation in variable x, in variable y, and also in both principal components. Hence, PCA gives us the same result we get in the univariate case: groups A and B have different values in the variables x and y.
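A minimal sketch to check this, running prcomp on the simulated dat from the first setting:
pca1 <- prcomp(dat[, 1:2], scale. = TRUE)
scores <- pca1$x                          # principal component scores
plot(scores, col = factor(dat$group))     # compare the group separation along PC1 and PC2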
In a second example, which I generated myself, you do not see a separation in variable x, in variable y, or in PC1 or PC2. Hence, although common sense suggests that we should be able to distinguish between the two groups based on x and y, we observe this neither in the univariate case nor with the help of PCA:
# Second setting:
group_A <- mvrnorm(n = 1000, mu = c(0, 0), Sigma = matrix(c(10, 3, 3, 2), 2, 2))
group_B <- mvrnorm(n = 1000, mu = c(0, 0), Sigma = matrix(c(10, -3, -3, 2), 2, 2))
dat <- rbind(cbind.data.frame(group_A, group = "A"),
             cbind.data.frame(group_B, group = "B"))
plot(dat[, 1:2], xlab = "x", ylab = "y", col = factor(dat$group))
QUESTION: Is there a case in which PCA helps us extract correlations or separations that we would not see in the univariate case? Can you construct one, or is this not possible in the two-dimensional case?
Thank you all in advance for helping me disentangle this.
I think your question is mainly the result of a misunderstanding of what PCA does. It doesn't find clusters in the data the way, say, k-means or DBSCAN does. It projects n-dimensional data onto an orthogonal basis and then selects the top k dimensions (according to variance explained), where k < n.
So in your example, PCA doesn't know that group A was generated by one distribution and group B by another. It just sees the data in 2 dimensions and finds two principal components (of which you may or may not keep one). You might as well plot all 2000 data points in the same color.
However, if you wanted to use PCA in this instance, you could add a third dimension that distinguishes group A from group B. You could, for example, label group A +1 and group B -1 (or something that makes sense relative to the scale of the other dimensions), then perform PCA on the 3 dimensions, reducing to 2 or 1 depending on what the eigenvalues tell you about the variance explained.
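A minimal sketch of this suggestion, applied to the simulated dat above (the column name g and the rest of the code are illustrative, not from the original post):
dat$g <- ifelse(dat$group == "A", 1, -1)       # encode the group as +1 / -1
pca3 <- prcomp(cbind(dat[, 1:2], g = dat$g), scale. = TRUE)
summary(pca3)                                  # variance explained per component
biplot(pca3)                                   # how the group coding loads on the components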

Timeseries cluster validation: using cluster.stats metrics to decide optimal cluster number

I am clustering timeseries data using appropriate distance measures and clustering algorithms for longitudinal data. My goal is to validate the optimal number of clusters for this dataset, through cluster result statistics. I read a number of articles and posts on stackoverflow on this subject, particularly: Determining the Optimal Number of Clusters. Visual inspection is only possible on a subset of my data; I cannot rely on it to be representative of my whole dataset since I am dealing with big data.
My approach is the following:
1. I cluster the data several times using different numbers of clusters and calculate the cluster statistics for each of these options.
2. I calculate the cluster statistic metrics using the cluster.stats function from the fpc package (on CRAN). I plot these and decide, for each metric, which number of clusters is best (see my code below; a sketch of this step follows).
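For illustration, a sketch of how such a table of statistics could be assembled, assuming d is the dissimilarity matrix for the series and hc a hierarchical clustering of it (substitute your own distance measure and clustering step):
library(fpc)

ks <- 2:10   # candidate numbers of clusters
cs_metrics <- do.call(rbind, lapply(ks, function(k) {
  cl <- cutree(hc, k = k)          # replace with your own clustering for each k
  st <- cluster.stats(d, cl)
  data.frame(cluster.number    = k,
             average.within    = st$average.within,
             average.between   = st$average.between,
             avg.silwidth      = st$avg.silwidth,
             ch                = st$ch,
             dunn              = st$dunn,
             dunn2             = st$dunn2,
             entropy           = st$entropy,
             pearsongamma      = st$pearsongamma,
             within.cluster.ss = st$within.cluster.ss)
}))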
My problem is that these metrics each evaluate a different aspect of clustering "goodness", and the best number of clusters according to one metric may not coincide with the best number according to another. For example, Dunn's index may point towards using 3 clusters, while the within-cluster sum of squares may indicate that 75 clusters is a better choice.
I understand the basics: distances between points within a cluster should be small, clusters should be well separated from each other, the within-cluster sum of squares should be minimized, and observations in different clusters should be strongly dissimilar. However, I do not know which of these metrics is most important to consider when evaluating cluster quality.
How do I approach this problem, keeping in mind the nature of my data (timeseries) and the goal to cluster identical series / series with strongly similar pattern regions together?
Am I approaching the clustering problem the right way, or am I missing a crucial step? Or am I misunderstanding how to use these statistics?
Here is how I am deciding the best number of clusters using the statistics:
cs_metrics is my dataframe which contains the statistics.
Average.within.best <- cs_metrics$cluster.number[which.min(cs_metrics$average.within)]
Average.between.best <- cs_metrics$cluster.number[which.max(cs_metrics$average.between)]
Avg.silwidth.best <- cs_metrics$cluster.number[which.max(cs_metrics$avg.silwidth)]
Calinsky.best <- cs_metrics$cluster.number[which.max(cs_metrics$ch)]
Dunn.best <- cs_metrics$cluster.number[which.max(cs_metrics$dunn)]
Dunn2.best <- cs_metrics$cluster.number[which.max(cs_metrics$dunn2)]
Entropy.best <- cs_metrics$cluster.number[which.min(cs_metrics$entropy)]
Pearsongamma.best <- cs_metrics$cluster.number[which.max(cs_metrics$pearsongamma)]
Within.SS.best <- cs_metrics$cluster.number[which.min(cs_metrics$within.cluster.ss)]
Here are the resulting best cluster numbers for each metric, followed by the plots comparing the cluster statistics across the different numbers of clusters (output and plots not shown).
