MY DATA
I have a matrix Median that contains three qualities, Speed, Angle & Acceleration, in virtual 3D space. Each set of qualities belongs to an individual person, termed Class.
Speed<-c(18,21,25,19)
Angle<-c(90,45,90,120)
Acceleration<-c(4,5,9,4)
Class<-c("Nigel","Paul","Kelly","Steve")
Median = data.frame(Class,Speed,Angle,Acceleration)
mm = as.matrix(Median)
In the example above, Nigel's Speed, Angle and Acceleration qualities would be (18,90,4).
MY PROBLEM
I wish to know the Euclidean distance between each pair of individuals/classes. For example, the Euclidean distance between Nigel and Paul, Nigel and Kelly, etc. I then wish to display the results in a dendrogram, as the result of hierarchical clustering.
WHAT I HAVE (UNSUCCESSFULLY) ATTEMPTED
I first used hc = hclust(dist(mm)) and then plot(hc), but this results in a dendrogram of Speed only. It seems the function pdist() can compute the distance between two matrices of observations, but I have three variables. Is this possible in R? I am new to the language and have found a similar MATLAB question (Calculating Euclidean distance of pairs of 3D points in matlab), but how do I write this in R code?
Many thanks.
When you transform your data.frame into a matrix, all values become character strings; I don't think that is what you want (moreover, you would then be computing distances with the "Class" names as one of the variables).
The best approach is to use your "Class" values as row.names and then compute your distances and run hclust:
mm<-Median[,-1]
row.names(mm)<-Median[,1]
Then you can compute the Euclidean distances between the classes with dist(mm, method = "euclidean"):
> dist(mm,method="euclidean")
Nigel Paul Kelly
Paul 45.110974
Kelly 8.602325 45.354162
Steve 30.016662 75.033326 31.000000
Finally, perform your hierarchical clustering:
hac<-hclust(dist(mm,method="euclidean"))
and plot(hac,hang=-1) to display the dendrogram.
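Putting the pieces together, a minimal end-to-end sketch of this workflow (using the example data from the question):
Speed <- c(18, 21, 25, 19)
Angle <- c(90, 45, 90, 120)
Acceleration <- c(4, 5, 9, 4)
Class <- c("Nigel", "Paul", "Kelly", "Steve")
Median <- data.frame(Class, Speed, Angle, Acceleration)

mm <- Median[, -1]                    # drop the Class column...
row.names(mm) <- Median[, 1]          # ...and keep it as row names instead

d <- dist(mm, method = "euclidean")   # pairwise Euclidean distances
hac <- hclust(d)                      # complete linkage by default
plot(hac, hang = -1)                  # dendrogram labelled with the Class names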
Related
I want to quantify the dissimilarity between two groups. Each group has 5 observations, so there are 25 pairwise combinations.
For each combination, I have calculated the pairwise Euclidean distance (in feature space), so I have a vector of pairwise Euclidean distances, as follows:
set.seed(1)
runif(n=25, min=50, max=90)
[1] 60.62035 64.88496 72.91413 86.32831 58.06728 85.93559 87.78701 76.43191 75.16456 52.47145 58.23898 57.06227 77.48091
[14] 65.36415 80.79366 69.90797 78.70474 89.67624 65.20141 81.09781 87.38821 58.48570 76.06695 55.02220 60.68883
I want to use a kernel function to assign weights to the 25 combinations based on the vector of pairwise Euclidean distances: the shorter the distance, the larger the weight.
How can I do it in R?
I have limited knowledge about kernels. Thank you in advance for any suggestions!
I would really appreciate it even if you can only give me some hints about the mathematical formula, without any programming.
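One common choice (a sketch, not from the original question) is the Gaussian/RBF kernel w_i = exp(-d_i^2 / (2*sigma^2)), where the bandwidth sigma is a free parameter; below it is set to the median distance purely as a rule of thumb:
set.seed(1)
d <- runif(n = 25, min = 50, max = 90)   # the 25 pairwise distances from above

sigma <- median(d)                       # bandwidth: a common heuristic, not the only choice
w <- exp(-d^2 / (2 * sigma^2))           # Gaussian (RBF) kernel: shorter distance -> larger weight
w <- w / sum(w)                          # optional: normalise the weights to sum to 1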
I have a large matrix of 500K observations to cluster using hierarchical clustering. Due to the large size, I do not have the computing power to calculate the full distance matrix.
To overcome this problem I chose to aggregate my matrix, merging identical observations, which reduces it to about 10K observations. I have the frequency of each row in this aggregated matrix and now need to incorporate that frequency as a weight in my hierarchical clustering.
The data are a mixture of numerical and categorical variables for the 500K observations, so I have used daisy (from the cluster package) to calculate the Gower dissimilarity for my aggregated dataset. I want to use hclust from the stats package on the aggregated dataset, but I also want to take into account the frequency of each observation. From the help information for hclust the arguments are as follows:
hclust(d, method = "complete", members = NULL)
The information for the members argument is: "NULL or a vector with length size of d. See the ‘Details’ section." The Details section says: "If members != NULL, then d is taken to be a dissimilarity matrix between clusters instead of dissimilarities between singletons and members gives the number of observations per cluster. This way the hierarchical cluster algorithm can be ‘started in the middle of the dendrogram’, e.g., in order to reconstruct the part of the tree above a cut (see examples). Dissimilarities between clusters can be efficiently computed (i.e., without hclust itself) only for a limited number of distance/linkage combinations, the simplest one being squared Euclidean distance and centroid linkage. In this case the dissimilarities between the clusters are the squared Euclidean distances between cluster means."
From the above description, I am unsure whether I can assign my frequency weights to the members argument, as it is not clear that this is its purpose. I would like to use it like this:
hclust(d, method = "complete", members = df$freq)
Where df$freq is the frequency of each row in the aggregated matrix. So if a row is duplicated 10 times this value would be 10.
If anyone can help me that would be great,
Thanks
Yes, this should work fine for most linkages, in particular single, group-average and complete linkage. For Ward etc. you need to take the weights into account correctly yourself.
But even that part is not hard. Just make sure to use the cluster sizes, because you need to pass the distance of two clusters, not of two points. So the matrix should contain the distance of n1 points at location x and n2 points at location y. For min/max/mean this n disappears or cancels out; for Ward, you should get an SSQ-like formula.
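A minimal sketch of what this could look like, assuming agg is the aggregated data frame and agg$freq holds the row frequencies (both names are placeholders taken from the question, not tested code):
library(cluster)                 # for daisy()

# agg: the ~10K aggregated rows (mixed numeric/categorical), agg$freq: their frequencies
d <- daisy(agg[, setdiff(names(agg), "freq")], metric = "gower")

# pass the frequencies as cluster sizes; fine for single/average/complete linkage
hc <- hclust(d, method = "complete", members = agg$freq)
plot(hc, hang = -1)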
(1) I have n points in 3D space
(2) I have a random vector
(3) I project all n points onto the vector
Then I find the average distance between all of the projected points
How could I find the vector such that, after projecting the points onto it, the average distance between the projected points is greatest?
Can this be done in O(n)?
There is one method which you can use from machine learning, specifically dimensionality reduction. (This is based on PCA which was mentioned in one of the comments.)
Compute the covariance matrix.
Find the eigenvalues and the eigenvectors.
The eigenvector with the largest eigenvalue will correspond to the direction of the most variance, so the direction in which the points are most spread out.
Map the points onto the line defined by the vector.
Centring the points around 0 before the projection, and then moving them back afterwards, may be needed as well. The issue with this is that it is quite expensive in terms of time. For more details look at this question: How is the complexity of PCA O(min(p^3,n^3))?
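A minimal sketch in R of the steps above, assuming pts is an n x 3 matrix of points (the example data here are made up):
set.seed(42)
pts <- matrix(rnorm(300), ncol = 3)                   # hypothetical n x 3 point cloud

centred <- scale(pts, center = TRUE, scale = FALSE)   # centre the points around 0
ev <- eigen(cov(centred))                             # eigen-decomposition of the covariance matrix
v <- ev$vectors[, 1]                                  # eigenvector with the largest eigenvalue

proj <- centred %*% v                                 # 1D coordinates of the points along v
# (equivalently, prcomp(pts)$rotation[, 1] gives the same direction up to sign)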
I am trying to cluster multidimensional functional objects with the "kmeans" algorithm. What does that mean? I no longer have a single vector per row or individual; instead, each individual has a 3x3 observation matrix. For example, Individual 1 has the following observations:
(x1, x2, x3),(y1,y2,y3),(z1,z2,z3).
The same structure of observations is given for the other individuals. Do you know how I can cluster with "kmeans" using all three observation vectors, and not only one observation vector as "kmeans" clustering normally does?
Would you cluster each observation vector, e.g. (x1, x2, x3), separately and then somehow combine the information? I want to do this with the kmeans() function in R.
Many thanks for your answers!
Using k-means you interpret each observation as a point in an N-dimensional vector space. Then you minimize the distances between your observations and the cluster centers.
Since the data are viewed as points in an N-dimensional space, the actual arrangement of the values does not matter.
You can therefore either tell your k-means routine to use a matrix norm, for example the Frobenius norm, to compute the distances, or flatten your observations from 3-by-3 matrices into 1-by-9 vectors. The Frobenius norm of an NxN matrix is equivalent to the Euclidean norm of the corresponding 1xN^2 vector.
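A minimal sketch of the flattening approach, assuming the observations are stored as a list of 3x3 matrices (the list obs and the choice of 4 clusters are purely illustrative):
set.seed(1)
obs <- replicate(20, matrix(rnorm(9), 3, 3), simplify = FALSE)   # hypothetical 3x3 matrices, one per individual

X <- t(sapply(obs, as.vector))    # flatten each 3x3 matrix into a length-9 row vector

km <- kmeans(X, centers = 4)      # ordinary k-means on the flattened vectors
km$cluster                        # cluster assignment per individual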
Just pass kmeans() a matrix with all three columns and it will calculate the distances in three dimensions, if that is what you are looking for.
In dynamical networks, one may calculate the Hamming distance to compare the similarity between two graphs. Can anyone explain how?
Assuming that the two graphs have equal edge density, what is the difference between the Hamming distance and the expected Hamming distance between two independent Erdos-Renyi random graphs? How does the latter arise?
The Hamming distance measures the minimum number of substitutions required to change (transform) one mathematical 'object' (e.g. a string or binary sequence) into another.
So in network theory it can be defined as the number of differing connections between two networks (it can also be formulated for networks of unequal size and for weighted or directed graphs). In the simple case in which you have two Erdos-Renyi networks (the adjacency matrix has 1 if a node pair is connected and 0 if not), the distance is mathematically defined as follows:
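For two unweighted graphs on the same N nodes with adjacency matrices A and B, one common normalised form is

$$H(A,B) = \frac{1}{N(N-1)} \sum_{i \neq j} \left| a_{ij} - b_{ij} \right|$$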
The values that are subtracted are the entries of the two adjacency matrices. If you take two Erdos-Renyi networks with a wiring probability of 0.5 and compute the Hamming distance between them, you should get a value around 0.5. I generated different Erdos-Renyi graphs and their Hamming distances produced a Gaussian curve centred around 0.5 (as we would expect; see below).
If it is needed I can give you the code I used.
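Not the original code, but a minimal sketch of this experiment using the igraph package:
library(igraph)

# normalised Hamming distance between two graphs on the same node set
hamming <- function(g1, g2) {
  a <- as_adjacency_matrix(g1, sparse = FALSE)
  b <- as_adjacency_matrix(g2, sparse = FALSE)
  n <- nrow(a)
  sum(abs(a - b)) / (n * (n - 1))
}

set.seed(1)
d <- replicate(500, hamming(sample_gnp(100, 0.5), sample_gnp(100, 0.5)))
hist(d)    # values concentrate around 0.5, as described above
mean(d)    # approximately 0.5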