KNN implementation based on distance matrix in R

The problem should be straightforward, but I'm lost anyway...
I have n samples and have already calculated a distance matrix (because I do not want to use Euclidean distance and couldn't find a way to specify another distance measure for, for example, the knn() function).
I then found the two nearest neighbors of each sample (knn_1, knn_2) by looking them up in the distance matrix (as far as I can tell, it's just a matter of ordering each row).
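To be concrete, this is roughly what I do (just a sketch; D is a hypothetical n x n distance matrix with zeros on the diagonal):
k <- 2
# order each row of the distance matrix; position 1 is the point itself (distance 0)
nn <- t(apply(D, 1, function(row) order(row)[2:(k + 1)]))
# nn[i, ] now holds the indices of the k nearest neighbors of sample i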
Now, I do not know any clusters at the start, and I do not need to insert any new data points afterwards.
Basically, my question is: how do I initialize the clusters?
An example to illustrate my problem: let's assume our nearest neighbors (k = 2, n = 4) are as follows:
i = 1: 2,3
i = 2: 3,4
i = 3: 1,3
i = 4: 1,2
How would you find the clusters?
Ideas I had: start by assigning i = 1 to cluster 1 and then assign its nearest neighbors (2, 3) to that cluster as well. But based on that logic, in the end everything would end up in this one cluster, because it just propagates.
So, next idea: start by assigning k elements to k clusters, i.e. assign i = 1 to cluster 1, i = 2 to cluster 2 and i = 3 to cluster 3. But what justification would I have for that? It would make sense for k-means clustering, but not for KNN...
Third idea: put each element into its own cluster and subsequently merge clusters. Sounds good, but I don't know how to do that...
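Is something like the following the right direction for that third idea (just a rough sketch, with D again a hypothetical distance matrix)?
# agglomerative idea: every point starts in its own cluster, then clusters are merged
hc <- hclust(as.dist(D), method = "single")
clusters <- cutree(hc, k = 2)   # does cutting the tree at some k give me my clusters?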
If you know of any R packages that do KNN-based clustering from a distance matrix, that's exactly what I am looking for! I have looked into FastKNN, class, proxy and philentropy (the latter two to calculate distances) but haven't found anything so far.
Thanks so much!

Related

Understanding R subspace package clustering output

Somewhat related to this question.
I am using the R subspace package for subspace clustering. As in the question above, I have failed to use the generic plotting method to plot my resulting clusters in a way native to the package. The next step is to understand the output of the command
CLIQUE(df, xi = 40, tau = 0.2)
That looks something like this:
I understand that the "object" is the row number for the clustered unit, and the subspace indicates the dimensions of the data in which the clustering was done. However I don't see how the clusters in the given dimensions can be distinguished.
The documentation does not contain information on the output. Ideally, my goal is to plot out all the clusters with something like ggplot2, or in 3D, what have you. And I need to know which units are in which clusters in corresponding dimensions.
Additionally, I checked whether the subspaces of any two members of the output list are the same, like this:
cluster_result <- clique_model
equalities_matrix <- matrix(
  0L, nrow = length(cluster_result), ncol = length(cluster_result)
)
for (i in 1:length(cluster_result)) {
  for (j in 1:length(cluster_result)) {
    equalities_matrix[i, j] <- (
      all(cluster_result[[i]]$subspace == cluster_result[[j]]$subspace)
    )
  }
}
sum(equalities_matrix)
The answer is no.
So, here is what my research led to; it might be helpful for someone in the future.
Above, the CLIQUE algorithm output one cluster per set of dimensions, possibly by virtue of the data or the tuning of the algorithm. I added more features to the data, ran it again, and checked again whether the subspaces of any two members of the output list are the same. This time, yes, several were the same, as sum(equalities_matrix) yielded a number larger than the number of features.
In conclusion, the output of the algorithm is a list of lists where each member list represents one cluster in one subspace:
subspace ... the dimensions making the subspace indicated with TRUE,
objects ... members of the cluster.
If there is more than one cluster in a given subspace, there will be more member lists with the same subspace, and different members of the cluster.
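Based on that structure, one way to get the memberships into a plottable form is to flatten the list into a data frame (a sketch only, assuming clique_model is the list returned by CLIQUE with the $subspace and $objects components described above):
cluster_df <- do.call(rbind, lapply(seq_along(clique_model), function(i) {
  cl <- clique_model[[i]]
  data.frame(
    cluster  = i,                                         # index of the member list
    object   = cl$objects,                                # row numbers of the clustered units
    subspace = paste(which(cl$subspace), collapse = "-")  # dimensions spanning the subspace
  )
}))
# cluster_df can then be joined back to the original data and passed to ggplot2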
Here are the papers that helped me understand the theory:
Parsons, Haque, and Liu (2004), "Subspace Clustering for High Dimensional Data: A Review"
Agrawal, Gehrke, Gunopulos, and Raghavan (1998), "Automatic Subspace Clustering of High Dimensional Data for Data Mining Applications"

Working with spatial data: How to find the nearest neighbour of points without replacement?

I am currently working with some forest inventory data.
The data were collected on sample plots whose positions are available as point data (spatial data).
I have two datasets:
dataset dat.1 with n sample plots of species A
dataset dat.2 with k sample plots of species B
with n < k
What I want to do is to match every point of dat.1 with a point of dat.2. The result should be n pairs of points. So n of k plots from dat.2 should be selected.
The criteria for matching are:
spatial distance between a pair of points is as close as possible
one point of dat.2 can only be matched with one point in dat.1 and vice versa. So if there is a pair of points, these points should not be used in any other pair, even if it would be useful in terms of shortest distance. The "occupied" points should not be replaced and should not be used in the further matching process.
I have been looking for a long time for ways to perform this analysis. There are functions like st_nn from 'nngeo' or nn2 from 'RANN' which return the k nearest neighbours of a point. However, these functions cannot exclude points that have already been matched, i.e. they do not support matching without replacement.
The 'MatchIt' package offers nearest-neighbour matching without replacement, but those functions are designed to find the closest match on control variables, not on spatial locations.
Does anyone have an idea for an approach that meets my requirements?
I would really appreciate any hints or suggestions for packages and / or functions that could help me with this issue.
The first thing you should do is create your own distance matrix. The rows should correspond to those in dat.1 and the columns to those in dat.2, and each entry in the matrix is the distance between the plot in the row and the plot in the column. You can do this manually by looping through your datasets and computing the Euclidean (or other) distance between the points. You can also use the match_on function in the optmatch package to do this with the following code:
d <- rbind(dat.1, dat.2)
d$dat <- c(rep(1, nrow(dat.1)), rep(0, nrow(dat.2)))
dist <- optmatch::match_on(dat ~ x.coord + y.coord, data = d,
                           method = "euclidean")
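For reference, the manual looping version described above might look like this (a sketch, assuming both data frames have x.coord and y.coord columns holding the plot positions):
dist_manual <- matrix(NA_real_, nrow = nrow(dat.1), ncol = nrow(dat.2),
                      dimnames = list(rownames(dat.1), rownames(dat.2)))
for (i in seq_len(nrow(dat.1))) {
  for (j in seq_len(nrow(dat.2))) {
    dist_manual[i, j] <- sqrt((dat.1$x.coord[i] - dat.2$x.coord[j])^2 +
                              (dat.1$y.coord[i] - dat.2$y.coord[j])^2)
  }
}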
Once you have a distance matrix in this form, you can supply it to pairmatch in the optmatch package. pairmatch performs K:1 optimal matching without replacement. The matching is optimal in that the sum of the absolute distances between matched pairs in the matched sample is as low as possible. It doesn't guarantee that any one unit gets its nearest neighbour, but it does yield matched samples in which no unit is matched to another unit too far away from it. You can use the controls argument to choose how many dat.2 units should be matched to each dat.1 unit (e.g., controls = 2 to match 2 plots from dat.2 to each plot in dat.1); for a simple 1:1 match you can use
d$pairs <- optmatch::pairmatch(dist)
The output is a factor containing pair membership for each unit. Unmatched units will have a value of NA.
You can also do this in one single step with
d$pairs <- optmatch::pairmatch(dat ~ x.coord + y.coord, data = d,
                               method = "euclidean")
Then you can subset your dataset so only matched plots remain:
matched <- d[!is.na(d$pairs),]
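A quick way to inspect the result (a sketch, using the d and matched objects created above):
split(rownames(matched), matched$pairs)   # each list element is one matched pair of plots
table(matched$dat)                        # plots retained from dat.1 (coded 1) and dat.2 (coded 0)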

Weighted observation frequency clustering using hclust in R

I have a large matrix of 500K observations to cluster using hierarchical clustering. Due to the large size, I do not have the computing power to calculate the full distance matrix.
To overcome this problem, I chose to aggregate my matrix, merging identical observations, which reduces it to about 10K observations. I have the frequency of each row in this aggregated matrix, and I now need to incorporate this frequency as a weight in the hierarchical clustering.
The data is a mixture of numerical and categorical variables for the 500K observations, so I have used daisy (from the cluster package) to calculate the Gower dissimilarity for my aggregated dataset. I want to use hclust from the stats package on the aggregated dataset, but I want to take the frequency of each observation into account. From the hclust help page, the arguments are as follows:
hclust(d, method = "complete", members = NULL)
The description of the members argument is: "NULL or a vector with length size of d. See the 'Details' section." The Details section says: "If members != NULL, then d is taken to be a dissimilarity matrix between clusters instead of dissimilarities between singletons and members gives the number of observations per cluster. This way the hierarchical cluster algorithm can be 'started in the middle of the dendrogram', e.g., in order to reconstruct the part of the tree above a cut (see examples). Dissimilarities between clusters can be efficiently computed (i.e., without hclust itself) only for a limited number of distance/linkage combinations, the simplest one being squared Euclidean distance and centroid linkage. In this case the dissimilarities between the clusters are the squared Euclidean distances between cluster means."
From the above description, I am unsure whether I can assign my frequency weights to the members argument, as it is not clear that this is the purpose of the argument. I would like to use it like this:
hclust(d, method = "complete", members = df$freq)
where df$freq is the frequency of each row in the aggregated matrix; so if a row is duplicated 10 times, this value would be 10.
If anyone can help me that would be great,
Thanks
Yes, this should work fine for most linkages, in particular single, group-average and complete linkage. For Ward's method and similar, you need to take the weights into account correctly yourself.
But even that part is not hard. Just make sure to use the cluster sizes, because you need to pass the distance between two clusters, not between two points. So the matrix should contain the distance between n1 points at location x and n2 points at location y. For min/max/mean linkage these counts disappear or cancel out; for Ward's method you should get an SSQ-like formula.
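Putting it together, a minimal sketch of the whole approach (assuming agg is the aggregated ~10K-row data frame and agg$freq holds the row frequencies):
library(cluster)                     # for daisy()
vars <- setdiff(names(agg), "freq")  # cluster on everything except the frequency column
d <- daisy(agg[, vars], metric = "gower")
hc <- hclust(d, method = "complete", members = agg$freq)
plot(hc)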

Cylindrical Clustering in R - clustering timestamp with other data

I'm learning R and I have to cluster numeric data with a timestamp field.
One of the parameters is a time, and since the data is strictly day-night dependent, I want to take into account the "spherical" nature of this data.
As far as I saw from the manual, libraries such as skmeans cannot handle "cylindrical" data but only "spherical" data (i.e. where all the components are in polar coordinates).
My idea for a suitable solution is the following: I can decompose the HOUR column (0-24) into two columns X, Y and express the time in polar coordinates, such that x^2 + y^2 = 1.
In this way k-means with Euclidean distance should have no problem interpreting the data.
Am I right?
Here is such a mapping of h to m, where h is the time in hours (and fractions of an hour). Then we try kmeans, and at least in this test it seems to work:
h <- c(22, 23, 0, 1, 2, 10, 11, 12)
ha <- 2*pi*h/24
m <- cbind(x = sin(ha), y = cos(ha))
kmeans(m, 2)$cluster # compute cluster assignments via kmeans
## [1] 2 2 2 2 2 1 1 1
k-means should use squared Euclidean distance.
But indeed: projecting your data into a meaningful Euclidean space is an easy way to avoid this kind of problem.
However, be aware that your cluster means will no longer lie on the cylinder. In many cases you can simply rescale a mean back onto the cylinder, but the mean may be 0, in which case no meaningful rescaling is possible.
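For example, with the fit from the code above, the centres can be rescaled back onto the unit circle and converted back into hours (a sketch; a centre very close to 0 would make the rescaling meaningless, as noted):
fit <- kmeans(m, 2)
centers <- fit$centers / sqrt(rowSums(fit$centers^2))   # rescale each centre to radius 1
hours <- (atan2(centers[, "x"], centers[, "y"]) %% (2 * pi)) * 24 / (2 * pi)
hours                                                   # approximate "mean hour" of each cluster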
The other option is kernel k-means. As your desired distance is Euclidean after a data transformation, you can also "kernelize" this transformation, and use kernel k-means. But it may actually be faster to transform your data in your particular case. It will likely only pay off when using much more complex transformations (say, to an infinite dimensional vector space).

Clustering - how to find the nearest to a cluster

Hints I got on a different question puzzled me quite a bit.
I got an exercise, actually part of a larger exercise:
Cluster some data, using hclust (done)
Given a totally new vector, find out which of the clusters from step 1 it is nearest to.
According to the exercise, this should be doable in quite a short time.
However, after weeks I am puzzled whether this can be done at all, as apparently all I really get from hclust is a tree - and not, as I assumed, a number of clusters.
As I suppose I was unclear:
Say, for instance, I feed hclust a matrix which consists of 15 1x5 vectors: 5 times (1 1 1 1 1), 5 times (2 2 2 2 2) and 5 times (3 3 3 3 3). This should give me three quite distinct clusters of size 5; anyone could easily do that by hand. Is there a command I can use to find out from the program that there are 3 such clusters in my hclust object and what they contain?
You'll have to think about what the right metric is to define closeness to the cluster. Building on the example in the hclust doc, here's a way to compute the means for each cluster and then measure the distance between the new data point and the set of means.
# Leave out one state
A <- USArrests
B <- A[rownames(A) != "Kentucky", ]
KY <- A[rownames(A) == "Kentucky", ]
# Put the B data into 10 clusters
hc <- hclust(dist(B), "ave")
memb <- cutree(hc, k = 10)
B$cluster <- memb[rownames(B) == names(memb)]
# Compute the averages over the clusters
M <- aggregate(. ~ cluster, data = B, FUN = mean)
M$cluster <- NULL
# Now add the hold-out state to the set of averages
M <- rbind(M, KY)
# Compute the distance between the clusters and the hold-out state.
# This is a pretty silly way to do this but it works.
D <- as.matrix(dist(as.matrix(M), diag = TRUE, upper = TRUE))["Kentucky", ]
names(D) <- rownames(M)
KYclust <- which.min(D[-length(D)])
memb[memb == KYclust]
# Now cluster the full set of states and compare the results.
hc <- hclust(dist(A), "ave")
memb <- cutree(hc, k = 10)
a <- memb[which(names(memb) == "Kentucky")]
memb[memb == a]
In contrast to k-means, clusters found by hclust can be of arbitrary shape.
The distance to the nearest cluster center therefore is not always meaningful.
Doing a 1-nearest-neighbor style assignment is probably better.
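A sketch of that 1-nearest-neighbor style assignment, reusing the B, KY and B$cluster objects from the example above:
# distance from the held-out state to every clustered state, then take the single closest one
d_to_B <- apply(B[, setdiff(names(B), "cluster")], 1,
                function(row) sqrt(sum((row - unlist(KY))^2)))
nearest <- names(which.min(d_to_B))
B[nearest, "cluster"]   # cluster of Kentucky's single nearest neighbor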
