For some reason, I have to find the 10 to 30 nearest neighbors for each sample in a geo-dataset (lat, lon, and some categorical features; more than 10M rows) under various distance metrics, mostly Haversine distance or Gower distance.
I need a fast implementation/package that returns both the index and the actual distance of the neighbors for each data point. The function get.knn in the FNN package works very well and meets my requirements, but unfortunately it does not support custom distance settings and only provides Euclidean distance.
Is there any other package that can perform kNN, at least with Haversine distance, and output the indices and distances quickly?
Many thanks!
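For reference, a minimal sketch of the get.knn call described above (Euclidean only; mydata and the column names are hypothetical):
library(FNN)
coords <- as.matrix(mydata[, c("lon", "lat")])  # hypothetical data frame of coordinates
nn <- get.knn(coords, k = 30)                   # k nearest neighbors under Euclidean distance
head(nn$nn.index)                               # n x 30 matrix of neighbor row indices
head(nn$nn.dist)                                # n x 30 matrix of the corresponding distances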
I want to compute the Jaccard index of similarity based on continuous quantities. I found that the vegan package can compute the Jaccard index for continuous data from the Bray-Curtis measure through the function vegdist. I was able to do it by choosing a number of sites at random, computing the Jaccard index between all pairs of the chosen sites, and taking the average. This procedure takes a lot of time, especially since I have many scenarios to treat. I wonder if there is a way to do it using the rasters directly (using all the non-NA pixels), without binary maps, in a feasible amount of time.
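For context, a rough sketch of the sampling procedure described above, assuming a sites-by-variables matrix called site_matrix and an arbitrary sample size of 500 (vegdist with method = "jaccard" gives the quantitative Jaccard dissimilarity derived from Bray-Curtis):
library(vegan)
idx <- sample(nrow(site_matrix), 500)                   # random subset of sites (size is illustrative)
d <- vegdist(site_matrix[idx, ], method = "jaccard")    # quantitative Jaccard dissimilarities
mean(1 - d)                                             # average Jaccard similarity over all sampled pairs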
I was trying to use the dbscan package in R to try to cluster some spatial data. The dbscan::dbscan function takes eps and minpts as input. I have a dataframe with two columns longitude and latitude expressed in degree decimals like in the following:
df <- data.frame(lon = c(seq(1,5,1), seq(1,5,1)),
lat = c(1.1,3.1,1.2,4.1,2.1,2.2,3.2,2.4,1.4,5.1))
and I apply the algorithm:
db <- fpc::dbscan(df, eps = 1, MinPts = 2)
Will eps here be defined in degrees, or in some other unit? I'm really trying to understand in which unit this maximum-distance value eps is expressed, so any help is appreciated.
Never use the fpc package, always use dbscan::dbscan instead.
If you have latitude and longitude, you need to choose an appropriate distance function such as Haversine.
The default distance function, Euclidean, ignores the spherical nature of earth. The eps value then is a mixture of degrees latitude and longitude, but these do not correspond to uniform distances! One degree east at the equator is much farther than one degree east in Vancouver.
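As a quick illustration of that distortion, using the geosphere package (points are given as lon/lat pairs; distances come back in meters and the figures are approximate):
library(geosphere)
distHaversine(c(0, 0), c(1, 0))          # one degree east at the equator: roughly 111 km
distHaversine(c(-123, 49), c(-122, 49))  # one degree east at ~49 N (around Vancouver): roughly 73 km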
Even then, you need to pay attention to units. One implementation of Haversine may yield radians, another one meters, and of course someone crazy will work in miles.
Unfortunately, as far as I can tell, none of the R implementations can accelerate Haversine distance. So it may be much faster to cluster the data in ELKI instead (you need to add an index yourself though).
If your data is small enough, you can however use a precomputed distance matrix (dist object) in R. But that will take O(n²) time and memory, so it is not very scalable.
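For modest data sizes, a minimal sketch of that precomputed-matrix route, assuming the geosphere and dbscan packages (df, the column names, and the eps value are illustrative; distHaversine returns meters, so eps is in meters here):
library(geosphere)
library(dbscan)
m  <- distm(df[, c("lon", "lat")], fun = distHaversine)   # full n x n Haversine matrix in meters
db <- dbscan(as.dist(m), eps = 50000, minPts = 2)         # eps = 50 km, purely illustrative
db$cluster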
I have a similarity matrix that I created using Harry, a tool for string similarity, and I wanted to plot some dendrograms from it to see whether I could find some clusters/groups in the data. I'm using the following similarity measures:
Normalized compression distance (NCD)
Damerau-Levenshtein distance
Jaro-Winkler distance
Levenshtein distance
Optimal string alignment distance (OSA)
("For comparison Harry loads a set of strings from input, computes the specified similarity measure and writes a matrix of similarity values to output")
At first (it was more or less my first time using R) I didn't pay too much attention to the documentation of hclust, so I used it with a similarity matrix. I know I should have used a dissimilarity matrix, and I know, since my similarity matrix is normalized to [0,1], that I could just compute dissimilarity = 1 - similarity and then use hclust.
But the groups that I get using hclust with the similarity matrix are much better than the ones I get using hclust with the corresponding dissimilarity matrix.
I tried the proxy package as well, and the same problem occurs: the groups I get aren't what I expected.
To get the dendrograms using the similarity function I do:
plot(hclust(as.dist(""similarityMATRIX""), "average"))
With the dissimilarity matrix I tried:
plot(hclust(as.dist(""dissimilarityMATRIX""), "average"))
and
plot(hclust(as.sim(""dissimilarityMATRIX""), "average"))
From the first call (with the similarity matrix) I get what I believe to be a very good dendrogram, and so I can get very good groups out of it. From the other two calls (with the dissimilarity matrix) I get the same dendrogram, and the groups I can get out of it aren't as good as the ones from the first.
I'm able to say the groups are good or bad because at the moment I have a fairly small volume of data to analyse, so I can check the results very easily.
Does what I'm getting make any sense? Is there something that justifies it? Any suggestion on how to cluster with a similarity matrix? Is there a better way to visualize a similarity matrix than a dendrogram?
You can visualize a similarity matrix using a heatmap (for example, using the heatmaply R package).
You can check if a dendrogram fits by using the dendextend R package function cor_cophenetic (use the most recent version from github).
Clustering which is based on distance can be done using hclust, but also using cluster::pam (k-medoids).
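A minimal sketch putting those suggestions together, assuming sim is the normalized [0,1] similarity matrix from the question (the linkage methods and k are illustrative):
library(heatmaply)
library(dendextend)
library(cluster)
d  <- as.dist(1 - sim)                              # convert similarity to dissimilarity
hc <- hclust(d, method = "average")                 # hierarchical clustering on the dissimilarity
heatmaply(sim)                                      # heatmap view of the similarity matrix
cor_cophenetic(hc, hclust(d, method = "complete"))  # cophenetic correlation between two dendrograms
pam_fit <- pam(d, k = 4)                            # k-medoids directly on the dissimilarity
pam_fit$clustering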
Can I apply DBSCAN with other features in addition to location? And if so, how can it be done through R or Spark?
I tried preparing an R table of 3 columns, one each for latitude, longitude, and score (the feature I want to cluster on in addition to the spatial features). When I run DBSCAN with the following R code, I get the plot below, which suggests that the algorithm makes clusters on each pair of columns: (long, lat), (long, score), (lat, score), ...
My R code:
library(fpc)  # assuming fpc here; the plot(x, data = ...) call below matches fpc::dbscan's plot method
df <- read.table("/home/ahmedelgamal/Desktop/preparedData")
var <- dbscan(df, eps = .013)
plot(x = var, data = df)
and the plot I get:
You are misinterpreting the plot.
You don't get one result per plot, but all plots show the same clusters, only in different attributes.
But you also have the issue that the R version is (to my knowledge) only fast for Euclidean distance.
In your current code, points are neighbors if (lat[i]-lat[j])^2 + (lon[i]-lon[j])^2 + (score[i]-score[j])^2 <= eps^2. This is bad because: (1) latitude and longitude are not Euclidean, you should be using Haversine instead; (2) your additional attribute has a much larger scale, so you pretty much only cluster points with near-zero score; and (3) your score attribute is skewed.
For this problem you should probably be using Generalized DBSCAN: points are similar if their Haversine distance is less than e.g. 1 mile (you want to measure geographic distance here, not coordinates, because of the distortion) and if their score differs by a factor of at most 1.1 (i.e. compare score[y] / score[x], or work in log space). Since you want both conditions to hold, the usual Euclidean DBSCAN implementation is not enough; you need a Generalized DBSCAN that allows multiple conditions. Look for an implementation of Generalized DBSCAN (I believe there is one in ELKI that you may be able to access from Spark), or implement it yourself; it's not very hard to do.
If quadratic runtime is okay for you, you can probably use any distance-matrix-based DBSCAN, and simply "hack" a binary distance matrix:
compute Haversine distances
compute Score dissimilarity
distance = 0 if haversine < distance-threshold and score-dissimilarity < score-threshold, otherwise 1.
run DBSCAN with precomputed distance matrix and eps=0.5 (since it is a binary matrix, don't change eps!)
It's reasonably fast, but needs O(n^2) memory. In my experience, the indexes of ELKI yield a good speedup if you have larger data, and are worth a try if you run out of memory or time.
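A rough sketch of that recipe in R, assuming the geosphere and dbscan packages, strictly positive scores, and purely illustrative thresholds (1 mile is about 1609 m geographically, a factor of 1.1 on the score); df and its column names are hypothetical:
library(geosphere)
library(dbscan)
geo <- distm(df[, c("lon", "lat")], fun = distHaversine)   # Haversine distances in meters
sc  <- outer(df$score, df$score,
             function(a, b) pmax(a, b) / pmin(a, b))       # score ratio (assumes scores > 0)
bin <- ifelse(geo < 1609 & sc < 1.1, 0, 1)                 # 0 = neighbors under both conditions, 1 = not
db  <- dbscan(as.dist(bin), eps = 0.5, minPts = 5)         # binary matrix, so keep eps = 0.5
db$cluster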
You need to scale your data. V3 has a range that is much larger than the ranges of V1 and V2, so the Euclidean distance is dominated by V3 and the spatial columns are effectively ignored.
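For example, a minimal sketch (df is the 3-column table from the question; the eps value after scaling is illustrative and needs re-tuning):
df_scaled <- scale(df)                                  # bring all three columns to comparable ranges
db <- dbscan::dbscan(df_scaled, eps = 0.3, minPts = 5)
db$cluster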
In R you can use all sorts of metrics to build a distance matrix prior to clustering, e.g. binary distance, Manhattan distance, etc...
However, when it comes to choosing a linkage method (complete, average, single, etc.), these linkage methods all use Euclidean distance. This does not seem particularly appropriate if you rely on a different metric to build the distance matrix.
Is there a way (or a library...) to apply other distances to linkage methods when building a clustering tree?
Thanks!
I don't really get your question. For example, suppose I have the following data:
x <- matrix(rnorm(100), nrow=5)
then I can build a distance matrix using dist
##Changing the distance measure
d_e = dist(x, method="euclidean")
d_m = dist(x, method="maximum")
I can then cluster however I want:
##Changing the clustering method
hclust(d_m, method="median")
If you have constructed a matrix that already represents the pairwise distances, use e.g.
hclust(as.dist(mx), method="single")
You might want to try using agnes, rather than hclust, and hand it a distance matrix. There's a nice tutorial on this here:
http://strata.uga.edu/software/pdf/clusterTutorial.pdf
From the tutorial, here's how you would generate and use a distance matrix for clustering:
library(vegan)     # load library for distance functions
library(cluster)   # agnes() lives here
mydata.bray <- vegdist(mydata, method="bray")   # calculates Bray (= Sørenson) distances among samples
mydata.bray.agnes <- agnes(mydata.bray)         # run the cluster analysis
I myself use Prof. Daniel Müllner's fastcluster library, which provides a drop-in replacement for hclust (same interface) but is orders of magnitude faster for large data sets.
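A minimal sketch of swapping it in (loading fastcluster masks stats::hclust with the faster implementation; it accepts any dist object, e.g. the mydata.bray distances from the tutorial above, and the linkage method here is illustrative):
library(fastcluster)                           # hclust() now refers to the fastcluster version
hc <- hclust(mydata.bray, method = "average")  # same interface as stats::hclust, much faster
plot(hc)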