Here's how the literature explains how to compute multidimensional dynamic time warping of two time series:
library(dtw)
x<- cbind(1:10,1)
y<- cbind(11:15,2)
cxdist <- dist(x, y, method = "euclidean")
dtw(cxdist)$distance
In fact, it first computes the cross-distance matrix (here dist is the cross-distance method from the proxy package, which is attached by dtw) and then uses it as input to the dtw function.
I'd like to use multidimensional dynamic time warping in image classification with quite large images.
Image values are stored in a data frame that could look like this:
inDf <- data.frame(matrix(rnorm(60), ncol = 6))
colnames(inDf) <- c('var1t1','var2t1','var1t2','var2t2','var1t3','var2t3')
In this example, there are two variables (var1 and var2) observed three times.
The question is: how can the dtw distance matrix be computed as efficiently as possible?
Here are some thoughts:
- iterate through each value (pixel) of the input image matrices, reshape the flat per-pixel vectors into matrices so the cross distances can be computed, then compute the dtw distance and store it in a dedicated matrix. This is certainly the most computationally intensive solution (a sketch of this brute-force idea follows below).
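For concreteness, here is a minimal sketch of that brute-force idea, assuming each row of inDf holds one pixel; the helper pixel_dtw() and the result dtwMat are purely illustrative names, not part of any package or of the original code:
library(dtw)
# Illustrative helper: dtw distance between two pixels stored as flat rows of
# inDf (columns ordered var1t1, var2t1, var1t2, var2t2, ...)
pixel_dtw <- function(row1, row2, nvar = 2, ntime = 3) {
  # reshape each flat vector into an ntime x nvar matrix (one row per date)
  m1 <- matrix(unlist(row1), nrow = ntime, ncol = nvar, byrow = TRUE)
  m2 <- matrix(unlist(row2), nrow = ntime, ncol = nvar, byrow = TRUE)
  cxdist <- proxy::dist(m1, m2, method = "euclidean")  # cross-distance matrix
  dtw(cxdist, distance.only = TRUE)$distance
}
# pairwise dtw distances between all pixels -- O(n^2) dtw calls, hence costly
n <- nrow(inDf)
dtwMat <- outer(seq_len(n), seq_len(n),
                Vectorize(function(i, j) pixel_dtw(inDf[i, ], inDf[j, ])))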
When dealing with intensive computations, it always makes sense to consider the Rcpp package. If you want to compute the matrix of Euclidean distances faster, you can implement a corresponding Rcpp function:
library(Rcpp)
library(inline)
# Rcpp function for euclidean distance
fastdist <- cxxfunction(signature(x = "matrix", y = "matrix"), plugin = "Rcpp",
body='
Rcpp::NumericMatrix dx(x);
Rcpp::NumericMatrix dy(y);
const int N = dx.nrow();
const int M = dy.nrow();
Rcpp::NumericMatrix res(N, M);
for(int i = 0; i < N; i++){
    for(int j = 0; j < M; j++){
        res(i,j) = sqrt(sum((dx(i,_) - dy(j,_)) * (dx(i,_) - dy(j,_))));
    }
}
return res;
')
It uses Rcpp syntactic sugar to make the code more compact and readable. However, sometimes it's better to have a wrapper function for checking types, coercing, etc. It's not necessary - you can call fastdist directly. But, anyway, a wrapper can look like this:
# Wrapper R function
fast.dist <- function(x, y){
  stopifnot(inherits(x, c("data.frame", "matrix")),
            inherits(y, c("data.frame", "matrix")),
            ncol(x) == ncol(y))
  fastdist(as.matrix(x), as.matrix(y))
}
Now we can turn to the literature example.
library(dtw)
# EXAMPLE 1
x<- cbind(1:10,1)
y<- cbind(11:15,2)
# Check results
all.equal(fast.dist(x,y), dist(x,y,method="euclidean"), check.attributes=F)
# [1] "target is matrix, current is crossdist"
all.equal(fast.dist(x,y), matrix(dist(x,y,method="euclidean"), ncol=nrow(y)))
# [1] TRUE
Note that dist returns a result of class crossdist, so for comparison it should be coerced to a matrix.
And now your primary question - we're generating data first:
# EXAMPLE 2
set.seed(1234)
N <- 100
inDf <- data.frame(matrix(rnorm(6*N), ncol = 6))
colnames(inDf) <- c('var1t1','var2t1','var1t2','var2t2','var1t3','var2t3')
# Extracting variables
var1 <- inDf[,c("var1t1","var1t2","var1t3")]
var2 <- inDf[,c("var2t1","var2t2","var2t3")]
I'm not completely sure about your data structure, but in any case you can always prepare variables according to your needs.
Comparison and benchmarking:
library(rbenchmark)
all.equal(fast.dist(var1,var2), matrix(dist(var1,var2), ncol=N))
# [1] TRUE
benchmark(fast.dist(var1,var2), dist(var1,var2), order="relative")[,1:4]
# test replications elapsed relative
# 1 fast.dist(var1, var2) 100 0.081 1.000
# 2 dist(var1, var2) 100 0.246 3.037
fast.dist is roughly 3 times faster than dist in this case. However, as N grows, the relative speed-up will decrease.
Also note, as was mentioned in the comments, that dtw can compute the distance matrix by itself. Nevertheless, it's more efficient to have the distance matrix precomputed. See the quick test below:
cxdist <- fast.dist(var1,var2)
benchmark(dtw(cxdist)$distance, dtw(var1,var2)$distance, order="relative")[,1:4]
# test replications elapsed relative
# 1 dtw(cxdist)$distance 100 0.476 1.000
# 2 dtw(var1, var2)$distance 100 0.736 1.546
Also, if you're interested only in $distance, you can pass distance.only=TRUE to dtw() - it gives some speed-up.
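If you want to measure that effect on your own data, a call like the following would do it (results not shown here):
# distance.only = TRUE skips the backtracking step, per the dtw documentation,
# which is where the speed-up comes from
benchmark(dtw(cxdist)$distance,
          dtw(cxdist, distance.only = TRUE)$distance,
          order = "relative")[, 1:4]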
Related
I'm working on improving the speed of a function (for a dissimilarity measure) I'm writing which is quite similar mathematically to the Euclidean distance function. However, when I time my function against the one implemented in the daisy function from the cluster package, I find quite a significant difference in speed, with daisy performing much better. Given that (I'm assuming) computing a dissimilarity matrix would require O(n^2 * p) time, due to the need to compare each object to every other object over all variables (where n is the number of objects and p is the number of variables), I find it difficult to understand how the daisy function performs so well (near constant time, from the few experiments I've done) relative to my simple and direct implementation.
I present the code I have used both to implement and to test below. I have tried looking through the R source code for the implementation of the daisy function, but I found it difficult to understand; I found no nested for loop. Any help with understanding why this function performs so fast, and how I could modify my code to reach a similar speed, would be very highly appreciated.
euclidean <- function(df){
  no_obj <- nrow(df)
  dist <- array(0, dim = c(no_obj, no_obj))
  for (i in 1:no_obj){
    for (j in 1:no_obj){
      dist_v <- 0
      if (i != j){
        for (v in 1:ncol(df)){
          dist_v <- dist_v + sqrt((df[i,v] - df[j,v])^2)
        }
      }
      dist[i,j] <- dist_v
    }
  }
  return(dist)
}
data("iris")
tic <- Sys.time()
dst <- euclidean(iris[,1:4])
time <- difftime(Sys.time(), tic, units = "secs")[[1]]
print(paste("Time taken [Euclidean]: ", time))
tic <- Sys.time()
dst <- daisy(iris[,1:4])
time <- difftime(Sys.time(), tic, units = "secs")[[1]]
print(paste("Time taken [Daisy]: ", time))
One option:
euclidean3 <- function(df) {
  require(data.table)
  n <- nrow(df)
  i <- CJ(1:n, 1:n)  # generate all row combinations
  dl <- sapply(df, function(x) sqrt((x[i[[1]]] - x[i[[2]]])^2))  # loop over columns
  dv <- rowSums(dl)  # sum values of columns
  d <- matrix(dv, n, n)  # fill in matrix
  d
}
dst3 <- euclidean3(iris[,1:4])
all.equal(euclidean(iris[,1:4]), dst3) # TRUE
[1] "Time taken [Euclidean3]: 0.008"
[1] "Time taken [Daisy]: 0.002"
The largest bottleneck in your code is selecting data.frame elements inside the loop (df[i,v] and df[j,v]). Converting the data.frame to a matrix first should also improve speed. There may well be a more performant approach on Stack Overflow; you just need to search with the right keywords...
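To make that point concrete, here is a hedged sketch (mine, not part of the original answer) of the same double loop operating on a matrix; it keeps the original per-variable sqrt so the results stay comparable to euclidean():
euclidean_mat <- function(df) {
  m <- as.matrix(df)       # coerce once; m[i, ] is much cheaper than df[i, v]
  n <- nrow(m)
  dist <- matrix(0, n, n)
  for (i in 1:n) {
    for (j in 1:n) {
      if (i != j) {
        dist[i, j] <- sum(sqrt((m[i, ] - m[j, ])^2))
      }
    }
  }
  dist
}
# all.equal(euclidean(iris[, 1:4]), euclidean_mat(iris[, 1:4]))  # should be TRUE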
I have a data.frame of items identified by an integer property ID, which is also the row number of the data.frame.
Each item has a vector of features FP associated to it. The elements of each FP are unique (within that FP). So for instance c(1,2,7) but never c(1,7,7).
The Tanimoto distance between any two ID's is defined as 1 minus the number of unique elements in the intersection of their FP's, divided by the number of unique elements in the union of their FP's.
I need to calculate such distances in the context of a 'maxmin' algorithm. See for instance this blog post.
The most important point to note is that I must NOT compute a full distance matrix (even with the best algorithms it would be unfeasible on the scale of datasets I am working with).
As explained in the above post, the strength of the iterative maxmin picker according to Roger Sayle's method is that one can avoid computing most of the pairwise distances, and instead calculate only the few relevant ones. Hence my question.
Here's what I could come up with so far:
# make a random dataset
set.seed(1234567)
d <- sample(30:45, 1000, replace = T)
dd <- setNames(
  data.frame(
    do.call(rbind,
            sapply(d, function(n) list(sample(as.character(1:(45*2)), n, replace = F)),
                   simplify = F))),
  "FP")
dd["ID"] <- 1:NROW(dd)
# define a pairwise distance function for ID's
distfun <- function(ID1, ID2) {
  FP1 <- dd$FP[[ID1]]
  FP2 <- dd$FP[[ID2]]
  int <- length(intersect(FP1, FP2))
  1 - int / (d[ID1] + d[ID2] - int)
}
# test performance of distance function
x <- sample(dd$ID, 200, replace = F)
y <- sample(dd$ID[!(dd$ID %in% x)], 200, replace = F)
pairwise.dist <- NULL
system.time(
  for (i in x) {
    for (j in y) {
      dij <- distfun(i, j)
      #pairwise.dist <- rbind(pairwise.dist, c(min(i,j), max(i,j), dij))
    }
  }
)
# user system elapsed
# 0.86 0.00 0.86
Question 1: do you think the distance function could be made faster?
I tried making a sparse matrix of the features (ddu.tab in the code below, where I omitted the denominator, which is trivial to compute from the intersection) and defining the distance function as vector operations, but that was much slower (a bit to my surprise, I must say).
ddu <- do.call(rbind, sapply(dd$ID, function(x) {
  data.frame("ID" = x, "FP" = dd$FP[[x]], stringsAsFactors = F)
}, simplify = F))
ddu.tab <- xtabs(~ID+FP, ddu, sparse = T)
system.time(
  for (i in x) {
    for (j in y) {
      dij <- t(ddu.tab[i,]) %*% ddu.tab[j,]
      #pairwise.dist <- rbind(pairwise.dist, c(min(i,j), max(i,j), dij))
    }
  }
)
# user system elapsed
# 32.35 0.03 32.66
Question 2: actually less important than the distance calculation, but if anyone can advise... Updating pairwise.dist with rbind is (apparently) very costly. I don't know if I can do it differently (meaning not adding new elements at each iteration), because in the maxmin application the pairs of ID's whose distances are to be calculated are not known upfront as in this example, and pairwise.dist is continuously read and appended to.
Someone in the past suggested to me that lists may be better than matrices for reading and writing. If that is the case, I could store pairwise.dist as a named list.
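As a hedged sketch of that idea (mine, under the assumption that an upper bound on the number of pairs is known so the list can be preallocated): accumulate rows in a list and bind them once at the end, instead of calling rbind inside the loop.
# grow a preallocated list (cheap) and rbind a single time at the end
pairwise.dist <- vector("list", length(x) * length(y))
k <- 0L
for (i in x) {
  for (j in y) {
    k <- k + 1L
    pairwise.dist[[k]] <- c(min(i, j), max(i, j), distfun(i, j))
  }
}
pairwise.dist <- do.call(rbind, pairwise.dist[seq_len(k)])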
BTW, just FYI, in this specific example the full distance matrix is calculated quite fast:
system.time(ddu.dist <- dist(ddu.tab, method = "binary"))
# user system elapsed
# 0.61 0.00 0.61
which seems to indicate that there is indeed a fast method to calculate binary distances.
If anyone could please advise and/or point me to relevant resources, it would be great.
Thanks!
Not sure about speeding up the distance function itself, but you could replace your double loop, using the tidyverse, with
library(tidyverse)
results <- crossing(x = x, y = y) %>%                  # all x,y combinations
  filter(x < y) %>%                                    # remove duplicates
  mutate(pairwise.dist = map2_dbl(x, y, distfun))      # apply the distance function
Setup
For the purposes of my simulation, I'm generating a list of B=2000 elements, with each element being the output of a permutation procedure in which I first permute the rows of a 200x8000 matrix and for each column, I calculate the Kolmogorov-Smirnov test statistic between the first and second 100 rows (you can think of the first 100 rows as data from one group and the second 100 rows as data from another group).
Question
This process takes a very long time (about 30-40 minutes) to generate the list. Is there a much faster way? In the future, I'd like to increase B to a larger value.
Code
B=2000
n.row=200; n.col=8000
#Generate sample data
samp.dat = matrix(rnorm(n.row*n.col),nrow=n.row)
perm.KS.list = NULL
for (b in 1:B){
  #permute the rows
  perm.dat.tmp = samp.dat[sample(nrow(samp.dat)),]
  #Compute the permutation-based test statistics
  perm.KS.list[[b]] = apply(perm.dat.tmp, 2, function(y) ks.test.stat(y[1:100], y[101:200]))
}
#Modified KS-test function (from base package)
ks.test.stat <- function(x, y){
  x <- x[!is.na(x)]
  n <- length(x)
  y <- y[!is.na(y)]
  n.x <- as.double(n)
  n.y <- length(y)
  w <- c(x, y)
  z <- cumsum(ifelse(order(w) <= n.x, 1/n.x, -1/n.y))
  z <- z[c(which(diff(sort(w)) != 0), n.x + n.y)] #exclude ties
  STATISTIC <- max(abs(z))
  return(STATISTIC)
}
The 1:B loop has several places to optimize, but I agree that the real time consumer is that inner function. Because you're simulating well-behaved bootstrap samples, you can make two simplifying assumptions that the general base function can't:
- There aren't missing values, which obviates the is.na() adjustments.
- The two sides (i.e., x & y) have the same number of elements, so you don't need to count them separately. Instead of splitting y in the loop and then joining the pieces back together in the function (into w), just keep them together. The balanced sides also permit simplifications like removing the ifelse() clause: order(w) <= n produces a bunch of 0/1s, which are rescaled to -1/+1s with integer arithmetic.
The reduced function below saves about 25% of the time. I also add integers, instead of doubles, inside cumsum().
ks.test.stat.balanced <- function(w){
  n <- as.integer(length(w) * .5)
  # z <- cumsum(ifelse(order(w) <= n, 1L, -1L)) / n
  z <- cumsum((order(w) <= n) * 2L - 1L) / n
  # z <- z[c(which(diff(sort(w)) != 0), n + n)] #exclude ties
  return( max(abs(z)) )
}
Ties shouldn't occur often with your gaussian rng, and the diff(sort(.)) is very expensive. If you're willing to remove that protection, the time is reduced by about 65%.
If you move the equation for z into abs(), it saves a little time over all those reps. I kept it separate above, so it's easier to read.
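For illustration only (my sketch, not part of the original answer), the fully collapsed version those two remarks describe would look like this:
# balanced groups, tie protection removed, z folded directly into abs()
ks.test.stat.balanced2 <- function(w){
  n <- as.integer(length(w) * .5)
  max(abs(cumsum((order(w) <= n) * 2L - 1L) / n))
}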
Edit: in case of an unbalanced simulation I'd recommend you:
- still keep out the is.na() handling,
- still pass w,
- still keep as much as possible in integer, not numeric, but
- now include arguments n1 & n2 for the two group sizes.
Also, experiment with precalculating 1/n before cumsum() to avoid a lot of expensive divisions. Try to think of other math-y ways to extract calculations from an inner loop so they occur less frequently. A sketch along those lines follows below.
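As a hedged sketch of that unbalanced variant (mine, following the advice above; tie handling is still omitted):
ks.test.stat.unbalanced <- function(w, n1, n2){
  n1 <- as.integer(n1); n2 <- as.integer(n2)
  # integer steps of +n2 / -n1 keep cumsum() in integer arithmetic;
  # a single division by n1 * n2 replaces per-element divisions
  z <- cumsum(ifelse(order(w) <= n1, n2, -n1)) / (n1 * n2)
  max(abs(z))
}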
My question is one of approach. Using SO, I iterated through methods to create a 3-dimensional array in R (this is my first question; R is a constraint). The use case is that this final array needs to be updated often, but the two input arrays are updated at different intervals. The goal is to minimize the final array's creation time, and ideally the intermediate steps as well.
I know I could reach for Rcpp, and I assign more variables than I need to for readability, but what I am wondering is:
Is there a better approach to completing this operation?
if (!require("geosphere")) install.packages("geosphere")
#simulate real data
dimLength <- 418
latLong <- cbind(rep(40,418),rep(2,418))
potentialChurn <- as.matrix(rep(500,418))
#create 2D matrix
valueMat <- matrix(0,dimLength,dimLength)
value <- potentialChurn
valueTranspose <- t(value)
for (s in 1:dimLength){valueMat[s,] <- value + valueTranspose[s]}
diag(valueMat) <- 0
#create 3D matrix from copying 2D matrix
bigValMat <- array(0,dim=c(dimLength,dimLength,dimLength))
for (d in 1:dimLength){bigValMat[,d,] <- valueMat}
#get crow fly distance between locations, create 2D matrix
distMat <- as.matrix(outer(seq(dimLength), seq(dimLength),
                           Vectorize(function(i, j) distCosine(latLong[i,], latLong[j,]))))
###create 3D matrix by calculating distance between any two locations;
# create 2D matrix from each column in original 2D matrix
# add this column-replicated 2D matrix to the original
bigDistMat <- array(0,dim=c(dimLength,dimLength,dimLength))
for (p in 1:dimLength){
  addCol <- distMat[,p]
  addMatrix <- as.matrix(addCol)
  for (y in 2:dimLength) {addMatrix <- cbind(addMatrix, addCol)}
  bigDistMat[,p,] <- data.matrix(distMat) + data.matrix(addMatrix)
}
#Final matrix calculation
bigValDistMat <- bigValMat / bigDistMat
...as context, this is part of a two-step-ahead forecast policy developed for a class using Barcelona bike-sharing (Bicing) data. The project is over and I am interested in how I could have done better.
In general, if you want to speed up your code, you want to identify the bottlenecks and fix them, as explained here. Putting all your code in a function beforehand would be a good idea.
In your specific case, you use far too many for loops for R code. You need to vectorize your code much more.
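For the profiling step, a minimal sketch (mine; build_arrays() is a hypothetical wrapper around your original code, not something defined in this post):
# profile the wrapped code to see which step dominates the run time
Rprof("profile.out")
bigValDistMat <- build_arrays(latLong, potentialChurn)  # hypothetical wrapper
Rprof(NULL)
head(summaryRprof("profile.out")$by.self)  # time spent per function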
Edit
Now for the long answer:
library(geosphere)  # for distCosine()
#simulate real data, you want them to be random
dimLength <- 418
latLong <- cbind(rnorm(dimLength, 40, 0.5), rnorm(dimLength, 2, 0.5))
potentialChurn <- as.matrix(rnorm(dimLength, 500, 10))
#create 2D matrix, outer is designed for this operation
value <- potentialChurn
valueMat <- outer(value, t(value), FUN = "+")[, 1, 1, ]
diag(valueMat) <- 0
# create 3D matrix from copying 2D matrix, again, avoid for loop
bigValMat <- array(rep(valueMat, dimLength), dim = c(dimLength, dimLength, dimLength))
# and use aperm to permute the dimensions
bigValMat <- aperm(bigValMat, c(1, 3, 2))
#get crow fly distance between locations, create 2D matrix
# other packages are available to compute that kind of distance matrix
# but let's stay in plain R
# wordy but so much faster (and easier to read)
longs1 <- rep(latLong[,1],dimLength)
lats1 <- rep(latLong[,2],dimLength)
latLong1 <- cbind(longs1,lats1)
longs2 <- rep(latLong[,1],each=dimLength)
lats2 <- rep(latLong[,2],each=dimLength)
latLong2 <- cbind(longs2,lats2)
distMat <- matrix(distCosine(latLong1,latLong2),ncol=dimLength)
###create 3D matrix by calculating distance between any two locations;
# same logic as for bigValMat
addMatrix <- array(rep(distMat,dimLength),dim=rep(dimLength,3))
distMat3D <- aperm(addMatrix,c(1,3,2))
bigDistMat <- addMatrix + distMat3D
#Final matrix calculation
bigValDistMat <- bigValMat / bigDistMat
Here it is 25x faster than your initial code (76s -> 3s). It could still be improved further, but you get the idea: avoid for loops, cbind and the like at all costs.
I'm looking for a well-optimized function that accepts an n X n distance matrix and returns an n X k matrix with the indices of the k nearest neighbors of the ith datapoint in the ith row.
I find a gazillion different R packages that let you do KNN, but they all seem to include the distance computations along with the sorting algorithm within the same function. In particular, for most routines the main argument is the original data matrix, not a distance matrix. In my case, I'm using a nonstandard distance on mixed variable types, so I need to separate the sorting problem from the distance computations.
This is not exactly a daunting problem -- I obviously could just use the order function inside a loop to get what I want (see my solution below), but this is far from optimal. For example, the sort function with partial = 1:k when k is small (less than 11) goes much faster, but unfortunately returns only sorted values rather than the desired indices.
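As a quick (untimed) illustration of that partial-sort point, not taken from the original post:
# partial sort returns the k smallest values quickly, but recovering their
# indices still requires a full order() of the positions
set.seed(1)
v <- runif(1e4)
k <- 5
head(sort(v, partial = 1:k), k)  # the k smallest values, in increasing order
order(v)[1:k]                    # their indices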
Try the FastKNN CRAN package (although it is not well documented). It offers the k.nearest.neighbors function, to which an arbitrary distance matrix can be given. Below is an example that computes the matrix you need.
library(FastKNN)
# arbitrary data
train <- matrix(sample(c("a","b","c"), 12, replace = TRUE), ncol = 2)  # n x 2
n <- dim(train)[1]
distMatrix <- matrix(runif(n^2, 0, 1), ncol = n)  # n x n
# matrix of neighbours
k <- 3
nn <- matrix(0, n, k)  # n x k
for (i in 1:n)
  nn[i,] <- k.nearest.neighbors(i, distMatrix, k = k)
Note: you can always search the CRAN package list (Ctrl+F for 'knn') for related functions:
https://cran.r-project.org/web/packages/available_packages_by_name.html
For the record (I won't mark this as the answer), here is a quick-and-dirty solution. Suppose sd.dist is the special distance matrix. Suppose k.for.nn is the number of nearest neighbors.
n = nrow(sd.dist)
knn.mat = matrix(0, ncol = k.for.nn, nrow = n)
knd.mat = knn.mat
for (i in 1:n){
  knn.mat[i,] = order(sd.dist[i,])[1:k.for.nn]
  knd.mat[i,] = sd.dist[i, knn.mat[i,]]
}
Now knn.mat is the matrix with the indices of the k nearest neighbors in each row, and for convenience knd.mat stores the corresponding distances.