Speed up for loop in R, calculating pairwise dissimilarities - r

I'm trying to compute all the pairwise dissimilarities between observations in a data set consisting of only nominal variables using some self-defined dissimilarity metric.
The data looks like this (libraries moved up so the snippet runs top to bottom):
library(data.table)
library(dplyr)
library(stringr)

set.seed(3424)
(mydata <- data.table(paste(sample(letters[1:5], 5, replace=T),
                            sample(LETTERS[1:5], 5, replace=T),
                            sep = ","),
                      paste(sample(LETTERS[1:5], 5, replace=T),
                            sample(LETTERS[1:5], 5, replace=T),
                            sep = ","),
                      paste(sample(letters[1:5], 5, replace=T),
                            sample(letters[1:5], 5, replace=T),
                            sep = ",")))
    V1  V2  V3
1: a,A E,E b,b
2: e,D C,A d,d
3: d,B B,C d,d
4: c,B A,E b,d
5: a,B C,D d,a
metric <- function(pair){
  intersection <- 0
  union <- 0
  for(i in 1:ncol(mydata)){
    A <- pair[[1]][[i]]
    B <- pair[[2]][[i]]
    if(sum(is.na(A), is.na(B)) == 1)
      union = union + 1
    if(sum(is.na(A), is.na(B)) == 0){
      intersection <- intersection + length(intersect(A, B))/length(union(A, B))
      union = union + 1
    }
  }
  1 - intersection/union
}
diss <- matrix(nrow = nrow(mydata), ncol = nrow(mydata))
for(i in 1:(nrow(mydata)-1)){
  print(i) ## to check progress ##
  for(j in (i+1):nrow(mydata)){
    pair <- rbind(mydata[i], mydata[j])
    diss[j, i] <- apply(pair, 1, function(x) strsplit(x, split=",")) %>% metric()
  }
}
These loops work, but get really slow when mydata has 1000+ rows and 100+ columns.
The metric I use here is the Jaccard index, but a nested version: since each element in the data is not a single value, instead of treating each pair of levels as either a match (0) or different (1), I also apply Jaccard when comparing the levels themselves.
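To make the per-column contribution concrete (a small illustration of the idea, not code from the question): comparing the split cells "a,A" and "a,B",
A <- c("a", "A")
B <- c("a", "B")
length(intersect(A, B)) / length(union(A, B))
# [1] 0.3333333
so that column adds 1/3 to the intersection total while the union counter increases by 1.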
Update:
Some context about my data, not the toy data I made up.
Each row represents a query, i.e. "SELECT ... FROM ... WHERE ...".
Each column contains part of the information in the query, i.e. the 1st column contains everything between "SELECT" and "FROM", the 2nd column contains what's between "FROM" and "WHERE", etc.
There are 100 columns and 400 rows; I don't know why there are so many columns, though.
The number of elements in one cell can be quite arbitrary: some cells contain very long lists of values, while many are actually NA. E.g.
SELECT
1: NA
2:p1.PLAYERID,f1.PLAYERNAME,p2.PLAYERID,f2.PLAYERNAME
3: PLAYER f1,PLAYER f2,PLAYS p1
4: NA
5: NA
6: c1.table_name t1,c2.table_name t2
7: NA
...
400: asd,vrht,yuetr,wxeq,yiknuy,sce,ercher

You can gain some speed pretty easily by doing less work. If you are only interested in pairwise comparisons, you only need to do N choose 2 comparisons instead of N^2. You can implement that with f2() below.
set.seed(3424)
(mydata <- data.table(sample(letters[1:5], 50, replace = T),
                      sample(LETTERS[1:5], 50, replace = T),
                      sample(1:3, 50, replace = T)))
mydf <- data.frame(mydata)

f1 <- function(){
  diss <- matrix(nrow = nrow(mydata), ncol = nrow(mydata))
  for(i in 1:(nrow(mydata)-1)){
    print(i) ## to check progress ##
    for(j in (i+1):nrow(mydata)){
      pair <- rbind(mydata[i], mydata[j])
      diss[j, i] <- apply(pair, 1, function(x) strsplit(x, split=",")) %>% metric()
    }
  }
  return(diss)
}

f2 <- function(){
  met <- NULL
  A <- NULL
  B <- NULL
  choices <- choose(nrow(mydf), 2)
  combs <- combn(nrow(mydf), 2)
  for(i in 1:choices) {
    print(i)
    pair <- rbind(mydf[combs[1,i],], mydf[combs[2,i],])
    met[i] <- apply(pair, 1, function(x) strsplit(x, split=",")) %>% metric()
    A[i] <- mydf[combs[1,i], 1]
    B[i] <- mydf[combs[2,i], 1]
  }
  results <- data.frame(A, B, met)
  return(results)
}
library(microbenchmark)
microbenchmark(f1(), f2(), times = 10)
Unit: milliseconds
 expr  min     lq   mean median     uq  max neval
 f1() 1381 1391.2 1416.8 1417.6 1434.9 1456    10
 f2()  907  923.6  942.3  946.9  948.9 1008    10
It is a little faster, but not mind-blowingly so. My guess is that some more work needs to be done on the metric function you define. I tried to look at it and determine a way to vectorize it, but I could not find a way. If that can be done this problem would be trivial. For example, I have a similar program that measures pairwise cosine similarity between ~400 vectors of length ~5000. It has to make 400 choose 2 = 79800 comparisons and the entire program takes about 6 seconds to run.
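For what it's worth, here is a minimal sketch of that kind of vectorized pairwise cosine similarity (the matrix is made up for illustration and is not from the question):
vecs <- matrix(rnorm(5000 * 400), nrow = 5000, ncol = 400)  # 400 vectors of length 5000
cp <- crossprod(vecs)                  # all pairwise dot products in one call
norms <- sqrt(diag(cp))
cosim <- cp / outer(norms, norms)      # 400 x 400 cosine similarity matrix
All 400 choose 2 = 79800 comparisons collapse into a few matrix operations, which is why it finishes in seconds. If metric() could be rewritten in a similar matrix form, this problem would become trivial too.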

It's similar to the original, but I made a few changes. It runs more quickly; I didn't bother timing it carefully, but 1000 rows with this code feels about like 100 rows with the original.
The main changes:
- remove the rbind by passing the two rows into mapply as separate variables
- calculate the union denominator once (length(p1)) instead of adding to it every time (union <- union + 1)
- split the strings all at once, outside of the loops
- check the length of the intersection before calculating the union and adding to intersection (lenint > 0)
Hopefully something here helps your case.
rownum <- 1000
(mydata <- data.table(paste(sample(letters[1:5], rownum, replace=T),
                            sample(LETTERS[1:5], rownum, replace=T),
                            sep = ","),
                      paste(sample(LETTERS[1:5], rownum, replace=T),
                            sample(LETTERS[1:5], rownum, replace=T),
                            sep = ","),
                      paste(sample(letters[1:5], rownum, replace=T),
                            sample(letters[1:5], rownum, replace=T),
                            sep = ",")))

allsplit <- lapply(mydata, strsplit, split = ',')
allsplitdf <- cbind(allsplit[['V1']], allsplit[['V2']], allsplit[['V3']])
allsplitlist <- split(allsplitdf, 1:nrow(allsplitdf))
metric2 <- function(p1, p2){
  intersection <- 0   # initialize once, outside the loop, so it accumulates across columns
  for(i in seq_along(p1)){
    A <- p1[[i]]
    B <- p2[[i]]
    if(!any(is.na(A), is.na(B))){
      lenint <- length(intersect(A, B))
      if(lenint > 0){
        intersection <- intersection + lenint/length(union(A, B))
      }
    }
  }
  1 - intersection/length(p1)
}
diss <- matrix(nrow = nrow(mydata), ncol = nrow(mydata))
for(i in 1:(nrow(mydata)-1)){
  print(i) ## to check progress ##
  for(j in (i+1):nrow(mydata)){
    diss[j, i] <- mapply(metric2, p1 = allsplitlist[i], p2 = allsplitlist[j])
  }
}
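If you also want to drop the explicit nested loop, something along these lines should work (an untimed sketch that reuses allsplitlist and metric2 from above):
combs <- combn(nrow(mydata), 2)
vals <- mapply(function(i, j) metric2(allsplitlist[[i]], allsplitlist[[j]]),
               combs[1, ], combs[2, ])
diss <- matrix(NA_real_, nrow(mydata), nrow(mydata))
diss[lower.tri(diss)] <- vals  # combn and lower.tri enumerate the pairs in the same order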

When constructing an algorithm it is important to keep in mind the speed/space trade-off. What I mean by that is that by storing your data under a different schema you can usually eliminate for loops; however, data stored under the new schema will generally occupy more space.
The reason your example is slow is that, among other things, you are looping over all the rows and all the columns of your data. With a 1000x100 data.frame that is 1e5 computations. One way to eliminate the loop over your rows is to store your data a bit differently. For example, I use the expand.grid command to combine all pairwise comparisons within the same data.frame, dTMP. I then strip the comma and let each member of the pair occupy its own column (i.e. "a,A", which is originally contained in one variable, becomes "a" and "A", entries in two separate variables). In general, reshaping data into different formats is quick, or at least quicker than looping over each row. This reshaping does, however, generate a data set which takes up more RAM. In your case the data.frame will be 1e6x4, which is large, but not so large as to clog up all your RAM.
The reward for doing all that work is that it is now trivial and extremely fast to obtain the intersect and union variables. You will of course still need to loop over each column; however, we've eliminated one loop simply by rearranging your data. It would be possible to remove the loop over the columns as well by using a 3D array, but such an array would not fit into memory.
f3 <- function(){
  intersection <- 0
  for(v in names(mydata)){
    dTMP <- expand.grid(mydata[[v]], mydata[[v]], stringsAsFactors = FALSE)[, c(2,1)]

    #There is likely a more elegant way to do this.
    dTMP <-
      dTMP$Var2 %>%
      str_split(., ",") %>%
      unlist(.) %>%
      matrix(., ncol = 2, nrow = nrow(dTMP), byrow = TRUE) %>%
      cbind(., dTMP$Var1 %>%
              str_split(., ",") %>%
              unlist(.) %>%
              matrix(., ncol = 2, nrow = nrow(dTMP), byrow = TRUE)) %>%
      as.data.frame(., stringsAsFactors = FALSE)
    names(dTMP) <- c("v1", "v2", "v3", "v4")

    intersect <- rowSums(dTMP[, c("v1", "v2")] == dTMP[, c("v3", "v4")])
    intersect <- ifelse(rowSums(dTMP[, c("v1", "v2")] == dTMP[, c("v4", "v3")]) != 0,
                        rowSums(dTMP[, c("v1", "v2")] == dTMP[, c("v4", "v3")]),
                        intersect)
    intersect <- ifelse(dTMP[, "v1"] == dTMP[, "v2"], 1, intersect)

    MYunion <- sapply(as.data.frame(t(dTMP)), function(x) n_distinct(x))

    intersection <- intersection + intersect/MYunion
  }
  union <- ncol(mydata)
  return(matrix(1 - intersection/union, nrow = nrow(mydata), ncol = nrow(mydata),
                byrow = TRUE)) #This is the diss matrix, I think. Double-check that I got the rows and columns correct.
}
Update
I'm still having trouble replicating your results exactly, but I believe the newly updated code is very close: with set.seed(3424) there is only one cell, (2,1), of the dissimilarity matrix in which our results differ. The bottleneck in the current iteration is that I need a sapply to obtain MYunion. If you can think of a faster way to do this, you'll get big speed gains. See this SO post for suggestions: Efficient Means of Identifying Number of Distinct Elements in a Row
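One vectorized possibility for the specific four-column layout of dTMP (a sketch, assuming every cell splits into exactly two values as in the toy data): count how many of v2, v3, v4 duplicate an earlier value in the same row and subtract from 4.
dup2 <- dTMP$v2 == dTMP$v1
dup3 <- dTMP$v3 == dTMP$v1 | dTMP$v3 == dTMP$v2
dup4 <- dTMP$v4 == dTMP$v1 | dTMP$v4 == dTMP$v2 | dTMP$v4 == dTMP$v3
MYunion <- 4 - dup2 - dup3 - dup4   # row-wise number of distinct values, no sapply needed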

Related

Two Random Numbers Without Repeating

I'm looking to make a set of two random numbers (e.g., [1,2], [3,12]) with the first number between 1-12, and the second between 1-4. I know how to sample the two numbers independently using:
sample(1:12, 1, replace = T)
sample(1:4, 1, replace = T)
but don't know how to create a system to determine if the pairing of the two numbers has already been rolled, and if so, roll again. Any tips!?
Thanks :)
While this doesn't scale happily (in case you need large-scale simulation), you can do this:
set.seed(42)
di2 <- sample(setdiff(1:4, di1 <- sample(1:12, size = 1)), size = 1)
c(di1, di2)
# [1] 1 2
The inner (di1) assignment draws the first number from 1:12, so far so good.
We then setdiff di1 out of 1:4, so that the second draw only has candidates that are not equal to di1;
the outer (di2) assignment samples from 1:4 without di1, if di1 happened to fall within 1-4.
While not an authoritative proof of correctness,
rand <- replicate(100000, local({ di2 <- sample(setdiff(1:4, di1 <- sample(1:12, size=1)), size = 1); c(di1, di2); }))
dim(rand)
# [1] 2 100000
any(rand[1,] == rand[2,])
# [1] FALSE
Are you looking for something like:
library(tidyverse)
expand.grid(1:12, 1:4) %>%
  as.data.frame() %>%
  slice_sample(n = 5, replace = FALSE)
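Because every row of expand.grid(1:12, 1:4) is a distinct pair, sampling rows without replacement can never return the same pair twice. A quick base-R check of that claim (object names here are my own):
pairs <- expand.grid(first = 1:12, second = 1:4)
drawn <- pairs[sample(nrow(pairs), 5), ]
anyDuplicated(drawn)
# [1] 0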

How to quantify the frequency of all possible row combinations of a binary matrix in R in a more efficient way?

Let's assume I have a binary matrix with 24 columns and 5000 rows.
The columns are parameters (P1 - P24) of 5000 subjects. The parameters are binary (0 or 1).
(Note: my real data can contain as many as 40,000 subjects.)
m <- matrix(, nrow = 5000, ncol = 24)
m <- apply(m, c(1,2), function(x) sample(c(0,1),1))
colnames(m) <- paste("P", c(1:24), sep = "")
Now I would like to determine all possible combinations of the 24 measured parameters:
comb <- expand.grid(rep(list(0:1), 24))
colnames(comb) <- paste("P", c(1:24), sep = "")
The final question is: How often does each of the possible row combinations from comb appear in matrix m?
I managed to write code for this and create a new column in comb to hold the counts. But my code appears to be really slow and would take an estimated 328 days to run. Therefore the code below only considers the first 20 combinations:
comb$count <- 0
for (k in 1:20){ # considers only the first 20 combinations of comb
  for (i in 1:nrow(m)){
    if (all(m[i,] == comb[k, 1:24])){
      comb$count[k] <- comb$count[k] + 1
    }
  }
}
Is there a computationally more efficient way to do this, so I can count all combinations in a reasonable time?
Thank you very much for your help in advance.
data.table is fast at this type of operation:
m <- matrix(, nrow = 5000, ncol = 24)
m <- apply(m, c(1,2), function(x) sample(c(0,1),1))
colnames(m) <- paste("P", c(1:24), sep = "")
comb <- expand.grid(rep(list(0:1), 24))
colnames(comb) <- paste("P", c(1:24), sep = "")
library(data.table)
data_t = data.table(m)
ans = data_t[, .N, by = P1:P24]
dim(ans)
head(ans)
The core of this is by = P1:P24, which means group by all the columns, and .N, which counts the number of records in each group.
I used this as inspiration - How does one aggregate and summarize data quickly? - and the data.table intro vignette: https://cran.r-project.org/web/packages/data.table/vignettes/datatable-intro.html
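If you also need the zero counts for combinations that never occur in m, one option (a sketch, assuming the full comb from the question fits in memory) is to join the grouped counts back onto comb:
comb_dt <- data.table(comb)
res <- ans[comb_dt, on = paste0("P", 1:24)]  # left join onto all 2^24 combinations
res[is.na(N), N := 0]                        # combinations never seen get a count of 0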
If all you need is the combinations that occur in the data and how many times, this will do it:
m2 <- apply(m, 1, paste0, collapse="")
m2.tbl <- xtabs(~m2)
head(m2.tbl)
m2
# 000000000001000101010010 000000000010001000100100 000000000010001110001100 000000000100001000010111 000000000100010110101010 000000000100101000101100
# 1 1 1 1 1 1
You can use apply to paste the values in each row together and table to count the frequencies.
table(apply(m, 1, paste0, collapse = '-'))

Is there a quick way in R to predict the class outcome of an observation from a nearest neighbours model from RANN?

I am trying to identify the most probable group that an observation belongs to, for several thousand large datasets. It is possible that some of the data is incorrectly classified and I am trying to work out the most likely "true" value. I have tried to use knn3 from the caret package but the predictions take too long to compute. In researching alternatives I came across the nn2 function from the RANN package, which performs a nearest-neighbour search that is significantly faster than k-nearest neighbours.
library(RANN)
library(tidyverse)
iris.scaled <- iris %>%
  mutate_if(is.numeric, scale)
iris.nn2 <- nn2(iris.scaled[1:4])
The result of the nn2 function is two lists, one of indices and one of distances. I want to use the indices table to work out the most likely grouping of each observation; however, it returns the row number of the observation and not its group. I need to replace this with the group it belongs to (in this case, the Species column).
distance.index <- iris.nn2$nn.idx[,-1]
target = iris.scaled$Species
I have removed the first column as the first nearest neighbour is always the observation itself.
matrix(target[distance.index[,]], nrow = nrow(distance.index), ncol = ncol(distance.index))
This code gives me the output I want, but is there a tidier way of creating this table and then calculating the most common response for each row? Speed of calculation is the key.
Your scaling can be a real bottleneck when you have more columns (tested on 200 x 22216 gene expression matrix). My version might not seem that impressive with the iris dataset, but on the larger dataset I get 1.3 sec vs. 32.8 sec execution time.
Using tabulate instead of table gives an additional improvement, which is dwarfed, however, by the matrix scaling.
I used a custom scale function here, but using base::scale on a matrix would already be a major improvement.
I also addressed the issue raised by M. Papenberg of "self" not being considered the nearest neighbor by setting those to NA.
invisible(lapply(c("tidyverse", "matrixStats", "RANN", "microbenchmark", "compiler"),
                 require, character.only = TRUE))
enableJIT(3)

# faster column scaling (modified from https://www.r-bloggers.com/author/strictlystat/)
colScale <- function(x, center = TRUE, scale = TRUE, rows = NULL, cols = NULL) {
  if (!is.null(rows) && !is.null(cols)) {x <- x[rows, cols, drop = FALSE]
  } else if (!is.null(rows)) {x <- x[rows, , drop = FALSE]
  } else if (!is.null(cols)) x <- x[, cols, drop = FALSE]
  cm <- colMeans(x, na.rm = TRUE)
  if (scale) csd <- matrixStats::colSds(x, center = cm, na.rm = TRUE) else
    csd <- rep(1, length = length(cm))
  if (!center) cm <- rep(0, length = length(cm))
  x <- t((t(x) - cm) / csd)
  return(x)
}

# your posted version (mostly):
oldv <- function(){
  iris.scaled <- iris %>%
    mutate_if(is.numeric, scale)
  iris.nn2 <- nn2(iris.scaled[1:4])
  distance.index <- iris.nn2$nn.idx[,-1]
  target = iris.scaled$Species
  category_neighbours <- matrix(target[distance.index[,]], nrow = nrow(distance.index),
                                ncol = ncol(distance.index))
  class <- apply(category_neighbours, 1, function(x) {
    x1 <- table(x)
    names(x1)[which.max(x1)]})
  cbind(iris, class)
}

## my version:
myv <- function(){
  iris.scaled <- colScale(data.matrix(iris[, 1:(dim(iris)[2]-1)]))
  iris.nn2 <- nn2(iris.scaled)
  # set self neighbors to NA
  iris.nn2$nn.idx[iris.nn2$nn.idx - seq_len(dim(iris.nn2$nn.idx)[1]) == 0] <- NA
  # match up categories
  category_neighbours <- matrix(iris$Species[iris.nn2$nn.idx[,]],
                                nrow = dim(iris.nn2$nn.idx)[1], ncol = dim(iris.nn2$nn.idx)[2])
  # turn category_neighbours into numeric for tabulate
  cn <- matrix(as.numeric(factor(category_neighbours, exclude = NULL)),
               nrow = dim(iris.nn2$nn.idx)[1], ncol = dim(iris.nn2$nn.idx)[2])
  cnl <- levels(factor(category_neighbours, exclude = NULL))
  # tabulate frequencies and match up with factor levels
  class <- apply(cn, 1, function(x) {
    cnl[which.max(tabulate(x, nbins = length(cnl))[!is.na(cnl)])]})
  cbind(iris, class)
}

microbenchmark(oldv(), myv(), times = 100L)
#> Unit: milliseconds
#>    expr       min        lq      mean    median        uq      max neval cld
#>  oldv() 11.015986 11.679337 12.806252 12.064935 12.745082 33.89201   100   b
#>   myv()  2.430544  2.551342  3.020262  2.612714  2.691179 22.41435   100  a
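As a small follow-up (not part of the benchmark above): if the goal is to spot possibly misclassified observations, you can compare the recorded class with the neighbour-majority class returned by either function, e.g.
res <- myv()
suspect <- which(as.character(res$Species) != res$class)
res[suspect, ]   # rows whose recorded species disagrees with their neighbours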

for each row in a data frame, find whether there is a "close" row in another data frame

I have the following data frame:
library(dplyr)
set.seed(42)
df <- data_frame(x = sample(seq(0, 1, 0.1), 5, replace = T),
                 y = sample(seq(0, 1, 0.1), 5, replace = T),
                 z = sample(seq(0, 1, 0.1), 5, replace = T))
For each row in df, I would like to find out whether there is a row in df2 which is close to it ("neighbor") in all columns, where "close" means that it is not different by more than 0.1 in each column.
So for instance, a proper neighbor to the row (1, 0.5, 0.5) would be (0.9, 0.6, 0.4).
The second data set is
set.seed(42)
df2 <- data_frame(x = sample(seq(0, 1, 0.1), 10, replace = T),
                  y = sample(seq(0, 1, 0.1), 10, replace = T),
                  z = sample(seq(0, 1, 0.1), 10, replace = T))
In this case there is no "neighbor", so I'm supposed to get FALSE for all rows of df.
My actual data frames are much bigger than this (dozens of columns and hundreds of thousands of rows), so the naming has to be very general rather than "x", "y" and "z".
I have a sense that this can be done using mutate and funs, for example I tried this line:
df <- df %>% mutate_all(funs(close = (. <= df2(, .)+0.1) & (. >= df2(, .)-0.1))
But got an error.
Any ideas?
You can use the package fuzzyjoin:
library(fuzzyjoin)
# adding two rows that match
df2 <- rbind(df2, df[1:2,] + 0.01)
df %>%
  fuzzy_left_join(df2, match_fun = function(x, y) y < x + 0.1 & y > x - 0.1) %>%
  mutate(found = !is.na(x.y)) %>%
  select(-4:-6)
# # A tibble: 5 x 4
#     x.x   y.x   z.x found
#   <dbl> <dbl> <dbl> <lgl>
# 1   1     0.5   0.5 TRUE
# 2   1     0.8   0.7 TRUE
# 3   0.3   0.1   1   FALSE
# 4   0.9   0.7   0.2 FALSE
# 5   0.7   0.7   0.5 FALSE
Find more info here: Joining/matching data frames in R
The machine learning approach to finding a close entry in a multi-dimensional dataset is Euclidean distance.
The general approach is to normalize all the attributes: make the range of each column the same, zero to one or negative one to one. That equalizes the effect of columns with large and small values. In more advanced approaches one would also center the adjusted column values on zero. The test criterion is scaled the same way.
The next step is to calculate the distance of each observation from its neighbors. If the data set is small or computing time is cheap, calculate the distance from every observation to every other. The Euclidean distance from observation 1 (row 1) to observation 2 (row 2) is sqrt((X1 - X2)^2 + (Y1 - Y2)^2 + ...). Choose your criterion and select.
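As a rough illustration of that general recipe (a sketch only, with a hand-rolled 0-1 scaling function; it is not the solution to this specific question):
norm01 <- function(x) (x - min(x)) / (max(x) - min(x))  # rescale a column to [0, 1]
a <- apply(df, 2, norm01)
b <- apply(df2, 2, norm01)
# squared Euclidean distances between every row of a and every row of b
d2 <- outer(rowSums(a^2), rowSums(b^2), "+") - 2 * a %*% t(b)
euclid <- sqrt(pmax(d2, 0))  # pmax guards against tiny negative rounding errors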
In your case, the selection criterion is simpler: two observations are close if no attribute differs by more than 0.1 from the other observation. I assume that df and df2 have the same number of columns in the same order, and that close observations are relatively rare. My approach is that once we discover a pair is distant, we discontinue the investigation. If you have hundreds of thousands of rows, you will likely exhaust memory if you try to calculate all the combinations at the same time.
~~~~~
You have a big problem. If your data sets df and df2 are one hundred thousand rows each, with four dozen columns, the machine needs to do 4.8e+11 comparisons. The scorecard at the end will have 1e+10 results (close or distant). I started with some subsetting to do the comparisons, with tearful results: R wanted matrices of the same size, and the kluge I devised was unsuccessful. Therefore I regressed to the days of FORTRAN and did it with loops. With the loop approach, you can subset the problem and finish without smoking your machine.
From the sample data, I did the comparisons by hand, all 150 of them: nrow(df) * nrow(df2) * ncol(df). There were no close observations in the sample data by the definition you gave.
Here is how I intended to present the results before transferring the results to a new column in df.
dfclose <- matrix(TRUE, nrow = nrow(df), ncol = nrow(df2))
dfclose # Have a look
This matrix describes the distance from each observation in df (rows in dfclose) to each observation in df2 (columns in dfclose). If close, the entry is TRUE.
Here is the repository of the result of the distance measures:
dfdist <- matrix(0, nrow = nrow(df), ncol = nrow(df2))
dfdist # have a look; it's the same format, but with numbers
We start with the assumption that all observations in df are close to those in df2: the total distance is zero. To that we add the Manhattan distance, column by column. When the total Manhattan distance is greater than .1, the pair is no longer close, and we needn't evaluate any more.
closeCriterion <- function(origin, dest) {
  manhattanDistance <- abs(origin - dest)
  #print(paste("manhattanDistance =", manhattanDistance))
  if (manhattanDistance < .1) ret <- 0 else ret <- 1
}

convertScore <- function(x) if (x > 0) FALSE else TRUE

for (j in 1:ncol(df)) {
  print(paste("col =", j))
  for (i in 1:nrow(df)) {
    print(paste("df row =", i))
    for (k in 1:nrow(df2)) {
      # print(paste("df2 row (and dflist column) =", k))
      distantScore <- closeCriterion(df[i,j], df2[k,j])
      #print(paste("df and dfdist row =", i, " df2 row (and dflist column) =", k, " distantScore = ", distantScore))
      dfdist[i,k] <- dfdist[i,k] + distantScore
    }
  }
}
dfdist # have a look at the numerical results
dfclose <- matrix(lapply(dfdist, convertScore), ncol = nrow(df2))
I wanted to see what the process would look like at scale.
set.seed(42)
df <- matrix(rnorm(3000), ncol = 30)
set.seed(42)
df2 <-matrix(rnorm(5580), ncol = 30)
dfdist <- matrix(0, nrow = nrow(df), ncol = nrow(df2))
Then I ran the code block to see what would happen.
~ ~ ~
You might consider the problem definition. I ran the model several times, changing the criterion for closeness. If the entry in each of three dozen columns in df2 has a 90% chance of matching its correspondent in df, the row only has a 2.2% chance of matching. The example data is not such a good test case for the algorithm.
Best of luck
Here's one way to calculate that column without fuzzyjoin:
library(tidyverse)
found <-
  expand.grid(row.df = seq(nrow(df)),
              row.df2 = seq(nrow(df2))) %>%
  mutate(in.range = pmap_lgl(., ~ all(abs(df[.x,] - df2[.y,]) <= 0.1))) %>%
  group_by(row.df) %>%
  summarise_at('in.range', any) %>%
  select(in.range)

Fastest way to apply function to all pairwise combinations of columns

Given a data frame or matrix with an arbitrary number of rows and columns, what is the fastest way to apply a function to all pairwise combinations of columns?
For example, if I have a data table:
library(data.table)
N <- 3
K <- 3
data <- data.table(id = seq(N))
for(k in seq(K)) {
  data[[k]] <- runif(N)
}
And if I want to compute the simple difference between all pairs of columns, I could loop (or lapply) over columns:
differences = data.table(foo = seq(N))
for(var1 in names(data)) {
  for(var2 in names(data)) {
    if (var1 == var2) next
    if (which(names(data) == var1) > which(names(data) == var2)) next
    combo <- paste0(var1, var2)
    differences[[combo]] <- data[[var1]] - data[[var2]]
  }
}
But as K gets larger, this becomes absurdly slow.
One solution I've considered is to make two new data tables using combn and subtract them:
a <- data[,combn(colnames(data),2)[1,],with=F]
b <- data[,combn(colnames(data),2)[2,],with=F]
differences <- a-b
But as N and K get larger, this becomes very memory intensive (though faster than looping).
It seems to me that the outer product of the matrix with itself is probably the best way to go, but I can't piece it together. This is especially hard if I want to apply an arbitrary function (RMSE for example), instead of just the difference.
What's the fastest way?
If it is necessary to have the data in a matrix first, you can do the following:
library(data.table)
data <- matrix(runif(300*500), nrow = 300, ncol = 500)
data.DT <- setkey(data.table(c(data), colId = rep(1:500, each = 300),
                             rowId = rep(1:300, times = 500)), colId)
diff.DT <- data.DT[
  , {
    ccl <- unique(colId)
    vv <- V1
    data.DT[colId > ccl, .(col2 = colId, V1 - vv)]
  }
  , keyby = .(col1 = colId)
]
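To address the "arbitrary function" part of the question, one simple (if not the absolute fastest) pattern is to loop over the column pairs from combn and apply whatever function you like; a sketch using the 300 x 500 matrix above and an RMSE helper of my own naming:
pair_idx <- combn(ncol(data), 2)   # 2 x 124750 matrix of column index pairs
rmse <- function(x, y) sqrt(mean((x - y)^2))
res <- apply(pair_idx, 2, function(p) rmse(data[, p[1]], data[, p[2]]))
names(res) <- paste0("V", pair_idx[1, ], "_V", pair_idx[2, ])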
