Matching patterns in a matrix - r

My data looks like this:
                         S
0101001010000000000000000100111100000000000011101100010101010
1001010000000001100000000100000000000100000010101110101010010
1101010101010010000000000100000000100101010010110101010101011
0000000000000000001000000111000110000000000000000000000000000
The S indicates the column I am talking about: it is column 26, and all four rows share a 1 at that position.
For each of rows 2 to 4 I need to count:
How many columns to the left and to the right of column 26 are the same as in row 1?
For row 2 it would be 3 to the right (as it reaches a 1/0 mismatch) and 8 to the left (as it reaches a 0/1 mismatch).
The result for every row should be entered into a matrix like this:
row2 8 3
row3 11 9
Is there a fast and efficient way to do that? The matrix I am dealing with is very large.

If you need something fast, you could use Rcpp:
mat <- as.matrix(read.fwf(textConnection("0101001010000000000000000100111100000000000011101100010101010
1001010000000001100000000100000000000100000010101110101010010
1101010101010010000000000100000000100101010010110101010101011
0000000000000000001000000111000110000000000000000000000000000"), widths = rep(1, 61)))
library(Rcpp)
cppFunction('
IntegerMatrix countLR(const LogicalMatrix& mat, const int S) {
  const int nr(mat.nrow()), nc(mat.ncol());
  IntegerMatrix res(nr - 1, 2);
  for (int i = 1; i < nr; i++) {
    // walk left from column S - 1 (0-based index S - 2) until the first mismatch with row 1
    for (int j = S - 2; j >= 0; j--) {
      if (mat(0, j) != mat(i, j)) break;
      else res(i - 1, 0)++;
    }
    // walk right from column S + 1 (0-based index S) until the first mismatch with row 1
    for (int j = S; j < nc; j++) {
      if (mat(0, j) != mat(i, j)) break;
      else res(i - 1, 1)++;
    }
  }
  return res;
}')
countLR(mat, 26)
#     [,1] [,2]
#[1,]    8    2
#[2,]   10    2
#[3,]    6    0
I assumed that column 26 itself doesn't count for the result. I also assumed that the matrix can only contain 0/1 (i.e., boolean) values. Adjust as needed.
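As a quick cross-check in plain R (a sketch, not part of the Rcpp answer; mat is the matrix read in above with read.fwf), the same counts can be obtained with rle():
sapply(2:4, function(i) {
  # columns 1-25 lie to the left of S = 26, columns 27-61 to the right
  left  <- rle(rev(mat[1, 1:25] == mat[i, 1:25]))
  right <- rle(mat[1, 27:61] == mat[i, 27:61])
  c(left  = if (left$values[1])  left$lengths[1]  else 0L,
    right = if (right$values[1]) right$lengths[1] else 0L)
})
#       [,1] [,2] [,3]
# left     8   10    6
# right    2    2    0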

It's pretty easy with strsplit and rle to pull apart and assemble this data:
> S <- scan(what="") #input of character mode
1: 0101001010000000000000000100111100000000000011101100010101010
2: 1001010000000001100000000100000000000100000010101110101010010
3: 1101010101010010000000000100000000100101010010110101010101011
4: 0000000000000000001000000111000110000000000000000000000000000
5:
s2 <- strsplit(S, split="")
sapply(s2, "[[", 26) # verify the 26th position is all ones
#[1] "1" "1" "1" "1"
# length of the strings from the 26th position to the right
rtlen <- length(s2[[1]])-(26-1)
# Pick from the `rle` $values where values TRUE
rle( tail( s2[[1]] == s2[[2]], rtlen) )
Run Length Encoding
lengths: int [1:11] 3 4 5 1 7 1 4 1 1 6 ...
values : logi [1:11] TRUE FALSE TRUE FALSE TRUE FALSE ...
Now that you have an algorithm for a single instance, you can iterate over the rest of the items in s2. To do the backwards look, I just did the same operation on a reversed section of the strings.
m <- matrix(NA, 3, 2)
for (i in 2:4) {
  m[i-1, 2] <- rle(tail(s2[[1]] == s2[[i]], rtlen))$lengths[1]
  m[i-1, 1] <- rle(rev(head(s2[[1]] == s2[[i]], 26)))$lengths[1]
}
m
[,1] [,2]
[1,] 9 3 # I think you counted wrong
[2,] 11 3
[3,] 7 1
Notice that I was comparing each row to the first row, while your results suggest you were doing something else, perhaps comparing each row to the row above it. That could easily be done instead with only a small change to the indices that choose the comparison vector:
m <- matrix(NA, 3, 2)
for (i in 2:4) {
  m[i-1, 2] <- rle(tail(s2[[i-1]] == s2[[i]], rtlen))$lengths[1]
  m[i-1, 1] <- rle(rev(head(s2[[i-1]] == s2[[i]], 26)))$lengths[1]
}
m
[,1] [,2]
[1,] 9 3
[2,] 9 9 #Again I think you may have miscounted. Easy to do, eh?
[3,] 7 1

This problem intrigued me. Since the matrix is binary, it's far more efficient to pack the matrix into a raw matrix than it is to use sparse matrices. It means that the storage for a 1,000 x 21,000,000 pattern matrix is approx. 2.4 GiB (print(object.size(raw(1000 * 21000000 / 8)), units = "GB")).
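As a quick illustration of the packing itself (a toy example, not part of the answer's code), packBits() stores eight 0/1 values per byte:
bits <- as.raw(c(1,0,1,0, 0,1,0,1))
packBits(bits)                          # a single raw byte holding all eight values
length(packBits(as.raw(rep(0L, 64))))   # 64 binary values fit into 8 bytes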
The following should be a relatively efficient way to tackle the problem. The Rcpp code takes a raw matrix which indicates the differences between the first row of the original matrix and the other rows. For efficiency in the R code, it's actually arranged with the patterns in columns rather than rows. The other functions help to convert existing sparse or regular matrices into packed ones and to read a matrix directly from a file.
library("Rcpp")
library("Matrix")
writeLines("0101001010000000000000000100111100000000000011101100010101010
1001010000000001100000000100000000000100000010101110101010010
1101010101010010000000000100000000100101010010110101010101011
0000000000000000001000000111000110000000000000000000000000000", "example.txt")
cppFunction('
IntegerMatrix countLRPacked(IntegerMatrix mat, long S) {
  long l = S - 2;
  long r = S;
  long i, cl, cr;
  int nr(mat.nrow()), nc(mat.ncol());
  IntegerMatrix res(nc, 2);
  for(int i = 0; i < nc; i++){
    // First the left side
    // Work out which byte is the first to have a 1 in it
    long j = l >> 3;
    int x = mat(j, i) & ((1 << ((l & 7) + 1)) - 1);
    long cl = l & 7;
    while(j > 0 && !x) {
      j--;
      x = mat(j, i);
      cl += 8;
    }
    // Then work out where the 1 is in the byte
    while (x >>= 1) --cl;
    // Now the right side
    j = r >> 3;
    x = mat(j, i) & ~((1 << ((r & 7))) - 1);
    cr = 8 - (r & 7);
    while(j < (nr - 1) && !x) {
      j++;
      x = mat(j, i);
      cr += 8;
    }
    cr--;
    while (x = (x << 1) & 0xff) --cr;
    res(i, 0) = cl;
    res(i, 1) = cr;
  }
  return(res);
}')
# Reads a binary matrix from file or character vector
# Borrows the first bit of code from read.table
readBinaryMatrix <- function(file = NULL, text = NULL) {
  if (missing(file) && !missing(text)) {
    file <- textConnection(text)
    on.exit(close(file))
  }
  if (is.character(file)) {
    file <- file(file, "rt")
    on.exit(close(file))
  }
  if (!inherits(file, "connection"))
    stop("'file' must be a character string or connection")
  if (!isOpen(file, "rt")) {
    open(file, "rt")
    on.exit(close(file))
  }
  lst <- list()
  i <- 1
  while (length(line <- readLines(file, n = 1)) > 0) {
    lst[[i]] <- packRow(as.integer(strsplit(line, "", fixed = TRUE)[[1]]))
    i <- i + 1
  }
  do.call("cbind", lst)
}
# Converts a binary integer vector into a packed raw vector,
# padding out at the end to make the input length a multiple of 8
packRow <- function(row) {
  packBits(as.raw(c(row, rep(0, (8 - length(row)) %% 8))))
}
# Converts a binary integer matrix to a packed raw matrix
# Note the matrix is transposed (makes the subsequent xor more efficient)
packMatrix <- function(mat) {
  stopifnot(class(mat)[1] %in% c("matrix", "dgCMatrix"))
  apply(mat, 1, packRow)
}
# Takes either a packed raw matrix or a binary integer matrix, uses xor to compare the first row
# with the others and then hands it over to the Rcpp code for processing
countLR <- function(mat, S) {
  stopifnot(class(mat)[1] %in% c("matrix", "dgCMatrix"))
  if (storage.mode(mat) != "raw") {
    mat <- packMatrix(mat)
  }
  stopifnot(8 * nrow(mat) > S)
  y <- xor(mat[, -1, drop = FALSE], mat[, 1, drop = TRUE])
  countLRPacked(y, S)
}
sMat <- Matrix(as.matrix(read.fwf("example.txt", widths = rep(1, 61))))
pMat <- readBinaryMatrix("example.txt")
countLR(sMat, 26)
countLR(pMat, 26)
You should note that the width of the pattern matrix is right-padded to a multiple of 8, so if the patterns match all the way to the right hand side this will result in the right hand count being possibly a bit high. This could be corrected if need be.
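If that matters, one possible correction (a sketch using the example objects above; 61 is the true number of columns in this example) is simply to cap the right-hand counts:
res <- countLR(pMat, 26)
res[, 2] <- pmin(res[, 2], 61 - 26)  # at most 35 columns lie to the right of column 26
res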

Slow R version to do this (moved from duplicate):
countLR <- function(mat, S) {
  mat2 <- mat[1, ] != t(mat[-1, , drop = FALSE])
  l <- apply(mat2[(S - 1):1, ], 2, function(x) which(x)[1] - 1)
  l[is.na(l)] <- S - 1
  r <- apply(mat2[(S + 1):nrow(mat2), ], 2, function(x) which(x)[1] - 1)
  r[is.na(r)] <- ncol(mat) - S
  cbind(l, r)
}
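For the example matrix mat read in with read.fwf() above, this gives the same counts as the Rcpp version (a quick check, not part of the original answer):
countLR(mat, 26)
#       l r
# [1,]  8 2
# [2,] 10 2
# [3,]  6 0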

Related

matrix index subsetting with another matrix

What's a fast way to match two matrices (one and two) and to extract the row index of matrix two for each match? Matrix two is large (hundreds to thousands of rows).
one
     [,1] [,2]
[1,]    9   11
[2,]   13    2
head(two)
     [,1] [,2]
[1,]    9   11
[2,]   11    9
[3,]    2    3
[4,]   13    2
[5,]    2    4
[6,]    3    3
The output should be (notice how index 2 is not an output value)
1 4
One way of doing this :
a = apply(one, 1, paste0, collapse = "-")
b = apply(two, 1, paste0, collapse = "-")
match(a, b)
#[1] 1 4
We paste all the columns together row-wise for both matrices and then match them to get the rows which are the same.
Just for reference,
a
#[1] "9-11" "13-2"
b
#[1] "9-11" "11-9" "2-3" "13-2" "2-4" "3-3"
You could write a C++ loop to do it fairly quickly:
library(Rcpp)
cppFunction('NumericVector matrixIndex(NumericMatrix m1, NumericMatrix m2){
  int m1Rows = m1.nrow();
  int m2Rows = m2.nrow();
  NumericVector out;
  for (int i = 0; i < m1Rows; i++){
    for (int j = 0; j < m2Rows; j++){
      if(m1(i, 0) == m2(j, 0) && m1(i, 1) == m2(j, 1)){
        //out[j] = (j+1);
        out.push_back(j + 1);
      }
    }
  }
  return out;
}')
matrixIndex(one, two)
[1] 1 4
Although I suspect it would be faster to pre-allocate the result vector first, something like
cppFunction('NumericVector matrixIndex(NumericMatrix m1, NumericMatrix m2){
  int m1Rows = m1.nrow();
  int m2Rows = m2.nrow();
  NumericVector out(m2Rows);
  for (int i = 0; i < m1Rows; i++){
    for (int j = 0; j < m2Rows; j++){
      if(m1(i, 0) == m2(j, 0) && m1(i, 1) == m2(j, 1)){
        out[j] = (j+1);
        //out.push_back(j + 1);
      }
    }
  }
  return out;
}')
matrixIndex(one, two)
[1] 1 0 0 4 0 0
## 0 == nomatch.
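With the pre-allocated version, the actual matches can then be pulled out in R by dropping the zeros (a small follow-up, not in the original answer; one and two are the example matrices above):
res <- matrixIndex(one, two)
res[res != 0]
#[1] 1 4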
You don't say if by "fast" you mean compute time or person time. If it only needs doing once, the overall time is probably shortest if you optimize person time, and Ronak's answer is going to be hard to beat; it's clear and robust.
If the numbers are all less than a certain number (say, 100, as in your example data), you can do a similar thing but use arithmetic to combine the two columns together and then match. I suspect (but haven't tested) that this would be faster than converting to character vectors. There are of course other arithmetic options too depending on your circumstance.
a <- one[,1]*100 + one[,2]
b <- two[,1]*100 + two[,2]
match(a, b)
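With the example data, the keys become 911 and 1302 for one, and 911, 1109, 203, 1302, 204, 303 for two, so the result is the same as before:
#[1] 1 4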
We can use %in%
which(do.call(paste, as.data.frame(two)) %in% do.call(paste, as.data.frame(one)))
#[1] 1 4

How to apply a function to a matrix in R

Write a function which takes an argument that can be coerced into a matrix; the function should return a matrix which is the same as the function argument, but with every even number unchanged and every odd number doubled.
I'm very new to R. Can someone help me complete my code:
mx = matrix(c(1,1,3,5,2,6,-2,-1,-3), nrow = 3, byrow = TRUE)
fun = function(mx){
  for(i in mx){
    if(i %% 2 == 0){
      return(i)
    }
    else if(i %% 2 > 0){
      return(2*i)
    }
  }
}
You don't need a new function; just use the built-in ifelse:
mx <- ifelse(mx %% 2 == 0, mx, 2*mx)
Or, if you prefer to encapsulate it into a function:
fun = function(mx) {
ifelse(mx %% 2 == 0, mx, 2*mx)
}
res <- fun(mx)
## [,1] [,2] [,3]
##[1,] 2 2 6
##[2,] 10 2 6
##[3,] -2 -2 -6
Explanation:
ifelse performs a vectorized comparison over all elements of the matrix mx to see if each element is even (i.e., mx %% 2 == 0). For each element where this condition is TRUE, the second argument is returned, which in this case is just the value of that element of mx. Otherwise, the third argument is returned, which is 2 times the value of that element of mx, as you wanted.
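Because ifelse() returns a value with the attributes of its test argument (here a 3 x 3 logical matrix), res keeps the dimensions of mx:
dim(res)
#[1] 3 3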
That's easy using indices :)
double_odd <- function(mx){
odds_idx <- (mx %% 2 != 0)
mx[odds_idx] <- 2 * mx[odds_idx]
mx # If it is the last statement, you don't need return
}
Cheers
Using your try:
fun = function(mx){
  res <- matrix(data = NA, ncol = ncol(mx), nrow = nrow(mx))
  for(i in 1:ncol(mx)){
    for(j in 1:nrow(mx)){
      if(mx[j, i] %% 2 == 0){
        res[j, i] <- mx[j, i]
      } else {
        res[j, i] <- 2 * mx[j, i]
      }
    }
  }
  return(res)
}
of course not the most elegant solution :)

printing matrices and vectors side by side

For tutorial purposes, I'd like to be able to print or display matrices and vectors side-by-side, often to illustrate the result of a matrix equation, like $A x = b$.
I could do this using SAS/IML, where the print statement takes an arbitrary collection of (space separated) expressions, evaluates them and prints the result, e.g.,
print A ' * ' x '=' (A * x) '=' b;
  A            X        #TEM1001      B
1  1 -4   *    0.733  =     2     =   2
1 -2  1       -0.33         1         1
1  1  1       -0.4          0         0
Note that quoted strings are printed as is.
I've searched, but can find nothing like this in R. I imagine something like this could be done by a function showObj(object, ...) taking its list of arguments, formatting each to a block of characters, and joining them side-by-side.
Another use of this would be a compact way of displaying a 3D array as the side-by-side collection of its slices.
Does this ring a bell or does anyone have a suggestion for getting started?
I have created a very simple function that can print matrices and vectors with arbitrary character strings (typically operators) in between. It allows for matrices with different numbers of rows and treats vectors as column matrices. It is not very elaborate, so I fear there are many examples where it fails. But for an example as simple as the one in your question, it should be enough.
format() is used to convert the numbers to characters. This has the advantage that all the rows of a matrix have the same width and are thus nicely aligned when printed. If needed, you could expose some of the arguments of format() as arguments of mat_op_print() to make them configurable. As an example, I have added the argument width, which can be used to control the minimal width of the columns.
If the matrices and vectors are named in the function call, these names are printed as headers in the first line. Otherwise, only the numbers are printed.
So, this is the function:
mat_op_print <- function(..., width = 0) {
  # get arguments
  args <- list(...)
  chars <- sapply(args, is.character)
  # auxiliary function to create a character string of n spaces
  spaces <- function(n) paste(rep(" ", n), collapse = "")
  # convert vectors to column matrices
  vecs <- sapply(args, is.vector)
  args[vecs & !chars] <- lapply(args[vecs & !chars], function(v) matrix(v, ncol = 1))
  # convert all non-characters to character with format
  args[!chars] <- lapply(args[!chars], format, width = width)
  # print names as the first line, if present
  arg_names <- names(args)
  if (!is.null(arg_names)) {
    get_title <- function(x, name) {
      if (is.matrix(x)) {
        paste0(name, spaces(sum(nchar(x[1, ])) + ncol(x) - 1 - nchar(name)))
      } else {
        spaces(nchar(x))
      }
    }
    cat(mapply(get_title, args, arg_names), "\n")
  }
  # auxiliary function to create the lines
  get_line <- function(x, n) {
    if (is.matrix(x)) {
      if (nrow(x) < n) {
        spaces(sum(nchar(x[1, ])) + ncol(x) - 1)
      } else {
        paste(x[n, ], collapse = " ")
      }
    } else if (n == 1) {
      x
    } else {
      spaces(nchar(x))
    }
  }
  # print as many lines as needed for the matrix with the most rows
  N <- max(sapply(args[!chars], nrow))
  for (n in 1:N) {
    cat(sapply(args, get_line, n), "\n")
  }
}
And this is an example of how it works:
A = matrix(c(0.5, 1, 3, 0.75, 2.8, 4), nrow = 2)
x = c(0.5, 3.7, 2.3)
y = c(0.7, -1.2)
b = A %*% x - y
mat_op_print(A = A, " * ", x = x, " - ", y = y, " = ", b = b, width = 6)
## A                     x         y          b
##   0.50   3.00   2.80  *    0.5  -    0.7  =  17.090
##   1.00   0.75   4.00       3.7      -1.2     13.675
##                             2.3
Also printing the slices of a 3-dimensional array side-by-side is possible:
A <- array(1:12, dim = c(2, 2, 3))
mat_op_print(A1 = A[, , 1], " | ", A2 = A[, , 2], " | ", A3 = A[, , 3])
## A1     A2      A3
## 1 3  |  5 7  |   9 11
## 2 4     6 8     10 12

Find vector overlap from the start

I am looking for an efficient way to get the first k elements that are the same between two vectors in R.
For example:
orderedIntersect(c(1,2,3,4), c(1,2,5,4))
# [1] 1 2
orderedIntersect(c(1,2,3), c(1,2,3,4))
# [1] 1 2 3
This is the same as the intersect behavior, but any values after the first mismatch should be dropped.
I also want this to work for strings.
So far, the solution that I have is this:
orderedIntersect <- function(a,b) {
a <- as.vector(a)
NAs <- is.na(match(a, as.vector(b)))
last <- ifelse(any(NAs), min(which(NAs)) - 1, length(a))
a[1:last]
}
I am troubled by the fact that I have to iterate over n input elements 6 times: match, is.na, any, which, min, and the subset [].
Clearly, it would be faster to write an external C function (with a for loop and a break), but I am wondering if there is any clever R trick I can use here.
You can compare the values of your vectors and drop elements when the first FALSE is reached:
orderedIntersect <- function(a, b) {
  # check the lengths are equal and if not, "cut" the vectors so they are (to avoid warnings)
  l_a <- length(a); l_b <- length(b)
  if (l_a != l_b) { m_l <- min(l_a, l_b); a <- a[1:m_l]; b <- b[1:m_l] }
  # compare the elements: they are equal if both are not NA and have the same value, or if both are NA
  comp <- (!is.na(a) & !is.na(b) & a == b) | (is.na(a) & is.na(b))
  # return the right vector: nothing if the first elements do not match, everything if all elements match, or just the part that matches
  if (!comp[1]) return(c()) else if (all(comp)) return(a) else return(a[1:(which(!comp)[1] - 1)])
}
orderedIntersect(c(1,2,3,4), c(1,2,5,4))
#[1] 1 2
orderedIntersect(c(1,2,3), c(1,2,3,4))
#[1] 1 2 3
orderedIntersect(c(1,2,3), c(2,3,4))
#NULL
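Since the comparison is plain ==, the same function also works for character vectors (a quick check, not part of the original answer):
orderedIntersect(c("abc", "def", "x"), c("abc", "def", "y"))
#[1] "abc" "def"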
The simple C solution (for integers) isn't really any longer than the R version, but it would be a little more work to extend to all the other classes.
library(inline)
orderedIntersect <- cfunction(
  signature(x = 'integer', y = 'integer'),
  body = '
    int i, l = length(x) > length(y) ? length(y) : length(x),
        *xx = INTEGER(x), *yy = INTEGER(y);
    SEXP res;
    for (i = 0; i < l; i++) if (xx[i] != yy[i]) break;
    PROTECT(res = allocVector(INTSXP, i));
    for (l = 0; l < i; l++) INTEGER(res)[l] = xx[l];
    UNPROTECT(1);
    return res;'
)
## Tests
a <- c(1L,2L,3L,4L)
b <- c(1L,2L,5L,4L)
c <- c(1L,2L,8L,9L,9L,9L,9L,3L)
d <- c(9L,0L,0L,8L)
orderedIntersect(a,b)
# [1] 1 2
orderedIntersect(a,c)
# [1] 1 2
orderedIntersect(a,d)
# integer(0)
orderedIntersect(a, integer())
# integer(0)
This might work:
#test data
a <- c(1,2,3,4)
b <- c(1,2,5,4)
c <- c(1,2,8,9,9,9,9,3)
d <- c(9,0,0,8)
empty <- c()
string1 <- c("abc", "def", "ad","k")
string2 <- c("abc", "def", "c", "lds")
#function
orderedIntersect <- function(a, b) {
l <- min(length(a), length(b))
if (l == 0) return(numeric(0))
a1 <- a[1:l]
comp <- a1 != b[1:l]
if (all(!comp)) return(a1)
a1[ 0:(min(which(comp)) - 1) ]
}
#testing
orderedIntersect(a,b)
# [1] 1 2
orderedIntersect(a,c)
# [1] 1 2
orderedIntersect(a,d)
# numeric(0)
orderedIntersect(a, empty)
# numeric(0)
orderedIntersect(string1,string2)
# [1] "abc" "def"

Clustering a large, very sparse, binary matrix in R

I have a large, sparse binary matrix (roughly 39,000 x 14,000; most rows have only a single "1" entry). I'd like to cluster similar rows together, but my initial plan takes too long to complete:
d <- dist(inputMatrix, method="binary")
hc <- hclust(d, method="complete")
The first step doesn't finish, so I'm not sure how the second step would fare. What are some approaches to efficiently grouping similar rows of a large, sparse, binary matrix in R?
I've written some Rcpp code and R code which works out the binary/Jaccard distance of a binary matrix approx. 80x faster than dist(x, method = "binary"). It converts the input matrix into a raw matrix which is the transpose of the input (so that the bit patterns are in the correct order internally). This is then used in some C++ code which handles the data as 64-bit unsigned integers for speed. The Jaccard distance of two binary vectors x and y is the number of set bits in x ^ y divided by the number of set bits in x | y, where ^ is bitwise xor and | is bitwise or. A Hamming weight calculation is used to count the set bits whenever the result of the xor or the or is non-zero.
I've put together the code on github at https://github.com/NikNakk/binaryDist/ and reproduced the two files below. I've confirmed that the results are the same as dist(x, method = "binary") for a few random datasets.
On a dataset of 39000 rows by 14000 columns with 1-5 ones per row, it took about 11 minutes. The output distance matrix was 5.7 GB.
bDist.cpp
#include <Rcpp.h>
using namespace Rcpp;

// countBits function taken from https://en.wikipedia.org/wiki/Hamming_weight#Efficient_implementation
const uint64_t m1  = 0x5555555555555555; // binary: 0101...
const uint64_t m2  = 0x3333333333333333; // binary: 00110011..
const uint64_t m4  = 0x0f0f0f0f0f0f0f0f; // binary: 4 zeros, 4 ones ...
const uint64_t h01 = 0x0101010101010101; // the sum of 256 to the power of 0,1,2,3...

int countBits(uint64_t x) {
  x -= (x >> 1) & m1;             // put count of each 2 bits into those 2 bits
  x = (x & m2) + ((x >> 2) & m2); // put count of each 4 bits into those 4 bits
  x = (x + (x >> 4)) & m4;        // put count of each 8 bits into those 8 bits
  return (x * h01) >> 56;         // returns left 8 bits of x + (x<<8) + (x<<16) + (x<<24) + ...
}

// [[Rcpp::export]]
int countBitsFromRaw(RawVector rv) {
  uint64_t* x = (uint64_t*)RAW(rv);
  return(countBits(*x));
}

// [[Rcpp::export]]
NumericVector bDist(RawMatrix mat) {
  int nr(mat.nrow()), nc(mat.ncol());
  int nw = nr / 8;
  NumericVector res(nc * (nc - 1) / 2);
  // Access the raw data as unsigned 64 bit integers
  uint64_t* data = (uint64_t*)RAW(mat);
  uint64_t a(0);
  // Work through each possible combination of columns (rows in the original integer matrix)
  for (int i = 0; i < nc - 1; i++) {
    for (int j = i + 1; j < nc; j++) {
      uint64_t sx = 0;
      uint64_t so = 0;
      // Work through each 64 bit integer and calculate the sum of (x ^ y) and (x | y)
      for (int k = 0; k < nw; k++) {
        uint64_t o = data[nw * i + k] | data[nw * j + k];
        // If (x | y == 0) then (x ^ y) will also be 0
        if (o) {
          // Use Hamming weight method to calculate number of set bits
          so = so + countBits(o);
          uint64_t x = data[nw * i + k] ^ data[nw * j + k];
          if (x) {
            sx = sx + countBits(x);
          }
        }
      }
      res(a++) = (double)sx / so;
    }
  }
  return (res);
}
R source
library("Rcpp")
library("plyr")
sourceCpp("bDist.cpp")
# Converts a binary integer vector into a packed raw vector,
# padding out at the end to make the input length a multiple of packWidth
packRow <- function(row, packWidth = 64L) {
packBits(as.raw(c(row, rep(0, (packWidth - length(row)) %% packWidth))))
}
as.PackedMatrix <- function(x, packWidth = 64L) {
UseMethod("as.PackedMatrix")
}
# Converts a binary integer matrix into a packed raw matrix
# padding out at the end to make the input length a multiple of packWidth
as.PackedMatrix.matrix <- function(x, packWidth = 64L) {
  stopifnot(packWidth %% 8 == 0, class(x)[1] %in% c("matrix", "Matrix"))
  storage.mode(x) <- "raw"
  if (ncol(x) %% packWidth != 0) {
    x <- cbind(x, matrix(0L, nrow = nrow(x), ncol = packWidth - (ncol(x) %% packWidth)))
  }
  out <- packBits(t(x))
  dim(out) <- c(ncol(x) %/% 8, nrow(x))
  class(out) <- "PackedMatrix"
  out
}
# Converts back to an integer matrix
as.matrix.PackedMatrix <- function(x) {
out <- rawToBits(x)
dim(out) <- c(nrow(x) * 8L, ncol(x))
storage.mode(out) <- "integer"
t(out)
}
# Generates random sparse data for testing the main function
makeRandomData <- function(nObs, nVariables, maxBits, packed = FALSE) {
  x <- replicate(nObs, {
    y <- integer(nVariables)
    y[sample(nVariables, sample(maxBits, 1))] <- 1L
    if (packed) {
      packRow(y, 64L)
    } else {
      y
    }
  })
  if (packed) {
    class(x) <- "PackedMatrix"
    x
  } else {
    t(x)
  }
}
# Reads a binary matrix from file or character vector
# Borrows the first bit of code from read.table
readPackedMatrix <- function(file = NULL, text = NULL, packWidth = 64L) {
  if (missing(file) && !missing(text)) {
    file <- textConnection(text)
    on.exit(close(file))
  }
  if (is.character(file)) {
    file <- file(file, "rt")
    on.exit(close(file))
  }
  if (!inherits(file, "connection"))
    stop("'file' must be a character string or connection")
  if (!isOpen(file, "rt")) {
    open(file, "rt")
    on.exit(close(file))
  }
  lst <- list()
  i <- 1
  while (length(line <- readLines(file, n = 1)) > 0) {
    lst[[i]] <- packRow(as.integer(strsplit(line, "", fixed = TRUE)[[1]]), packWidth = packWidth)
    i <- i + 1
  }
  out <- do.call("cbind", lst)
  class(out) <- "PackedMatrix"
  out
}
# Wrapper for the C++ code which packs the input if necessary and returns a proper "dist" object
binaryDist <- function(x) {
  if (!inherits(x, "PackedMatrix")) {
    x <- as.PackedMatrix(x)
  }
  dst <- bDist(x)
  attr(dst, "Size") <- ncol(x)
  attr(dst, "Diag") <- attr(dst, "Upper") <- FALSE
  attr(dst, "method") <- "binary"
  attr(dst, "call") <- match.call()
  class(dst) <- "dist"
  dst
}
x <- makeRandomData(2000, 400, maxBits = 5, packed = TRUE)
system.time(bd <- binaryDist(x))
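As a sanity check (my addition, not from the original answer), the packed result can be compared with base dist() on a small unpacked example:
y <- makeRandomData(50, 40, maxBits = 5)
all.equal(as.vector(binaryDist(y)), as.vector(dist(y, method = "binary")))
# should be TRUE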
From original answer:
Other things to consider would be prefiltering the comparisons between rows that each contain a single 1, since their distance will be either 0 (duplicates) or 1 (any other possibility).
A couple of relatively straightforward options that might be faster without needing much code are the vegdist function from the vegan package and the Dist function from the amap package. The latter will probably only be quicker if you have multiple cores and take advantage of the fact it supports parallelisation.
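For reference, those calls would look something like this (a sketch from memory; check each package's documentation for the exact argument names):
library(vegan)
d1 <- vegdist(inputMatrix, method = "jaccard", binary = TRUE)
library(amap)
d2 <- Dist(inputMatrix, method = "binary", nbproc = 4)  # nbproc = number of cores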
The reason this takes so long to compute is that the call to dist is computing and storing more than 760 million pairwise distances. If your data is stored sparsely, this will take a long time and a huge amount of storage. If your data is not stored sparsely, then each distance computation requires at least 14,000 operations, for a total operation count exceeding 10 trillion!
An approach that will be much quicker is k-means clustering, since it doesn't require pre-computing a distance matrix; at each iteration you will need only 39000*k distance calculations, where k is the number of clusters. To get pairwise distances that are similar to the Jaccard index (0 if identical, 1 if no indices coincide, in between if some but not all indices coincide), you could divide each row x by sqrt(2*sum(x^2)). For instance, if you had the following input matrix:
(mat <- rbind(c(1, 0, 0, 0, 0), c(0, 0, 0, 1, 1)))
# [,1] [,2] [,3] [,4] [,5]
# [1,] 1 0 0 0 0
# [2,] 0 0 0 1 1
the normalized version would be (assuming binary values only in the matrix; if this were not the case you would use rowSums(mat^2)):
(mat.norm <- mat / sqrt(2*rowSums(mat)))
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0.7071068 0 0 0.0 0.0
# [2,] 0.0000000 0 0 0.5 0.5
These two observations (which have no indices in common), have Euclidean distance 1, coinciding with the Jaccard distance for this case.
dist(mat.norm, "euclidean")
# 1
# 2 1
Additionally, identical observations will clearly have Euclidean distance 0, again corresponding to the Jaccard distance.
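Putting that together, a k-means run on the full normalized matrix might look like this (a sketch; inputMatrix and the choice of 100 clusters are assumptions, not values from the question):
inputMatrix.norm <- inputMatrix / sqrt(2 * rowSums(inputMatrix))  # assumes binary data with at least one 1 per row
km <- kmeans(inputMatrix.norm, centers = 100, iter.max = 50, nstart = 5)
head(km$cluster)  # cluster assignment for each row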
Do you have duplicate rows? There is no need to compute their distances twice.
All rows with a single 1 will be 100% different from all rows with a single 1 in a different place.
Thus, it does not make sense to run clustering on such data. The output is rather predictable, and boils down to finding the 1.
Try restricting your data set to those objects that have more than one 1. Unless you can get interesting results on these, there is no need to continue further. Binary data has too little information.
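For example (assuming inputMatrix is the 0/1 matrix from the question):
multi <- inputMatrix[rowSums(inputMatrix) > 1, ]
dim(multi)  # how many objects are left to cluster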
