Selecting rows from data.table - r

I'm using the R package data.table to read large amounts of data and analyse them. What I'm wondering is why selecting rows from a data.table is so much slower than selecting them from a matrix.
require(data.table)
## create some random data
n = 1000
p = 1000
set.seed(1)
data.raw <- matrix(rnorm(n*p), nrow = n, ncol = p)
rownames(data.raw) <- sapply(1:n, FUN = function(x, length) paste(sample(c(letters, LETTERS), length, replace = TRUE), collapse = ""), length = 10)
colnames(data.raw) <- paste0("X", 1:p)
## do the same thing with a data.table
data.t <- data.table(data.raw)
data.t[, id := rownames(data.raw)]
setkey(data.t, id)
## now select one row after the other in both matrix and data.table
system.time(for(r in rownames(data.raw)) y <- data.raw[r, ])
# user system elapsed
# 0.016 0.000 0.017
system.time(for(r in data.t$id) y <- data.t[r])
# user system elapsed
# 30.580 0.000 30.608
Even for this relatively small example, data.table is extremely slow, even though a key has been set with setkey. Is there any way to improve the performance of this?
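For what it's worth, most of those 30 seconds appear to be the fixed per-call overhead of [.data.table rather than the keyed lookup itself. A minimal sketch of one way around it, assuming the goal is simply to retrieve every row (y.all is a made-up name; this does all 1000 lookups in a single keyed join instead of one call per row):
## sketch: one vectorised join on the key instead of 1000 single-row subsets
system.time(y.all <- data.t[rownames(data.raw)])  # returns all matching rows in one call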

Related

R efficient way to sort a Matrix by row

I have a matrix "multiOrderPairsFlat" of 2m+ rows and 2 columns where each cell contains a SKU description (e.g. "Pipe2mSteel" or "Bushing1inS") and would like to sort every row alphabetically, so that in every row, e.g., "Bushing1inS" is in the first column and "Pipe2mSteel" in the second.
However, if I run:
for (i in 1:nrow(multiOrderPairsFlat)){
multiOrderPairsFlat[i,] <- sort(multiOrderPairsFlat[i,])
}
It takes forever and I doubt this is the quickest way of dealing with this problem. Do you have any advice on how to solve this more efficiently, e.g. by vectorizing the operation?
Thanks for helping out;)
Best
seulberg1
It may be better to use pmin/pmax after converting to data.frame (as there are only two columns)
system.time({
df1 <- as.data.frame(multiOrderPairsFlat, stringsAsFactors=FALSE)
res <- data.frame(First = do.call(pmin, df1), Second = do.call(pmax, df1))
})
# user system elapsed
# 0.49 0.02 0.50
system.time({
for (i in 1:nrow(multiOrderPairsFlat)){
multiOrderPairsFlat[i,] <- sort(multiOrderPairsFlat[i,])
}
})
# user system elapsed
# 11.99 0.00 12.00
all.equal(as.matrix(res), multiOrderPairsFlat, check.attributes=FALSE)
#[1] TRUE
Checking the memory allocation
library(profvis)
profvis({
df1 <- as.data.frame(multiOrderPairsFlat, stringsAsFactors=FALSE)
res <- data.frame(First = do.call(pmin, df1), Second = do.call(pmax, df1))
})
#3.3 MB
profvis({
for (i in 1:nrow(multiOrderPairsFlat)){
multiOrderPairsFlat[i,] <- sort(multiOrderPairsFlat[i,])
}
})
#12.8 MB
data
set.seed(24)
multiOrderPairsFlat <- cbind(sample(c("Pipe2mSteel" , "Bushing1inS"), 1e6, replace=TRUE),
sample(c("Pipe2mSteel" , "Bushing1inS"), 1e6, replace=TRUE))

Is there a faster way to subset a sparse Matrix than '['?

I'm the maintainer of the seqMeta package and am looking for ideas on how to speed up the bottleneck of splitting a large matrix into smaller pieces a large number of times.
Background
The seqMeta package is used to analyze genetic data. So you have a group of subjects (n_subjects) and a number of genetic markers (n_snps). This leads to an n_subjects x n_snps matrix (Z). There is also a data frame that tells you which snps get grouped together for analysis (typically which snps comprise a given gene).
While Z may be large, it is quite sparse. Typically less than 10%, and sometimes around 2%, of the values are non-zero. A sparse matrix representation seems like an obvious choice to save space.
A current project: nsubjects ~15,000 and nsnps ~2 million, with over 200,000 splits.
As the data size continues to grow, I've found that the limiting factor tends to be the number of groupings, not the actual size of the data. (See the example below: for the same data, runtime grows roughly linearly with n_splits.)
Simplified Example
library(Matrix)
set.seed(1)
n_subjects <- 1e3
n_snps <- 1e5
sparcity <- 0.05
n <- floor(n_subjects*n_snps*sparcity)
# create our simulated data matrix
Z <- Matrix(0, nrow = n_subjects, ncol = n_snps, sparse = TRUE)
pos <- sample(1:(n_subjects*n_snps), size = n, replace = FALSE)
vals <- rnorm(n)
Z[pos] <- vals
# create the data frame on how to split
# real data set the grouping size is between 1 and ~1500
n_splits <- 500
sizes <- sample(2:20, size = n_splits, replace = TRUE)
lkup <- data.frame(gene_name=rep(paste0("g", 1:n_splits), times = sizes),
snps = sample(n_snps, size = sum(sizes)))
# simple function that gets called on the split
# the real function creates a cols x cols dense upper triangular matrix
# similar to a covariance matrix
simple_fun <- function(Z, cols) {sum(Z[ , cols])}
# split our matrix based look up table
system.time(
res <- tapply(lkup[ , "snps"], lkup[ , "gene_name"], FUN=simple_fun, Z=Z, simplify = FALSE)
)
## user system elapsed
## 3.21 0.00 3.21
n_splits <- 1000
sizes <- sample(2:20, size = n_splits, replace = TRUE)
lkup <- data.frame(gene_name=rep(paste0("g", 1:n_splits), times = sizes),
snps = sample(n_snps, size = sum(sizes)))
# split our matrix based look up table
system.time(
res <- tapply(lkup[ , "snps"], lkup[ , "gene_name"], FUN=simple_fun, Z=Z, simplify = FALSE)
)
## user system elapsed
## 6.38 0.00 6.38
n_splits <- 5000
sizes <- sample(2:20, size = n_splits, replace = TRUE)
lkup <- data.frame(gene_name=rep(paste0("g", 1:n_splits), times = sizes),
snps = sample(n_snps, size = sum(sizes)))
# split our matrix based look up table
system.time(
res <- tapply(lkup[ , "snps"], lkup[ , "gene_name"], FUN=simple_fun, Z=Z, simplify = FALSE)
)
## user system elapsed
## 31.65 0.00 31.66
Question: Is there a faster way to subset a Matrix than '['? Or other approaches I'm missing?
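One observation, specific to the simplified sum example and not from the original post: because simple_fun only needs column sums, the per-group subsetting can be avoided entirely by precomputing colSums(Z) once. The real per-gene computation still needs the actual submatrix, so this is only a sketch of where the time goes (res_cs is a made-up name):
cs <- colSums(Z)  # single pass over the sparse matrix
system.time(
  res_cs <- tapply(lkup[ , "snps"], lkup[ , "gene_name"], FUN = function(cols) sum(cs[cols]), simplify = FALSE)
)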

Return factor associated with a numeric range defined in two columns

Using a database with a numeric range defined by two columns start and end, I am trying to look up the factor, code, associated with a numeric value in a separate vector identityCodes.
database <- data.frame(start = seq(1, 150000000, 1000),
end = seq(1000, 150000000, 1000),
code = paste0(sample(LETTERS, 15000, replace = TRUE),
sample(LETTERS, 15000, replace = TRUE)))
identityCodes <- sample(1:15000000, 1000)
I've come up with a method for finding the corresponding codes using a for loop and subsetting:
fun <- function (x, y) {
  z <- rep(NA, length(x))
  for (i in 1:length(x)) {
    z[i] <- as.character(y[y["start"] <= x[i] & y["end"] >= x[i], "code"])
  }
  return(z)
}
a <- fun(identityCodes, database)
But the method is slow, especially if I am to scale it:
system.time(fun(identityCodes, database))
user system elapsed
15.36 0.00 15.50
How can I identify the factors associated with each identityCodes faster? Is there a better way to go about this than using a for loop and subsetting?
Here's my attempt using data.table. Very fast - even though I am sure I am not leveraging it efficiently.
Given function:
# method 1
system.time(result1 <- fun(identityCodes, database))
user system elapsed
8.99 0.00 8.98
Using data.table
# method 2
require(data.table)
# x: a data.frame with columns start, end, code
# y: a vector with lookup codes
dt_comb <- function(x, y) {
  # convert x to a data.table and set 'start' and 'end' as keys
  DT <- setDT(x)
  setkey(DT, start, end)
  # create a lookup data.table where start and end are the identityCodes
  DT2 <- data.table(start = y, end = y)
  # overlap join where DT2 start & end are within DT start and end
  res <- foverlaps(DT2, DT[, .(start, end)], type = "within")
  # store i as row number and key (for sorting later)
  res[, i := seq_len(nrow(res))]
  setkey(res, i)
  # merge the joined table to the original to get codes
  final <- merge(res, DT, by = c("start", "end"))[order(i), .(code)]
  # export the codes as character
  as.character(final[[1]])
}
system.time(result2 <- dt_comb(x=database, y=identityCodes))
user system elapsed
0.08 0.00 0.08
identical(result1, result2)
[1] TRUE
edit: trimmed a couple lines from the function
This is about 45% faster on my machine:
result = lapply(identityCodes, function(x) {
  data.frame(identityCode = x,
             code = database[database$start <= x & database$end >= x, "code"])
})
result = do.call(rbind, result)
Here's a sample of the output:
identityCode code
1 6836845 OK
2 14100352 RB
3 2313115 NK
4 8440671 XN
5 11349271 TI
6 14467193 VL
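In this particular example the ranges are contiguous, non-overlapping, fixed-width bins already sorted by start, so, as a further sketch outside the original answers (and not applicable to arbitrary overlapping ranges), findInterval can map each value straight to its row:
system.time(result3 <- as.character(database$code[findInterval(identityCodes, database$start)]))
identical(result1, result3)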

Speed up quantile calculation

I am using the Hmisc package to calculate the quantiles of two continuous variables and compare the results in a crosstable. My code is below.
My problem is that the calculation of the quantiles takes a considerable amount of time if the number of observations increases.
Is there any possibility to speed up this procedure by using the data.table, ddply or any other package?
Thanks.
library(Hmisc)
# Set seed
set.seed(123)
# Generate some data
a <- sample(1:25, 1e7, replace=TRUE)
b <- sample(1:25, 1e7, replace=TRUE)
c <- data.frame(a,b)
# Calculate quantiles
c$a.quantile <- cut2(a, g=5)
c$b.quantile <- cut2(b, g=5)
# Output some descriptives
summaryM(a.quantile ~ b.quantile, data=c, overall=TRUE)
# Time spent for calculation:
# user system elapsed
# 25.13 3.47 28.73
As stated by jlhoward and Ricardo Saporta, data.table doesn't seem to speed things up much in this case. The cut2 function is clearly the bottleneck here. I used another function to calculate the quantiles (see Is there a better way to create quantile "dummies" / factors in R?) and was able to cut the calculation time in half:
qcut <- function(x, n) {
  if (n <= 2) {
    stop("The sample must be split in at least 3 parts.")
  } else {
    break.values <- quantile(x, seq(0, 1, length = n + 1), type = 7)
    break.labels <- c(
      paste0(">=", break.values[1], " & <=", break.values[2]),
      sapply(break.values[3:(n)], function(x) {
        paste0(">", break.values[which(break.values == x) - 1], " & <=", x)
      }),
      paste0(">", break.values[(n)], " & <=", break.values[(n + 1)]))
    cut(x, break.values, labels = break.labels, include.lowest = TRUE)
  }
}
c$a.quantile.2 <- qcut(c$a, 5)
c$b.quantile.2 <- qcut(c$b, 5)
summaryM(a.quantile.2 ~ b.quantile.2, data=c, overall=TRUE)
# Time spent for calculation:
# user system elapsed
# 10.22 1.47 11.70
Using data.table would reduce the calculation time by another second, but I like the summary by the Hmisc package better.
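The data.table variant referred to there isn't shown in the original answer; presumably it applies qcut by reference, roughly like this (a sketch only, reusing the qcut function and column names from above):
library(data.table)
DT <- data.table(a, b)
DT[, c("a.quantile.2", "b.quantile.2") := lapply(.SD, qcut, n = 5), .SDcols = c("a", "b")]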
You can use data.table's built-in .N variable to tabulate quickly.
library(data.table)
library(Hmisc)
DT <- data.table(a,b)
DT[, paste0(c("a", "b"), ".quantile") := lapply(.SD, cut2, g=5), .SDcols=c("a", "b")]
DT[, .N, keyby=list(b.quantile, a.quantile)][, setNames(as.list(N), as.character(b.quantile)), by=a.quantile]
You can break that last line down into two steps to see what is going on. The second "[" simply reshapes the data into a clean format.
DT.tabulated <- DT[, .N, keyby=list(b.quantile, a.quantile)]
DT.tabulated
DT.tabulated[, setNames(as.list(N), as.character(b.quantile)), by=a.quantile]
Data tables don't seem to improve things here:
library(Hmisc)
set.seed(123)
a <- sample(1:25, 1e7, replace=TRUE)
b <- sample(1:25, 1e7, replace=TRUE)
library(data.table)
# original approach
system.time({
c <- data.frame(a,b)
c$a.quantile <- cut2(a, g=5)
c$b.quantile <- cut2(b, g=5)
smry.1 <-summaryM(a.quantile ~ b.quantile, data=c, overall=TRUE)
})
user system elapsed
72.79 6.22 79.02
# original data.table approach
system.time({
DT <- data.table(a,b)
DT[, paste0(c("a", "b"), ".quantile") := lapply(.SD, cut2, g=5), .SDcols=c("a", "b")]
smry.2 <- DT[, .N, keyby=list(b.quantile, a.quantile)][, setNames(as.list(N), as.character(b.quantile)), by=a.quantile]
})
user system elapsed
66.86 5.11 71.98
# different data.table approach (simpler, and uses table(...))
system.time({
dt <- data.table(a,b)
smry.3 <- table(dt[,lapply(dt,cut2,g=5)])
})
user system elapsed
67.24 5.02 72.26

R: Tabulations and insertions with data.table

I am trying to take a very large set of records with multiple indices, calculate an aggregate statistic on groups determined by a subset of the indices, and then insert that into every row in the table. The issue here is that these are very large tables - over 10M rows each.
Code for reproducing the data is below.
The basic idea is that there are a set of indices, say ix1, ix2, ix3, ..., ixK. Generally, I am choosing only a couple of them, say ix1 and ix2. Then, I calculate an aggregation of all the rows with matching ix1 and ix2 values (over all combinations that appear), for a column called val. To keep it simple, I'll focus on a sum.
I have tried the following methods
Via sparse matrices: convert the values to a coordinate list, i.e. (ix1, ix2, val), then create a sparseMatrix - this nicely sums up everything, and then I need only convert back from the sparse matrix representation to the coordinate list. Speed: good, but it is doing more than is necessary and it doesn't generalize to higher dimensions (e.g. ix1, ix2, ix3) or more general functions than a sum.
Use of lapply and split: by creating a new index that is unique for all (ix1, ix2, ...) n-tuples, I can then use split and lapply. The bad thing here is that the unique index is converted by split into a factor, and this conversion is terribly time consuming. Try system.time({zz <- as.factor(1:10^7)}).
I'm now trying data.table, via a command like sumDT <- DT[,sum(val),by = c("ix1","ix2")]. However, I don't yet see how I can merge sumDT with DT, other than via something like DT2 <- merge(DT, sumDT, by = c("ix1","ix2"))
Is there a faster method for this data.table join than via the merge operation I've described?
[I've also tried bigsplit from the bigtabulate package, and some other methods. Anything that converts to a factor is pretty much out - as far as I can tell, that conversion process is very slow.]
Code to generate data. Naturally, it's better to try a smaller N to see that something works, but not all methods scale very well for N >> 1000.
N <- 10^7
set.seed(2011)
ix1 <- 1 + floor(rexp(N, 0.01))
ix2 <- 1 + floor(rexp(N, 0.01))
ix3 <- 1 + floor(rexp(N, 0.01))
val <- runif(N)
DF <- data.frame(ix1 = ix1, ix2 = ix2, ix3 = ix3, val = val)
DF <- DF[order(DF[,1],DF[,2],DF[,3]),]
DT <- as.data.table(DF)
Well, it's possible you'll find that doing the merge isn't so bad as long as your keys are properly set.
Let's setup the problem again:
N <- 10^6 ## not 10^7 because RAM is tight right now
set.seed(2011)
ix1 <- 1 + floor(rexp(N, 0.01))
ix2 <- 1 + floor(rexp(N, 0.01))
ix3 <- 1 + floor(rexp(N, 0.01))
val <- runif(N)
DT <- data.table(ix1=ix1, ix2=ix2, ix3=ix3, val=val, key=c("ix1", "ix2"))
Now you can calculate your summary stats
info <- DT[, list(summary=sum(val)), by=key(DT)]
And merge the columns "the data.table way", or just with merge
m1 <- DT[info] ## the data.table way
m2 <- merge(DT, info) ## if you're just used to merge
identical(m1, m2)
[1] TRUE
If either of those ways of merging is too slow, you can try a tricky way to build info at the cost of memory:
info2 <- DT[, list(summary=rep(sum(val), length(val))), by=key(DT)]
m3 <- transform(DT, summary=info2$summary)
identical(m1, m3)
[1] TRUE
Now let's see the timing:
#######################################################################
## Using data.table[ ... ] or merge
system.time(info <- DT[, list(summary=sum(val)), by=key(DT)])
user system elapsed
0.203 0.024 0.232
system.time(DT[info])
user system elapsed
0.217 0.078 0.296
system.time(merge(DT, info))
user system elapsed
0.981 0.202 1.185
########################################################################
## Now the two parts of the last version done separately:
system.time(info2 <- DT[, list(summary=rep(sum(val), length(val))), by=key(DT)])
user system elapsed
0.574 0.040 0.616
system.time(transform(DT, summary=info2$summary))
user system elapsed
0.173 0.093 0.267
Or you can skip the intermediate info table building if the following doesn't seem too inscrutable for your tastes:
system.time(m5 <- DT[ DT[, list(summary=sum(val)), by=key(DT)] ])
user system elapsed
0.424 0.101 0.525
identical(m5, m1)
# [1] TRUE
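One more variant worth sketching, which is not part of the original answer and assumes a data.table version that supports grouped assignment by reference: the per-group sum can be written straight into DT with :=, skipping both the intermediate info table and the join.
system.time(DT[, summary := sum(val), by = key(DT)])  # group sum written to every row in place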
