Interpreting a profiling file (Rprof) in R

I have code which takes a lot of time to execute:
dataRaw <- pblapply(femme, function(x) {
  article <- user(x, date = FALSE, weight = FALSE)
  names <- rep(x, length(article))
  result <- matrix(c(names, article), ncol = 2)
})
dataRaw <- do.call(rbind, dataRaw)
dataRaw[, 3] <- vector(length = length(dataRaw[, 2]))
dataRaw[, 3] <- pbapply(dataRaw, 1, function(x) {
  Rprof(filename = "profile.out")
  revisions <- revisionsPage(x[2])
  rank <- rankingContrib(revisions, 50)
  rank <- rank$contrib
  x[1] %in% rank
  Rprof(NULL)
})
result <- as.vector(dataRaw[dataRaw$ranking == TRUE, 2])
Launching the summaryRprof function gives me this:
$by.self
self.time self.pct total.time total.pct
".Call" 0.46 95.83 0.46 95.83
"as.data.frame.numeric" 0.02 4.17 0.02 4.17
$by.total
total.time total.pct self.time self.pct
"FUN" 0.48 100.00 0.00 0.00
"pbapply" 0.48 100.00 0.00 0.00
".Call" 0.46 95.83 0.46 95.83
"<Anonymous>" 0.46 95.83 0.00 0.00
"GET" 0.46 95.83 0.00 0.00
"request_fetch" 0.46 95.83 0.00 0.00
"request_fetch.write_memory" 0.46 95.83 0.00 0.00
"request_perform" 0.46 95.83 0.00 0.00
"revisionsPage" 0.46 95.83 0.00 0.00
"as.data.frame.numeric" 0.02 4.17 0.02 4.17
"as.data.frame" 0.02 4.17 0.00 0.00
"data.frame" 0.02 4.17 0.00 0.00
"rankingContrib" 0.02 4.17 0.00 0.00
$sample.interval
[1] 0.02
$sampling.time
[1] 0.48
It appears that the ".Call" function takes up almost all of the machine time. What is this .Call entry?
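One thing worth noting about the profiling setup itself (a hedged aside, not an answer to the question): Rprof() is started and stopped inside the per-row function, and since the default append = FALSE overwrites profile.out on each call, the file ends up holding only the samples from the last row; the anonymous function also returns the value of Rprof(NULL) rather than the %in% test. A minimal sketch of profiling the whole apply call once, reusing the functions from the post:
Rprof(filename = "profile.out")            # start the sampling profiler once
dataRaw[, 3] <- pbapply(dataRaw, 1, function(x) {
  revisions <- revisionsPage(x[2])
  rank <- rankingContrib(revisions, 50)
  x[1] %in% rank$contrib                   # last expression is the returned value
})
Rprof(NULL)                                # stop profiling
summaryRprof("profile.out")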

Related

Can timing or memory usage illustrate the hazard of growing objects in a loop?

Occasionally, we find novice R programmers building data frames in a for loop, usually by initializing an empty data frame and then iteratively calling rbind. To respond to this inefficient approach, we often cite Patrick Burns' R Inferno, Circle 2: Growing Objects, which emphasizes the hazard of this practice.
In Python's pandas (the other open-source data science tool), experts have described the quadratic copying and O(N^2) behavior (#unutbu here, #Alexander here). Additionally, the docs (see section note) stress the copying problem for datasets, and the wiki explains that Python's list.append does not have this copying problem. I wonder if similar behavior applies to R.
Specifically, my questions:
Can timing alone illustrate or quantify the growing-objects-in-a-loop problem? See the microbenchmark results below. Burns shows timings to illustrate the computational cost of creating a sequence.
Or does memory usage illustrate or quantify the growing-objects-in-a-loop problem? See the Rprof results below. Burns cites using Rprof to show memory consumption within code.
Or is the growing-object problem context-specific, with the general rule of thumb being to avoid loops when building objects?
Consider the following examples of growing a random data frame (500 rows per iteration) with a loop versus a list:
grow_df_loop <- function(n) {
  final_df <- data.frame()
  for (i in 1:n) {
    df <- data.frame(
      group = sample(c("sas", "stata", "spss", "python", "r", "julia"), 500, replace = TRUE),
      int   = sample(1:15, 500, replace = TRUE),
      num   = rnorm(500),
      char  = replicate(500, paste(sample(c(LETTERS, letters, c(0:9)), 3, replace = TRUE), collapse = "")),
      bool  = sample(c(TRUE, FALSE), 500, replace = TRUE),
      date  = as.Date(sample(10957:as.integer(Sys.Date()), 500, replace = TRUE), origin = "1970-01-01")
    )
    final_df <- rbind(final_df, df)
  }
  return(final_df)
}
grow_df_list <- function(n) {
  df_list <- lapply(1:n, function(i)
    data.frame(
      group = sample(c("sas", "stata", "spss", "python", "r", "julia"), 500, replace = TRUE),
      int   = sample(1:15, 500, replace = TRUE),
      num   = rnorm(500),
      char  = replicate(500, paste(sample(c(LETTERS, letters, c(0:9)), 3, replace = TRUE), collapse = "")),
      bool  = sample(c(TRUE, FALSE), 500, replace = TRUE),
      date  = as.Date(sample(10957:as.integer(Sys.Date()), 500, replace = TRUE), origin = "1970-01-01")
    )
  )
  final_df <- do.call(rbind, df_list)
  return(final_df)
}
Timing
Benchmarking by timing confirms that the list approach is more efficient across the different numbers of iterations. But given reproducible, uniform data examples, can timing results capture the difference due to object growth?
library(microbenchmark)
microbenchmark(grow_df_loop(50), grow_df_list(50), times = 5L)
# Unit: milliseconds
# expr min lq mean median uq max neval cld
# grow_df_loop(50) 758.2412 762.3489 809.8988 793.3590 806.4191 929.1256 5 b
# grow_df_list(50) 554.3722 562.1949 577.6891 568.7658 589.8565 613.2560 5 a
microbenchmark(grow_df_loop(100), grow_df_list(100), times = 5L)
# Unit: seconds
# expr min lq mean median uq max neval cld
# grow_df_loop(100) 2.223617 2.225441 2.425668 2.233529 2.677309 2.768447 5 b
# grow_df_list(100) 1.211181 1.255191 1.325670 1.287821 1.396905 1.477252 5 a
microbenchmark(grow_df_loop(500), grow_df_list(500), times = 5L)
# Unit: seconds
# expr min lq mean median uq max neval cld
# grow_df_loop(500) 38.78245 39.74367 41.54976 40.10221 44.36565 44.75483 5 b
# grow_df_list(500) 13.37076 13.90227 14.67498 14.53042 15.49942 16.07203 5 a
Memory Usage
Additionally, profiling by memory shows the "rbind" memory totals growing sizeably with the number of iterations, and more markedly with the loop approach than with the list approach. Given a reproducible, uniform example, can mem.total results capture the difference due to object growth? Is there any other approach to use? (One possibility is sketched after the profiling output below.)
Loop Approach
n = 50
utils::Rprof(tmp <- tempfile(), memory.profiling = TRUE)
output_df1 <- grow_df_loop(50)
utils::Rprof(NULL)
summaryRprof(tmp, memory="both")
unlink(tmp)
# $by.total
# total.time total.pct mem.total self.time self.pct
# "grow_df_loop" 0.58 100.00 349.1 0.00 0.00
# "data.frame" 0.38 65.52 209.4 0.00 0.00
# "paste" 0.28 48.28 186.4 0.06 10.34
# "FUN" 0.26 44.83 150.8 0.02 3.45
# "lapply" 0.26 44.83 150.8 0.00 0.00
# "replicate" 0.26 44.83 150.8 0.00 0.00
# "sapply" 0.26 44.83 150.8 0.00 0.00
# "sample" 0.20 34.48 131.4 0.08 13.79
# "rbind" 0.20 34.48 139.7 0.00 0.00
# "[<-.factor" 0.12 20.69 66.0 0.10 17.24
# "[<-" 0.12 20.69 66.0 0.00 0.00
# "factor" 0.10 17.24 47.8 0.04 6.90
# "as.data.frame" 0.10 17.24 48.5 0.00 0.00
# "as.data.frame.character" 0.10 17.24 48.5 0.00 0.00
# "order" 0.06 10.34 12.9 0.06 10.34
# "as.vector" 0.04 6.90 38.7 0.04 6.90
# "sample.int" 0.04 6.90 18.7 0.02 3.45
# "as.vector.factor" 0.04 6.90 38.7 0.00 0.00
# "deparse" 0.04 6.90 35.6 0.00 0.00
# "!" 0.02 3.45 18.7 0.02 3.45
# ":" 0.02 3.45 0.0 0.02 3.45
# "anyNA" 0.02 3.45 19.0 0.02 3.45
# "as.POSIXlt.POSIXct" 0.02 3.45 10.1 0.02 3.45
# "c" 0.02 3.45 19.8 0.02 3.45
# "is.na" 0.02 3.45 18.9 0.02 3.45
# "length" 0.02 3.45 13.8 0.02 3.45
# "mode" 0.02 3.45 16.6 0.02 3.45
# "%in%" 0.02 3.45 16.6 0.00 0.00
# ".deparseOpts" 0.02 3.45 19.0 0.00 0.00
# "as.Date" 0.02 3.45 10.1 0.00 0.00
# "as.POSIXlt" 0.02 3.45 10.1 0.00 0.00
# "Sys.Date" 0.02 3.45 10.1 0.00 0.00
#
# $sample.interval
# [1] 0.02
#
# $sampling.time
# [1] 0.58
n = 100
# $by.total
# total.time total.pct mem.total self.time self.pct
# "grow_df_loop" 1.74 98.86 963.0 0.00 0.00
# "rbind" 1.06 60.23 599.3 0.06 3.41
# "data.frame" 0.68 38.64 363.7 0.02 1.14
# "lapply" 0.50 28.41 239.0 0.04 2.27
# "replicate" 0.50 28.41 239.0 0.00 0.00
# "sapply" 0.50 28.41 239.0 0.00 0.00
# "paste" 0.46 26.14 218.4 0.06 3.41
# "FUN" 0.46 26.14 218.4 0.00 0.00
# "factor" 0.44 25.00 249.2 0.24 13.64
# "sample" 0.40 22.73 179.2 0.10 5.68
# "[<-" 0.38 21.59 244.3 0.00 0.00
# "[<-.factor" 0.34 19.32 229.5 0.30 17.05
# "c" 0.26 14.77 136.6 0.26 14.77
# "as.vector" 0.24 13.64 101.2 0.24 13.64
# "as.vector.factor" 0.24 13.64 101.2 0.00 0.00
# "order" 0.14 7.95 87.3 0.14 7.95
# "as.data.frame" 0.14 7.95 87.3 0.00 0.00
# "as.data.frame.character" 0.14 7.95 87.3 0.00 0.00
# "sample.int" 0.10 5.68 28.2 0.10 5.68
# "unique" 0.10 5.68 64.9 0.00 0.00
# "is.na" 0.06 3.41 62.4 0.06 3.41
# "unique.default" 0.04 2.27 42.4 0.04 2.27
# "[<-.Date" 0.04 2.27 14.9 0.00 0.00
# ".Call" 0.02 1.14 0.0 0.02 1.14
# "Make.row.names" 0.02 1.14 0.0 0.02 1.14
# "NextMethod" 0.02 1.14 0.0 0.02 1.14
# "structure" 0.02 1.14 10.3 0.02 1.14
# "unclass" 0.02 1.14 14.9 0.02 1.14
# ".Date" 0.02 1.14 0.0 0.00 0.00
# ".rs.enqueClientEvent" 0.02 1.14 0.0 0.00 0.00
# "as.Date" 0.02 1.14 23.2 0.00 0.00
# "as.Date.character" 0.02 1.14 23.2 0.00 0.00
# "as.Date.numeric" 0.02 1.14 23.2 0.00 0.00
# "charToDate" 0.02 1.14 23.2 0.00 0.00
# "hook" 0.02 1.14 0.0 0.00 0.00
# "is.na.POSIXlt" 0.02 1.14 23.2 0.00 0.00
# "utils::Rprof" 0.02 1.14 0.0 0.00 0.00
#
# $sample.interval
# [1] 0.02
#
# $sampling.time
# [1] 1.76
n = 500
# $by.total
# total.time total.pct mem.total self.time self.pct
# "grow_df_loop" 28.12 100.00 15557.7 0.00 0.00
# "rbind" 25.30 89.97 13418.5 3.06 10.88
# "factor" 8.94 31.79 5026.5 6.98 24.82
# "[<-" 8.72 31.01 4486.9 0.02 0.07
# "[<-.factor" 7.62 27.10 3915.5 7.32 26.03
# "unique" 3.06 10.88 2060.9 0.00 0.00
# "as.vector" 2.96 10.53 1250.1 2.96 10.53
# "as.vector.factor" 2.96 10.53 1250.1 0.00 0.00
# "data.frame" 2.82 10.03 2139.1 0.02 0.07
# "unique.default" 2.30 8.18 1657.9 2.30 8.18
# "replicate" 1.88 6.69 1364.7 0.00 0.00
# "sapply" 1.88 6.69 1364.7 0.00 0.00
# "FUN" 1.84 6.54 1367.2 0.18 0.64
# "lapply" 1.84 6.54 1338.8 0.02 0.07
# "paste" 1.70 6.05 1281.3 0.38 1.35
# "sample" 1.36 4.84 1089.2 0.20 0.71
# "[<-.Date" 1.08 3.84 571.4 0.00 0.00
# "c" 1.04 3.70 688.7 1.04 3.70
# ".Date" 0.96 3.41 488.0 0.34 1.21
# "sample.int" 0.76 2.70 584.2 0.74 2.63
# "as.data.frame" 0.70 2.49 533.6 0.00 0.00
# "as.data.frame.character" 0.64 2.28 476.0 0.00 0.00
# "NextMethod" 0.62 2.20 424.7 0.62 2.20
# "order" 0.60 2.13 475.5 0.50 1.78
# "structure" 0.32 1.14 155.5 0.32 1.14
# "is.na" 0.28 1.00 150.5 0.26 0.92
# "Make.row.names" 0.12 0.43 153.8 0.12 0.43
# "unclass" 0.12 0.43 83.3 0.12 0.43
# "as.Date" 0.10 0.36 120.1 0.02 0.07
# "length" 0.06 0.21 79.2 0.06 0.21
# "seq.int" 0.06 0.21 57.0 0.06 0.21
# "vapply" 0.06 0.21 84.6 0.02 0.07
# ":" 0.04 0.14 1.1 0.04 0.14
# "as.POSIXlt.POSIXct" 0.04 0.14 57.7 0.04 0.14
# "is.factor" 0.04 0.14 0.0 0.04 0.14
# "deparse" 0.04 0.14 55.0 0.02 0.07
# "eval" 0.04 0.14 36.2 0.02 0.07
# "match.arg" 0.04 0.14 25.2 0.02 0.07
# "match.fun" 0.04 0.14 32.4 0.02 0.07
# "as.data.frame.integer" 0.04 0.14 55.0 0.00 0.00
# "as.POSIXlt" 0.04 0.14 57.7 0.00 0.00
# "force" 0.04 0.14 55.0 0.00 0.00
# "make.names" 0.04 0.14 42.1 0.00 0.00
# "Sys.Date" 0.04 0.14 57.7 0.00 0.00
# "!" 0.02 0.07 29.6 0.02 0.07
# "$" 0.02 0.07 2.6 0.02 0.07
# "any" 0.02 0.07 18.3 0.02 0.07
# "as.data.frame.numeric" 0.02 0.07 2.6 0.02 0.07
# "as.data.frame.vector" 0.02 0.07 21.6 0.02 0.07
# "as.list" 0.02 0.07 26.6 0.02 0.07
# "baseenv" 0.02 0.07 25.2 0.02 0.07
# "is.ordered" 0.02 0.07 14.5 0.02 0.07
# "lengths" 0.02 0.07 14.9 0.02 0.07
# "levels" 0.02 0.07 0.0 0.02 0.07
# "mode" 0.02 0.07 30.7 0.02 0.07
# "names" 0.02 0.07 0.0 0.02 0.07
# "rnorm" 0.02 0.07 29.6 0.02 0.07
# "%in%" 0.02 0.07 30.7 0.00 0.00
# "as.Date.character" 0.02 0.07 2.6 0.00 0.00
# "as.Date.numeric" 0.02 0.07 2.6 0.00 0.00
# "as.POSIXct" 0.02 0.07 2.6 0.00 0.00
# "as.POSIXct.POSIXlt" 0.02 0.07 2.6 0.00 0.00
# "charToDate" 0.02 0.07 2.6 0.00 0.00
# "eval.parent" 0.02 0.07 11.0 0.00 0.00
# "is.na.POSIXlt" 0.02 0.07 2.6 0.00 0.00
# "simplify2array" 0.02 0.07 14.9 0.00 0.00
#
# $sample.interval
# [1] 0.02
#
# $sampling.time
# [1] 28.12
List Approach
n = 50
# $by.total
# total.time total.pct mem.total self.time self.pct
# "grow_df_list" 0.40 100 257.0 0.00 0
# "data.frame" 0.32 80 175.6 0.02 5
# "lapply" 0.32 80 175.6 0.02 5
# "FUN" 0.32 80 175.6 0.00 0
# "replicate" 0.24 60 129.6 0.00 0
# "sapply" 0.24 60 129.6 0.00 0
# "paste" 0.22 55 119.2 0.10 25
# "sample" 0.12 30 49.4 0.00 0
# "sample.int" 0.08 20 39.1 0.08 20
# "<Anonymous>" 0.08 20 81.4 0.00 0
# "do.call" 0.08 20 81.4 0.00 0
# "rbind" 0.08 20 81.4 0.00 0
# "factor" 0.06 15 29.7 0.02 5
# "as.data.frame" 0.06 15 29.7 0.00 0
# "as.data.frame.character" 0.06 15 29.7 0.00 0
# "c" 0.04 10 10.3 0.04 10
# "order" 0.04 10 17.3 0.04 10
# "unique.default" 0.04 10 31.1 0.04 10
# "[<-" 0.04 10 50.3 0.00 0
# "unique" 0.04 10 31.1 0.00 0
# ".Date" 0.02 5 27.9 0.02 5
# "[<-.factor" 0.02 5 22.4 0.02 5
# "[<-.Date" 0.02 5 27.9 0.00 0
#
# $sample.interval
# [1] 0.02
#
# $sampling.time
# [1] 0.4
n = 100
# $by.total
# total.time total.pct mem.total self.time self.pct
# "grow_df_list" 1.00 100 620.4 0.00 0
# "data.frame" 0.66 66 401.8 0.00 0
# "FUN" 0.66 66 401.8 0.00 0
# "lapply" 0.66 66 401.8 0.00 0
# "paste" 0.42 42 275.3 0.14 14
# "replicate" 0.42 42 275.3 0.00 0
# "sapply" 0.42 42 275.3 0.00 0
# "rbind" 0.34 34 218.6 0.02 2
# "<Anonymous>" 0.34 34 218.6 0.00 0
# "do.call" 0.34 34 218.6 0.00 0
# "sample" 0.28 28 188.6 0.08 8
# "unique.default" 0.20 20 90.1 0.20 20
# "unique" 0.20 20 90.1 0.00 0
# "as.data.frame" 0.18 18 81.2 0.00 0
# "factor" 0.16 16 81.2 0.02 2
# "as.data.frame.character" 0.16 16 81.2 0.00 0
# "[<-.factor" 0.14 14 112.0 0.14 14
# "sample.int" 0.14 14 96.8 0.14 14
# "[<-" 0.14 14 112.0 0.00 0
# "order" 0.12 12 51.2 0.12 12
# "c" 0.06 6 45.8 0.06 6
# "as.Date" 0.04 4 28.3 0.02 2
# "length" 0.02 2 17.0 0.02 2
# "strptime" 0.02 2 11.2 0.02 2
# "structure" 0.02 2 0.0 0.02 2
# "as.data.frame.integer" 0.02 2 0.0 0.00 0
# "as.Date.character" 0.02 2 11.2 0.00 0
# "as.Date.numeric" 0.02 2 11.2 0.00 0
# "charToDate" 0.02 2 11.2 0.00 0
#
# $sample.interval
# [1] 0.02
#
# $sampling.time
# [1] 1
n = 500
# $by.total
# total.time total.pct mem.total self.time self.pct
# "grow_df_list" 9.40 100.00 5621.8 0.00 0.00
# "rbind" 6.12 65.11 3633.5 0.44 4.68
# "<Anonymous>" 6.12 65.11 3633.5 0.00 0.00
# "do.call" 6.12 65.11 3633.5 0.00 0.00
# "lapply" 3.28 34.89 1988.3 0.34 3.62
# "FUN" 3.28 34.89 1988.3 0.10 1.06
# "data.frame" 3.28 34.89 1988.3 0.02 0.21
# "[<-" 3.28 34.89 2118.4 0.00 0.00
# "[<-.factor" 3.00 31.91 1829.1 3.00 31.91
# "replicate" 2.36 25.11 1422.9 0.00 0.00
# "sapply" 2.36 25.11 1422.9 0.00 0.00
# "unique" 2.32 24.68 1189.9 0.00 0.00
# "paste" 1.98 21.06 1194.2 0.70 7.45
# "unique.default" 1.96 20.85 1017.8 1.96 20.85
# "sample" 1.20 12.77 707.4 0.44 4.68
# "as.data.frame" 0.88 9.36 540.5 0.02 0.21
# "as.data.frame.character" 0.78 8.30 496.2 0.00 0.00
# "factor" 0.72 7.66 444.2 0.06 0.64
# "c" 0.68 7.23 379.6 0.68 7.23
# "order" 0.64 6.81 385.1 0.64 6.81
# "sample.int" 0.40 4.26 233.0 0.38 4.04
# ".Date" 0.28 2.98 289.3 0.10 1.06
# "[<-.Date" 0.28 2.98 289.3 0.00 0.00
# "NextMethod" 0.18 1.91 171.2 0.18 1.91
# "deparse" 0.08 0.85 54.6 0.02 0.21
# "%in%" 0.08 0.85 54.6 0.00 0.00
# "mode" 0.08 0.85 54.6 0.00 0.00
# "length" 0.06 0.64 10.4 0.06 0.64
# "structure" 0.06 0.64 30.8 0.04 0.43
# ".deparseOpts" 0.06 0.64 49.1 0.02 0.21
# "[[" 0.06 0.64 34.2 0.02 0.21
# ":" 0.04 0.43 33.6 0.04 0.43
# "[[.data.frame" 0.04 0.43 22.6 0.04 0.43
# "force" 0.04 0.43 20.0 0.00 0.00
# "as.vector" 0.02 0.21 0.0 0.02 0.21
# "is.na" 0.02 0.21 0.0 0.02 0.21
# "levels" 0.02 0.21 14.6 0.02 0.21
# "make.names" 0.02 0.21 9.4 0.02 0.21
# "pmatch" 0.02 0.21 17.3 0.02 0.21
# "as.data.frame.Date" 0.02 0.21 5.5 0.00 0.00
# "as.data.frame.integer" 0.02 0.21 0.0 0.00 0.00
# "as.data.frame.logical" 0.02 0.21 14.5 0.00 0.00
# "as.data.frame.numeric" 0.02 0.21 13.5 0.00 0.00
# "as.data.frame.vector" 0.02 0.21 17.3 0.00 0.00
# "simplify2array" 0.02 0.21 0.0 0.00 0.00
#
# $sample.interval
# [1] 0.02
#
# $sampling.time
# [1] 9.4
Graphs (using a different call to save $by.total results)
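A hedged alternative for the "any other approach" question above (not from the original post): bench::mark() from the bench package reports total memory allocated (mem_alloc) and garbage collections per expression, which quantifies the extra copying of the loop/rbind approach more directly than Rprof's sampled mem.total. A minimal sketch reusing the two functions defined above:
library(bench)

set.seed(42)
bench::mark(
  loop = grow_df_loop(100),
  list = grow_df_list(100),
  iterations = 5,
  check = FALSE   # results differ because each call draws fresh random data
)
# mem_alloc would be expected to be noticeably larger for the loop version.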

Fast replicated random sampling from raster

I have to repeat, 1000 times, a random sampling of one point per grid polygon over a raster, for grid sizes from 5 to 25.
With a 50 x 50 raster (2,500 cells) the process takes more than an hour with the following code:
library(raster)
library(dplyr)

# This is the script for random sampling inside the grid cells
sample_grid <- function(r, w, n) {
  grid <- raster(extent(r))
  res(grid) <- w
  proj4string(grid) <- proj4string(r)
  gridpolygon <- rasterToPolygons(grid)
  pickpts <- sapply(gridpolygon@polygons, spsample, n = n, type = "random")
  sapply(pickpts, FUN = extract, x = r)
}

# Let's make a raster
r <- raster(ncol = 50, nrow = 50, xmn = 0, xmx = 50, ymn = 0, ymx = 50)
values(r) <- runif(ncell(r))

# Repeat the random sampling process 1000 times for different grid sizes
sapply(5:25, function(x) replicate(1000, sample_grid(r, x, 1) %>%
                                     mean(., na.rm = TRUE)))
I would like to make it faster. A reasonable target would be about 15 minutes.
Do you have any suggestions?
This is the output from Rprof:
Rprof(tmp <- tempfile())
sample_grid(r, 10, 1) %>% mean(., na.rm = TRUE)
Rprof(NULL)
summaryRprof(tmp)
#################### summaryRprof output ####################
$by.self
self.time self.pct total.time total.pct
"eval" 0.02 14.29 0.14 100.00
"initialize" 0.02 14.29 0.06 42.86
"getClassDef" 0.02 14.29 0.04 28.57
".getClassFromCache" 0.02 14.29 0.02 14.29
"aperm" 0.02 14.29 0.02 14.29
"merge.data.frame" 0.02 14.29 0.02 14.29
"validityMethod" 0.02 14.29 0.02 14.29
$by.total
total.time total.pct self.time self.pct
"eval" 0.14 100.00 0.02 14.29
"%>%" 0.14 100.00 0.00 0.00
".local" 0.14 100.00 0.00 0.00
"FUN" 0.14 100.00 0.00 0.00
"lapply" 0.14 100.00 0.00 0.00
"sample_grid" 0.14 100.00 0.00 0.00
"sapply" 0.14 100.00 0.00 0.00
"standardGeneric" 0.14 100.00 0.00 0.00
"initialize" 0.06 42.86 0.02 14.29
"new" 0.06 42.86 0.00 0.00
"getClassDef" 0.04 28.57 0.02 14.29
".cellValues" 0.04 28.57 0.00 0.00
".readCells" 0.04 28.57 0.00 0.00
".xyValues" 0.04 28.57 0.00 0.00
"CRS" 0.04 28.57 0.00 0.00
"over" 0.04 28.57 0.00 0.00
"sample.Polygon" 0.04 28.57 0.00 0.00
"validObject" 0.04 28.57 0.00 0.00
".getClassFromCache" 0.02 14.29 0.02 14.29
"aperm" 0.02 14.29 0.02 14.29
"merge.data.frame" 0.02 14.29 0.02 14.29
"validityMethod" 0.02 14.29 0.02 14.29
".bboxCoords" 0.02 14.29 0.00 0.00
".uniqueNames" 0.02 14.29 0.00 0.00
"[" 0.02 14.29 0.00 0.00
"anyStrings" 0.02 14.29 0.00 0.00
"apply" 0.02 14.29 0.00 0.00
"as.matrix" 0.02 14.29 0.00 0.00
"identical" 0.02 14.29 0.00 0.00
"identicalCRS" 0.02 14.29 0.00 0.00
"is" 0.02 14.29 0.00 0.00
"match.arg" 0.02 14.29 0.00 0.00
"merge" 0.02 14.29 0.00 0.00
"merge.default" 0.02 14.29 0.00 0.00
"names" 0.02 14.29 0.00 0.00
"SpatialPolygons" 0.02 14.29 0.00 0.00
"stopifnot" 0.02 14.29 0.00 0.00
"t" 0.02 14.29 0.00 0.00
"table" 0.02 14.29 0.00 0.00
"validNames" 0.02 14.29 0.00 0.00
$sample.interval
[1] 0.02
$sampling.time
[1] 0.14
###############################################################
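A hedged sketch of one possible speed-up (not part of the original question): in sample_grid() the grid polygons are rebuilt by rasterToPolygons() on every one of the 1000 replicates, even though they depend only on the cell size w. Building them once per grid size and replicating only the sampling and extraction step avoids that repeated work; the names below mirror the question's code:
library(raster)
library(sp)

sample_from_grid <- function(gridpolygon, r, n) {
  pickpts <- sapply(gridpolygon@polygons, spsample, n = n, type = "random")
  mean(sapply(pickpts, FUN = extract, x = r), na.rm = TRUE)
}

r <- raster(ncol = 50, nrow = 50, xmn = 0, xmx = 50, ymn = 0, ymx = 50)
values(r) <- runif(ncell(r))

res_by_size <- sapply(5:25, function(w) {
  grid <- raster(extent(r))
  res(grid) <- w
  proj4string(grid) <- proj4string(r)
  gridpolygon <- rasterToPolygons(grid)   # built once per grid size, not 1000 times
  replicate(1000, sample_from_grid(gridpolygon, r, 1))
})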

R profiling spending a lot of time using .External2

I am learning how to use R profiling, and have run the Rprof command on my code.
The summaryRprof function has shown that a lot of time is spent using .External2. What is this? Additionally, a large proportion of the total time is spent on <Anonymous>; is there a way to find out what this is?
> summaryRprof("test")
$by.self
self.time self.pct total.time total.pct
".External2" 4.30 27.74 4.30 27.74
"format.POSIXlt" 2.70 17.42 2.90 18.71
"which.min" 2.38 15.35 4.12 26.58
"-" 1.30 8.39 1.30 8.39
"order" 1.16 7.48 1.16 7.48
"match" 0.58 3.74 0.58 3.74
"file" 0.44 2.84 0.44 2.84
"abs" 0.40 2.58 0.40 2.58
"scan" 0.30 1.94 0.30 1.94
"anyDuplicated.default" 0.20 1.29 0.20 1.29
"unique.default" 0.20 1.29 0.20 1.29
"unlist" 0.18 1.16 0.20 1.29
"c" 0.16 1.03 0.16 1.03
"data.frame" 0.14 0.90 0.22 1.42
"structure" 0.12 0.77 1.74 11.23
"as.POSIXct.POSIXlt" 0.12 0.77 0.12 0.77
"strptime" 0.12 0.77 0.12 0.77
"as.character" 0.08 0.52 0.90 5.81
"make.unique" 0.08 0.52 0.16 1.03
"[.data.frame" 0.06 0.39 1.54 9.94
"<Anonymous>" 0.04 0.26 4.34 28.00
"lapply" 0.04 0.26 1.70 10.97
"rbind" 0.04 0.26 0.94 6.06
"as.POSIXlt.POSIXct" 0.04 0.26 0.04 0.26
"ifelse" 0.04 0.26 0.04 0.26
"paste" 0.02 0.13 0.92 5.94
"merge.data.frame" 0.02 0.13 0.56 3.61
"[<-.factor" 0.02 0.13 0.52 3.35
"stopifnot" 0.02 0.13 0.04 0.26
".deparseOpts" 0.02 0.13 0.02 0.13
".External" 0.02 0.13 0.02 0.13
"close.connection" 0.02 0.13 0.02 0.13
"doTryCatch" 0.02 0.13 0.02 0.13
"is.na" 0.02 0.13 0.02 0.13
"is.na<-.default" 0.02 0.13 0.02 0.13
"mean" 0.02 0.13 0.02 0.13
"seq.int" 0.02 0.13 0.02 0.13
"sum" 0.02 0.13 0.02 0.13
"sys.function" 0.02 0.13 0.02 0.13
$by.total
total.time total.pct self.time self.pct
"write.table" 5.10 32.90 0.00 0.00
"<Anonymous>" 4.34 28.00 0.04 0.26
".External2" 4.30 27.74 4.30 27.74
"mapply" 4.22 27.23 0.00 0.00
"head" 4.16 26.84 0.00 0.00
"which.min" 4.12 26.58 2.38 15.35
"eval" 3.16 20.39 0.00 0.00
"eval.parent" 3.14 20.26 0.00 0.00
"write.csv" 3.14 20.26 0.00 0.00
"format" 2.92 18.84 0.00 0.00
"format.POSIXlt" 2.90 18.71 2.70 17.42
"do.call" 1.78 11.48 0.00 0.00
"structure" 1.74 11.23 0.12 0.77
"lapply" 1.70 10.97 0.04 0.26
"FUN" 1.66 10.71 0.00 0.00
"format.POSIXct" 1.62 10.45 0.00 0.00
"[.data.frame" 1.54 9.94 0.06 0.39
"[" 1.54 9.94 0.00 0.00
"-" 1.30 8.39 1.30 8.39
"order" 1.16 7.48 1.16 7.48
"rbind" 0.94 6.06 0.04 0.26
"paste" 0.92 5.94 0.02 0.13
"as.character" 0.90 5.81 0.08 0.52
"read.csv" 0.84 5.42 0.00 0.00
"read.table" 0.84 5.42 0.00 0.00
"as.character.POSIXt" 0.82 5.29 0.00 0.00
"match" 0.58 3.74 0.58 3.74
"merge.data.frame" 0.56 3.61 0.02 0.13
"merge" 0.56 3.61 0.00 0.00
"[<-.factor" 0.52 3.35 0.02 0.13
"[<-" 0.52 3.35 0.00 0.00
"strftime" 0.48 3.10 0.00 0.00
"file" 0.44 2.84 0.44 2.84
"weekdays" 0.42 2.71 0.00 0.00
"weekdays.POSIXt" 0.42 2.71 0.00 0.00
"abs" 0.40 2.58 0.40 2.58
"unique" 0.38 2.45 0.00 0.00
"scan" 0.30 1.94 0.30 1.94
"data.frame" 0.22 1.42 0.14 0.90
"cbind" 0.22 1.42 0.00 0.00
"anyDuplicated.default" 0.20 1.29 0.20 1.29
"unique.default" 0.20 1.29 0.20 1.29
"unlist" 0.20 1.29 0.18 1.16
"anyDuplicated" 0.20 1.29 0.00 0.00
"as.POSIXct" 0.18 1.16 0.00 0.00
"as.POSIXlt" 0.18 1.16 0.00 0.00
"c" 0.16 1.03 0.16 1.03
"make.unique" 0.16 1.03 0.08 0.52
"as.POSIXct.POSIXlt" 0.12 0.77 0.12 0.77
"strptime" 0.12 0.77 0.12 0.77
"as.POSIXlt.character" 0.12 0.77 0.00 0.00
"object.size" 0.12 0.77 0.00 0.00
"as.POSIXct.default" 0.10 0.65 0.00 0.00
"Ops.POSIXt" 0.08 0.52 0.00 0.00
"type.convert" 0.08 0.52 0.00 0.00
"!=" 0.06 0.39 0.00 0.00
"as.POSIXlt.factor" 0.06 0.39 0.00 0.00
"as.POSIXlt.POSIXct" 0.04 0.26 0.04 0.26
"ifelse" 0.04 0.26 0.04 0.26
"stopifnot" 0.04 0.26 0.02 0.13
"$" 0.04 0.26 0.00 0.00
"$.data.frame" 0.04 0.26 0.00 0.00
"[[" 0.04 0.26 0.00 0.00
"[[.data.frame" 0.04 0.26 0.00 0.00
"head.default" 0.04 0.26 0.00 0.00
".deparseOpts" 0.02 0.13 0.02 0.13
".External" 0.02 0.13 0.02 0.13
"close.connection" 0.02 0.13 0.02 0.13
"doTryCatch" 0.02 0.13 0.02 0.13
"is.na" 0.02 0.13 0.02 0.13
"is.na<-.default" 0.02 0.13 0.02 0.13
"mean" 0.02 0.13 0.02 0.13
"seq.int" 0.02 0.13 0.02 0.13
"sum" 0.02 0.13 0.02 0.13
"sys.function" 0.02 0.13 0.02 0.13
"%in%" 0.02 0.13 0.00 0.00
".rs.getSingleClass" 0.02 0.13 0.00 0.00
"[.POSIXlt" 0.02 0.13 0.00 0.00
"==" 0.02 0.13 0.00 0.00
"close" 0.02 0.13 0.00 0.00
"data.row.names" 0.02 0.13 0.00 0.00
"deparse" 0.02 0.13 0.00 0.00
"factor" 0.02 0.13 0.00 0.00
"is.na<-" 0.02 0.13 0.00 0.00
"match.arg" 0.02 0.13 0.00 0.00
"match.call" 0.02 0.13 0.00 0.00
"pushBack" 0.02 0.13 0.00 0.00
"seq" 0.02 0.13 0.00 0.00
"seq.POSIXt" 0.02 0.13 0.00 0.00
"simplify2array" 0.02 0.13 0.00 0.00
"tryCatch" 0.02 0.13 0.00 0.00
"tryCatchList" 0.02 0.13 0.00 0.00
"tryCatchOne" 0.02 0.13 0.00 0.00
"which" 0.02 0.13 0.00 0.00
$sample.interval
[1] 0.02
$sampling.time
[1] 15.5
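.External2, like .Call, is one of R's interfaces for invoking compiled C code, so the time attributed to it is really spent inside whichever base function invokes it (judging from $by.total, most likely write.table and the POSIXlt formatting). To see which source lines the .External2 and <Anonymous> time comes from, Rprof's line profiling can help; a minimal hedged sketch, assuming the profiled code lives in a file such as myscript.R (a hypothetical name):
Rprof("test-lines.out", line.profiling = TRUE)
source("myscript.R", keep.source = TRUE)          # srcrefs are needed for line-level output
Rprof(NULL)
summaryRprof("test-lines.out", lines = "show")    # attributes time to file:line locations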

R: `unlist` using a lot of time when summing a subset of a matrix

I have a program which pulls data out of a MySQL database, decodes a pair of
binary columns, and then sums together a subset of the rows within the pair
of binary columns. Running the program on a sample data set takes 12-14 seconds,
with 9-10 of those taken up by unlist. I'm wondering if there is any way to
speed things up.
Structure of the table
The rows I'm getting from the database look like:
| array_length | mz_array | intensity_array |
|--------------+-----------------+-----------------|
| 98 | 00c077e66340... | 002091c37240... |
| 74 | c04a7c7340... | db87734000... |
where array_length is the number of little-endian doubles in the two arrays
(they are guaranteed to be the same length). So the first row has 98 doubles in
each of mz_array and intensity_array. array_length has a mean of 825 and a
median of 620 with 13,000 rows.
Decoding the binary arrays
Each row gets decoded by being passed to the following function. Once the binary
arrays have been decoded, array_length is no longer needed.
DecodeSpectrum <- function(array_length, mz_array, intensity_array) {
  sapply(list(mz_array = mz_array, intensity_array = intensity_array),
         readBin,
         what = "double",
         endian = "little",
         n = array_length)
}
Summing the arrays
The next step is to sum the values in intensity_array, but only if their
corresponding entry in mz_array is within a certain window. The arrays are
ordered by mz_array, ascending. I am using the following function to sum up
the intensity_array values:
SumInWindow <- function(spectrum, lower, upper) {
  sum(spectrum[spectrum[, 1] > lower & spectrum[, 1] < upper, 2])
}
Where spectrum is the output from DecodeSpectrum, a matrix.
Operating over list of rows
Each row is handled by:
ProcessSegment <- function(spectra, window_bounds) {
  lower <- window_bounds[1]
  upper <- window_bounds[2]
  ## Decode a single spectrum and sum the intensities within the window.
  SumDecode <- function(...) {
    SumInWindow(DecodeSpectrum(...), lower, upper)
  }
  do.call("mapply", c(SumDecode, spectra))
}
And finally, the rows are fetched and handed off to ProcessSegment with this
function:
ProcessAllSegments <- function(conn, window_bounds) {
  nextSeg <- function() odbcFetchRows(conn, max = batchSize, buffsize = batchSize)
  while ((res <- nextSeg())$stat == 1 && res$data[[1]] > 0) {
    print(ProcessSegment(res$data, window_bounds))
  }
}
I'm doing the fetches in segments so that R doesn't have to load the entire data
set into memory at once (it was causing out of memory errors). I'm using the
RODBC driver because the RMySQL driver isn't able to return unsullied binary
values (as far as I could tell).
Performance
For a sample data set of about 140MiB, the whole process takes around 14 seconds
to complete, which is not that bad for 13,000 rows. Still, I think there's room
for improvement, especially when looking at the Rprof output:
$by.self
self.time self.pct total.time total.pct
"unlist" 10.26 69.99 10.30 70.26
"SumInWindow" 1.06 7.23 13.92 94.95
"mapply" 0.48 3.27 14.44 98.50
"as.vector" 0.44 3.00 10.60 72.31
"array" 0.40 2.73 0.40 2.73
"FUN" 0.40 2.73 0.40 2.73
"list" 0.30 2.05 0.30 2.05
"<" 0.22 1.50 0.22 1.50
"unique" 0.18 1.23 0.36 2.46
">" 0.18 1.23 0.18 1.23
".Call" 0.16 1.09 0.16 1.09
"lapply" 0.14 0.95 0.86 5.87
"simplify2array" 0.10 0.68 11.48 78.31
"&" 0.10 0.68 0.10 0.68
"sapply" 0.06 0.41 12.36 84.31
"c" 0.06 0.41 0.06 0.41
"is.factor" 0.04 0.27 0.04 0.27
"match.fun" 0.04 0.27 0.04 0.27
"<Anonymous>" 0.02 0.14 13.94 95.09
"unique.default" 0.02 0.14 0.06 0.41
$by.total
total.time total.pct self.time self.pct
"ProcessAllSegments" 14.66 100.00 0.00 0.00
"do.call" 14.50 98.91 0.00 0.00
"ProcessSegment" 14.50 98.91 0.00 0.00
"mapply" 14.44 98.50 0.48 3.27
"<Anonymous>" 13.94 95.09 0.02 0.14
"SumInWindow" 13.92 94.95 1.06 7.23
"sapply" 12.36 84.31 0.06 0.41
"DecodeSpectrum" 12.36 84.31 0.00 0.00
"simplify2array" 11.48 78.31 0.10 0.68
"as.vector" 10.60 72.31 0.44 3.00
"unlist" 10.30 70.26 10.26 69.99
"lapply" 0.86 5.87 0.14 0.95
"array" 0.40 2.73 0.40 2.73
"FUN" 0.40 2.73 0.40 2.73
"unique" 0.36 2.46 0.18 1.23
"list" 0.30 2.05 0.30 2.05
"<" 0.22 1.50 0.22 1.50
">" 0.18 1.23 0.18 1.23
".Call" 0.16 1.09 0.16 1.09
"nextSeg" 0.16 1.09 0.00 0.00
"odbcFetchRows" 0.16 1.09 0.00 0.00
"&" 0.10 0.68 0.10 0.68
"c" 0.06 0.41 0.06 0.41
"unique.default" 0.06 0.41 0.02 0.14
"is.factor" 0.04 0.27 0.04 0.27
"match.fun" 0.04 0.27 0.04 0.27
$sample.interval
[1] 0.02
$sampling.time
[1] 14.66
I'm surprised to see unlist taking up so much time; this says to me that there
might be some redundant copying or rearranging going on. I'm new at R, so it's
entirely possible that this is normal, but I'd like to know if there's anything
glaringly wrong.
Update: sample data posted
I've posted the full version of the program
here and the sample data I use
here. The sample data is the
gziped output from mysqldump. You need to set the proper environment
variables for the script to connect to the database:
MZDB_HOST
MZDB_DB
MZDB_USER
MZDB_PW
To run the script, you must specify the run_id and the window boundaries. I
run the program like this:
Rscript ChromatoGen.R -i 1 -m 600 -M 1200
These window bounds are pretty arbitrary, but select roughly a half to a third
of the range. If you want to print the results, put a print() around the call
to ProcessSegment within ProcessAllSegments. Using those parameters, the
first 5 should be:
[1] 7139.682 4522.314 3435.512 5255.024 5947.999
You probably want to limit the number of results, unless you want 13,000
numbers filling your screen :) The simplest way is to just add LIMIT 5 at the
end of the query.
I've figured it out!
The problem was in the sapply() call. sapply does a fair amount of
renaming and property setting which slows things down massively for arrays of
this size. Replacing DecodeSpectrum with the following code brought the sample
time from 14.66 seconds down to 3.36 seconds, a more than 4-fold speedup!
Here's the new body of DecodeSpectrum:
DecodeSpectrum <- function(array_length, mz_array, intensity_array) {
  ## needed to tell `vapply` how long the result should be. No, there isn't an
  ## easier way to do this.
  resultLength <- rep(1.0, array_length)
  vapply(list(mz_array = mz_array, intensity_array = intensity_array),
         readBin,
         resultLength,
         what = "double",
         endian = "little",
         n = array_length,
         USE.NAMES = FALSE)
}
The Rprof output now looks like:
$by.self
self.time self.pct total.time total.pct
"<Anonymous>" 0.64 19.75 2.14 66.05
"DecodeSpectrum" 0.46 14.20 1.12 34.57
".Call" 0.42 12.96 0.42 12.96
"FUN" 0.38 11.73 0.38 11.73
"&" 0.16 4.94 0.16 4.94
">" 0.14 4.32 0.14 4.32
"c" 0.14 4.32 0.14 4.32
"list" 0.14 4.32 0.14 4.32
"vapply" 0.12 3.70 0.66 20.37
"mapply" 0.10 3.09 2.54 78.40
"simplify2array" 0.10 3.09 0.30 9.26
"<" 0.08 2.47 0.08 2.47
"t" 0.04 1.23 2.72 83.95
"as.vector" 0.04 1.23 0.08 2.47
"unlist" 0.04 1.23 0.08 2.47
"lapply" 0.04 1.23 0.04 1.23
"unique.default" 0.04 1.23 0.04 1.23
"NextSegment" 0.02 0.62 0.50 15.43
"odbcFetchRows" 0.02 0.62 0.46 14.20
"unique" 0.02 0.62 0.10 3.09
"array" 0.02 0.62 0.04 1.23
"attr" 0.02 0.62 0.02 0.62
"match.fun" 0.02 0.62 0.02 0.62
"odbcValidChannel" 0.02 0.62 0.02 0.62
"parent.frame" 0.02 0.62 0.02 0.62
$by.total
total.time total.pct self.time self.pct
"ProcessAllSegments" 3.24 100.00 0.00 0.00
"t" 2.72 83.95 0.04 1.23
"do.call" 2.68 82.72 0.00 0.00
"mapply" 2.54 78.40 0.10 3.09
"<Anonymous>" 2.14 66.05 0.64 19.75
"DecodeSpectrum" 1.12 34.57 0.46 14.20
"vapply" 0.66 20.37 0.12 3.70
"NextSegment" 0.50 15.43 0.02 0.62
"odbcFetchRows" 0.46 14.20 0.02 0.62
".Call" 0.42 12.96 0.42 12.96
"FUN" 0.38 11.73 0.38 11.73
"simplify2array" 0.30 9.26 0.10 3.09
"&" 0.16 4.94 0.16 4.94
">" 0.14 4.32 0.14 4.32
"c" 0.14 4.32 0.14 4.32
"list" 0.14 4.32 0.14 4.32
"unique" 0.10 3.09 0.02 0.62
"<" 0.08 2.47 0.08 2.47
"as.vector" 0.08 2.47 0.04 1.23
"unlist" 0.08 2.47 0.04 1.23
"lapply" 0.04 1.23 0.04 1.23
"unique.default" 0.04 1.23 0.04 1.23
"array" 0.04 1.23 0.02 0.62
"attr" 0.02 0.62 0.02 0.62
"match.fun" 0.02 0.62 0.02 0.62
"odbcValidChannel" 0.02 0.62 0.02 0.62
"parent.frame" 0.02 0.62 0.02 0.62
$sample.interval
[1] 0.02
$sampling.time
[1] 3.24
It's possible that some additional performance could be squeezed out of messing
with the do.call('mapply', ...) call, but I'm satisfied enough with the
performance as is that I'm not willing to waste time on that.
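For completeness, a hedged sketch (untested against the posted data) of what tweaking that call might look like: since each SumDecode() call already returns a single number, mapply's simplification step can be skipped and the columns passed positionally, dropping the do.call() layer. This assumes res$data arrives as a list of three columns in the order array_length, mz_array, intensity_array, as in the original version.
ProcessSegment <- function(spectra, window_bounds) {
  lower <- window_bounds[1]
  upper <- window_bounds[2]
  SumDecode <- function(...) SumInWindow(DecodeSpectrum(...), lower, upper)
  unlist(mapply(SumDecode, spectra[[1]], spectra[[2]], spectra[[3]],
                SIMPLIFY = FALSE),     # each call returns a scalar, so skip simplification
         use.names = FALSE)
}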

R: Is there a fast approximate correlation library for large time series?

I am trying to make a software that will, in real time, find the top N correlated time series windows (to a query series).
There are approximately 5000 windows, each 34 rows in length. With respect to the query series I need the 300 most correlated windows.
Currently I am using the cor function, but it is proving to be far too slow. I need response times under a second. Under 250 ms would be great, but anything in that vicinity would do.
Is there a "fast approximate correlation" library for R that I can use to reduce the size of my large "contestant list" (the 5000 windows)?
If not, is there another method to shrink this list somewhat?
Here is the function that I am running:
GetTopN <- function(n) {
  Rprof()
  x <- LastBars()
  x <- as.data.frame(cbind(x[-1, 1], diff(x[, 2])))
  colnames(x) <- c('RPos', 'M1')
  actionlist <- GetFiltered()
  print(nrow(actionlist))
  crs <- mat.or.vec(nrow(actionlist), 2)  # will hold correlations
  for (i in 1:nrow(actionlist)) {
    crs[i, 2] <- cor(z[actionlist$RPos[i] + n1, 2], x[, 2])
  }
  crs[, 1] <- actionlist$OpenTime
  sorted <- crs[order(crs[, 2], decreasing = TRUE), 1:2]
  topx <- head(sorted, n)
  bottomx <- tail(sorted, n)
  rownames(bottomx) <- NULL
  DF <- as.data.frame(rbind(topx, bottomx), row.names = NULL)
  colnames(DF) <- c('ptime', 'weight')
  sqlSave(channel, dat = DF, tablename = 'ReducedList', append = FALSE, rownames = FALSE, safer = FALSE)
  FillActionList()
  Rprof(NULL)
  summaryRprof()
}
And here is the output from summaryRprof:
$by.self
self.time self.pct total.time total.pct
[.data.frame 0.68 25.37 0.98 36.57
.Call 0.22 8.21 0.22 8.21
cor 0.16 5.97 2.30 85.82
is.data.frame 0.14 5.22 1.26 47.01
[ 0.14 5.22 1.12 41.79
stopifnot 0.14 5.22 0.30 11.19
sys.call 0.14 5.22 0.18 6.72
GetTopN 0.12 4.48 2.68 100.00
eval 0.10 3.73 0.46 17.16
deparse 0.10 3.73 0.34 12.69
%in% 0.10 3.73 0.22 8.21
$ 0.10 3.73 0.10 3.73
c 0.08 2.99 0.08 2.99
.deparseOpts 0.06 2.24 0.14 5.22
formals 0.06 2.24 0.08 2.99
pmatch 0.06 2.24 0.08 2.99
names 0.06 2.24 0.06 2.24
match 0.04 1.49 0.12 4.48
sys.parent 0.04 1.49 0.04 1.49
match.arg 0.02 0.75 0.58 21.64
length 0.02 0.75 0.02 0.75
matrix 0.02 0.75 0.02 0.75
mode 0.02 0.75 0.02 0.75
order 0.02 0.75 0.02 0.75
parent.frame 0.02 0.75 0.02 0.75
sys.function 0.02 0.75 0.02 0.75
$by.total
total.time total.pct self.time self.pct
GetTopN 2.68 100.00 0.12 4.48
cor 2.30 85.82 0.16 5.97
is.data.frame 1.26 47.01 0.14 5.22
[ 1.12 41.79 0.14 5.22
[.data.frame 0.98 36.57 0.68 25.37
match.arg 0.58 21.64 0.02 0.75
eval 0.46 17.16 0.10 3.73
deparse 0.34 12.69 0.10 3.73
stopifnot 0.30 11.19 0.14 5.22
.Call 0.22 8.21 0.22 8.21
%in% 0.22 8.21 0.10 3.73
sqlQuery 0.20 7.46 0.00 0.00
sys.call 0.18 6.72 0.14 5.22
odbcQuery 0.18 6.72 0.00 0.00
GetFiltered 0.16 5.97 0.00 0.00
match.call 0.16 5.97 0.00 0.00
.deparseOpts 0.14 5.22 0.06 2.24
match 0.12 4.48 0.04 1.49
$ 0.10 3.73 0.10 3.73
c 0.08 2.99 0.08 2.99
formals 0.08 2.99 0.06 2.24
pmatch 0.08 2.99 0.06 2.24
names 0.06 2.24 0.06 2.24
sys.parent 0.04 1.49 0.04 1.49
LastBars 0.04 1.49 0.00 0.00
length 0.02 0.75 0.02 0.75
matrix 0.02 0.75 0.02 0.75
mode 0.02 0.75 0.02 0.75
order 0.02 0.75 0.02 0.75
parent.frame 0.02 0.75 0.02 0.75
sys.function 0.02 0.75 0.02 0.75
mat.or.vec 0.02 0.75 0.00 0.00
odbcFetchRows 0.02 0.75 0.00 0.00
odbcUpdate 0.02 0.75 0.00 0.00
sqlGetResults 0.02 0.75 0.00 0.00
sqlSave 0.02 0.75 0.00 0.00
sqlwrite 0.02 0.75 0.00 0.00
$sample.interval
[1] 0.02
$sampling.time
[1] 2.68
Looking at the summaryRprof output, it seems that [.data.frame takes the longest. I do not see how to get around that, though.
As Vincent points out in the comments, computing (Pearson) correlation is itself pretty quick. Once you have exhausted the basic R profiling and speed-up tricks, you can always
go multicore and/or parallel via appropriate R packages,
use compiled code, and I can think of a package to facilitate that, or
even consider GPUs; e.g., my Intro to High-Performance Computing with R slides (on my presentations page) contained an example of computing the (more expensive Kendall) correlation for a large gain.
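Before reaching for approximation, the exact vectorized route may also be worth checking: cor() accepts a matrix argument, so all window correlations can be computed in a single call instead of a per-window loop, and the top results picked with order(). A hedged sketch with made-up data of the dimensions given in the question:
set.seed(1)
windows <- matrix(rnorm(34 * 5000), nrow = 34)   # one 34-row window per column
query   <- rnorm(34)

system.time({
  r      <- as.vector(cor(query, windows))       # 5000 correlations in one call
  top300 <- order(r, decreasing = TRUE)[1:300]   # indices of the most correlated windows
})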
