I'm trying to plot boxplots of my data using ggplot in R, but I just can't get it to work. Can anyone help me out?
The data is like the table below:
Paratio ShapeIdx FracD NNDis Core
-3.00 1.22 0.14 2.71 7.49
-1.80 0.96 0.16 0.00 7.04
-3.00 1.10 0.13 2.71 6.85
-1.80 0.83 0.16 0.00 6.74
-0.18 0.41 0.27 0.00 6.24
-1.66 0.12 0.11 2.37 6.19
-1.07 0.06 0.14 0.00 6.11
-0.32 0.18 0.23 0.00 5.93
-1.16 0.32 0.15 0.00 5.59
-0.94 0.14 0.15 1.96 5.44
-1.13 0.31 0.16 0.00 5.42
-1.35 0.40 0.15 0.00 5.38
-0.53 0.25 0.20 2.08 5.32
-1.96 0.36 0.12 0.00 5.27
-1.09 0.07 0.13 0.00 5.22
-1.35 0.27 0.14 0.00 5.21
-1.25 0.21 0.14 0.00 5.19
-1.02 0.25 0.16 0.00 5.19
-1.28 0.22 0.14 0.00 5.11
-1.44 0.32 0.14 0.00 5.00
And what I want, exactly, is a boxplot for each column, with no relationship drawn between the columns.
ggplot2 requires data in a specific "long" format: you need x= and y= aesthetics, where y holds the values and x holds the corresponding column ids. Use melt from the reshape2 package to reshape the data into this format, then plot.
library(reshape2)
library(ggplot2)

## `dd` is the data frame holding the table above; melt() stacks its columns
## into a long (variable, value) data frame that ggplot can map to x and y.
ggplot(data = melt(dd), aes(x = variable, y = value)) +
  geom_boxplot(aes(fill = variable))
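For completeness, a hedged base-R alternative to melt(): stack() produces the same long format. The sketch below rebuilds a small subset of the question's table so it runs on its own.
library(ggplot2)

## Self-contained sketch: `dd` here is just the first few rows of the table above,
## and stack() plays the role of reshape2::melt().
dd <- read.table(text = "
Paratio ShapeIdx FracD NNDis Core
-3.00 1.22 0.14 2.71 7.49
-1.80 0.96 0.16 0.00 7.04
-3.00 1.10 0.13 2.71 6.85
-1.80 0.83 0.16 0.00 6.74
-0.18 0.41 0.27 0.00 6.24
", header = TRUE)

ggplot(stack(dd), aes(x = ind, y = values)) +
  geom_boxplot(aes(fill = ind)) +
  labs(x = "variable", y = "value")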
I am learning how to use R profiling and have run the Rprof command on my code.
The summaryRprof function shows that a lot of time is spent in .External2. What is this? Additionally, a large proportion of the total time is spent in <Anonymous>; is there a way to find out what that is?
> summaryRprof("test")
$by.self
self.time self.pct total.time total.pct
".External2" 4.30 27.74 4.30 27.74
"format.POSIXlt" 2.70 17.42 2.90 18.71
"which.min" 2.38 15.35 4.12 26.58
"-" 1.30 8.39 1.30 8.39
"order" 1.16 7.48 1.16 7.48
"match" 0.58 3.74 0.58 3.74
"file" 0.44 2.84 0.44 2.84
"abs" 0.40 2.58 0.40 2.58
"scan" 0.30 1.94 0.30 1.94
"anyDuplicated.default" 0.20 1.29 0.20 1.29
"unique.default" 0.20 1.29 0.20 1.29
"unlist" 0.18 1.16 0.20 1.29
"c" 0.16 1.03 0.16 1.03
"data.frame" 0.14 0.90 0.22 1.42
"structure" 0.12 0.77 1.74 11.23
"as.POSIXct.POSIXlt" 0.12 0.77 0.12 0.77
"strptime" 0.12 0.77 0.12 0.77
"as.character" 0.08 0.52 0.90 5.81
"make.unique" 0.08 0.52 0.16 1.03
"[.data.frame" 0.06 0.39 1.54 9.94
"<Anonymous>" 0.04 0.26 4.34 28.00
"lapply" 0.04 0.26 1.70 10.97
"rbind" 0.04 0.26 0.94 6.06
"as.POSIXlt.POSIXct" 0.04 0.26 0.04 0.26
"ifelse" 0.04 0.26 0.04 0.26
"paste" 0.02 0.13 0.92 5.94
"merge.data.frame" 0.02 0.13 0.56 3.61
"[<-.factor" 0.02 0.13 0.52 3.35
"stopifnot" 0.02 0.13 0.04 0.26
".deparseOpts" 0.02 0.13 0.02 0.13
".External" 0.02 0.13 0.02 0.13
"close.connection" 0.02 0.13 0.02 0.13
"doTryCatch" 0.02 0.13 0.02 0.13
"is.na" 0.02 0.13 0.02 0.13
"is.na<-.default" 0.02 0.13 0.02 0.13
"mean" 0.02 0.13 0.02 0.13
"seq.int" 0.02 0.13 0.02 0.13
"sum" 0.02 0.13 0.02 0.13
"sys.function" 0.02 0.13 0.02 0.13
$by.total
total.time total.pct self.time self.pct
"write.table" 5.10 32.90 0.00 0.00
"<Anonymous>" 4.34 28.00 0.04 0.26
".External2" 4.30 27.74 4.30 27.74
"mapply" 4.22 27.23 0.00 0.00
"head" 4.16 26.84 0.00 0.00
"which.min" 4.12 26.58 2.38 15.35
"eval" 3.16 20.39 0.00 0.00
"eval.parent" 3.14 20.26 0.00 0.00
"write.csv" 3.14 20.26 0.00 0.00
"format" 2.92 18.84 0.00 0.00
"format.POSIXlt" 2.90 18.71 2.70 17.42
"do.call" 1.78 11.48 0.00 0.00
"structure" 1.74 11.23 0.12 0.77
"lapply" 1.70 10.97 0.04 0.26
"FUN" 1.66 10.71 0.00 0.00
"format.POSIXct" 1.62 10.45 0.00 0.00
"[.data.frame" 1.54 9.94 0.06 0.39
"[" 1.54 9.94 0.00 0.00
"-" 1.30 8.39 1.30 8.39
"order" 1.16 7.48 1.16 7.48
"rbind" 0.94 6.06 0.04 0.26
"paste" 0.92 5.94 0.02 0.13
"as.character" 0.90 5.81 0.08 0.52
"read.csv" 0.84 5.42 0.00 0.00
"read.table" 0.84 5.42 0.00 0.00
"as.character.POSIXt" 0.82 5.29 0.00 0.00
"match" 0.58 3.74 0.58 3.74
"merge.data.frame" 0.56 3.61 0.02 0.13
"merge" 0.56 3.61 0.00 0.00
"[<-.factor" 0.52 3.35 0.02 0.13
"[<-" 0.52 3.35 0.00 0.00
"strftime" 0.48 3.10 0.00 0.00
"file" 0.44 2.84 0.44 2.84
"weekdays" 0.42 2.71 0.00 0.00
"weekdays.POSIXt" 0.42 2.71 0.00 0.00
"abs" 0.40 2.58 0.40 2.58
"unique" 0.38 2.45 0.00 0.00
"scan" 0.30 1.94 0.30 1.94
"data.frame" 0.22 1.42 0.14 0.90
"cbind" 0.22 1.42 0.00 0.00
"anyDuplicated.default" 0.20 1.29 0.20 1.29
"unique.default" 0.20 1.29 0.20 1.29
"unlist" 0.20 1.29 0.18 1.16
"anyDuplicated" 0.20 1.29 0.00 0.00
"as.POSIXct" 0.18 1.16 0.00 0.00
"as.POSIXlt" 0.18 1.16 0.00 0.00
"c" 0.16 1.03 0.16 1.03
"make.unique" 0.16 1.03 0.08 0.52
"as.POSIXct.POSIXlt" 0.12 0.77 0.12 0.77
"strptime" 0.12 0.77 0.12 0.77
"as.POSIXlt.character" 0.12 0.77 0.00 0.00
"object.size" 0.12 0.77 0.00 0.00
"as.POSIXct.default" 0.10 0.65 0.00 0.00
"Ops.POSIXt" 0.08 0.52 0.00 0.00
"type.convert" 0.08 0.52 0.00 0.00
"!=" 0.06 0.39 0.00 0.00
"as.POSIXlt.factor" 0.06 0.39 0.00 0.00
"as.POSIXlt.POSIXct" 0.04 0.26 0.04 0.26
"ifelse" 0.04 0.26 0.04 0.26
"stopifnot" 0.04 0.26 0.02 0.13
"$" 0.04 0.26 0.00 0.00
"$.data.frame" 0.04 0.26 0.00 0.00
"[[" 0.04 0.26 0.00 0.00
"[[.data.frame" 0.04 0.26 0.00 0.00
"head.default" 0.04 0.26 0.00 0.00
".deparseOpts" 0.02 0.13 0.02 0.13
".External" 0.02 0.13 0.02 0.13
"close.connection" 0.02 0.13 0.02 0.13
"doTryCatch" 0.02 0.13 0.02 0.13
"is.na" 0.02 0.13 0.02 0.13
"is.na<-.default" 0.02 0.13 0.02 0.13
"mean" 0.02 0.13 0.02 0.13
"seq.int" 0.02 0.13 0.02 0.13
"sum" 0.02 0.13 0.02 0.13
"sys.function" 0.02 0.13 0.02 0.13
"%in%" 0.02 0.13 0.00 0.00
".rs.getSingleClass" 0.02 0.13 0.00 0.00
"[.POSIXlt" 0.02 0.13 0.00 0.00
"==" 0.02 0.13 0.00 0.00
"close" 0.02 0.13 0.00 0.00
"data.row.names" 0.02 0.13 0.00 0.00
"deparse" 0.02 0.13 0.00 0.00
"factor" 0.02 0.13 0.00 0.00
"is.na<-" 0.02 0.13 0.00 0.00
"match.arg" 0.02 0.13 0.00 0.00
"match.call" 0.02 0.13 0.00 0.00
"pushBack" 0.02 0.13 0.00 0.00
"seq" 0.02 0.13 0.00 0.00
"seq.POSIXt" 0.02 0.13 0.00 0.00
"simplify2array" 0.02 0.13 0.00 0.00
"tryCatch" 0.02 0.13 0.00 0.00
"tryCatchList" 0.02 0.13 0.00 0.00
"tryCatchOne" 0.02 0.13 0.00 0.00
"which" 0.02 0.13 0.00 0.00
$sample.interval
[1] 0.02
$sampling.time
[1] 15.5
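In this output, .External2 is R's internal interface for calling compiled C code (here almost certainly the C routine behind write.table, which dominates $by.total), and <Anonymous> is simply a function with no name in the call stack, such as one defined inline or invoked through do.call/mapply. One way to pin such entries down is line profiling, available since R 3.0.0; a minimal sketch, with a hypothetical script name:
## Hedged sketch: attribute profiler samples to source lines so that entries
## like "<Anonymous>" can be traced back to the code that produced them.
Rprof("test", line.profiling = TRUE)
source("my_script.R", keep.source = TRUE)   # hypothetical script being profiled
Rprof(NULL)
summaryRprof("test", lines = "show")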
I have the following data frame:
1 8.03 0.37 0.55 1.03 1.58 2.03 15.08 2.69 1.63 3.84 1.26 1.9692516
2 4.76 0.70 NA 0.12 1.62 3.30 3.24 2.92 0.35 0.49 0.42 NA
3 6.18 3.47 3.00 0.02 0.19 16.70 2.32 69.78 3.72 5.51 1.62 2.4812459
4 1.06 45.22 0.81 1.07 8.30 196.23 0.62 118.51 13.79 22.80 9.77 8.4296220
5 0.15 0.10 0.07 1.52 1.02 0.50 0.91 1.75 0.02 0.20 0.48 0.3094169
7 0.27 0.68 0.09 0.15 0.26 1.54 0.01 0.21 0.04 0.28 0.31 0.1819510
I want to calculate the geometric mean for each row. My code is:
dat <- read.csv("MXreport.csv")
if(any(dat$X18S > 25)){ print("Fail!") } else { print("Pass!")}
datpass <- subset(dat, dat$X18S <= 25)
gene <- datpass[, 42:52]
gm_mean <- function(x){ prod(x)^(1/length(x))}
gene$score <- apply(gene, 1, gm_mean)
head(gene)
I got this output after typing this code:
1 8.03 0.37 0.55 1.03 1.58 2.03 15.08 2.69 1.63 3.84 1.26 1.9692516
2 4.76 0.70 NA 0.12 1.62 3.30 3.24 2.92 0.35 0.49 0.42 NA
3 6.18 3.47 3.00 0.02 0.19 16.70 2.32 69.78 3.72 5.51 1.62 2.4812459
4 1.06 45.22 0.81 1.07 8.30 196.23 0.62 118.51 13.79 22.80 9.77 8.4296220
5 0.15 0.10 0.07 1.52 1.02 0.50 0.91 1.75 0.02 0.20 0.48 0.3094169
7 0.27 0.68 0.09 0.15 0.26 1.54 0.01 0.21 0.04 0.28 0.31 0.1819510
The problem is that I get NA after applying the geometric mean function to any row that contains an NA. How do I skip the NAs and still calculate the geometric mean for those rows?
When I used gene <- na.exclude(datpass[, 42:52]), it dropped the rows containing NA and did not calculate a geometric mean for them at all. That is not what I want: I still want a geometric mean for the rows that contain NA. How do I do this?
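One possible fix, sketched under the assumption that NAs should simply be ignored within each row: drop the NAs inside gm_mean() itself, so rows with missing values still get a geometric mean over their remaining entries.
## Sketch: skip NAs within each row instead of propagating them.
gm_mean <- function(x) {
  x <- x[!is.na(x)]
  prod(x)^(1 / length(x))
}
gene$score <- apply(gene, 1, gm_mean)
head(gene)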
Monthly rainfall data is in a time series running from January 1983 to December 2012.
One.Month.RainfallSJ.inch <- window(TS.RainfallSJ_inch, start=c(1983, 1), end=c(2012, 12))
One.Month.RainfallSJ.inch
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
1983 7.41 4.87 5.92 3.90 0.15 0.00 0.00 0.02 1.08 0.19 5.26 3.82
1984 0.17 1.44 0.90 0.54 0.00 0.01 0.00 0.00 0.02 1.75 3.94 1.73
1985 0.74 0.76 2.98 0.48 0.23 0.00 0.13 0.00 0.35 0.98 2.47 1.40
1986 2.41 6.05 3.99 0.66 0.16 0.00 0.00 0.00 1.02 0.08 0.17 0.85
1987 1.60 2.10 1.87 0.14 0.00 0.00 0.00 0.00 0.00 0.93 1.65 3.31
1988 2.08 0.62 0.06 1.82 0.66 0.00 0.00 0.00 0.00 0.06 1.42 2.14
1989 1.06 1.07 1.91 0.57 0.09 0.00 0.00 0.00 0.83 1.33 0.80 0.04
1990 1.93 1.61 0.89 0.22 2.38 0.00 0.15 0.00 0.24 0.25 0.24 2.03
1991 0.18 2.22 6.17 0.18 0.15 0.06 0.00 0.04 0.12 0.85 0.43 2.43
1992 1.73 6.59 3.37 0.42 0.00 0.25 0.00 0.00 0.00 0.66 0.05 4.51
1993 6.98 4.71 2.81 0.54 0.47 0.54 0.00 0.00 0.00 0.67 2.17 1.99
1994 1.33 3.03 0.44 1.47 1.21 0.01 0.00 0.00 0.07 0.27 2.37 1.76
1995 8.66 0.53 6.85 1.06 1.27 0.84 0.01 0.00 0.00 0.00 0.05 4.71
1996 3.03 4.85 2.62 0.75 1.42 0.00 0.00 0.00 0.01 1.08 1.65 4.78
1997 6.80 0.14 0.17 0.11 0.55 0.21 0.00 0.51 0.00 0.69 5.01 1.85
1998 4.81 10.23 2.40 1.46 1.93 0.00 0.00 0.00 0.05 0.60 1.77 0.72
1999 3.25 2.88 2.69 1.56 0.02 0.14 0.14 0.00 0.00 0.00 0.50 0.55
2000 3.57 4.56 1.69 0.74 0.40 0.30 0.00 0.01 0.12 2.16 0.44 0.31
2001 2.87 4.44 1.71 1.48 0.00 0.13 0.00 0.00 0.13 0.12 2.12 4.47
2002 0.75 0.81 1.80 0.35 0.68 0.00 0.00 0.00 0.00 0.00 1.99 6.60
2003 0.65 1.65 0.77 2.95 0.72 0.00 0.00 0.03 0.03 0.00 1.91 4.91
2004 1.61 4.28 0.49 0.40 0.08 0.00 0.00 0.00 0.15 3.04 0.73 4.32
2005 3.47 5.31 3.55 2.52 0.00 0.00 0.01 0.00 0.00 0.10 0.45 5.47
2006 2.94 2.39 6.55 4.55 0.45 0.00 0.00 0.00 0.00 0.39 1.38 1.77
2007 0.93 3.49 0.46 0.96 0.08 0.00 0.01 0.00 0.26 1.13 0.55 1.18
2008 5.81 1.81 0.15 0.03 0.00 0.00 0.00 0.00 0.00 0.19 1.33 1.53
2009 1.30 5.16 1.89 0.30 0.09 0.01 0.00 0.02 0.19 2.41 0.41 2.16
2010 4.58 2.12 2.05 3.03 0.35 0.00 0.00 0.00 0.00 0.25 1.76 2.53
2011 0.96 3.15 4.32 0.20 0.40 1.51 0.00 0.00 0.00 0.77 0.08 0.08
2012 0.90 0.63 1.98 1.88 0.00 0.15 0.00 0.00 0.01 0.35 2.59 4.24
How can I compute the January average value from 1983 to 2012, and so on for each of the other months?
Thanks,
Nahm
Maybe try colMeans:
colMeans(One.Month.RainfallSJ.inch)
# Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov
# 2.8170000 3.1166667 2.4483333 1.1756667 0.4646667 0.1386667 0.0150000 0.0210000 0.1560000 0.7100000 1.5230000
# Dec
# 2.6063333
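colMeans() needs a matrix-like object with the months as columns. If One.Month.RainfallSJ.inch is instead stored as a plain univariate monthly ts (it merely prints as a year-by-month table), a hedged alternative is to average by position in the yearly cycle:
## Sketch, assuming One.Month.RainfallSJ.inch is a univariate monthly ts:
## cycle() labels each observation with its month (1 = Jan, ..., 12 = Dec).
monthly_means <- tapply(as.numeric(One.Month.RainfallSJ.inch),
                        cycle(One.Month.RainfallSJ.inch),
                        mean)
names(monthly_means) <- month.abb
monthly_means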
I have a program which pulls data out of a MySQL database, decodes a pair of
binary columns, and then sums together a subset of the rows within the pair
of binary columns. Running the program on a sample data set takes 12-14 seconds,
with 9-10 of those taken up by unlist. I'm wondering if there is any way to
speed things up.
Structure of the table
The rows I'm getting from the database look like:
| array_length | mz_array | intensity_array |
|--------------+-----------------+-----------------|
| 98 | 00c077e66340... | 002091c37240... |
| 74 | c04a7c7340... | db87734000... |
where array_length is the number of little-endian doubles in the two arrays
(they are guaranteed to be the same length). So the first row has 98 doubles in
each of mz_array and intensity_array. array_length has a mean of 825 and a
median of 620 across the 13,000 rows.
Decoding the binary arrays
Each row gets decoded by being passed to the following function. Once the binary
arrays have been decoded, array_length is no longer needed.
DecodeSpectrum <- function(array_length, mz_array, intensity_array) {
sapply(list(mz_array=mz_array, intensity_array=intensity_array),
readBin,
what="double",
endian="little",
n=array_length)
}
Summing the arrays
The next step is to sum the values in intensity_array, but only if their
corresponding entry in mz_array is within a certain window. The arrays are
ordered by mz_array, ascending. I am using the following function to sum up
the intensity_array values:
SumInWindow <- function(spectrum, lower, upper) {
sum(spectrum[spectrum[,1] > lower & spectrum[,1] < upper, 2])
}
Where spectrum is the output from DecodeSpectrum, a matrix.
Operating over list of rows
Each row is handled by:
ProcessSegment <- function(spectra, window_bounds) {
lower <- window_bounds[1]
upper <- window_bounds[2]
## Decode a single spectrum and sum the intensities within the window.
SumDecode <- function (...) {
SumInWindow(DecodeSpectrum(...), lower, upper)
}
do.call("mapply", c(SumDecode, spectra))
}
And finally, the rows are fetched and handed off to ProcessSegment with this
function:
ProcessAllSegments <- function(conn, window_bounds) {
nextSeg <- function() odbcFetchRows(conn, max=batchSize, buffsize=batchSize)
while ((res <- nextSeg())$stat == 1 && res$data[[1]] > 0) {
print(ProcessSegment(res$data, window_bounds))
}
}
I'm doing the fetches in segments so that R doesn't have to load the entire data
set into memory at once (it was causing out of memory errors). I'm using the
RODBC driver because the RMySQL driver isn't able to return unsullied binary
values (as far as I could tell).
Performance
For a sample data set of about 140MiB, the whole process takes around 14 seconds
to complete, which is not that bad for 13,000 rows. Still, I think there's room
for improvement, especially when looking at the Rprof output:
$by.self
self.time self.pct total.time total.pct
"unlist" 10.26 69.99 10.30 70.26
"SumInWindow" 1.06 7.23 13.92 94.95
"mapply" 0.48 3.27 14.44 98.50
"as.vector" 0.44 3.00 10.60 72.31
"array" 0.40 2.73 0.40 2.73
"FUN" 0.40 2.73 0.40 2.73
"list" 0.30 2.05 0.30 2.05
"<" 0.22 1.50 0.22 1.50
"unique" 0.18 1.23 0.36 2.46
">" 0.18 1.23 0.18 1.23
".Call" 0.16 1.09 0.16 1.09
"lapply" 0.14 0.95 0.86 5.87
"simplify2array" 0.10 0.68 11.48 78.31
"&" 0.10 0.68 0.10 0.68
"sapply" 0.06 0.41 12.36 84.31
"c" 0.06 0.41 0.06 0.41
"is.factor" 0.04 0.27 0.04 0.27
"match.fun" 0.04 0.27 0.04 0.27
"<Anonymous>" 0.02 0.14 13.94 95.09
"unique.default" 0.02 0.14 0.06 0.41
$by.total
total.time total.pct self.time self.pct
"ProcessAllSegments" 14.66 100.00 0.00 0.00
"do.call" 14.50 98.91 0.00 0.00
"ProcessSegment" 14.50 98.91 0.00 0.00
"mapply" 14.44 98.50 0.48 3.27
"<Anonymous>" 13.94 95.09 0.02 0.14
"SumInWindow" 13.92 94.95 1.06 7.23
"sapply" 12.36 84.31 0.06 0.41
"DecodeSpectrum" 12.36 84.31 0.00 0.00
"simplify2array" 11.48 78.31 0.10 0.68
"as.vector" 10.60 72.31 0.44 3.00
"unlist" 10.30 70.26 10.26 69.99
"lapply" 0.86 5.87 0.14 0.95
"array" 0.40 2.73 0.40 2.73
"FUN" 0.40 2.73 0.40 2.73
"unique" 0.36 2.46 0.18 1.23
"list" 0.30 2.05 0.30 2.05
"<" 0.22 1.50 0.22 1.50
">" 0.18 1.23 0.18 1.23
".Call" 0.16 1.09 0.16 1.09
"nextSeg" 0.16 1.09 0.00 0.00
"odbcFetchRows" 0.16 1.09 0.00 0.00
"&" 0.10 0.68 0.10 0.68
"c" 0.06 0.41 0.06 0.41
"unique.default" 0.06 0.41 0.02 0.14
"is.factor" 0.04 0.27 0.04 0.27
"match.fun" 0.04 0.27 0.04 0.27
$sample.interval
[1] 0.02
$sampling.time
[1] 14.66
I'm surprised to see unlist taking up so much time; this says to me that there
might be some redundant copying or rearranging going on. I'm new at R, so it's
entirely possible that this is normal, but I'd like to know if there's anything
glaringly wrong.
Update: sample data posted
I've posted the full version of the program
here and the sample data I use
here. The sample data is the
gzipped output from mysqldump. You need to set the proper environment
variables for the script to connect to the database:
MZDB_HOST
MZDB_DB
MZDB_USER
MZDB_PW
To run the script, you must specify the run_id and the window boundaries. I
run the program like this:
Rscript ChromatoGen.R -i 1 -m 600 -M 1200
These window bounds are pretty arbitrary, but select roughly a half to a third
of the range. If you want to print the results, put a print() around the call
to ProcessSegment within ProcessAllSegments. Using those parameters, the
first 5 should be:
[1] 7139.682 4522.314 3435.512 5255.024 5947.999
You probably want to limit the number of results, unless you want 13,000
numbers filling your screen :) The simplest way is to just add LIMIT 5 at the end
of the query.
I've figured it out!
The problem was in the sapply() call. sapply does a fair amount of
renaming and property setting which slows things down massively for arrays of
this size. Replacing DecodeSpectrum with the following code brought the sample
time from 14.66 seconds down to 3.36 seconds, more than a 4-fold speedup!
Here's the new body of DecodeSpectrum:
DecodeSpectrum <- function(array_length, mz_array, intensity_array) {
## needed to tell `vapply` how long the result should be. No, there isn't an
## easier way to do this.
resultLength <- rep(1.0, array_length)
vapply(list(mz_array=mz_array, intensity_array=intensity_array),
readBin,
resultLength,
what="double",
endian="little",
n=array_length,
USE.NAMES=FALSE)
}
The Rprof output now looks like:
$by.self
self.time self.pct total.time total.pct
"<Anonymous>" 0.64 19.75 2.14 66.05
"DecodeSpectrum" 0.46 14.20 1.12 34.57
".Call" 0.42 12.96 0.42 12.96
"FUN" 0.38 11.73 0.38 11.73
"&" 0.16 4.94 0.16 4.94
">" 0.14 4.32 0.14 4.32
"c" 0.14 4.32 0.14 4.32
"list" 0.14 4.32 0.14 4.32
"vapply" 0.12 3.70 0.66 20.37
"mapply" 0.10 3.09 2.54 78.40
"simplify2array" 0.10 3.09 0.30 9.26
"<" 0.08 2.47 0.08 2.47
"t" 0.04 1.23 2.72 83.95
"as.vector" 0.04 1.23 0.08 2.47
"unlist" 0.04 1.23 0.08 2.47
"lapply" 0.04 1.23 0.04 1.23
"unique.default" 0.04 1.23 0.04 1.23
"NextSegment" 0.02 0.62 0.50 15.43
"odbcFetchRows" 0.02 0.62 0.46 14.20
"unique" 0.02 0.62 0.10 3.09
"array" 0.02 0.62 0.04 1.23
"attr" 0.02 0.62 0.02 0.62
"match.fun" 0.02 0.62 0.02 0.62
"odbcValidChannel" 0.02 0.62 0.02 0.62
"parent.frame" 0.02 0.62 0.02 0.62
$by.total
total.time total.pct self.time self.pct
"ProcessAllSegments" 3.24 100.00 0.00 0.00
"t" 2.72 83.95 0.04 1.23
"do.call" 2.68 82.72 0.00 0.00
"mapply" 2.54 78.40 0.10 3.09
"<Anonymous>" 2.14 66.05 0.64 19.75
"DecodeSpectrum" 1.12 34.57 0.46 14.20
"vapply" 0.66 20.37 0.12 3.70
"NextSegment" 0.50 15.43 0.02 0.62
"odbcFetchRows" 0.46 14.20 0.02 0.62
".Call" 0.42 12.96 0.42 12.96
"FUN" 0.38 11.73 0.38 11.73
"simplify2array" 0.30 9.26 0.10 3.09
"&" 0.16 4.94 0.16 4.94
">" 0.14 4.32 0.14 4.32
"c" 0.14 4.32 0.14 4.32
"list" 0.14 4.32 0.14 4.32
"unique" 0.10 3.09 0.02 0.62
"<" 0.08 2.47 0.08 2.47
"as.vector" 0.08 2.47 0.04 1.23
"unlist" 0.08 2.47 0.04 1.23
"lapply" 0.04 1.23 0.04 1.23
"unique.default" 0.04 1.23 0.04 1.23
"array" 0.04 1.23 0.02 0.62
"attr" 0.02 0.62 0.02 0.62
"match.fun" 0.02 0.62 0.02 0.62
"odbcValidChannel" 0.02 0.62 0.02 0.62
"parent.frame" 0.02 0.62 0.02 0.62
$sample.interval
[1] 0.02
$sampling.time
[1] 3.24
It's possible that some additional performance could be squeezed out of messing
with the do.call('mapply', ...) call, but I'm satisfied enough with the
performance as is that I'm not willing to waste time on that.
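For anyone wanting to reproduce the effect without the database, a rough sketch of the sapply-versus-vapply overhead on synthetic data (the blob below is just a stand-in for one row's binary columns; no claims about exact timings):
## Synthetic stand-in: 825 little-endian doubles per binary column.
n    <- 825
blob <- writeBin(rnorm(n), raw(), endian = "little")

decode_sapply <- function()
  sapply(list(mz_array = blob, intensity_array = blob),
         readBin, what = "double", endian = "little", n = n)

decode_vapply <- function()
  vapply(list(mz_array = blob, intensity_array = blob),
         readBin, rep(1.0, n),
         what = "double", endian = "little", n = n, USE.NAMES = FALSE)

system.time(for (i in 1:13000) decode_sapply())  # mirrors the original sapply() version
system.time(for (i in 1:13000) decode_vapply())  # mirrors the vapply() version above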
I am trying to build software that will, in real time, find the top N time-series windows most correlated with a query series.
There are approximately 5000 windows, each 34 rows long. With respect to the query series, I need the 300 most correlated windows.
Currently I am using the cor function, but it is proving to be far too slow. I need response times under a second; under 250 ms would be great, but anything in that vicinity would do.
Is there a "fast approximate correlation" library for R that I can use to reduce the size of my large "contestant list" (the 5000 windows)?
If not, is there another method to shrink this list somewhat?
Here is the function that I am running:
GetTopN <- function(n)
{
  Rprof()
  x <- LastBars()
  x <- as.data.frame(cbind(x[-1, 1], diff(x[, 2])))
  colnames(x) <- c('RPos', 'M1')
  actionlist <- GetFiltered()
  print(nrow(actionlist))
  crs <- mat.or.vec(nrow(actionlist), 2)  # will hold correlations
  ## z, n1 and channel are defined elsewhere in the script
  for (i in 1:nrow(actionlist))
  {
    crs[i, 2] <- cor(z[actionlist$RPos[i] + n1, 2], x[, 2])
  }
  crs[, 1] <- actionlist$OpenTime
  sorted <- crs[order(crs[, 2], decreasing = T), 1:2]
  topx <- head(sorted, n)
  bottomx <- tail(sorted, n)
  rownames(bottomx) <- NULL
  DF <- as.data.frame(rbind(topx, bottomx), row.names = NULL)
  colnames(DF) <- c('ptime', 'weight')
  sqlSave(channel, dat = DF, tablename = 'ReducedList', append = F, rownames = F, safer = F)
  FillActionList()
  Rprof(NULL)
  summaryRprof()
}
And here is the output from summaryRprof:
$by.self
self.time self.pct total.time total.pct
[.data.frame 0.68 25.37 0.98 36.57
.Call 0.22 8.21 0.22 8.21
cor 0.16 5.97 2.30 85.82
is.data.frame 0.14 5.22 1.26 47.01
[ 0.14 5.22 1.12 41.79
stopifnot 0.14 5.22 0.30 11.19
sys.call 0.14 5.22 0.18 6.72
GetTopN 0.12 4.48 2.68 100.00
eval 0.10 3.73 0.46 17.16
deparse 0.10 3.73 0.34 12.69
%in% 0.10 3.73 0.22 8.21
$ 0.10 3.73 0.10 3.73
c 0.08 2.99 0.08 2.99
.deparseOpts 0.06 2.24 0.14 5.22
formals 0.06 2.24 0.08 2.99
pmatch 0.06 2.24 0.08 2.99
names 0.06 2.24 0.06 2.24
match 0.04 1.49 0.12 4.48
sys.parent 0.04 1.49 0.04 1.49
match.arg 0.02 0.75 0.58 21.64
length 0.02 0.75 0.02 0.75
matrix 0.02 0.75 0.02 0.75
mode 0.02 0.75 0.02 0.75
order 0.02 0.75 0.02 0.75
parent.frame 0.02 0.75 0.02 0.75
sys.function 0.02 0.75 0.02 0.75
$by.total
total.time total.pct self.time self.pct
GetTopN 2.68 100.00 0.12 4.48
cor 2.30 85.82 0.16 5.97
is.data.frame 1.26 47.01 0.14 5.22
[ 1.12 41.79 0.14 5.22
[.data.frame 0.98 36.57 0.68 25.37
match.arg 0.58 21.64 0.02 0.75
eval 0.46 17.16 0.10 3.73
deparse 0.34 12.69 0.10 3.73
stopifnot 0.30 11.19 0.14 5.22
.Call 0.22 8.21 0.22 8.21
%in% 0.22 8.21 0.10 3.73
sqlQuery 0.20 7.46 0.00 0.00
sys.call 0.18 6.72 0.14 5.22
odbcQuery 0.18 6.72 0.00 0.00
GetFiltered 0.16 5.97 0.00 0.00
match.call 0.16 5.97 0.00 0.00
.deparseOpts 0.14 5.22 0.06 2.24
match 0.12 4.48 0.04 1.49
$ 0.10 3.73 0.10 3.73
c 0.08 2.99 0.08 2.99
formals 0.08 2.99 0.06 2.24
pmatch 0.08 2.99 0.06 2.24
names 0.06 2.24 0.06 2.24
sys.parent 0.04 1.49 0.04 1.49
LastBars 0.04 1.49 0.00 0.00
length 0.02 0.75 0.02 0.75
matrix 0.02 0.75 0.02 0.75
mode 0.02 0.75 0.02 0.75
order 0.02 0.75 0.02 0.75
parent.frame 0.02 0.75 0.02 0.75
sys.function 0.02 0.75 0.02 0.75
mat.or.vec 0.02 0.75 0.00 0.00
odbcFetchRows 0.02 0.75 0.00 0.00
odbcUpdate 0.02 0.75 0.00 0.00
sqlGetResults 0.02 0.75 0.00 0.00
sqlSave 0.02 0.75 0.00 0.00
sqlwrite 0.02 0.75 0.00 0.00
$sample.interval
[1] 0.02
$sampling.time
[1] 2.68
Looking at the summaryRprof output, it seems that [.data.frame takes the longest, and I do not see how to get around that.
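One hedged way around the repeated [.data.frame indexing, assuming z, n1, x and actionlist are as in the code above and each window is 34 rows long: copy the windows into the columns of a plain numeric matrix once, then let a single cor() call compute every correlation.
## Sketch (reusing the question's objects): build a 34 x n matrix of candidate
## windows, then correlate all of them against the query in one call.
win_mat <- vapply(seq_len(nrow(actionlist)),
                  function(i) z[actionlist$RPos[i] + n1, 2],
                  numeric(34))
crs[, 2] <- cor(x[, 2], win_mat)[1, ]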
As Vincent points out in the comments, computing (Pearson) correlation is itself pretty quick. Once you have exhausted the basic R profiling and speed-up tricks, you can always:
go multicore and/or parallel via the appropriate R packages (a minimal parallel sketch follows this list)
use compiled code, and I can think of a package to facilitate that
or even consider GPUs; for example, my Intro to High-Performance Computing with R slides (on my presentations page) contain an example of computing the (more expensive) Kendall correlation for a large gain.
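A minimal sketch of the first suggestion, on synthetic data (the object names here are hypothetical): hold the 5000 candidate windows as columns of a matrix, split the columns into chunks, and correlate each chunk against the query in parallel with the parallel package.
library(parallel)

## Hypothetical setup: a 34-point query window and 5000 candidate windows as columns.
set.seed(1)
M     <- matrix(rnorm(34 * 5000), nrow = 34)
query <- rnorm(34)

## Correlate each chunk of columns against the query in parallel.
## mclapply() forks workers, so on Windows use mc.cores = 1 or parLapply() instead.
chunks <- split(seq_len(ncol(M)), cut(seq_len(ncol(M)), 4, labels = FALSE))
cors   <- unlist(mclapply(chunks,
                          function(idx) cor(query, M[, idx])[1, ],
                          mc.cores = 2))

top300 <- order(cors, decreasing = TRUE)[1:300]  # indices of the most correlated windows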