I would expect gsub and stringr::str_replace_all to return the same result in the following, but only gsub returns the intended result. I am developing a lesson to demonstrate str_replace_all so I would like to know why it returns a different result here.
txt <- ".72 2.51\n2015** 2.45 2.30 2.00 1.44 1.20 1.54 1.84 1.56 1.94 1.47 0.86 1.01\n2016** 1.53 1.75 2.40 2.62 2.35 2.03 1.25 0.52 0.45 0.56 1.88 1.17\n2017** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50\n2018** 0.70 0"
gsub(".*2017|2018.*", "", txt)
stringr::str_replace_all(txt, ".*2017|2018.*", "")
gsub returns the intended output (everything up to and including 2017, and everything from 2018 onward, has been removed).
output of gsub (intended)
[1] "** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50\n"
However, str_replace_all only replaces the 2017 and 2018 parts and leaves the rest, even though the same pattern is used for both.
output of str_replace_all (not intended)
[1] ".72 2.51\n2015** 2.45 2.30 2.00 1.44 1.20 1.54 1.84 1.56 1.94 1.47 0.86 1.01\n2016** 1.53 1.75 2.40 2.62 2.35 2.03 1.25 0.52 0.45 0.56 1.88 1.17\n** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50\n"
Why is this the case?
Base R relies on two regex libraries.
By default, R uses TRE.
We can specify perl = TRUE to use PCRE (Perl-compatible regular expressions).
The {stringr} package uses ICU (Java-like regular expressions).
In your case the problem is that the dot . does not match line breaks in PCRE and ICU, while it does match line breaks in TRE:
library(stringr)
txt <- ".72 2.51\n2015** 2.45 2.30 2.00 1.44 1.20 1.54 1.84 1.56 1.94 1.47 0.86 1.01\n2016** 1.53 1.75 2.40 2.62 2.35 2.03 1.25 0.52 0.45 0.56 1.88 1.17\n2017** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50\n2018** 0.70 0"
(base_tre <- gsub(".*2017|2018.*", "", txt))
#> [1] "** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50\n"
(base_perl <- gsub(".*2017|2018.*", "", txt, perl = TRUE))
#> [1] ".72 2.51\n2015** 2.45 2.30 2.00 1.44 1.20 1.54 1.84 1.56 1.94 1.47 0.86 1.01\n2016** 1.53 1.75 2.40 2.62 2.35 2.03 1.25 0.52 0.45 0.56 1.88 1.17\n** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50\n"
(string_r <- str_replace_all(txt, ".*2017|2018.*", ""))
#> [1] ".72 2.51\n2015** 2.45 2.30 2.00 1.44 1.20 1.54 1.84 1.56 1.94 1.47 0.86 1.01\n2016** 1.53 1.75 2.40 2.62 2.35 2.03 1.25 0.52 0.45 0.56 1.88 1.17\n** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50\n"
identical(base_perl, string_r)
#> [1] TRUE
We can use the (?s) modifier to change the behavior of PCRE and ICU regexes so that line breaks are also matched by the dot. This produces the same output as base R's TRE:
(base_perl <- gsub("(?s).*2017|2018(?s).*", "", txt, perl = TRUE))
#> [1] "** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50\n"
(string_r <- str_replace_all(txt, "(?s).*2017|2018(?s).*", ""))
#> [1] "** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50\n"
identical(base_perl, string_r)
#> [1] TRUE
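As an aside, stringr can also switch on the "dot matches newline" behaviour through its regex() helper instead of the inline (?s) modifier; this should give the same result as above:
str_replace_all(txt, regex(".*2017|2018.*", dotall = TRUE), "")  # dotall = TRUE makes . match \n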
Finally, unlike TRE, PCRE and ICU allow us to use lookarounds, which are another option for solving the problem:
str_match(txt, "(?<=2017).*.(?=\\n2018)")
#> [,1]
#> [1,] "** 0.77 0.70 0.74 1.12 0.88 0.79 0.10 0.09 0.32 0.05 0.15 0.50"
Created on 2021-08-10 by the reprex package (v0.3.0)
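A small addition (not part of the original answer): if you only need the matched text as a plain character vector rather than str_match()'s matrix, str_extract() accepts the same kind of lookaround pattern:
str_extract(txt, "(?<=2017).*(?=\\n2018)")  # returns just the matched substring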
I would like to remove a row if any of its values is less than 0.05. Any suggestions? I need both a dplyr solution and a simple base R subsetting solution.
library(magrittr)
text = '
INNO RISK PRO AMB MKT IP
1 0.00 0.01 0.00 0.00 0.19 0.24
2 1.00 0.83 0.04 0.48 0.60 0.03
3 0.01 0.07 0.79 0.05 0.19 0.00
4 0.99 0.99 0.92 0.86 0.01 0.10
5 0.72 0.93 0.28 0.48 1.00 0.90
6 0.96 1.00 1.00 0.86 1.00 0.75
7 0.02 0.07 0.01 0.86 0.60 0.00
8 0.02 0.01 0.01 0.12 0.60 0.24
9 0.02 0.93 0.92 0.02 0.19 0.90
10 0.99 0.97 0.92 0.86 0.99 0.90'
d10 = textConnection(text) %>% read.table(header = T)
Created on 2020-11-28 by the reprex package (v0.3.0)
We can use rowSums
d10[!rowSums(d10 < 0.05),]
# INNO RISK PRO AMB MKT IP
#5 0.72 0.93 0.28 0.48 1.00 0.90
#6 0.96 1.00 1.00 0.86 1.00 0.75
#10 0.99 0.97 0.92 0.86 0.99 0.90
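As a brief aside (not part of the original answer), the reason this works: d10 < 0.05 produces a logical matrix, rowSums() counts the TRUE values per row, and ! turns a count of zero into TRUE, so only rows with no value below 0.05 are kept.
d10 < 0.05              # logical matrix: TRUE where a value is below the threshold
rowSums(d10 < 0.05)     # how many offending values each row has
!rowSums(d10 < 0.05)    # TRUE only for rows where that count is zero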
Or with dplyr
library(dplyr)
d10 %>%
  filter(across(everything(), ~ . >= 0.05))
# INNO RISK PRO AMB MKT IP
#5 0.72 0.93 0.28 0.48 1.00 0.90
#6 0.96 1.00 1.00 0.86 1.00 0.75
#10 0.99 0.97 0.92 0.86 0.99 0.90
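A side note (not part of the original answer): newer dplyr releases deprecate using across() inside filter() and recommend if_all() for this kind of row-wise condition. The following should give the same result:
library(dplyr)  # >= 1.0.4 for if_all()
d10 %>%
  filter(if_all(everything(), ~ .x >= 0.05))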
I am working on an EFA and would like to customize my tables. There is a function, print.psych(), that suppresses factor loadings below a certain value to make the table easier to read. When I run this function, it produces this data and the summary stats in the console (in an .Rmd document, it produces console text and a separate data frame of the factor loadings with small loadings suppressed). However, if I attempt to save this as an object, it does not keep this data.
Here is an example:
library(psych)
bfi_data=bfi
bfi_data=bfi_data[complete.cases(bfi_data),]
bfi_cor <- cor(bfi_data)
factors_data <- fa(r = bfi_cor, nfactors = 6)
print.psych(factors_data, cut = .32, sort = TRUE)
In an R script, it produces this:
   item   MR2   MR3   MR1  MR5  MR4  MR6    h2   u2 com
N2   17  0.83                             0.654 0.35 1.0
N1   16  0.82                             0.666 0.33 1.1
N3   18  0.69                             0.549 0.45 1.1
N5   20  0.47                             0.376 0.62 2.2
N4   19  0.44        0.43                 0.506 0.49 2.4
C4    9       -0.67                       0.555 0.45 1.3
C2    7        0.66                       0.475 0.53 1.4
C5   10       -0.56                       0.433 0.57 1.4
C3    8        0.56                       0.317 0.68 1.1
C1    6        0.54                       0.344 0.66 1.3
In R Markdown, it produces a similar table.
How can I save that data.frame as an object?
Looking at the str() of the object, it doesn't look like what you want is built in. An ugly way would be to use capture.output() and try to convert the character vector to a dataframe using string manipulation. Otherwise, since the data is being displayed, it must be present somewhere in the object itself. I could find vectors of the same length which can be combined to form the dataframe:
loadings <- unclass(factors_data$loadings)
h2 <- factors_data$communalities
#There is also factors_data$communality which has same values
u2 <- factors_data$uniquenesses
com <- factors_data$complexity
data <- cbind(loadings, h2, u2, com)
data
This returns :
# MR2 MR3 MR1 MR5 MR4 MR6 h2 u2 com
#A1 0.11 0.07 -0.07 -0.56 -0.01 0.35 0.38 0.62 1.85
#A2 0.03 0.09 -0.08 0.64 0.01 -0.06 0.47 0.53 1.09
#A3 -0.04 0.04 -0.10 0.60 0.07 0.16 0.51 0.49 1.26
#A4 -0.07 0.19 -0.07 0.41 -0.13 0.13 0.29 0.71 2.05
#A5 -0.17 0.01 -0.16 0.47 0.10 0.22 0.47 0.53 2.11
#C1 0.05 0.54 0.08 -0.02 0.19 0.05 0.34 0.66 1.32
#C2 0.09 0.66 0.17 0.06 0.08 0.16 0.47 0.53 1.36
#C3 0.00 0.56 0.07 0.07 -0.04 0.05 0.32 0.68 1.09
#C4 0.07 -0.67 0.10 -0.01 0.02 0.25 0.55 0.45 1.35
#C5 0.15 -0.56 0.17 0.02 0.10 0.01 0.43 0.57 1.41
#E1 -0.14 0.09 0.61 -0.14 -0.08 0.09 0.41 0.59 1.34
#E2 0.06 -0.03 0.68 -0.07 -0.08 -0.01 0.56 0.44 1.07
#E3 0.02 0.01 -0.32 0.17 0.38 0.28 0.51 0.49 3.28
#E4 -0.07 0.03 -0.49 0.25 0.00 0.31 0.56 0.44 2.26
#E5 0.16 0.27 -0.39 0.07 0.24 0.04 0.41 0.59 3.01
#N1 0.82 -0.01 -0.09 -0.09 -0.03 0.02 0.67 0.33 1.05
#N2 0.83 0.02 -0.07 -0.07 0.01 -0.07 0.65 0.35 1.04
#N3 0.69 -0.03 0.13 0.09 0.02 0.06 0.55 0.45 1.12
#N4 0.44 -0.14 0.43 0.09 0.10 0.01 0.51 0.49 2.41
#N5 0.47 -0.01 0.21 0.21 -0.17 0.09 0.38 0.62 2.23
#O1 -0.05 0.07 -0.01 -0.04 0.57 0.09 0.36 0.64 1.11
#O2 0.12 -0.09 0.01 0.12 -0.43 0.28 0.30 0.70 2.20
#O3 0.01 0.00 -0.10 0.05 0.65 0.04 0.48 0.52 1.06
#O4 0.10 -0.05 0.34 0.15 0.37 -0.04 0.24 0.76 2.55
#O5 0.04 -0.04 -0.02 -0.01 -0.50 0.30 0.33 0.67 1.67
#gender 0.20 0.09 -0.12 0.33 -0.21 -0.15 0.18 0.82 3.58
#education -0.03 0.01 0.05 0.11 0.12 -0.22 0.07 0.93 2.17
#age -0.06 0.07 -0.02 0.16 0.03 -0.26 0.10 0.90 2.05
Ronak Shaw answered my question above, and I used his answer to help create the following function, which nearly reproduces the print.psych() data frame for fa.sort() output:
fa_table <- function(x, cut) {
  # needs psych, dplyr, and tibble (for rownames_to_column) attached
  # get sorted loadings (the original referenced a global object here; use the argument x)
  loadings <- fa.sort(x)$loadings %>% round(3)
  # cut loadings (note: print.psych cuts on the absolute value; use abs(loadings) < cut to match)
  loadings[loadings < cut] <- ""
  # get additional info
  add_info <- cbind(x$communalities,
                    x$uniquenesses,
                    x$complexity) %>%
    as.data.frame() %>%
    rename("communality" = V1,
           "uniqueness" = V2,
           "complexity" = V3) %>%
    rownames_to_column("item")
  # build table
  loadings %>%
    unclass() %>%
    as.data.frame() %>%
    rownames_to_column("item") %>%
    left_join(add_info, by = "item") %>%
    mutate(across(where(is.numeric), ~ round(.x, 3)))
}
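A usage sketch (my own addition, assuming the packages used inside the function are attached; factors_data is the fa() result created earlier, and the 0.32 cut mirrors the print.psych() call above):
library(psych)
library(dplyr)
library(tibble)   # for rownames_to_column()
fa_table(factors_data, cut = 0.32)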
I have the following data frame:
1 8.03 0.37 0.55 1.03 1.58 2.03 15.08 2.69 1.63 3.84 1.26 1.9692516
2 4.76 0.70 NA 0.12 1.62 3.30 3.24 2.92 0.35 0.49 0.42 NA
3 6.18 3.47 3.00 0.02 0.19 16.70 2.32 69.78 3.72 5.51 1.62 2.4812459
4 1.06 45.22 0.81 1.07 8.30 196.23 0.62 118.51 13.79 22.80 9.77 8.4296220
5 0.15 0.10 0.07 1.52 1.02 0.50 0.91 1.75 0.02 0.20 0.48 0.3094169
7 0.27 0.68 0.09 0.15 0.26 1.54 0.01 0.21 0.04 0.28 0.31 0.1819510
I want to calculate the geometric mean for each row. My code is:
dat <- read.csv("MXreport.csv")
if(any(dat$X18S > 25)){ print("Fail!") } else { print("Pass!")}
datpass <- subset(dat, dat$X18S <= 25)
gene <- datpass[, 42:52]
gm_mean <- function(x){ prod(x)^(1/length(x))}
gene$score <- apply(gene, 1, gm_mean)
head(gene)
I got this output after typing this code:
1 8.03 0.37 0.55 1.03 1.58 2.03 15.08 2.69 1.63 3.84 1.26 1.9692516
2 4.76 0.70 NA 0.12 1.62 3.30 3.24 2.92 0.35 0.49 0.42 NA
3 6.18 3.47 3.00 0.02 0.19 16.70 2.32 69.78 3.72 5.51 1.62 2.4812459
4 1.06 45.22 0.81 1.07 8.30 196.23 0.62 118.51 13.79 22.80 9.77 8.4296220
5 0.15 0.10 0.07 1.52 1.02 0.50 0.91 1.75 0.02 0.20 0.48 0.3094169
7 0.27 0.68 0.09 0.15 0.26 1.54 0.01 0.21 0.04 0.28 0.31 0.1819510
The problem is that I get NA after applying the geometric mean function to any row that contains an NA. How do I skip the NAs and still calculate the geometric mean for those rows?
When I used gene <- na.exclude(datpass[, 42:52]), it dropped the rows that contain NA and did not calculate a geometric mean for them at all. That is not what I want: I want to calculate the geometric mean for the rows that contain NA as well, using their non-missing values. How do I do this?
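One possible approach (a sketch, not taken from the original post; gm_mean_na is a hypothetical helper name): drop the NAs inside the function itself, so each row's geometric mean is computed over its non-missing values only.
gm_mean_na <- function(x) {
  x <- x[!is.na(x)]          # drop the missing values for this row
  prod(x)^(1 / length(x))    # geometric mean of the remaining values
}
gene$score <- apply(datpass[, 42:52], 1, gm_mean_na)
For long rows, exp(mean(log(x))) is a numerically safer equivalent, since prod() can overflow or underflow.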
I have a program which pulls data out of a MySQL database, decodes a pair of
binary columns, and then sums together a subset of the rows within the pair
of binary columns. Running the program on a sample data set takes 12-14 seconds,
with 9-10 of those taken up by unlist. I'm wondering if there is any way to
speed things up.
Structure of the table
The rows I'm getting from the database look like:
| array_length | mz_array | intensity_array |
|--------------+-----------------+-----------------|
| 98 | 00c077e66340... | 002091c37240... |
| 74 | c04a7c7340... | db87734000... |
where array_length is the number of little-endian doubles in the two arrays
(they are guaranteed to be the same length). So the first row has 98 doubles in
each of mz_array and intensity_array. array_length has a mean of 825 and a
median of 620 with 13,000 rows.
Decoding the binary arrays
Each row gets decoded by being passed to the following function. Once the binary
arrays have been decoded, array_length is no longer needed.
DecodeSpectrum <- function(array_length, mz_array, intensity_array) {
  sapply(list(mz_array=mz_array, intensity_array=intensity_array),
         readBin,
         what="double",
         endian="little",
         n=array_length)
}
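To illustrate what this returns (a toy example with made-up numbers, not the original database rows): pack a few doubles into little-endian raw vectors with writeBin(), then decode them the same way.
raw_mz  <- writeBin(c(100.1, 200.2, 300.3), raw(), endian = "little")   # fake m/z values
raw_int <- writeBin(c(1.5, 2.5, 3.5), raw(), endian = "little")         # fake intensities
DecodeSpectrum(3, raw_mz, raw_int)   # 3 x 2 matrix with columns mz_array, intensity_array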
Summing the arrays
The next step is to sum the values in intensity_array, but only if their
corresponding entry in mz_array is within a certain window. The arrays are
ordered by mz_array, ascending. I am using the following function to sum up
the intensity_array values:
SumInWindow <- function(spectrum, lower, upper) {
  sum(spectrum[spectrum[,1] > lower & spectrum[,1] < upper, 2])
}
Where spectrum is the output from DecodeSpectrum, a matrix.
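Continuing the toy example above (made-up numbers), this sums the intensities whose m/z value falls strictly between the bounds:
SumInWindow(DecodeSpectrum(3, raw_mz, raw_int), lower = 150, upper = 350)
# 200.2 and 300.3 fall in the window, so the result is 2.5 + 3.5 = 6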
Operating over list of rows
Each row is handled by:
ProcessSegment <- function(spectra, window_bounds) {
  lower <- window_bounds[1]
  upper <- window_bounds[2]

  ## Decode a single spectrum and sum the intensities within the window.
  SumDecode <- function (...) {
    SumInWindow(DecodeSpectrum(...), lower, upper)
  }

  do.call("mapply", c(SumDecode, spectra))
}
And finally, the rows are fetched and handed off to ProcessSegment with this
function:
ProcessAllSegments <- function(conn, window_bounds) {
  nextSeg <- function() odbcFetchRows(conn, max=batchSize, buffsize=batchSize)

  while ((res <- nextSeg())$stat == 1 && res$data[[1]] > 0) {
    print(ProcessSegment(res$data, window_bounds))
  }
}
I'm doing the fetches in segments so that R doesn't have to load the entire data
set into memory at once (it was causing out of memory errors). I'm using the
RODBC driver because the RMySQL driver isn't able to return unsullied binary
values (as far as I could tell).
Performance
For a sample data set of about 140MiB, the whole process takes around 14 seconds
to complete, which is not that bad for 13,000 rows. Still, I think there's room
for improvement, especially when looking at the Rprof output:
$by.self
self.time self.pct total.time total.pct
"unlist" 10.26 69.99 10.30 70.26
"SumInWindow" 1.06 7.23 13.92 94.95
"mapply" 0.48 3.27 14.44 98.50
"as.vector" 0.44 3.00 10.60 72.31
"array" 0.40 2.73 0.40 2.73
"FUN" 0.40 2.73 0.40 2.73
"list" 0.30 2.05 0.30 2.05
"<" 0.22 1.50 0.22 1.50
"unique" 0.18 1.23 0.36 2.46
">" 0.18 1.23 0.18 1.23
".Call" 0.16 1.09 0.16 1.09
"lapply" 0.14 0.95 0.86 5.87
"simplify2array" 0.10 0.68 11.48 78.31
"&" 0.10 0.68 0.10 0.68
"sapply" 0.06 0.41 12.36 84.31
"c" 0.06 0.41 0.06 0.41
"is.factor" 0.04 0.27 0.04 0.27
"match.fun" 0.04 0.27 0.04 0.27
"<Anonymous>" 0.02 0.14 13.94 95.09
"unique.default" 0.02 0.14 0.06 0.41
$by.total
total.time total.pct self.time self.pct
"ProcessAllSegments" 14.66 100.00 0.00 0.00
"do.call" 14.50 98.91 0.00 0.00
"ProcessSegment" 14.50 98.91 0.00 0.00
"mapply" 14.44 98.50 0.48 3.27
"<Anonymous>" 13.94 95.09 0.02 0.14
"SumInWindow" 13.92 94.95 1.06 7.23
"sapply" 12.36 84.31 0.06 0.41
"DecodeSpectrum" 12.36 84.31 0.00 0.00
"simplify2array" 11.48 78.31 0.10 0.68
"as.vector" 10.60 72.31 0.44 3.00
"unlist" 10.30 70.26 10.26 69.99
"lapply" 0.86 5.87 0.14 0.95
"array" 0.40 2.73 0.40 2.73
"FUN" 0.40 2.73 0.40 2.73
"unique" 0.36 2.46 0.18 1.23
"list" 0.30 2.05 0.30 2.05
"<" 0.22 1.50 0.22 1.50
">" 0.18 1.23 0.18 1.23
".Call" 0.16 1.09 0.16 1.09
"nextSeg" 0.16 1.09 0.00 0.00
"odbcFetchRows" 0.16 1.09 0.00 0.00
"&" 0.10 0.68 0.10 0.68
"c" 0.06 0.41 0.06 0.41
"unique.default" 0.06 0.41 0.02 0.14
"is.factor" 0.04 0.27 0.04 0.27
"match.fun" 0.04 0.27 0.04 0.27
$sample.interval
[1] 0.02
$sampling.time
[1] 14.66
I'm surprised to see unlist taking up so much time; this says to me that there
might be some redundant copying or rearranging going on. I'm new at R, so it's
entirely possible that this is normal, but I'd like to know if there's anything
glaringly wrong.
Update: sample data posted
I've posted the full version of the program
here and the sample data I use
here. The sample data is the
gzipped output from mysqldump. You need to set the proper environment
variables for the script to connect to the database:
MZDB_HOST
MZDB_DB
MZDB_USER
MZDB_PW
To run the script, you must specify the run_id and the window boundaries. I
run the program like this:
Rscript ChromatoGen.R -i 1 -m 600 -M 1200
These window bounds are pretty arbitrary, but select roughly a half to a third
of the range. If you want to print the results, put a print() around the call
to ProcessSegment within ProcessAllSegments. Using those parameters, the
first 5 should be:
[1] 7139.682 4522.314 3435.512 5255.024 5947.999
You probably want to limit the number of results, unless you want 13,000 numbers filling your screen :) The simplest way is to just add LIMIT 5 at the end of the query.
I've figured it out!
The problem was in the sapply() call. sapply does a fair amount of renaming and property setting, which slows things down massively for arrays of this size. Replacing DecodeSpectrum with the following code brought the sample time from 14.66 seconds down to 3.36 seconds, a more than 4-fold speed-up!
Here's the new body of DecodeSpectrum:
DecodeSpectrum <- function(array_length, mz_array, intensity_array) {
  ## needed to tell `vapply` how long the result should be. No, there isn't an
  ## easier way to do this.
  resultLength <- rep(1.0, array_length)

  vapply(list(mz_array=mz_array, intensity_array=intensity_array),
         readBin,
         resultLength,
         what="double",
         endian="little",
         n=array_length,
         USE.NAMES=FALSE)
}
The Rprof output now looks like:
$by.self
self.time self.pct total.time total.pct
"<Anonymous>" 0.64 19.75 2.14 66.05
"DecodeSpectrum" 0.46 14.20 1.12 34.57
".Call" 0.42 12.96 0.42 12.96
"FUN" 0.38 11.73 0.38 11.73
"&" 0.16 4.94 0.16 4.94
">" 0.14 4.32 0.14 4.32
"c" 0.14 4.32 0.14 4.32
"list" 0.14 4.32 0.14 4.32
"vapply" 0.12 3.70 0.66 20.37
"mapply" 0.10 3.09 2.54 78.40
"simplify2array" 0.10 3.09 0.30 9.26
"<" 0.08 2.47 0.08 2.47
"t" 0.04 1.23 2.72 83.95
"as.vector" 0.04 1.23 0.08 2.47
"unlist" 0.04 1.23 0.08 2.47
"lapply" 0.04 1.23 0.04 1.23
"unique.default" 0.04 1.23 0.04 1.23
"NextSegment" 0.02 0.62 0.50 15.43
"odbcFetchRows" 0.02 0.62 0.46 14.20
"unique" 0.02 0.62 0.10 3.09
"array" 0.02 0.62 0.04 1.23
"attr" 0.02 0.62 0.02 0.62
"match.fun" 0.02 0.62 0.02 0.62
"odbcValidChannel" 0.02 0.62 0.02 0.62
"parent.frame" 0.02 0.62 0.02 0.62
$by.total
total.time total.pct self.time self.pct
"ProcessAllSegments" 3.24 100.00 0.00 0.00
"t" 2.72 83.95 0.04 1.23
"do.call" 2.68 82.72 0.00 0.00
"mapply" 2.54 78.40 0.10 3.09
"<Anonymous>" 2.14 66.05 0.64 19.75
"DecodeSpectrum" 1.12 34.57 0.46 14.20
"vapply" 0.66 20.37 0.12 3.70
"NextSegment" 0.50 15.43 0.02 0.62
"odbcFetchRows" 0.46 14.20 0.02 0.62
".Call" 0.42 12.96 0.42 12.96
"FUN" 0.38 11.73 0.38 11.73
"simplify2array" 0.30 9.26 0.10 3.09
"&" 0.16 4.94 0.16 4.94
">" 0.14 4.32 0.14 4.32
"c" 0.14 4.32 0.14 4.32
"list" 0.14 4.32 0.14 4.32
"unique" 0.10 3.09 0.02 0.62
"<" 0.08 2.47 0.08 2.47
"as.vector" 0.08 2.47 0.04 1.23
"unlist" 0.08 2.47 0.04 1.23
"lapply" 0.04 1.23 0.04 1.23
"unique.default" 0.04 1.23 0.04 1.23
"array" 0.04 1.23 0.02 0.62
"attr" 0.02 0.62 0.02 0.62
"match.fun" 0.02 0.62 0.02 0.62
"odbcValidChannel" 0.02 0.62 0.02 0.62
"parent.frame" 0.02 0.62 0.02 0.62
$sample.interval
[1] 0.02
$sampling.time
[1] 3.24
It's possible that some additional performance could be squeezed out of messing
with the do.call('mapply', ...) call, but I'm satisfied enough with the
performance as is that I'm not willing to waste time on that.
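As an aside (my own addition, not part of the original post), the sapply/vapply difference can be measured in isolation with synthetic data of roughly the same size as the real arrays; absolute timings will of course vary by machine:
blob <- writeBin(runif(825), raw(), endian = "little")   # ~ the mean array_length
system.time(for (i in 1:5000)
  sapply(list(mz_array = blob, intensity_array = blob), readBin,
         what = "double", endian = "little", n = 825))
system.time(for (i in 1:5000)
  vapply(list(mz_array = blob, intensity_array = blob), readBin,
         rep(1.0, 825),
         what = "double", endian = "little", n = 825, USE.NAMES = FALSE))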
I am trying to build software that will, in real time, find the top N most correlated time-series windows (relative to a query series).
There are approximately 5000 windows, each 34 rows in length. With respect to the query series, I need the 300 most correlated windows.
Currently I am using the cor function, but it is proving to be entirely too slow. I need response times under a second. Under 250 ms would be great, but anything in that vicinity would do.
Is there a "fast approximate correlation" library for R that I can use to reduce the size of my large "contestant list" (the 5000 windows)?
If not, is there another method to shrink this list somewhat?
Here is the function that I am running:
GetTopN <- function(n)
{
  Rprof()
  x <- LastBars()
  x <- as.data.frame(cbind(x[-1,1], diff(x[,2])))
  colnames(x) <- c('RPos','M1')

  actionlist <- GetFiltered()
  print(nrow(actionlist))

  crs <- mat.or.vec(nrow(actionlist), 2)  # will hold correlations

  for(i in 1:nrow(actionlist))
  {
    crs[i,2] <- cor(z[actionlist$RPos[i]+n1, 2], x[,2])
  }
  crs[,1] <- actionlist$OpenTime

  sorted <- crs[order(crs[,2], decreasing=T), 1:2]
  topx <- head(sorted, n)
  bottomx <- tail(sorted, n)
  rownames(bottomx) <- NULL
  DF <- as.data.frame(rbind(topx, bottomx), row.names=NULL)
  colnames(DF) <- c('ptime','weight')
  sqlSave(channel, dat=DF, tablename='ReducedList', append=F, rownames=F, safer=F)
  FillActionList()

  Rprof(NULL)
  summaryRprof()
}
And here is the output from summaryRprof:
$by.self
self.time self.pct total.time total.pct
[.data.frame 0.68 25.37 0.98 36.57
.Call 0.22 8.21 0.22 8.21
cor 0.16 5.97 2.30 85.82
is.data.frame 0.14 5.22 1.26 47.01
[ 0.14 5.22 1.12 41.79
stopifnot 0.14 5.22 0.30 11.19
sys.call 0.14 5.22 0.18 6.72
GetTopN 0.12 4.48 2.68 100.00
eval 0.10 3.73 0.46 17.16
deparse 0.10 3.73 0.34 12.69
%in% 0.10 3.73 0.22 8.21
$ 0.10 3.73 0.10 3.73
c 0.08 2.99 0.08 2.99
.deparseOpts 0.06 2.24 0.14 5.22
formals 0.06 2.24 0.08 2.99
pmatch 0.06 2.24 0.08 2.99
names 0.06 2.24 0.06 2.24
match 0.04 1.49 0.12 4.48
sys.parent 0.04 1.49 0.04 1.49
match.arg 0.02 0.75 0.58 21.64
length 0.02 0.75 0.02 0.75
matrix 0.02 0.75 0.02 0.75
mode 0.02 0.75 0.02 0.75
order 0.02 0.75 0.02 0.75
parent.frame 0.02 0.75 0.02 0.75
sys.function 0.02 0.75 0.02 0.75
$by.total
total.time total.pct self.time self.pct
GetTopN 2.68 100.00 0.12 4.48
cor 2.30 85.82 0.16 5.97
is.data.frame 1.26 47.01 0.14 5.22
[ 1.12 41.79 0.14 5.22
[.data.frame 0.98 36.57 0.68 25.37
match.arg 0.58 21.64 0.02 0.75
eval 0.46 17.16 0.10 3.73
deparse 0.34 12.69 0.10 3.73
stopifnot 0.30 11.19 0.14 5.22
.Call 0.22 8.21 0.22 8.21
%in% 0.22 8.21 0.10 3.73
sqlQuery 0.20 7.46 0.00 0.00
sys.call 0.18 6.72 0.14 5.22
odbcQuery 0.18 6.72 0.00 0.00
GetFiltered 0.16 5.97 0.00 0.00
match.call 0.16 5.97 0.00 0.00
.deparseOpts 0.14 5.22 0.06 2.24
match 0.12 4.48 0.04 1.49
$ 0.10 3.73 0.10 3.73
c 0.08 2.99 0.08 2.99
formals 0.08 2.99 0.06 2.24
pmatch 0.08 2.99 0.06 2.24
names 0.06 2.24 0.06 2.24
sys.parent 0.04 1.49 0.04 1.49
LastBars 0.04 1.49 0.00 0.00
length 0.02 0.75 0.02 0.75
matrix 0.02 0.75 0.02 0.75
mode 0.02 0.75 0.02 0.75
order 0.02 0.75 0.02 0.75
parent.frame 0.02 0.75 0.02 0.75
sys.function 0.02 0.75 0.02 0.75
mat.or.vec 0.02 0.75 0.00 0.00
odbcFetchRows 0.02 0.75 0.00 0.00
odbcUpdate 0.02 0.75 0.00 0.00
sqlGetResults 0.02 0.75 0.00 0.00
sqlSave 0.02 0.75 0.00 0.00
sqlwrite 0.02 0.75 0.00 0.00
$sample.interval
[1] 0.02
$sampling.time
[1] 2.68
Looking at the summaryRprof output, it seems that [.data.frame takes the longest. I do not see how to get around that, though.
As Vincent points out in the comments, computing (Pearson) correlation is itself pretty quick. Once you have exhausted the basic R profiling and speed-up tricks, you can always:
- go multicore and/or parallel via the appropriate R packages;
- use compiled code, and I can think of a package to facilitate that;
- even consider GPUs; e.g., my Intro to High-Performance Computing with R slides (on my presentations page) contain an example of computing the (more expensive) Kendall correlation for a large gain.
One simple vectorisation of the existing loop is sketched below.
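A hedged sketch of that vectorisation (my own addition; z, n1, actionlist, and x come from the asker's environment, and n1 is assumed to be the vector of row offsets that defines each 34-row window, as in the loop above): build all windows as columns of a single matrix and call cor() once, instead of once per window inside the loop.
# Each column is the slice z[RPos + n1, 2] that the original loop extracts one at a time.
windows <- vapply(seq_len(nrow(actionlist)),
                  function(i) z[actionlist$RPos[i] + n1, 2],
                  numeric(length(n1)))

# cor() of a vector against a matrix returns one correlation per column.
crs_all <- as.vector(cor(x[, 2], windows))

# Indices of the 300 most correlated windows.
top_idx <- order(crs_all, decreasing = TRUE)[1:300]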