Measuring processing time in R and then accessing it through a variable - r

I am a beginner and I would like to measure the time of a spatial process and store it in a variable. Is there a way to do that in R? I have tried library(tictoc), but I think the measurements are inaccurate when storing them in a variable: my process takes about 2 seconds, but the value I get when capturing the toc() output is 8320.

The microbenchmark package is pretty good if you just want to measure the time of simple expressions. It measures in units much smaller than seconds and gives you a data frame of timings.
For example:
> library(microbenchmark)
> (bench <- microbenchmark(mean(1:100), sum(1:100)/length(1:100)))
Unit: microseconds
                     expr   min     lq    mean median     uq    max neval
              mean(1:100) 3.771 3.9225 4.49793  4.013 4.1515 40.636   100
 sum(1:100)/length(1:100) 1.023 1.1380 1.43525  1.217 1.3280 18.373   100
This gives you bench, which is also a data frame:
> class(bench)
[1] "microbenchmark" "data.frame"
It contains the time measurements for all the runs. Use bench$expr to get the expression that was measured (in this example mean(1:100) or sum(1:100)/length(1:100)), and bench$time to get the time of each run in nanoseconds.
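If all you need is a single elapsed time stored in a plain variable, base R is enough. A minimal sketch using Sys.time() and system.time(); the sum(sqrt(...)) call is just a stand-in for the real spatial process:

```r
# Time a computation and keep the elapsed seconds in a variable.
start_time <- Sys.time()
x <- sum(sqrt(1:1e6))  # stand-in for the real work
elapsed <- as.numeric(difftime(Sys.time(), start_time, units = "secs"))

# system.time() returns a named vector; "elapsed" is wall-clock seconds.
elapsed2 <- unname(system.time(sum(sqrt(1:1e6)))["elapsed"])
```

Either way the result is a plain numeric in seconds, which avoids unit confusion.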


Why is the computation faster with a loop?

Update:
As pointed out by @Dave2e, moving the start_time statement (in code 2) out of the for loop makes the running time comparable to code 1. Namely:
start_time <- Sys.time()
for (j in 1:1) {
  n <- 100
  result <- 1
  for (i in 1:n) {
    result <- result * i
  }
  result
  ## [1] 9.332622e+157
  end_time <- Sys.time()
}
end_time - start_time
Does the for loop really improve the performance or is it fake?
Original Post:
I have two pieces of code as follows:
Code 1:
start_time <- Sys.time()
n <- 100
result <- 1
for (i in 1:n) {
  result <- result * i
}
result
## [1] 9.332622e+157
end_time <- Sys.time()
end_time - start_time
Code 2:
for (j in 1:1) {
  start_time <- Sys.time()
  n <- 100
  result <- 1
  for (i in 1:n) {
    result <- result * i
  }
  result
  ## [1] 9.332622e+157
  end_time <- Sys.time()
}
end_time - start_time
I was expecting these two snippets to run in about the same time, but code 2 consistently runs significantly faster than code 1. On my computer, code 1 takes about 10^-2 seconds, while code 2 takes about 5*10^-6 seconds. Any insight into how this could happen? If simply wrapping code in a for loop can decrease running time, I will use it in all my code in the future.
I don't think your comparison is very robust. It's very hard to tell anything about the relative timing of very fast code without running it many times to get an average - too many uncontrollable factors can change the running time slightly.
The conclusion I would draw from the benchmarks below is that encapsulating a fairly trivial computation in a redundant for loop doesn't hurt very much, but that any apparent advantage is trivial and probably just an effect of noise.
I encapsulated each of your code chunks in a function (with_loop and without_loop) by putting function() { ... } around each of them. (Note that this means I'm not basing the timings on your Sys.time() comparisons, but on the built-in timing in the microbenchmark package.)
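Concretely, the encapsulation might look like this (a sketch; the answer above describes the functions but doesn't show their definitions):

```r
# Wrap each code chunk in a function so it can be benchmarked repeatedly.
with_loop <- function() {
  for (j in 1:1) {
    n <- 100
    result <- 1
    for (i in 1:n) {
      result <- result * i
    }
    result
  }
  # note: a `for` loop evaluates to NULL, so this function returns NULL
}

without_loop <- function() {
  n <- 100
  result <- 1
  for (i in 1:n) {
    result <- result * i
  }
  result  # last expression, so the product 1*2*...*100 is returned
}
```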
The microbenchmark package is more suitable for benchmarking, especially for very short computational tasks: from ?microbenchmark::microbenchmark:
‘microbenchmark’ serves as a more accurate replacement of the
often seen ‘system.time(replicate(1000, expr))’ expression. It
tries hard to accurately measure only the time it takes to
evaluate ‘expr’. To achieved this, the sub-millisecond (supposedly
nanosecond) accurate timing functions most modern operating
systems provide are used. Additionally all evaluations of the
expressions are done in C code to minimize any overhead.
library(microbenchmark)
m1 <- microbenchmark(with_loop, without_loop)
library(ggplot2)
autoplot(m1)+scale_y_log10()
The quantiles (lq, median, uq) are practically identical.
Unit: nanoseconds
         expr min lq   mean median uq   max neval cld
    with_loop  36 38  48.56     39 40   972   100   a
 without_loop  36 39 177.81     40 41 13363   100   a
The code without the loop is indeed slower on average (i.e. it has a larger mean), but this is almost entirely driven by a couple of outliers.
Now focus just on values less than 50 nanoseconds:
autoplot(m1)+scale_y_log10(limits=c(NA,50))
If we do this again with times=1e6 (a million iterations), we get almost identical results: the mean with the loop is 3 nanoseconds faster (again probably almost entirely driven by small fluctuations in the upper tail).
Unit: nanoseconds
         expr min lq     mean median uq     max neval cld
    with_loop  32 39 86.44248     41 61 2474675 1e+06   a
 without_loop  35 39 89.86294     41 61 2915836 1e+06   a
If you need to run your loop a billion times, this will correspond to a 3-second difference in run time. Probably not worth worrying about.

Extraction speed in Matrix package is very slow compared to regular matrix class

This is an example of comparing row extraction from large matrices, sparse and dense, using the Matrix package versus the regular R base-matrix class.
For dense matrices, row extraction is almost 395 times faster with the base matrix class:
library(Matrix)
library(microbenchmark)
## row extraction in dense matrices
D1 <- matrix(rnorm(2000^2), 2000, 2000)
D2 <- Matrix(D1)
> microbenchmark(D1[1,], D2[1,])
Unit: microseconds
    expr      min        lq       mean    median       uq      max neval
 D1[1, ]   14.437   15.9205   31.72903   31.4835   46.907   75.101   100
 D2[1, ] 5730.730 5744.0130 5905.11338 5777.3570 5851.083 7447.118   100
For sparse matrices, the base matrix class again wins, by almost a factor of 63.
## row extraction in sparse matrices
S1 <- matrix(1*(runif(2000^2) < 0.1), 2000, 2000)
S2 <- Matrix(S1, sparse = TRUE)
microbenchmark(S1[1,], S2[1,])
Unit: microseconds
    expr      min       lq       mean    median       uq      max neval
 S1[1, ]   15.225   16.417   28.15698   17.7655   42.9905   45.692   100
 S2[1, ] 1652.362 1670.507 1771.51695 1774.1180 1787.0410 5241.863   100
Why the speed discrepancy, and is there a way to speed up extraction in the Matrix package?
I don't know exactly what the trouble is, possibly S4 dispatch (which could potentially be a big piece of a small call like this). I was able to get performance equivalent to matrix (which has a pretty easy job, indexing + accessing a contiguous chunk of memory) by (1) switching to a row-major format and (2) writing my own special-purpose accessor function. I don't know exactly what you want to do or if it will be worth the trouble ...
Set up example:
set.seed(101)
S1 <- matrix(1*(runif(2000^2)<0.1), 2000, 2000)
Convert to column-major (dgCMatrix) and row-major (dgRMatrix) forms:
library(Matrix)
S2C <- Matrix(S1, sparse = TRUE)
S2R <- as(S1,"dgRMatrix")
Custom accessor:
my_row_extract <- function(m, i=1) {
  r <- numeric(ncol(m))  ## set up zero vector for results
  ## suggested by @OttToomet, handles empty rows
  inds <- seq(from = m@p[i]+1,
              to = m@p[i+1], length.out = max(0, m@p[i+1] - m@p[i]))
  r[m@j[inds]+1] <- m@x[inds]  ## set values
  return(r)
}
Check equality of results across methods (all TRUE):
all.equal(S2C[1,],S1[1,])
all.equal(S2C[1,],S2R[1,])
all.equal(my_row_extract(S2R,1),S2R[1,])
all.equal(my_row_extract(S2R,17),S2R[17,])
Benchmark:
library(rbenchmark)
benchmark(S1[1,], S2C[1,], S2R[1,], my_row_extract(S2R,1),
          columns = c("test", "elapsed", "relative"))
##                     test elapsed relative
## 4 my_row_extract(S2R, 1)   0.015    1.154
## 1                S1[1, ]   0.013    1.000
## 2               S2C[1, ]   0.563   43.308
## 3               S2R[1, ]   4.113  316.385
The special-purpose extractor is competitive with base matrices. S2R is super-slow, even for row extraction (surprisingly); however, ?"dgRMatrix-class" does say
Note: The column-oriented sparse classes, e.g., ‘dgCMatrix’, are preferred and better supported in the ‘Matrix’ package.
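To see why the accessor works, it helps to inspect the CSR slots it reads — @p (row pointers), @j (zero-based column indices), @x (values) — on a tiny matrix. A minimal sketch, using sparseMatrix(..., repr = "R") to build a small dgRMatrix directly:

```r
library(Matrix)

# 2 x 3 matrix in row-major (dgRMatrix) form with three nonzeros:
#   row 1: (0, 2, 0)
#   row 2: (3, 0, 4)
m <- sparseMatrix(i = c(1, 2, 2), j = c(2, 1, 3),
                  x = c(2, 3, 4), dims = c(2, 3), repr = "R")

m@p  # row pointers 0 1 3: row 1 holds entry 1, row 2 holds entries 2..3
m@j  # zero-based column indices of the nonzeros: 1 0 2
m@x  # the nonzero values: 2 3 4
```

The accessor just slices @j and @x between consecutive entries of @p, which is why it avoids the S4 dispatch overhead of `[`.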

element in R dataframe good practice

Accessing the ith element in the column bar in the dataframe foo in R can be done in two different ways:
foo[i,"bar"]
and
foo$bar[i].
Is there any difference between them? If so, which one should be used in terms of efficiency, readability, etc.?
Apologies if this has already been asked, but the [] and $ characters are very hard to search for.
I tend to think this is an opinion based question, and therefore inappropriate for SO. But since you ask for speed considerations, I won't flag it as such. Note: There are more than the two methods you describe for indexing...
data(mtcars)
library(microbenchmark)
microbenchmark(opt_a = mtcars$disp[12],
               opt_b = mtcars[12, "disp"],
               opt_c = mtcars[["disp"]][12])
Unit: microseconds
  expr   min      lq     mean  median     uq     max neval cld
 opt_a 5.322  6.4620  8.34029  6.8425  7.603  56.640   100  a
 opt_b 9.503 10.0735 15.41463 10.6435 11.024 354.285   100   b
 opt_c 4.181  4.9420  7.77386  5.3220  6.082  84.009   100  a
Using foo$bar[i] appears to be considerably faster than foo[i, "bar"], but it is not the fastest alternative.
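Whichever form you pick, all three extract the same value, so the choice really is about speed and readability. A quick sanity check:

```r
data(mtcars)

a <- mtcars$disp[12]       # fast, but only works with a literal column name
b <- mtcars[12, "disp"]    # matrix-style indexing through `[.data.frame`
c <- mtcars[["disp"]][12]  # extract the whole column first, then the element
stopifnot(a == b, b == c)  # all three agree
```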

rowwise operation with dplyr

I am working on a large dataframe in R of 2.3 million records that contain transactions of users at locations, with start and stop times. My goal is to create a new dataframe that contains the amount of time connected per user, per location. Let's call this hourly connected.
Transactions can last anywhere from 8 minutes to 48 hours, so the target dataframe will be around 100 million records and will grow each month.
The code below shows how the final dataframe is developed, although the total code is much more complex. Running the total code takes ~9 hours on an Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 16 cores, 128GB RAM.
library(dplyr)
numsessions <- 1000000
startdate <- as.POSIXlt(runif(numsessions, 1, 365*60*60)*24, origin = "2015-1-1")
df.Sessions <- data.frame(userID = round(runif(numsessions, 1, 500)),
                          postalcode = round(runif(numsessions, 1, 100)),
                          daynr = format(startdate, "%w"),
                          start = startdate,
                          end = startdate + runif(1, 1, 60*60*10))

dfhourly.connected <- df.Sessions %>% rowwise %>%
  do(data.frame(userID = .$userID,
                hourlydate = as.Date(seq(.$start, .$end, by = 60*60)),
                hournr = format(seq(.$start, .$end, by = 60*60), "%H")))
We want to parallelize this over (some of) the 16 cores to speed things up. A first attempt was to use the multidplyr package, with the partition based on daynr:
df.hourlyconnected <- df.Sessions %>%
  partition(daynr, cluster = init_cluster(6)) %>%
  rowwise %>%
  do(data.frame(userID = .$userID,
                hourlydate = as.Date(seq(.$start, .$end, by = 60*60)),
                hournr = format(seq(.$start, .$end, by = 60*60), "%H"))) %>%
  collect()
Now, the rowwise function appears to require a dataframe as input instead of a partition.
My questions are
Is there a workaround to perform a rowwise calculation on partitions per core?
Has anyone got a suggestion to perform this calculation with a different R package and methods?
(I think posting this as an answer could benefit future readers who are interested in efficient coding.)
R is a vectorized language, so by-row operations are among the most costly operations, especially if you are evaluating lots of functions, dispatching methods, converting classes and creating new data sets while you are at it.
Hence, the first step is to reduce the "by" operations. Looking at your code, it seems that you are enlarging the size of your data set according to userID, start and end; all the rest of the operations can come afterwards (and hence be vectorized). Also, running seq (which isn't a very efficient function by itself) twice per row adds nothing. Lastly, calling seq.POSIXt explicitly on a POSIXt class saves the overhead of method dispatch.
I'm not sure how to do this efficiently with dplyr, because mutate can't handle it and the do function (IIRC) has always proved itself to be highly inefficient. So let's try the data.table package, which can handle this task easily:
library(data.table)
res <- setDT(df.Sessions)[, seq.POSIXt(start, end, by = 3600), by = .(userID, start, end)]
Again, please note that I minimized the "by row" operations to a single function call while avoiding method dispatch.
Now that we have the data set ready, we don't need any by row operations any more, everything can be vectorized from now on.
Vectorizing isn't the end of the story, though. We also need to take class conversions, method dispatch, etc. into consideration. For instance, we could create both the hourlydate and hournr columns using different Date-class functions, using format, or maybe even substr. The trade-off to keep in mind is that, for instance, substr will be the fastest, but its result will be a character vector rather than a Date one; it's up to you to decide whether you prefer speed or the quality of the end product. Sometimes you can win both, but first you should check your options. Let's benchmark 3 different vectorized ways of calculating the hournr variable:
library(microbenchmark)
set.seed(123)
N <- 1e5
test <- as.POSIXlt(runif(N, 1, 1e5), origin = "1900-01-01")
microbenchmark("format" = format(test, "%H"),
               "substr" = substr(test, 12L, 13L),
               "data.table::hour" = hour(test))
# Unit: microseconds
#              expr        min         lq        mean    median        uq       max neval cld
#            format 273874.784 274587.880 282486.6262 275301.78 286573.71 384505.88   100   b
#            substr 486545.261 503713.314 529191.1582 514249.91 528172.32 667254.27   100   c
#  data.table::hour      5.121      7.681     23.9746     27.84     33.44     55.36   100  a
data.table::hour is the clear winner on both speed and quality (the result is an integer vector rather than a character one), improving the speed of your previous solution by a factor of ~12,000 (and I haven't even tested it against your by-row implementation).
Now let's try 3 different ways of calculating the hourlydate variable:
microbenchmark("as.Date" = as.Date(test),
               "substr" = substr(test, 1L, 10L),
               "data.table::as.IDate" = as.IDate(test))
# Unit: milliseconds
#                  expr       min        lq      mean    median        uq       max neval cld
#               as.Date  19.56285  20.09563  23.77035  20.63049  21.16888  50.04565   100  a
#                substr 492.61257 508.98049 525.09147 515.58955 525.20586 663.96895   100   b
#  data.table::as.IDate  19.91964  20.44250  27.50989  21.34551  31.79939 145.65133   100  a
Seems like the first and third options are pretty much the same speed-wise, and I prefer as.IDate because of its integer storage mode.
Now that we know where both efficiency and quality lies, we could simply finish the task by running
res[, `:=`(hourlydate = as.IDate(V1), hournr = hour(V1))]
(You can then easily remove unnecessary columns using similar syntax, res[, yourcolname := NULL], which I'll leave to you.)
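Putting the pieces together on a small synthetic sample (the column names mirror the question's df.Sessions; the data itself is made up):

```r
library(data.table)

set.seed(1)
n <- 10
start <- as.POSIXct("2015-01-01", tz = "UTC") + runif(n, 0, 86400)
dt <- data.table(userID = sample(1:3, n, replace = TRUE),
                 start  = start,
                 end    = start + runif(n, 3600, 36000))  # sessions of 1-10 hours

# one seq.POSIXt call per row-group to expand the intervals, then
# vectorized date/hour extraction over the whole result
res <- dt[, seq.POSIXt(start, end, by = 3600), by = .(userID, start, end)]
res[, `:=`(hourlydate = as.IDate(V1), hournr = hour(V1))]
```

Each input row expands into one output row per connected hour, after which every remaining step is a single vectorized call.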
There are probably more efficient ways of solving this, but this demonstrates one possible way of making your code more efficient.
As a side note, if you want to investigate data.table syntax/features further, here's a good read:
https://github.com/Rdatatable/data.table/wiki/Getting-started

Efficiently extract frequency of signal from FFT

I am using R and attempting to recover frequencies (really, just a number close to the actual frequency) from a large number of sound waves (1000s of audio files) by applying Fast Fourier transforms to each of them and identifying the frequency with the highest magnitude for each file. I'd like to be able to recover these peak frequencies as quickly as possible. The FFT method is one method that I've learned about recently and I think it should work for this task, but I am open to answers that do not rely on FFTs. I have tried a few ways of applying the FFT and getting the frequency of highest magnitude, and I have seen significant performance gains since my first method, but I'd like to speed up the execution time much more if possible.
Here is sample data:
s.rate <- 44100  # sampling frequency
t <- 2  # seconds; for my situation, I've got 1000s of 1-5 minute files to go through
ind <- seq(s.rate*t)/s.rate  # time indices for each step
# let's add two sine waves together to make the sound wave
f1 <- 600  # Hz: freq of sound wave 1
y <- 100*sin(2*pi*f1*ind)  # sine wave 1
f2 <- 1500  # Hz: freq of sound wave 2
z <- 500*sin(2*pi*f2*ind+1)  # sine wave 2
s <- y + z  # the sound wave: my data isn't this nice, but I think this is an OK example
The first method I tried was using the fpeaks and spec functions from the seewave package, and it seems to work. However, it is prohibitively slow.
library(seewave)
fpeaks(spec(s, f=s.rate), nmax=1, plot=F) * 1000 # *1000 in order to recover freq in Hz
[1] 1494
# pretty close, quite slow
After doing a bit more reading, I tried this next approach, in which the spectrum row with the maximum amplitude is looked up directly:
spec(s, f = s.rate, plot = F)[which(spec(s, f = s.rate, plot = F)[, 2] == max(spec(s, f = s.rate, plot = F)[, 2])), 1] * 1000  # again need to *1000 to get Hz
x
1494
# pretty close, definitely faster
After a bit more looking around, I found this approach to work reasonably well.
which(Mod(fft(s)) == max(abs(Mod(fft(s))))) * s.rate / length(s)
[1] 1500
# recovered the exact frequency, and quickly!
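The same peak-picking idea can be written with which.max, restricted to the first half of the spectrum (for a real-valued signal the second half mirrors the first); a small self-contained sketch, with the -1 correcting for the DC bin at index 1:

```r
# toy signal: a pure 1500 Hz sine sampled at 44.1 kHz for 2 seconds
s.rate <- 44100
ind <- seq(s.rate * 2) / s.rate
s <- sin(2 * pi * 1500 * ind)

N <- length(s)
mag <- Mod(fft(s))[1:(N %/% 2)]               # magnitude spectrum, positive freqs only
peak_hz <- (which.max(mag) - 1) * s.rate / N  # 0-based bin index -> frequency in Hz
peak_hz  # 1500
```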
Here is some performance data:
library(microbenchmark)
microbenchmark(
  WHICH.MOD = which(Mod(fft(s)) == max(abs(Mod(fft(s))))) * s.rate / length(s),
  # spec is from the seewave package; to recover a number around 1500, multiply by 1000
  SPEC.WHICH = spec(s, f = s.rate, plot = F)[which(spec(s, f = s.rate, plot = F)[, 2] == max(spec(s, f = s.rate, plot = F)[, 2])), 1] * 1000,
  # fpeaks is from the seewave package... again, need to multiply by 1000
  FPEAKS.SPEC = fpeaks(spec(s, f = s.rate), nmax = 1, plot = F)[, 1] * 1000,
  times = 10)
Unit: milliseconds
        expr       min        lq    median        uq       max neval
   WHICH.MOD     10.78     10.81     11.07     11.43     12.33    10
  SPEC.WHICH     64.68     65.83     66.66     67.18     78.74    10
 FPEAKS.SPEC 100297.52 100648.50 101056.05 101737.56 102927.06    10
Good solutions will be the ones that recover a frequency close (± 10 Hz) to the real frequency the fastest.
More Context
I've got many files (several GBs), each containing a tone that gets modulated several times a second, and sometimes the signal actually disappears altogether so that there is just silence. I want to identify the frequency of the unmodulated tone. I know they should all be somewhere less than 6000 Hz, but I don't know more precisely than that. If (big if) I understand correctly, I've got an OK approach here, it's just a matter of making it faster. Just fyi, I have no previous experience in digital signal processing, so I appreciate any tips and pointers related to the mathematics / methods in addition to advice on how better to approach this programmatically.
After coming to a better understanding of this task and some of the terminology involved, I came across some additional approaches, which I'll present here. These additional approaches allow for window functions and a lot more, which the fastest approach in my question does not. I also sped things up a bit by assigning the result of some of the functions to an object and indexing the object instead of running the function again:
# i.e.
(ms <- meanspec(s, f = s.rate, wl = 1024, plot = F))[which.max(ms[, 2]), 1] * 1000
# instead of
meanspec(s, f = s.rate, wl = 1024, plot = F)[which.max(meanspec(s, f = s.rate, wl = 1024, plot = F)[, 2]), 1] * 1000
I have my favorite approach, but I welcome constructive warnings, feedback, and opinions.
microbenchmark(
  WHICH.MOD = which((mfft <- Mod(fft(s)))[1:(length(s)/2)] == max(abs(mfft[1:(length(s)/2)]))) * s.rate / length(s),
  MEANSPEC = (ms <- meanspec(s, f = s.rate, wl = 1024, plot = F))[which.max(ms[, 2]), 1] * 1000,
  DFREQ.HIST = (h <- hist(dfreq(s, f = s.rate, wl = 1024, plot = F)[, 2], 200, plot = F))$mids[which.max(h$density)] * 1000,
  DFREQ.DENS = (dens <- density(dfreq(s, f = s.rate, wl = 1024, plot = F)[, 2], na.rm = T))$x[which.max(dens$y)] * 1000,
  FPEAKS.MSPEC = fpeaks(meanspec(s, f = s.rate, wl = 1024, plot = F), nmax = 1, plot = F)[, 1] * 1000,
  times = 100)
Unit: milliseconds
         expr       min        lq    median        uq      max neval
    WHICH.MOD  8.119499  8.394254  8.513992  8.631377 10.81916   100
     MEANSPEC  7.748739  7.985650  8.069466  8.211654 10.03744   100
   DFREQ.HIST  9.720990 10.186257 10.299152 10.492016 12.07640   100
   DFREQ.DENS 10.086190 10.413116 10.555305 10.721014 12.48137   100
 FPEAKS.MSPEC 33.848135 35.441716 36.302971 37.089605 76.45978   100
DFREQ.DENS returns a frequency value farthest from the real value. The other approaches return values close to the real value.
With one of my audio files (i.e. real data) the performance results are a bit different (see below). One potentially relevant difference between the data being used above and the real data used for the performance data below is that above the data is just a vector of numerics and my real data is stored in a Wave object, an S4 object from the tuneR package.
library(Rmpfr)  # to avoid an integer overflow problem in `WHICH.MOD`
microbenchmark(
  WHICH.MOD = which((mfft <- Mod(fft(d@left)))[1:(length(d@left)/2)] == max(abs(mfft[1:(length(d@left)/2)]))) * mpfr(s.rate, 100) / length(d@left),
  MEANSPEC = (ms <- meanspec(d, f = s.rate, wl = 1024, plot = F))[which.max(ms[, 2]), 1] * 1000,
  DFREQ.HIST = (h <- hist(dfreq(d, f = s.rate, wl = 1024, plot = F)[, 2], 200, plot = F))$mids[which.max(h$density)] * 1000,
  DFREQ.DENS = (dens <- density(dfreq(d, f = s.rate, wl = 1024, plot = F)[, 2], na.rm = T))$x[which.max(dens$y)] * 1000,
  FPEAKS.MSPEC = fpeaks(meanspec(d, f = s.rate, wl = 1024, plot = F), nmax = 1, plot = F)[, 1] * 1000,
  times = 25)
Unit: seconds
         expr      min       lq   median       uq      max neval
    WHICH.MOD 3.249395 3.320995 3.361160 3.421977 3.768885    25
     MEANSPEC 1.180119 1.234359 1.263213 1.286397 1.315912    25
   DFREQ.HIST 1.468117 1.519957 1.534353 1.563132 1.726012    25
   DFREQ.DENS 1.432193 1.489323 1.514968 1.553121 1.713296    25
 FPEAKS.MSPEC 1.207205 1.260006 1.277846 1.308961 1.390722    25
WHICH.MOD actually has to run twice to account for the left and right audio channels (i.e. my data is stereo), so it takes longer than the output indicates. Note: I needed to use the Rmpfr library in order for the WHICH.MOD approach to work with my real data, as I was having problems with integer overflow.
Interestingly, FPEAKS.MSPEC performed really well with my data, and it seems to return a pretty accurate frequency (based on my visual inspection of a spectrogram). DFREQ.HIST and DFREQ.DENS are quick, but their output frequency isn't as close to what I judge is the real value, and both are relatively ugly solutions. My favorite solution so far, MEANSPEC, uses meanspec and which.max. I'll mark this as the answer since I haven't had any other answers, but feel free to provide another answer; I'll vote for it and maybe select it as the answer if it provides a better solution.
