Convert Date to year-month representation in R

I have a Date, and am interested in representing it as an integer of yyyymm form. Currently, I do:
get_year_month <- function(d) { return(as.integer(format(d, "%Y%m")))}
mydate = seq.Date(from = as.Date("2012-01-01"), to = as.Date("5012-01-01"), by = 1)
system.time(ym <- get_year_month(mydate))
# user system elapsed
# 5.972 0.974 6.951
This is very slow for large datasets. Is there a faster way? Please provide timings for your answers so they can be easily compared. Use the above example.

Using functions from the lubridate package can be almost twice as fast as your function:
mydate = as.Date(rep("2012-01-01",1000))
library(lubridate)
library(microbenchmark)
microbenchmark(get_year_month(mydate),
year(mydate)*100+month(mydate))
gives:
R> Unit: milliseconds
expr min lq median uq
get_year_month(mydate) 2.150296 2.188370 2.218176 2.285973
year(mydate) * 100 + month(mydate) 1.220016 1.228129 1.239704 1.284568

You can try using the yearmon class from the zoo package. In general, if you are doing time-series manipulation and analysis, I would suggest using xts or at least zoo objects; xts has a lot of functionality for analyzing very large time-series data.
Here is a quick benchmark against the other suggested solutions.
library(zoo)            # as.yearmon()
library(lubridate)      # year(), month()
library(microbenchmark)

get_year_month <- function(d) {
  return(as.integer(format(d, "%Y%m")))
}
mydate = as.Date(rep("2012-01-01", 1e+06))
microbenchmark(get_year_month(mydate), year(mydate) * 100 + month(mydate), as.yearmon(mydate, format = "%Y-%m-%d"), times = 1)
## Unit: milliseconds
## expr min lq median uq max neval
## get_year_month(mydate) 1049.8813 1049.8813 1049.8813 1049.8813 1049.8813 1
## year(mydate) * 100 + month(mydate) 434.1765 434.1765 434.1765 434.1765 434.1765 1
## as.yearmon(mydate, format = "%Y-%m-%d") 249.6704 249.6704 249.6704 249.6704 249.6704 1
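If you ultimately need the integer yyyymm form rather than a yearmon object, note that a yearmon value is stored internally as year + (month - 1)/12, so the integer can be recovered with plain arithmetic (a small sketch of my own, not part of the benchmark above):
ym <- as.yearmon(mydate)
yr <- floor(as.numeric(ym))                  # year part
mo <- round(12 * (as.numeric(ym) - yr)) + 1  # month part, 1..12
yyyymm <- as.integer(yr * 100 + mo)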

It would be best to keep your Dates in POSIXlt format if you want to manipulate them like that:
> system.time(ym <- get_year_month(mydate))
user system elapsed
4.039 0.025 4.079
> system.time(mydatep <- as.POSIXlt(mydate))
user system elapsed
3.576 0.016 3.603
> system.time(ym <- (1900 + mydatep$year)*100 + (mydatep$mon + 1))
user system elapsed
0.010 0.005 0.015
Even counting the one-off as.POSIXlt() conversion it's still a little faster, and any subsequent operations of this kind are then essentially free in terms of time.
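For convenience, the same idea can be wrapped into a drop-in replacement for the original function (a sketch based on the lines above; the one-off conversion still dominates if you only call it once):
get_year_month_lt <- function(d) {
  lt <- as.POSIXlt(d)                       # convert once
  (1900L + lt$year) * 100L + (lt$mon + 1L)  # integer of yyyymm form
}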

There may not be a faster way for a single item. However, you can write a version of the function that operates on a whole collection at once, e.g. by applying it element-wise with one of the built-in apply functions:
year_month_all <- function(D) {
  vapply(seq_along(D), function(i) get_year_month(D[i]), integer(1))
}

How to find longest sequence of values above threshold efficiently in R

I am working on spatiotemporal observations of temperatures, stored in arrays of size 100*100*504 (a 100*100 grid, for 504 different hours representing 21 days). I am computing various indicators from those observations, for different periods (3 to 21 days), which obviously requires some time, and I'm looking to improve computation efficiency. I am not really accustomed to R, so I am not sure whether what I am doing is the most efficient way.
One of the things I want to do is to find (for each cell) the longest continuous period of time where the temperature is above a certain threshold. This is what I'm doing at the moment:
First I compute a boolean array based on the threshold using the following function.
utci_test = array(runif(100*100*504, min = 18, max = 42), c(100,100,504))
to_hs = function(utci, period = 1:length(utci[1,1,]), hs_threshold){
  utci_hs = utci*0
  utci_hs[which(utci > hs_threshold)] = 1
  utci_hs[is.na(utci)] = 0
  return(utci_hs)
}
Then I transform each vector representing the hourly value for each cell into an rle object, and I return the maximum length of the 1's sequences (representing a continuous period over threshold).
max_duration_hs = function(utci_hs, period = 1:length(utci_hs[1,1,])){
  apply(utci_hs, MARGIN = c(1,2), FUN = function(x){
    r = rle(x)
    max(r$lengths[as.logical(r$values)], fill = 0)
  })
}
Looking at the time required, I noticed the second step is taking some time (bear in mind that I have to repeat this operation ~8000 times in total):
system.time(to_hs(utci_test, hs_threshold=32.0))
# user system elapsed
# 0.051 0.004 0.055
system.time(to_hs(utci_test, hs_threshold=32.0))
# user system elapsed
# 0.053 0.000 0.052
utci_test_sh = to_hs(utci_test, hs_threshold=32.0)
system.time(max_duration_hs(utci_test_sh))
# user system elapsed
# 0.456 0.012 0.468
So I'm wondering if there is a more efficient way to do this, as I guess transforming into an rle object might be inefficient?
You can get a bit of a speed bump by writing your own version of rle() that exploits the fact that you only care about runs of 1's and therefore does less comparison work. This gets you about 2x faster, down to a median time of about 250 milliseconds on my machine (a generic MacBook).
If you have to do this 8,000 times you'll save yourself the most time by parallelizing the code to run on a multicore machine, which is straightforward to do in R (check out e.g. the parallel package; a rough sketch follows the benchmark below).
Below is the code for the speedup.
# generate data
set.seed(123)
utci_test <- array(runif(100*100*504, min = 18, max = 42), c(100,100,504))
# original functions
to_hs = function(utci, period = 1:length(utci[1,1,]), hs_threshold){
  utci_hs = utci*0
  utci_hs[which(utci > hs_threshold)] = 1
  utci_hs[is.na(utci)] = 0
  return(utci_hs)
}

max_duration_hs = function(utci_hs, period = 1:length(utci_hs[1,1,])){
  apply(utci_hs, MARGIN = c(1,2), FUN = function(x){
    r = rle(x)
    max(r$lengths[as.logical(r$values)], fill = 0)
  })
}

# helper func for rle: runs of 1's are the gaps between consecutive 0's,
# so the longest run is the largest such gap minus one
rle_max <- function(v) {
  max(diff(c(0L, which(v == 0), length(v) + 1))) - 1
}

max_dur_hs_2 <- function(utci_hs) {
  apply(utci_hs, MARGIN = c(1, 2), FUN = rle_max)
}
# Check equivalence
utci_hs <- to_hs(utci = utci_test, hs_threshold = 32)
all.equal(max_dur_hs_2(utci_hs),
max_duration_hs(utci_hs))
#> [1] TRUE
# Test speed
library(microbenchmark)
microbenchmark(max_dur_hs_2(utci_hs),
max_duration_hs(utci_hs))
#> Unit: milliseconds
#> expr min lq mean median uq max
#> max_dur_hs_2(utci_hs) 216.1481 236.7825 250.9277 247.9918 262.4369 296.0146
#> max_duration_hs(utci_hs) 454.5740 476.5710 501.5119 489.9536 509.8750 774.9963
#> neval cld
#> 100 a
#> 100 b
Created on 2020-05-07 by the reprex package (v0.3.0)
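As mentioned above, if the operation has to be repeated ~8,000 times, the bigger win is to spread those repetitions over cores. Here is a rough sketch with the parallel package (mclapply() relies on forking, so it needs a Unix-alike; on Windows use makeCluster()/parLapply() instead). The thresholds vector is a hypothetical stand-in for whatever actually varies between your repetitions.
library(parallel)

thresholds <- seq(26, 40, by = 2)  # placeholder inputs for the repetitions
results <- mclapply(thresholds, function(th) {
  max_dur_hs_2(to_hs(utci_test, hs_threshold = th))
}, mc.cores = max(1L, detectCores() - 1L))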

Convert integers to decimal values

I have a set of integer data between 1 and 10000. I need to bring them into the range 0 to 1.
For example, converting
12 --> 0.12
123 --> 0.123
1234 --> 0.1234
etc. (note that I don't want to scale the values).
Any suggestions how to do this on all the data at once?
I would simply do
x <- c(2, 14, 128, 1940, 140, 20000)
x/10^nchar(x)
## [1] 0.200 0.140 0.128 0.194 0.140 0.200
But a much faster approach (which avoids the character conversion), offered by @Frank, is
x/10^ceiling(log10(x))
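As a quick sanity check (mine, not part of the original answer) that the two formulas agree on the small example vector above:
x <- c(2, 14, 128, 1940, 140, 20000)
all.equal(x / 10^nchar(x), x / 10^ceiling(log10(x)))  # TRUE for these values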
Benchmark
library(microbenchmark)
set.seed(123)
x <- sample(1e8, 1e6)
microbenchmark(
david = x/10^nchar(x),
davidfrank = x/10^ceiling(log10(x)),
richard1 = as.numeric(paste0(".", x)),
richard2 = as.numeric(sprintf(".%d", x))
)
# Unit: milliseconds
# expr min lq mean median uq max neval cld
# david 691.0513 822.6482 1052.2473 956.5541 1153.4779 2391.7856 100 b
# davidfrank 130.0522 164.3227 255.8397 197.3158 339.3224 576.2255 100 a
# richard1 1130.5160 1429.8314 1972.2624 1689.8454 2473.6409 4791.0558 100 c
# richard2 712.8357 926.8013 1181.5349 1103.1661 1315.4459 2753.6795 100 b
The non-mathy way would be to add the decimal with paste() then coerce back to numeric.
x <- c(2, 14, 128, 1940, 140, 20000)
as.numeric(paste0(".", x))
# [1] 0.200 0.140 0.128 0.194 0.140 0.200
Update 1: There was some interest in the timings of the two originally posted methods. According to the following benchmarks, they seem to be about the same.
library(microbenchmark)
x <- 1:1e5
microbenchmark(
david = { david <- x/10^nchar(x) },
richard = { richard <- as.numeric(paste0(".", x)) }
)
# Unit: milliseconds
# expr min lq mean median uq max neval
# david 88.94391 89.18379 89.70962 89.40736 89.71012 99.68126 100
# richard 87.89776 88.17234 89.38383 88.44439 88.77052 105.06066 100
identical(richard, david)
# [1] TRUE
Update 2: I have also remembered that sprintf() is often faster than paste0(). We can also use the following.
as.numeric(sprintf(".%d", x))
Now using the same x from above, and only comparing these two choices, we have a good improvement in the timing of sprintf() versus paste(), as shown below.
microbenchmark(
paste0 = as.numeric(paste0(".", x)),
sprintf = as.numeric(sprintf(".%d", x))
)
# Unit: milliseconds
# expr min lq mean median uq max neval
# paste0 87.89413 88.41606 90.25795 88.82484 89.65674 107.8080 100
# sprintf 61.16524 61.23328 62.26202 61.29192 61.48316 79.1202 100

Why is mean() so slow?

Everything is in the question! I just tried to do a bit of optimization, and while nailing down the bottlenecks, out of curiosity, I tried this:
t1 <- rnorm(10)
microbenchmark(
mean(t1),
sum(t1)/length(t1),
times = 10000)
and the result is that mean() is 6+ times slower than the computation "by hand"!
Does it stem from the overhead in the code of mean() before the call to .Internal(mean), or is it the C code itself that is slower? Why? Is there a good reason, and thus a good use case?
It is due to the S3 lookup for the method, and then the necessary parsing of arguments in mean.default (and also the other code in mean).
sum and length are both primitive functions, so they will be fast (but how are you handling NA values?)
t1 <- rnorm(10)
microbenchmark(
mean(t1),
sum(t1)/length(t1),
mean.default(t1),
.Internal(mean(t1)),
times = 10000)
Unit: nanoseconds
expr min lq median uq max neval
mean(t1) 10266 10951 11293 11635 1470714 10000
sum(t1)/length(t1) 684 1027 1369 1711 104367 10000
mean.default(t1) 2053 2396 2738 2739 1167195 10000
.Internal(mean(t1)) 342 343 685 685 86574 10000
The internal bit of mean is faster even than sum/length.
See http://rwiki.sciviews.org/doku.php?id=packages:cran:data.table#method_dispatch_takes_time (mirror) for more details (and a data.table solution that avoids .Internal).
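A quick way to see the dispatch difference is to inspect the functions themselves: sum and length go straight to C, while mean is an S3 generic that has to find a method first.
is.primitive(sum)     # TRUE
is.primitive(length)  # TRUE
is.primitive(mean)    # FALSE -- mean is a closure
mean                  # function (x, ...) UseMethod("mean")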
Note that if we increase the length of the vector, then the primitive approach is fastest
t1 <- rnorm(1e7)
microbenchmark(
mean(t1),
sum(t1)/length(t1),
mean.default(t1),
.Internal(mean(t1)),
times = 100)
Unit: milliseconds
expr min lq median uq max neval
mean(t1) 25.79873 26.39242 26.56608 26.85523 33.36137 100
sum(t1)/length(t1) 15.02399 15.22948 15.31383 15.43239 19.20824 100
mean.default(t1) 25.69402 26.21466 26.44683 26.84257 33.62896 100
.Internal(mean(t1)) 25.70497 26.16247 26.39396 26.63982 35.21054 100
Now method dispatch accounts for only a small fraction of the overall time required.
mean is slower than computing "by hand" for several reasons:
S3 Method dispatch
NA handling
Error correction
Points 1 and 2 have already been covered. Point 3 is discussed in What algorithm is R using to calculate mean?. Basically, mean makes 2 passes over the vector in order to correct for floating point errors. sum only makes 1 pass over the vector.
Notice that identical(sum(t1)/length(t1), mean(t1)) may be FALSE, due to these precision issues.
> set.seed(21); t1 <- rnorm(1e7,,21)
> identical(sum(t1)/length(t1), mean(t1))
[1] FALSE
> sum(t1)/length(t1) - mean(t1)
[1] 2.539201e-16
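For illustration, here is a minimal sketch of the two-pass correction described in point 3 (R's real implementation is in C and also accumulates in long double, so this shows the idea rather than the exact algorithm):
two_pass_mean <- function(x) {
  m1 <- sum(x) / length(x)      # first pass: naive mean
  m1 + sum(x - m1) / length(x)  # second pass: correct with the mean of the residuals
}
# compare two_pass_mean(t1), mean(t1) and sum(t1)/length(t1)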

R - streamline subsampling procedure

I have a dataset composed of values obtained from studies and experiments. Experiments are nested within studies. I want to subsample the dataset so that only 1 experiment is represented for each study. I want to repeat this procedure 10,000 times, randomly drawing the 1 experiment each time, and then calculate some summary statistics for the values. Here is an example dataset:
df=data.frame(study=c(1,1,2,2,2,3,4,4),expt=c(1,2,1,2,3,1,1,2),value=runif(8))
I wrote the following function to do the above, but it is taking forever. Does anyone have any suggestions for streamlining this code? Thanks!
library(plyr)

subsample = function(x, A) {
  subsample.list = sapply(1:A, function(m) {
    idx = ddply(x, c("study"), function(i) sample(1:nrow(i), 1))  # Sample one experiment from each study
    x[paste(x$study, x$expt, sep = "-") %in% paste(idx$study, idx$V1, sep = "-"), "value"]  # Match the study-experiment combinations and retrieve values
  })
  means.list = ldply(subsample.list, mean)  # Calculate the mean of 'values' for each iteration
  c(quantile(means.list$V1, 0.025), mean(means.list$V1), upper = quantile(means.list$V1, 0.975))  # Calculate overall means and 95% CIs
}
You can vectorise this much more (even using plyr) and go much, much faster:
yoursummary = function(x) c(quantile(x, 0.025), mean(x), upper = quantile(x, 0.975))

subsampleX = function(x, M)
  yoursummary(
    aaply(
      daply(x, .(study), function(d) sample(d$value, M, replace = TRUE), .drop_o = FALSE),
      1, mean
    )
  )
The trick here is to do all the sampling up front: if we want to sample M times, why not do it all while you have access to the study?
Original code:
> system.time(subsample(df,20000))
user system elapsed
123.23 0.06 124.74
New vectorised code:
> system.time(subsampleX(df,20000))
user system elapsed
0.24 0.00 0.25
That's about 500x faster.
Here's a base R solution which avoids ddply for speed reasons:
df=data.frame(study=c(1,1,2,2,2,3,4,4),expt=c(1,2,1,2,3,1,1,2),value=runif(8))
sample.experiments <- function(df) {
  r <- rle(df$study)
  samp <- sapply(r$lengths, function(x) sample(seq(x), 1))
  start.idx <- c(0, cumsum(r$lengths)[1:(length(r$lengths) - 1)])
  df[samp + start.idx, ]
}
> sample.experiments(df)
study expt value
1 1 1 0.6113196
4 2 2 0.5026527
6 3 1 0.2803080
7 4 1 0.9824377
Benchmarks
> m <- microbenchmark(
+ ddply(df,.(study),function(i) i[sample(1:nrow(i),1),]) ,
+ sample.experiments(df)
+ )
> m
Unit: microseconds
expr min lq median uq max
1 ddply(df, .(study), function(i) i[sample(1:nrow(i), 1), ]) 3808.652 3883.632 3936.805 4022.725 6530.506
2 sample.experiments(df) 337.327 350.734 357.644 365.915 580.097
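To reproduce the full procedure from the question with this function (a sketch: 10,000 repetitions, then the same mean and 95% interval that the original subsample() returns):
means <- replicate(10000, mean(sample.experiments(df)$value))
c(quantile(means, 0.025), mean(means), upper = quantile(means, 0.975))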

Equivalent to rowMeans() for min()

I have seen this question being asked multiple times on the R mailing list, but still could not find a satisfactory answer.
Suppose I have a matrix m
m <- matrix(rnorm(10000000), ncol=10)
I can get the mean of each row by:
system.time(rowMeans(m))
user system elapsed
0.100 0.000 0.097
But obtaining the minimum value of each row by
system.time(apply(m,1,min))
user system elapsed
16.157 0.400 17.029
takes more than 100 times as long. Is there a way to speed this up?
You could use pmin, but you would have to get each column of your matrix into a separate vector. One way to do that is to convert it to a data.frame then call pmin via do.call (since data.frames are lists).
system.time(do.call(pmin, as.data.frame(m)))
# user system elapsed
# 0.940 0.000 0.949
system.time(apply(m,1,min))
# user system elapsed
# 16.84 0.00 16.95
Quite late to the party, but as the author of matrixStats and in case someone spots this, please note that matrixStats::rowMins() is very fast these days, e.g.
library(microbenchmark)
library(Biobase) # rowMin()
library(matrixStats) # rowMins()
options(digits=3)
m <- matrix(rnorm(10000000), ncol=10)
stats <- microbenchmark(
rowMeans(m), ## A benchmark by OP
rowMins(m),
rowMin(m),
do.call(pmin, as.data.frame(m)),
apply(m, MARGIN=1L, FUN=min),
times=10
)
> stats
Unit: milliseconds
expr min lq mean median uq max
rowMeans(m) 77.7 82.7 85.7 84.4 90.3 98.2
rowMins(m) 72.9 74.1 88.0 79.0 90.2 147.4
rowMin(m) 341.1 347.1 395.9 383.4 395.1 607.7
do.call(pmin, as.data.frame(m)) 326.4 357.0 435.4 401.0 437.6 657.9
apply(m, MARGIN = 1L, FUN = min) 3761.9 3963.8 4120.6 4109.8 4198.7 4567.4
If you want to stick to CRAN packages, then both the matrixStats and the fBasics packages have the function rowMins [note the s which is not in the Biobase function] and a variety of other row and column statistics.
library("sos")
findFn("rowMin")
gets a hit in the Biobase package, from Bioconductor ...
source("http://bioconductor.org/biocLite.R")
biocLite("Biobase")
m <- matrix(rnorm(10000000), ncol=10)
system.time(rowMeans(m))
## user system elapsed
## 0.132 0.148 0.279
system.time(apply(m,1,min))
## user system elapsed
## 11.825 1.688 13.603
library(Biobase)
system.time(rowMin(m))
## user system elapsed
## 0.688 0.172 0.864
Not as fast as rowMeans, but a lot faster than apply(..., 1, min).
I've been meaning to try out the new compiler package in R 2.13.0. This essentially follows the post outlined by Dirk here.
library(compiler)
library(rbenchmark)
rowMin <- function(x, ind) apply(x, ind, min)
crowMin <- cmpfun(rowMin)
benchmark(
rowMin(m,1)
, crowMin(m,1)
, columns=c("test", "replications","elapsed","relative")
, order="relative"
, replications=10)
And the results:
test replications elapsed relative
2 crowMin(m, 1) 10 120.091 1.0000
1 rowMin(m, 1) 10 122.745 1.0221
Anticlimactic to say the least, though it looks like you've gotten some other good options.
Not particularly R-idiosyncratic, but surely the fastest method is just to use pmin and loop over columns:
x <- m[,1]
for (i in 2:ncol(m)) x <- pmin(x, m[,i])
On my machine that takes just 3 times longer than rowMeans for the 1e7-by-10 matrix, and is slightly faster than the do.call method via data.frame.
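The same loop can also be written as a fold over the columns with Reduce(), which gives an identical result (a sketch; row_min_pmin is just an illustrative name):
row_min_pmin <- function(m) {
  # elementwise minimum accumulated across the columns, as in the loop above
  Reduce(pmin, lapply(seq_len(ncol(m)), function(j) m[, j]))
}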
