R xts of factors

I would like to have unique numeric factor codes as part of an xts object, so that over time each number refers to a specific factor level, independent of time.
To give an example, imagine a stock index that changes its constituents every day. We can simulate this if I have the following universe of two-letter stock tickers:
universe <- apply(as.data.frame(expand.grid(letters,letters)),1,paste0,collapse="")
and each day an index is created that is a random subsample of 20 of the stock tickers from the universe.
subsample.list <- lapply(1:50, function(y){
sort(sample(universe,20,replace=FALSE))
})
The key of unique stocks over the 50 days is:
uni.subsample <- sort(unique(unlist(subsample.list)))
I would basically like to be able to see which stocks were in the index each day, given the xts object and the unique factor levels.
Although it is not meant to be used this way, I was thinking of something like:
tmp <- xts(do.call(rbind,subsample.list),Sys.Date()-c(50:1))
to create the xts.
However, I would like to convert the coredata into a numeric matrix, where each number is the position of the ticker in uni.subsample.
So tmp.adjusted['20130716'][1,] would be a numeric vector of length 20 holding the positions in uni.subsample for 16 July 2013, and I would expect to be able to get all of that day's index members from the xts object as uni.subsample[tmp.adjusted['20130716'][1,]]. In other words, the adjustment from tmp to tmp.adjusted converts the strings into factor codes whose levels are uni.subsample.
I hope this makes sense; it's kind of hard to explain.

Here is a vectorized solution:
tmp.int <- xts(matrix(as.integer(factor(tmp, levels = uni.subsample, ordered = TRUE)),
                      ncol = ncol(tmp)), index(tmp))
You are basically trying to encode a matrix of ordered factors by their level order.
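For example, to map the integer codes for a given day back to ticker names (a minimal sketch using the tmp.int just created):
day <- index(tmp.int)[1]                        # first simulated day
codes <- as.integer(coredata(tmp.int[day, ]))   # the 20 integer codes for that day
uni.subsample[codes]                            # back to the ticker names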
EDIT: adding some benchmarking:
set.seed(1233)
N <- 5000
subsample.list <- lapply(seq(N), function(y){
sort(sample(universe,20,replace=FALSE))
})
uni.subsample <- sort(unique(unlist(subsample.list)))
tmp <- xts(do.call(rbind,subsample.list),Sys.Date()-seq(N))
ag <- function() xts(matrix(as.integer(factor(tmp, levels = uni.subsample, ordered = TRUE)),
                            ncol = ncol(tmp)), index(tmp))
no <- function() xts(apply(X = tmp, MARGIN = c(1, 2), function(x) which(uni.subsample == x)),
                     index(tmp))
library(microbenchmark)
## N = 50: ag() about 24x faster
microbenchmark(ag(), no(), times = 1)
Unit: milliseconds
 expr       min        lq    median        uq       max neval
 ag()  1.126405  1.126405  1.126405  1.126405  1.126405     1
 no() 24.000003 24.000003 24.000003 24.000003 24.000003     1
## N = 5000: ag() about 135x faster
microbenchmark(ag(), no(), times = 10)
Unit: milliseconds
 expr        min         lq     median         uq        max neval
 ag()   23.38484   26.19744   31.13428   35.51057   44.96251    10
 no() 3115.24902 3220.04940 3250.63773 3288.66867 3470.35053    10

How about:
tmp.int <- xts(apply(X = tmp, MARGIN = c(1, 2), function(x) which(uni.subsample == x)),
               index(tmp))
# to perform the lookup (e.g., 'find the name of the first value on May 27, 2013'):
uni.subsample[tmp.int['2013-05-27'][,1]]

Related

Extract column from data.frame faster than from matrix - why?

I'm running a simulation where I need to repeatedly extract 1 column from a matrix and check each of its values against some condition (e.g. < 10). However, doing so with a matrix is 3 times slower than doing the same thing with a data.frame. Why is this the case?
I'd like to use matrices to store the simulation data because they are faster for some other operations (e.g. updating columns by adding/subtracting values). How can I extract columns / subset a matrix in a faster way?
Extract column from data.frame vs matrix:
df <- data.frame(a = 1:1e4)
m <- as.matrix(df)
library(microbenchmark)
microbenchmark(
df$a,
m[ , "a"])
# Results; Unit: microseconds
# expr min lq mean median uq max neval cld
# df$a 5.463 5.8315 8.03997 6.612 8.0275 57.637 100 a
# m[ , "a"] 64.699 66.6265 72.43631 73.759 75.5595 117.922 100 b
Extract single value from data.frame vs matrix:
microbenchmark(
df[1, 1],
df$a[1],
m[1, 1],
m[ , "a"][1])
# Results; Unit: nanoseconds
# expr min lq mean median uq max neval cld
# df[1, 1] 8248 8753.0 10198.56 9818.5 10689.5 48159 100 c
# df$a[1] 4072 4416.0 5247.67 5057.5 5754.5 17993 100 b
# m[1, 1] 517 708.5 828.04 810.0 920.5 2732 100 a
# m[ , "a"][1] 45745 47884.0 51861.90 49100.5 54831.5 105323 100 d
I expected the matrix column extraction to be faster, but it was slower. However, extracting a single value from a matrix (i.e. m[1, 1]) was faster than both of the ways of doing so with a data.frame. I'm lost as to why this is.
Extract row vs column, data.frame vs matrix:
The above is only true for selecting columns. When selecting rows, matrices are much faster than data.frames. I still don't know why.
microbenchmark(
df[1, ],
m[1, ],
df[ , 1],
m[ , 1])
# Result: Unit: nanoseconds
# expr min lq mean median uq max neval cld
# df[1, ] 16359 17243.5 18766.93 17860.5 19849.5 42973 100 c
# m[1, ] 718 999.5 1175.95 1181.0 1327.0 3595 100 a
# df[ , 1] 7664 8687.5 9888.57 9301.0 10535.5 42312 100 b
# m[ , 1] 64874 66218.5 72074.93 73717.5 74084.5 97827 100 d
data.frame
Consider the built-in data frame BOD. Data frames are stored as a list of columns, and the inspect output shown below shows the address of each of the two columns of BOD. We then assign its second column to BOD2. Note that the address of BOD2 is the same memory location as the second column shown in the inspect output for BOD: all R did was have BOD2 point to memory within BOD in order to create BOD2. There was no data movement at all. Another way to see this is to compare the sizes of BOD, BOD2, and both together; both together take up the same amount of memory as BOD, so there must have been no copying. (Continued after code.)
library(pryr)
BOD2 <- BOD[[2]]
inspect(BOD)
## <VECSXP 0x507c278>
## <REALSXP 0x4f81f48>
## <REALSXP 0x4f81ed8> <--- compare this address to address shown below
## ...snip...
BOD2 <- BOD[,2]
address(BOD2)
## [1] "0x4f81ed8"
object_size(BOD)
## 1.18 kB
object_size(BOD2)
## 96 B
object_size(BOD, BOD2) # same as object_size(BOD) above
## 1.18 kB
matrix
Matrices are stored as one long vector with dimensions, rather than as a list of columns, so the strategy for extracting a column is different. If we look at the memory used by a matrix m, an extracted column m2, and both together, we see below that both together use the sum of the memories of the individual objects, showing that there was data copying.
set.seed(123)
n <- 10000L
m <- matrix(rnorm(2*n), n, 2)
m2 <- m[, 2]
object_size(m)
## 160 kB
object_size(m2)
## 80 kB
object_size(m, m2)
## 240 kB <-- unlike for data.frames this equals sum of above
what to do
If your program is such that it uses column extraction only up to a point, you could use a data frame for that portion and then do a one-time conversion to matrix and process it that way for the rest.
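A minimal sketch of that pattern (the object names here are illustrative, not from the original post):
sim <- data.frame(a = rnorm(1e4), b = rnorm(1e4))
hits <- sim$a < 10                    # phase 1: repeated column checks on the data frame (no copying)
simm <- as.matrix(sim)                # phase 2: one-time conversion to a matrix
simm[1, ]                             # row extraction is cheap on a matrix
simm[, "b"] <- simm[, "b"] + 1        # column updates on the matrix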
I suppose it is about how R lays out the data structures in memory.
A matrix in R is stored as a single contiguous vector with dimensions (effectively a 1-d array), and a variable points directly to that memory, so extracting a single value is very fast. To extract a column from a matrix, R has to do some computation, request new memory, and copy the values into it. A data frame, on the other hand, is actually a list of columns, so returning a column is faster.
That's my guess; I hope someone can confirm it.

Is there a way to speed up subsetting of smaller data.frames

I have to subset a sequence of data.frames frequently (millions of times each run). The data.frames are approximately 200 rows x 30 columns. Depending on the state, the values in the data.frame change from one iteration to the next. Thus, doing the subsetting once at the beginning does not work.
In contrast to the question of when a data.table starts to be faster than a data.frame, I am looking for a speed-up of subsetting for a given size of data.frame/data.table.
The following minimal reproducible example shows that data.frame seems to be the fastest:
library(data.table)
nmax <- 1e2 # for 1e7 the results look as expected: data.table is really fast!
set.seed(1)
x<-runif(nmax,min=0,max=10)
y<-runif(nmax,min=0,max=10)
DF<-data.frame(x,y)
DT<-data.table(x,y)
summary(microbenchmark::microbenchmark(
setkey(DT,x,y),
times = 10L, unit = "us"))
# expr min lq mean median uq max neval
# 1 setkey(DT, x, y) 70.326 72.606 105.032 80.3985 126.586 212.877 10
summary(microbenchmark::microbenchmark(
DF[DF$x>5, ],
`[.data.frame`(DT,DT$x < 5,),
DT[x>5],
times = 100L, unit = "us"))
# expr min lq mean median uq max neval
# 1 DF[DF$x > 5, ] 41.815 45.426 52.40197 49.9885 57.4010 82.110 100
# 2 `[.data.frame`(DT, DT$x < 5, ) 43.716 47.707 58.06979 53.5995 61.2020 147.873 100
# 3 DT[x > 5] 205.273 214.777 233.09221 222.0000 231.6935 900.164 100
Is there anything I can do to improve performance?
Edit after input:
I am running a discrete event simulation, and for each event I have to search in a list (I don't mind whether it is a data.frame or data.table). Most likely, I could implement a different approach, but then I would have to rewrite the code, which was developed over more than 3 years. At the moment, this is not an option. But if there is no way to get it faster, this might become an option in the future.
Technically, it is not a sequence of data.frames but just one data.frame, which changes with each iteration. However, this has no impact on "how to get the subset faster", and I hope that the question is now more comprehensible.
You will see a performance boost by converting to matrices. This is a viable alternative if the whole content of your data.frame is numerical (or can be converted without too much trouble).
Here we go. First I modified the data so that it has size 200x30:
library(data.table)
nmax = 200
cmax = 30
set.seed(1)
x<-runif(nmax,min=0,max=10)
DF = data.frame(x)
for (i in 2:cmax) {
DF = cbind(DF, runif(nmax,min=0,max=10))
colnames(DF)[ncol(DF)] = paste0('x',i)
}
DT = data.table(DF)
DM = as.matrix(DF) # # # or data.matrix(DF) if you have factors
And the comparison, ranked from quickest to slowest:
summary(microbenchmark::microbenchmark(
DM[DM[, 'x']>5, ], # # # # Quickest
as.matrix(DF)[DF$x>5, ], # # # # Still quicker with conversion
DF[DF$x>5, ],
`[.data.frame`(DT,DT$x < 5,),
DT[x>5],
times = 100L, unit = "us"))
# expr min lq mean median uq max neval
# 1 DM[DM[, "x"] > 5, ] 13.883 19.8700 22.65164 22.4600 24.9100 41.107 100
# 2 as.matrix(DF)[DF$x > 5, ] 141.100 181.9140 196.02329 195.7040 210.2795 304.989 100
# 3 DF[DF$x > 5, ] 198.846 238.8085 260.07793 255.6265 278.4080 377.982 100
# 4 `[.data.frame`(DT, DT$x < 5, ) 212.342 268.2945 346.87836 289.5885 304.2525 5894.712 100
# 5 DT[x > 5] 322.695 396.3675 465.19192 428.6370 457.9100 4186.487 100
If your use case involves querying the data multiple times, then you can do the conversion only once and increase the speed by an order of magnitude.
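A minimal sketch of that idea, reusing the DF built above (the loop body is just an illustrative stand-in for the per-event work):
DM <- as.matrix(DF)                            # convert once, outside the hot loop
res <- numeric(1e4)
for (i in seq_len(1e4)) {
  sub    <- DM[DM[, 'x'] > 5, , drop = FALSE]  # fast matrix subsetting
  res[i] <- mean(sub[, 'x2'])                  # illustrative downstream computation
}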

Efficient dataframe iteration in R

Suppose I have a 5 million row data frame with two columns, like this (the example data frame here only has a few rows for simplicity):
df <- data.frame(start=c(11,21,31,41,42,54,61,63), end=c(20,30,40,50,51,63,70,72))
I want to be able to produce the following numbers in a numeric vector:
11-20, 21-30, 31-40, 41-50, 51, 54-63, 64-70, 71-72
And then take the length of the new vector (in this case 10+10+10+10+1+10+7+2 = 60).
NOTE: I do not need the vector itself; just its length will suffice. So if someone has a more intelligent approach to obtaining the length, that is welcome.
Essentially, for each row in the data frame, the sequence from start to end was taken, all these sequences were combined, and the result was filtered for unique values.
So I used an approach like this:
length(unique(c(apply(df, 1, function(x) {
return(as.numeric(x[1]):as.numeric(x[2]))
}))))
which proves incredibly slow on my five million row data frame.
Any quicker, more efficient solutions? Bonus: please try to add the system time.
user system elapsed
19.946 0.620 20.477
This should work, assuming your data is sorted.
library(dplyr) # for the lag function
with(df, sum(end - pmax(start, lag(end, 1, default = 0)+1) + 1))
#[1] 60
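To unpack what that one-liner computes (a step-by-step sketch of the same logic, reusing df and dplyr::lag from above):
# effective start of each row: its own start, or one past the previous row's end,
# whichever is larger, so values that were already counted are skipped
eff_start <- with(df, pmax(start, lag(end, 1, default = 0) + 1))
eff_start
#[1] 11 21 31 41 51 54 64 71
# each row then contributes end - eff_start + 1 new values
with(df, sum(end - eff_start + 1))
#[1] 60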
library(microbenchmark)
microbenchmark(
beginneR={with(df, sum(end - pmax(start, lag(end, 1, default = 0)+1) + 1))},
r2evans={vec <- pmax(mm[,1], c(0,1+head(mm[,2],n=-1))); sum(mm[,2]-vec+1);},
times = 1000
)
Unit: microseconds
expr min lq median uq max neval
beginneR 37.398 41.4455 42.731 44.0795 74.349 1000
r2evans 31.788 35.2470 36.827 38.3925 9298.669 1000
So matrix is still faster, but not by much (and the conversion step is still not included here). And I wonder why the max duration in @r2evans's answer is so high compared to all the other values (which are really fast).
Another method:
mm <- as.matrix(df) ## critical for performance/scalability
(vec <- pmax(mm[,1], c(0,1+head(mm[,2],n=-1))))
## [1] 11 21 31 41 51 54 64 71
sum(mm[,2] - vec + 1)
## [1] 60
(This should scale reasonably well, certainly better than data.frames.)
Edit: after I updated my code to use matrices and no apply calls, I did a quick benchmark of my implementation compared with the other answer (which is also correct):
library(microbenchmark)
library(dplyr)
microbenchmark(
beginneR={
df <- data.frame(start=c(11,21,31,41,42,54,61,63),
end=c(20,30,40,50,51,63,70,72))
with(df, sum(end - pmax(start, lag(end, 1, default = 0)+1) + 1))
},
r2evans={
mm <- matrix(c(11,21,31,41,42,54,61,63,
20,30,40,50,51,63,70,72), nc=2)
vec <- pmax(mm[,1], c(0,1+head(mm[,2],n=-1)))
sum(mm[,2]-vec+1)
}
)
## Unit: microseconds
## expr min lq median uq max neval
## beginneR 230.410 238.297 244.9015 261.228 443.574 100
## r2evans 37.791 40.725 44.7620 47.880 147.124 100
This benefits greatly from the use of matrices instead of data.frames.
Oh, and system time is not that helpful here :-)
system.time({
mm <- matrix(c(11,21,31,41,42,54,61,63,
20,30,40,50,51,63,70,72), nc=2)
vec <- pmax(mm[,1], c(0,1+head(mm[,2],n=-1)))
sum(mm[,2]-vec+1)
})
## user system elapsed
## 0 0 0

Convert character list to data frame

I have some data in JSON that I am trying to use in R. My problem is that I cannot get the data into the right format.
require(RJSONIO)
json <- "[{\"ID\":\"id1\",\"VALUE\":\"15\"},{\"ID\":\"id2\",\"VALUE\":\"10\"}]"
example <- fromJSON(json)
example <- do.call(rbind,example)
example <- as.data.frame(example,stringsAsFactors=FALSE)
> example
ID VALUE
1 id1 15
2 id2 10
This gets close, but I cannot get the numeric column to convert to numeric. I know I can convert columns manually, but I thought data.frame or as.data.frame scanned the data and made the most appropriate class definitions. Clearly I misunderstood. I am reading in numerous tables - all very different - and I need to have the numeric data treated as such when it's numeric.
Ultimately I am looking to get data tables with numeric columns when the data is numeric.
read.table uses type.convert to convert data to the appropriate type. You could do the same as a cleaning step after reading in the JSON data.
sapply(example,class)
# ID VALUE
# "character" "character"
example[] <- lapply(example, type.convert, as.is = TRUE)
sapply(example, class)
# ID VALUE
# "character" "integer"
I would recommend that you use the jsonlite package, which converts this to a data frame by default:
jsonlite::fromJSON(json)
ID VALUE
1 id1 15
2 id2 10
NOTE: The numeric problem still remains, since the values are encoded as strings in this JSON. So you will have to manually convert numeric columns.
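For instance, the type.convert cleanup shown in the earlier answer works just as well on the jsonlite result (a minimal sketch):
example <- jsonlite::fromJSON(json)
example[] <- lapply(example, type.convert, as.is = TRUE)
sapply(example, class)
#          ID       VALUE
# "character"   "integer"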
Just to follow up on Ramnath's suggestion to transition to jsonlite, I did some benchmarking of the two approaches:
##RJSONIO vs. jsonlite for a simple example
require(RJSONIO)
require(jsonlite)
require(microbenchmark)
json <- "{\"ID\":\"id1\",\"VALUE\":\"15\"},{\"ID\":\"id2\",\"VALUE\":\"10\"}"
test <- rep(json,1000)
test <- paste(test,collapse=",")
test <- paste0("[",test,"]")
func1 <- function(x){
temp <- jsonlite::fromJSON(x)
}
func2 <- function(x){
temp <- RJSONIO::fromJSON(x)
temp <- do.call(rbind,temp)
temp <- as.data.frame(temp,stringsAsFactors=FALSE)
}
> microbenchmark(func1(test),func2(test))
Unit: milliseconds
expr min lq median uq max neval
func1(test) 204.05228 221.46047 233.93321 246.90815 341.95684 100
func2(test) 21.60289 22.36368 22.70935 23.75409 27.41851 100
At least for now (I know the jsonlite package is still new and focuses on accuracy over performance), the older RJSONIO performs faster for this simple example, even with the extra step of transforming the list into a data frame.
Update including rjson:
require(rjson)
func3 <- function(x){
temp <- rjson::fromJSON(x)
temp <- do.call(rbind,lapply(temp,unlist))
temp <- as.data.frame(temp,stringsAsFactors=FALSE)
}
> microbenchmark(func1(test),func2(test),func3(test))
Unit: milliseconds
expr min lq median uq max neval
func1(test) 205.34603 220.85428 234.79492 249.87628 323.96853 100
func2(test) 21.76972 22.67311 23.11287 23.56642 32.97469 100
func3(test) 14.16942 15.96937 17.29122 20.19562 35.63004 100
> microbenchmark(func1(test),func2(test),func3(test),times=500)
Unit: milliseconds
expr min lq median uq max neval
func1(test) 206.48986 225.70693 241.16301 253.83269 336.88535 500
func2(test) 21.75367 22.53256 23.06782 23.93026 103.70623 500
func3(test) 14.21577 15.61421 16.86046 19.27347 95.13606 500
> identical(func1(test),func2(test)) & identical(func1(test),func3(test))
[1] TRUE
At least on my machine, rjson is only slightly faster, although I did not test how it scales compared to RJSONIO, which may be where it gets the big performance bump Ramnath suggested.

Count number of distinct values in a vector

I have a vector of scalar values, and I'm trying to find out how many different values it contains.
For instance, in group <- c(1,2,3,1,2,3,4,6) the unique values are 1, 2, 3, 4, 6, so I want to get 5.
I came up with:
length(unique(group))
But I'm not sure it's the most efficient way to do it. Isn't there a better way to do this?
Note: My case is more complex than the example, consisting of around 1000 numbers with at most 25 different values.
Here are a few ideas, all pointing towards your solution already being very fast. length(unique(x)) is what I would have used as well:
x <- sample.int(25, 1000, TRUE)
library(microbenchmark)
microbenchmark(length(unique(x)),
nlevels(factor(x)),
length(table(x)),
sum(!duplicated(x)))
# Unit: microseconds
# expr min lq median uq max neval
# length(unique(x)) 24.810 25.9005 27.1350 28.8605 48.854 100
# nlevels(factor(x)) 367.646 371.6185 380.2025 411.8625 1347.343 100
# length(table(x)) 505.035 511.3080 530.9490 575.0880 1685.454 100
# sum(!duplicated(x)) 24.030 25.7955 27.4275 30.0295 70.446 100
You can use rle from the base package:
x<-c(1,2,3,1,2,3,4,6)
length(rle(sort(x))$values)
rle produces an object with two components, lengths and values. The length of the values component gives you the number of unique values.
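A quick look at what rle returns for this example:
r <- rle(sort(x))
r$values           # 1 2 3 4 6  -> the distinct values
r$lengths          # 2 2 2 1 1  -> how many times each value occurs
length(r$values)   # 5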
I have used
length(unique(array))
and it works fine and doesn't require any external libraries.
The uniqueN function from data.table is equivalent to length(unique(group)). It is also several times faster on larger datasets, though not so much on your example.
library(data.table)
library(microbenchmark)
xSmall <- sample.int(25, 1000, TRUE)
xBig <- sample.int(2500, 100000, TRUE)
microbenchmark(length(unique(xSmall)), uniqueN(xSmall),
length(unique(xBig)), uniqueN(xBig))
#Unit: microseconds
# expr min lq mean median uq max neval cld
#1 length(unique(xSmall)) 17.742 24.1200 34.15156 29.3520 41.1435 104.789 100 a
#2 uniqueN(xSmall) 12.359 16.1985 27.09922 19.5870 29.1455 97.103 100 a
#3 length(unique(xBig)) 1611.127 1790.3065 2024.14570 1873.7450 2096.5360 3702.082 100 c
#4 uniqueN(xBig) 790.576 854.2180 941.90352 896.1205 974.6425 1714.020 100 b
We can use n_distinct from dplyr
dplyr::n_distinct(group)
#[1] 5
If one wants to get the number of unique elements in a matrix, data frame, or list, the following code would do:
if (typeof(Y) == "list") {           # Y is a list or data frame
  # flatten all columns/elements into a single vector first
  numUniqueElems <- length( na.exclude( unique(unlist(Y)) ) )
} else if (is.null(dim(Y))) {        # Y is a vector
  numUniqueElems <- length( na.exclude( unique(Y) ) )
} else {                             # length(dim(Y)) == 2, Y is a matrix
  numUniqueElems <- length( na.exclude( unique(c(Y)) ) )
}
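Wrapped in a small helper for reuse (my own wrapper around the same logic, not part of the original answer):
n_unique_elems <- function(Y) {
  if (is.list(Y)) {                 # list or data frame
    length(na.exclude(unique(unlist(Y))))
  } else if (is.null(dim(Y))) {     # plain vector
    length(na.exclude(unique(Y)))
  } else {                          # matrix
    length(na.exclude(unique(c(Y))))
  }
}
n_unique_elems(c(1, 2, 3, 1, 2, 3, 4, 6))     # 5
n_unique_elems(data.frame(a = 1:3, b = 2:4))  # 4 (values 1, 2, 3, 4)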
