I am working on a financial problem that involves deleting messages from a financial center. I am using data.table and am very satisfied with its performance and ease of handling. Still, I keep asking myself how to improve and use the full power of data.table.
Here is an example of my task:
set.seed(1)
DT <- data.table(SYM = c(rep("A", 10), rep("B", 12)),
                 PRC = format(rlnorm(22, 2), digits = 2),
                 VOL = rpois(22, 312),
                 ID = c(seq(1000, 1009), seq(1004, 1015)),
                 FLAG = c(rep("", 8), "R", "A", rep("", 4), "R", rep("", 7)))
DT$PRC[9] <- DT$PRC[6]
DT$PRC[7] <- DT$PRC[6]
DT$VOL[9] <- DT$VOL[6]
DT$VOL[7] <- DT$VOL[6]
DT$PRC[15] <- DT$PRC[13]
DT$VOL[15] <- DT$VOL[13]
## See the original dataset
DT
## Set the key
setkey(DT, "SYM", "PRC", "VOL", "FLAG")
## Get all rows, that match a row with FLAG == "R" on the given variables in the list
DT[DT[FLAG == "R"][,list(SYM, PRC, VOL)]]
## Remove these rows from the dataset
DT <- DT[!DT[FLAG == "R"][,list(SYM, PRC, VOL)]]
## See the modified data.table
DT
My questions are now:
Is this an efficient way to perform my task, or is there something more 'data.table'-style? Is the key set efficiently?
How can I perform my task if I have not just three variables to match on (here: SYM, PRC, VOL) but many more? Is there something like an exclusion syntax? (I know I can do it data.frame-style, but I want to know whether there is a more elegant way for a data.table.)
What about the copying in the last command? Following the thread on removing rows by reference, I think copying is the only way to do it. What if I have several such tasks? Can I combine them somehow and avoid copying for each one?
I'm confused why you're setting the key to FLAG, isn't what you want simply
setkey(DT, SYM, PRC, VOL)
DT[!DT[FLAG == "R"]]
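Regarding the OP's second question (matching on many more columns than SYM, PRC and VOL), a rough sketch: recent data.table versions accept the join columns via on=, so no key is needed at all. The column set used here is only an assumption (everything except FLAG and ID); adjust as required.
join_cols <- setdiff(names(DT), c("FLAG", "ID"))  # assumed: match on everything except FLAG and ID
DT[!DT[FLAG == "R"], on = join_cols]              # ad-hoc anti-join, no setkey required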
If you are only setting the key to perform this operation, @eddi's answer is the best and easiest to read.
setkey(DT, SYM, PRC, VOL)
# ^ as in @eddi's answer, since you are not using the rest of the key
library(microbenchmark)
microbenchmark(
  notjoin  = DT[!DT[FLAG == "R"][, list(SYM, PRC, VOL)]],
  logi_not = DT[!DT[, rep(any(FLAG == 'R'), .N), by = 'SYM,PRC,VOL']$V1],
  idx_not  = DT[!DT[, if (any(FLAG == 'R')) {.I} else {NULL}, by = 'SYM,PRC,VOL']$V1],
  SD       = DT[, if (!any(FLAG == 'R')) {.SD} else {NULL}, by = 'SYM,PRC,VOL'],
  eddi     = DT[!DT[FLAG == "R"]],
  times = 1000L
)
results:
Unit: milliseconds
expr min lq median uq max neval
notjoin 4.983404 5.577309 5.715527 5.903417 66.468771 1000
logi_not 4.393278 4.960187 5.097595 5.273607 66.429358 1000
idx_not 4.523397 5.139439 5.287645 5.453129 15.068991 1000
SD 3.670874 4.180012 4.308781 4.463737 9.429053 1000
eddi 2.767599 3.047273 3.137979 3.255680 11.970966 1000
On the other hand, several of the options above do not require that your operation involve grouping by the key. Suppose you either...
are doing this once using groups other than the key (which you don't want to change) or
want to perform several operations like this using different groupings before doing the copy operation to drop rows, newDT <- DT[...] (as mentioned in the OP's point 3).
setkey(DT,NULL)
shuffDT <- DT[sample(1:nrow(DT))] # not realistic, of course
# same benchmark with shuffDT, but without methods that require a key
# Unit: milliseconds
# expr min lq median uq max neval
# logi_not 4.466166 5.120273 5.298174 5.562732 64.30966 1000
# idx_not 4.623821 5.319501 5.517378 5.799484 15.57165 1000
# SD 4.053672 4.448080 4.612213 4.849505 66.76140 1000
In these cases, the OP's and eddi's methods are not available (since joining requires a key). For a one-off operation, using .SD seems faster. For subsetting by multiple criteria, you'll want to keep track of the rows you want to keep/drop before making the copy newDT <- DT[!union(badrows1,badrows2,...)].
DT[,rn:=1:.N] # same as .I
badflagrows <- DT[,if(any(FLAG=='R')){rn}else{NULL},by='SYM,PRC,VOL']$V1
# fill in next_cond, next_grp
badnextrows <- DT[!badflagrows][,
if(any(next_cond)){rn}else{NULL},by='next_grp']$V1
Perhaps something similar can be done with the logical subsetting ("logi_not" in the benchmarks), which is a little faster.
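For instance, a sketch of that logical route, keeping next_cond and next_grp as placeholders exactly as above:
DT[, bad1 := any(FLAG == "R"), by = 'SYM,PRC,VOL']       # flag rows whose group contains an "R"
DT[, bad2 := any(next_cond),   by = 'next_grp']           # placeholder second criterion, as above
newDT <- DT[!(bad1 | bad2)][, c("bad1", "bad2") := NULL]  # single copy, then drop the helper flags
DT[, c("bad1", "bad2") := NULL]                           # clean up the flags on the original too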
Point: if you are going to vote to close, it is poor form not to give a reason why. If it can be improved without requiring a close, take the 10 seconds it takes to write a brief comment.
Question:
How do I do the following "partial melt" in a way that memory can support?
Details:
I have a few million rows and around 1000 columns. The names of the columns have 2 pieces of information in them.
Normally I would melt to a data frame (or data.table) made up of a pair of columns, then split on the variable name to create two new columns, then cast using one of the new splits for the new column names and the other for the row names.
This isn't working: the melted result has a billion or so rows, and the additional columns overwhelm my memory.
Outside the "iterative force" (as opposed to brute force) of a for-loop, is there a clean and effective way to do this?
Thoughts:
this is a little like melt-colsplit-cast
libraries common for this seem to be "dplyr", "tidyr", "reshape2", and "data.table".
tidyr's gather+separate+spread looks good, but doesn't like not having a unique row identifier
reshape2's dcast (I'm looking for 2d output) wants to aggregate
brute force loses the labels. By brute force I mean df <- rbind(df[,block1],...) where block1 is the first 200 column indices, block2 is the second 200, et cetera.
Update (dummy code):
#libraries
library(stringr)
#reproducibility
set.seed(56873504)
#geometry
Ncol <- 2e3
Nrow <- 1e6
#column names
namelist <- character(length=Ncol)
for(i in 1:(Ncol/200)){
col_idx <- 1:200+200*(i-1)
if(i<26){
namelist[col_idx] <- paste0(intToUtf8(64+i),str_pad(string=1:200,width=3,pad="0"))
} else {
namelist[col_idx] <- paste0(intToUtf8(96+i),str_pad(string=1:200,width=3,pad="0"))
}
}
#random data
df <- as.data.frame(matrix(runif(n=Nrow*Ncol,min=0, max=16384),nrow=Nrow,ncol=Ncol))
names(df) <- namelist
The output I am looking for would have a column containing the first character of the current name (a single alphabet character), and the column names would be 1 to 200. It would be much less wide than "df" but not fully melted. It would also not kill my CPU or memory.
(Ugly/Manual) Brute force version:
(working on it... )
Here are two options both using data.table.
If you know that each column prefix always has 200 (or n) fields associated with it (e.g., A001 to A200), you can use melt() and supply a list of measure variables.
melt(dt
, measure.vars = lapply(seq_len(Ncol_p_grp), seq.int, to = Ncol_p_grp * n_grp, by = Ncol_p_grp)
, value.name = as.character(seq_len(Ncol_p_grp))
)[, variable := rep(namelist_letters, each = Nrow)][]
#this data set used Ncol_p_grp <- 5 to help condense the data.
variable 1 2 3 4 5
1: A 0.2655087 0.06471249 0.2106027 0.41530902 0.59303088
2: A 0.3721239 0.67661240 0.1147864 0.14097138 0.55288322
3: A 0.5728534 0.73537169 0.1453641 0.45750426 0.59670404
4: A 0.9082078 0.11129967 0.3099322 0.80301300 0.39263068
5: A 0.2016819 0.04665462 0.1502421 0.32111280 0.26037592
---
259996: Z 0.5215874 0.78318812 0.7857528 0.61409610 0.67813484
259997: Z 0.6841282 0.99271480 0.7106837 0.82174887 0.92676493
259998: Z 0.1698301 0.70759513 0.5345685 0.09007727 0.77255570
259999: Z 0.2190295 0.14661878 0.1041779 0.96782695 0.99447460
260000: Z 0.4364768 0.06679642 0.6148842 0.91976255 0.08949571
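As an aside, that measure-variable list can also be built straight from the column names; a small sketch that relies on the A001 to Z200 naming scheme and the objects from the data-generation code further down:
pos_labels   <- str_pad(seq_len(Ncol_p_grp), width = 3, pad = "0")               # "001" .. "200"
measure_list <- lapply(pos_labels, function(p) grep(paste0(p, "$"), names(dt)))  # same list as the lapply(seq_len(...), seq.int, ...) above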
Alternatively, we can use rbindlist(lapply(...)) to go through the data set and subset it based on the letter within the columns.
rbindlist(
lapply(namelist_letters,
function(x) setnames(
dt[, grep(x, names(dt), value = T), with = F]
, as.character(seq_len(Ncol_p_grp)))
)
, idcol = 'ID'
, use.names = F)[, ID := rep(namelist_letters, each = Nrow)][]
With 78 million elements in this dataset, it takes around a quarter of a second. I tried to scale it up to 780 million, but I just don't have the RAM to generate that much data in the first place.
#78 million elements - 10,000 rows * 26 grps * 200 cols_per_group
Unit: milliseconds
expr min lq mean median uq max neval
melt_option 134.0395 135.5959 137.3480 137.1523 139.0022 140.8521 3
rbindlist_option 290.2455 323.4414 350.1658 356.6373 380.1260 403.6147 3
Data: Run this before everything above:
#packages ----
library(data.table)
library(stringr)
#data info
Nrow <- 10000
Ncol_p_grp <- 200
n_grp <- 26
#generate data
set.seed(1)
dt <- data.table(replicate(Ncol_p_grp * n_grp, runif(n = Nrow)))
names(dt) <- paste0(rep(LETTERS[1:n_grp], each = Ncol_p_grp)
, str_pad(rep(seq_len(Ncol_p_grp), n_grp), width = 3, pad = '0'))
#first letter
namelist_letters <- unique(substr(names(dt), 1, 1))
I would like to know how many times each variable changes within each group, and then add up the results across all groups.
I've found this way:
mi[,lapply(.SD, function(x) sum(x != shift(x),
na.rm=T) ), by = ID][,-1][,lapply(.SD,sum, na.rm=T)]
It works, it produces the proper result but it's really slow in my large datatable.
I would like to do both operations inside the same lapply (or something faster and more compact), but the first one is done by group, the second isn't.
It can be written in a simpler way (though maybe not always):
mi[,lapply(.SD, function(x) sum(x != shift(x),
na.rm=T) )] [,-1]-mi[,length(unique(ID))]+1
But it's still slow and needs a lot of memory.
Any other idea?
I've also tried diff instead of shift, but it gets more complicated.
Here you have a dummy example:
mi <- data.table(ID=rep(1:3,each=4) , year=rep(1:4, times=3),
VREP=rep(1:3,each=4) , VDI=rep(1:4, times=3), RAN=sample(12))
mi <- rbind(mi, data.table(4,1,1,1,0), use.names=F)
Big example for benchmark
mi <- as.data.table(matrix(sample(0:100,10000000,
replace=T), nrow=100000, ncol=100))
mi[,ID := rep(1:1000,each=100)]
My problem is that the real dataset is much bigger; it is at the limit of my memory, so I've configured R to use more memory via the pagefile, and that makes many operations slow.
I know I could split the file and join the results again, but sometimes that makes things more difficult, or some operations are not splittable.
Your second method produces incorrect results, so it is not a fair comparison point. Here's an optimized version of alexis_laz's suggestion instead:
setorder(mi, ID)
setDT(Map(`!=`, mi, shift(mi)))[,
lapply(lapply(.SD, `&`, !ID), sum, na.rm = T), .SDcols = -"ID"]
# year VREP VDI RAN
#1: 9 0 9 9
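For readers less used to this idiom, here is the same computation unrolled into steps (a sketch; it assumes mi is ordered by ID, as ensured by setorder above):
chg      <- setDT(Map(`!=`, mi, shift(mi)))   # TRUE wherever a value differs from the previous row
same_grp <- !chg$ID                           # TRUE where the previous row belongs to the same ID
chg[, lapply(lapply(.SD, `&`, same_grp), sum, na.rm = TRUE), .SDcols = -"ID"]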
On your bigger sample:
setorder(mi, ID)
microbenchmark(method1(), alexis_laz(), eddi(), times = 5)
#Unit: milliseconds
# expr min lq mean median uq max neval
# method1() 7336.1830 7510.9543 7932.0476 8150.3197 8207.2181 8455.563 5
# alexis_laz() 1350.0338 1492.3793 1509.0790 1492.5426 1577.3318 1633.107 5
# eddi() 400.3999 475.6908 494.5805 504.6163 524.2077 567.988 5
I have a vector of one million words called WORDS, and a list of 9 million objects called SENTENCES. Each object in the list is a sentence, represented by a vector of 10-50 words. Here is an example:
head(WORDS)
[1] "aba" "accra" "ada" "afrika" "afrikan" "afula" "aggamemon"
SENTENCES[[1]]
[1] "how" "to" "interpret" "that" "picture"
I want to convert every sentence in my list into a numeric vector whose elements correspond to the positions of the sentence's words in the big WORDS vector.
Actually, I know how to do it with this command:
convert <- function(sentence){
return(which(WORDS %in% sentence))
}
SENTENCES_NUM <- lapply(SENTENCES, convert)
The problem is that it takes way too long. My RStudio session blows up even though my computer has 16 GB of RAM. So the question is: do you have any ideas to speed up the computation?
fastmatch, a small package by an R core person, hashes the lookups so the initial and especially subsequent searches are faster.
What you are really doing is making a factor with predefined levels common to each sentence. The slow step in his C code is sorting the factor levels, which you can avoid by providing the (unique) list of factor levels to his fast version of the factor function.
If you just want the integer positions, you can easily convert from factor to integer: many do this inadvertently.
You don't actually need a factor at all for what you want, just match. Your code also generates a logical vector, then recalculates positions from it: match just goes straight to the positions.
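As a quick throwaway illustration of those two points (not part of the benchmark below):
match(SENTENCES[[1]], WORDS)                        # positions of the first sentence's words, in sentence order
as.integer(factor(SENTENCES[[1]], levels = WORDS))  # same positions via a factor with preset levels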
library(fastmatch)
library(microbenchmark)
WORDS <- read.table("https://dotnetperls-controls.googlecode.com/files/enable1.txt", stringsAsFactors = FALSE)[[1]]
words_factor <- as.factor(WORDS)
# generate 100 sentences of between 5 and 15 words:
SENTENCES <- lapply(c(1:100), sample, x = WORDS, size = sample(c(5:15), size = 1))
bench_fun <- function(fun)
lapply(SENTENCES, fun)
# poster's slow solution:
hg_convert <- function(sentence)
return(which(WORDS %in% sentence))
jw_convert_match <- function(sentence)
match(sentence, WORDS)
jw_convert_match_factor <- function(sentence)
match(sentence, words_factor)
jw_convert_fastmatch <- function(sentence)
fmatch(sentence, WORDS)
jw_convert_fastmatch_factor <- function(sentence)
fmatch(sentence, words_factor)
message("starting benchmark one")
print(microbenchmark(bench_fun(hg_convert),
bench_fun(jw_convert_match),
bench_fun(jw_convert_match_factor),
bench_fun(jw_convert_fastmatch),
bench_fun(jw_convert_fastmatch_factor),
times = 10))
# now again with big samples
# generating the SENTENCES is quite slow...
SENTENCES <- lapply(c(1:1e6), sample, x = WORDS, size = sample(c(5:15), size = 1))
message("starting benchmark two, compare with factor vs vector of words")
print(microbenchmark(bench_fun(jw_convert_fastmatch),
bench_fun(jw_convert_fastmatch_factor),
times = 10))
I put this on https://gist.github.com/jackwasey/59848d84728c0f55ef11
The results don't format very well here; suffice it to say that fastmatch, with or without factor input, is dramatically faster.
# starting benchmark one
Unit: microseconds
expr min lq mean median uq max neval
bench_fun(hg_convert) 665167.953 678451.008 704030.2427 691859.576 738071.699 777176.143 10
bench_fun(jw_convert_match) 878269.025 950580.480 962171.6683 956413.486 990592.691 1014922.639 10
bench_fun(jw_convert_match_factor) 1082116.859 1104331.677 1182310.1228 1184336.810 1198233.436 1436600.764 10
bench_fun(jw_convert_fastmatch) 203.031 220.134 462.1246 289.647 305.070 2196.906 10
bench_fun(jw_convert_fastmatch_factor) 251.474 300.729 1351.6974 317.439 362.127 10604.506 10
# starting benchmark two, compare with factor vs vector of words
Unit: seconds
expr min lq mean median uq max neval
bench_fun(jw_convert_fastmatch) 3.066001 3.134702 3.186347 3.177419 3.212144 3.351648 10
bench_fun(jw_convert_fastmatch_factor) 3.012734 3.149879 3.281194 3.250365 3.498593 3.563907 10
And therefore I wouldn't go to the trouble of a parallel implementation just yet.
It won't be faster, but it is the tidy way of going about things.
library(dplyr)
library(tidyr)
sentence =
  data_frame(word.name = SENTENCES,
             sentence.ID = 1:length(SENTENCES)) %>%
  unnest(word.name)
word = data_frame(
  word.name = WORDS,
  word.ID = 1:length(WORDS))
sentence__word =
  sentence %>%
  left_join(word)
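If you still want a list of position vectors (one per sentence) at the end, one way is to split the joined table; a small sketch, assuming the join above succeeded:
SENTENCES_NUM <- split(sentence__word$word.ID, sentence__word$sentence.ID)  # one integer vector per sentence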
When attempting to get a "count" for a specified number of observations, it appears that each of these functions work, as they derive the same result. But how are these functions operating differently in the background, and in what scenarios would it not be appropriate to swap one for the other?
sum(grade.data$Quiz >= (100*.45))
length(which(grade.data$Quiz >= (100*.45)))
nrow(grade.data[grade.data$Quiz >= (100*.45),])
The middle one will not give misleading answers when there are missing values. Both of the other ones will.
Number 1 sums a logical vector that is coerced to 1's and 0's. If you added na.rm it would be valid when NA's are present.
Number 2 determines the length of a numeric vector.
Number 3 constructs a subset and then counts the rows. I would expect it to be rather inefficient compared to the other two, as well as having the same problem with NA values. If you added & !is.na(grade.data$Quiz) to the logical expression inside [ , ], you would get valid answers.
A fourth method like the third (and also inefficient) without the NA problem would be:
nrow( subset( grade.data, Quiz >= (100*.45) ) )
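A tiny demonstration of the NA behaviour described above, on made-up data (grade.data is redefined by the benchmark below anyway):
grade.data <- data.frame(Quiz = c(50, 40, NA, 90), age = c(19, 20, 21, 22))
sum(grade.data$Quiz >= (100*.45))                  # NA
sum(grade.data$Quiz >= (100*.45), na.rm = TRUE)    # 2
length(which(grade.data$Quiz >= (100*.45)))        # 2
nrow(grade.data[grade.data$Quiz >= (100*.45), ])   # 3 (the NA becomes an all-NA row)
nrow(subset(grade.data, Quiz >= (100*.45)))        # 2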
Let's generate 100k row data.frame to see which method is fastest.
grade.data = data.frame(Quiz = sample(100000), age = sample(18:24, 100000, replace = TRUE))
library(data.table)
dt.grade.data = as.data.table(grade.data)
The methods posted here
data.table = function(x) dt.grade.data[,sum(Quiz>=100*.45)]
logical.sum = function(x) sum(grade.data$Quiz >= (100*.45))
logical.counting.table = function(x) table(grade.data$Quiz >= (100*.45))[["TRUE"]]
logical.which = function(x) length(which(grade.data$Quiz >= (100*.45)))
subsetting = function(x) nrow(grade.data[grade.data$Quiz >= (100*.45),])
subset.cmd = function(x) nrow(subset(grade.data, Quiz >= (100*.45) ))
Benchmark
library(microbenchmark)
microbenchmark(data.table(), logical.sum(), logical.counting.table(), logical.which(), subsetting(), subset.cmd(), times = 100L)
Unit: microseconds
expr min lq median uq max neval
data.table() 1766.148 2188.8000 2308.267 2469.405 29185.36 100
logical.sum() 739.385 945.4765 993.921 1074.386 10253.67 100
logical.counting.table() 28867.605 30847.0290 31546.796 32725.255 65514.14 100
logical.which() 701.205 1080.9555 1138.635 1228.545 3565.96 100
subsetting() 27376.931 28406.7730 29243.866 30564.371 168034.45 100
subset.cmd() 29004.315 31203.1730 32219.878 33362.003 89801.34 100
Seems that a vectorized logical check is the fastest method. In a smaller data frame (500 rows), data.table is actually much slower than all the other methods.
edit: Apparently, the relative efficiency of logical.sum() and logical.which() depends on the data structure. Using a different Quiz score distribution can make logical.sum() the fastest method. And as expected, data.table selection/subsetting blows data.frame subsetting out of the water.
This may look like an innocuously simple problem but it takes a very long time to execute. Any ideas for speeding it up or vectorization etc. would be greatly appreciated.
I have an R data frame with 5 million rows and 50 columns: OriginalDataFrame
A list of indices from that frame: IndexList (55000 [numIndex] unique indices)
It's a time series, so there are ~5 million rows for 55K unique indices.
OriginalDataFrame has been ordered by dataIndex. Not all the indices in IndexList are present in OriginalDataFrame. The task is to find the indices that are present and construct a new data frame: FinalDataFrame
Currently I am running this code using library(foreach):
FinalDataFrame <- foreach (i=1:numIndex, .combine="rbind") %dopar% {
OriginalDataFrame[(OriginalDataFrame$dataIndex == IndexList[i]),]
}
I run this on a machine with 24 cores and 128GB RAM and yet this takes around 6 hours to complete.
Am I doing something exceedingly silly or are there better ways in R to do this?
Here's a little benchmark comparing data.table to data.frame. If you know the special data table invocation for this case, it's about 7x faster, ignoring the cost of setting up the index (which is relatively small, and would typically be amortised across multiple calls). If you don't know the special syntax, it's only a little faster. (Note the problem size is a little smaller than the original to make it easier to explore)
library(data.table)
library(microbenchmark)
options(digits = 3)
# Regular data frame
df <- data.frame(id = 1:1e5, x = runif(1e5), y = runif(1e5))
# Data table, with index
dt <- data.table(df)
setkey(dt, "id")
ids <- sample(1e5, 1e4)
microbenchmark(
df[df$id %in% ids , ], # won't preserve order
df[match(ids, df$id), ],
dt[id %in% ids, ],
dt[match(ids, id), ],
dt[.(ids)]
)
# Unit: milliseconds
# expr min lq median uq max neval
# df[df$id %in% ids, ] 13.61 13.99 14.69 17.26 53.81 100
# df[match(ids, df$id), ] 16.62 17.03 17.36 18.10 21.22 100
# dt[id %in% ids, ] 7.72 7.99 8.35 9.23 12.18 100
# dt[match(ids, id), ] 16.44 17.03 17.36 17.77 61.57 100
# dt[.(ids)] 1.93 2.16 2.27 2.43 5.77 100
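One caveat for the OP's situation, where some ids are absent from the table: a keyed join returns NA-filled rows for the missing ids unless you drop them, for example:
dt[.(ids), nomatch = 0L]  # drop ids with no match instead of returning NA rows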
I had originally thought you might also be able to do this with rownames, which I thought built up a hash table and did the indexing efficiently. But that's obviously not the case:
df2 <- df
rownames(df2) <- as.character(df$id)
microbenchmark(
df[df$id %in% ids , ], # won't preserve order
df2[as.character(ids), ],
times = 1
)
# Unit: milliseconds
# expr min lq median uq max neval
# df[df$id %in% ids, ] 15.3 15.3 15.3 15.3 15.3 1
# df2[as.character(ids), ] 3609.8 3609.8 3609.8 3609.8 3609.8 1
If you have 5M rows and you use == to identify rows to subset, then for each pass of your loop you are performing 5M comparisons. If you instead key your data (as it inherently is, since it is already ordered by dataIndex), you can increase efficiency significantly:
library(data.table)
OriginalDT <- as.data.table(OriginalDataFrame)
setkey(OriginalDT, dataIndex)
# Now inside your foreach:
OriginalDT[ .( IndexList[[i]] ) ]
Note that the setkey function uses a very fast implementation of radix sort. However, if your data is already guaranteed to be sorted, @eddi or @arun posted a nice hack to simply set the sorted attribute on the DT. (I can't find it right now, but perhaps someone can edit this answer and link to it.)
You might try just collecting all the results into a list of data.tables then using rbindlist and compare the speed against using .combine=rbind (if you do, please feel free to post benchmark results). I've never tested .combine=rbindlist but that might work as well and would be interesting to try.
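A rough sketch of that idea, assuming OriginalDT is keyed as above, a parallel backend is already registered for %dopar% (as in the OP's setup), and nomatch = 0L is added because not every index is present:
library(foreach)
res_list <- foreach(i = 1:numIndex) %dopar% {      # default .combine collects the results in a list
  OriginalDT[ .( IndexList[[i]] ), nomatch = 0L ]
}
FinalDataFrame <- rbindlist(res_list)              # bind once at the end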
edit:
If the sole task is to index the data.table, then simply use:
OriginalDT[ .( IndexList ) ]
No foreach is necessary, and you still leverage the DT's key.
Check out the data.table package. It works just like data.frame but is much faster.
Like this (where df is your data frame):
dt <- data.table(df)
and then use dt instead of df.
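For the OP's concrete task this boils down to the keyed lookup shown in the previous answer; a minimal sketch, assuming IndexList holds the index values:
library(data.table)
OriginalDT <- as.data.table(OriginalDataFrame)
setkey(OriginalDT, dataIndex)
FinalDataFrame <- OriginalDT[.(unlist(IndexList)), nomatch = 0L]  # keep only the indices that are present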