setkey and the := operator, data.table, R

When using the data.table package, I am a bit unsure of when I need to setkey(). For example, when using the := operator with the by option, things still seem very fast even though I have not set a key. Could someone please clarify when setkey() is necessary and when it is not? And if it is not necessary before calling := with by, how is the data.table package so fast? Presumably it has to do the same thing as apply on a standard data.frame, i.e. a sequential search rather than a binary one, since it doesn't know whether my data.table is actually sorted by the argument to by.
Thanks

These 2 FAQs seem close:
3.2 I don't have a key on a large table, but grouping is still really quick. Why is that?
data.table uses radix sorting. This is significantly faster than other sort algorithms. Radix is specifically for integers only, see ?base::sort.list(x,method="radix"). This is also one reason why setkey is quick. When no key is set, or we group in a different order from that of the key, we call it an ad hoc by.
3.3 Why is grouping by columns in the key faster than an ad hoc by?
Because each group is contiguous in RAM, thereby minimising page fetches, and memory can be copied in bulk (memcpy in C) rather than looping in C.
What it doesn't say, and probably should, is that you need a very large dataset, where each group is also very large, before you notice the difference between a keyed by and an ad hoc by. Something like 100 groups of 100MB each (a 10GB data.table), e.g. 1e8 rows and 13 columns. Otherwise, there is no need to setkey first, especially since that can get onerous.
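For instance (a small sketch with made-up data), := with by= works fine, and fast, on an unkeyed table; a key only starts to matter at roughly the scale described above.
require(data.table)
DT <- data.table(grp = sample(letters, 1e6, replace = TRUE), val = rnorm(1e6))
DT[, grp_mean := mean(val), by = grp]   # ad hoc by: no setkey() needed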

Related

Is there a visual explanation of why data.table operations are faster than tidyverse operations when you need to group by a variable?

I understand from excellent resources here, here and here that data.table utilises automatic indexing (to create a key i.e. supercharged row names) and binary search based subset in contrast to tidyverse, which relies on vector scanning.
I understand that vector scanning requires scanning each individual row and the creation of nrow(dataset) length logical vectors, and that doing this repeatedly is not as efficient.
I'm wondering if someone can help me frame exactly how these two methods mean that data.table operations run a lot faster than tidyverse when you need to group by a variable. I.e. is it because data.table automatically indexes the group_by column, breaks it into grouped subsets and runs operations on each subset, whilst a vector scanning approach would require generating one logical vector per unique group, running operations on each individual logical vector, and then collating the results?
Also, according to the data.table vignette,
We can set keys on multiple columns and the column can be of different
types...
Since the rows are reordered, a data.table can have at most one key
because it can not be sorted in more than one way.
What does it mean that we can set keys on multiple columns and yet a data.table can have at most one key? I.e. is it that during any moment when running an operation, there is only one reference key, but which column the reference key is set as can change as we progress to another component of the overall operation?
Thank you in advance!
There is no such visual explanation.
There are different ways of finding groups, and then of computing an expression by group. Each of these can be implemented differently. They are not related to keys or indices. Also, data.table does not automatically create a key/index during group by (as of now).
data.table has a very fast, carefully implemented order function, which is used to find the groups. It was later contributed to base R. There is an idea to use it in dplyr to speed up grouping: https://github.com/tidyverse/dplyr/issues/4406
The data.table order function has since been improved further and now scales even better.
Aside from finding groups, there is the part about computing the expression. If we evaluate a user-defined function, it will always be much slower. Many common functions are internally optimized, so they don't switch between R and C for every group. Here data.table also has very carefully implemented "GForce" functions. I'm not sure, but in dplyr these are called "hybrid evaluation".
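A quick way to see this on your own data (a sketch; the exact verbose wording differs between data.table versions) is to turn on verbose output and compare an internally optimized call with a wrapped, user-defined one:
library(data.table)
DT <- data.table(g = sample(1e3, 1e6, TRUE), v = rnorm(1e6))
options(datatable.verbose = TRUE)
DT[, sum(v), by = g]          # verbose output should report the GForce (gsum) path
mysum <- function(x) sum(x)   # wrapping sum() defeats the optimization
DT[, mysum(v), by = g]        # evaluated per group through R: slower
options(datatable.verbose = FALSE)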
It is always important to test on your particular data use case. If you have just 2 unique groups in data, then fast grouping algorithms will not shine much.
Also, there is a community repository meant to describe data.table algorithms, https://github.com/asantucci/algo_data.table, but it is not very active. I recently posted a comment there about "groupby optimization", which I will paste here as well. The answer was provided by data.table author Matt Dowle.
Q: does GForce allocate mem for biggest group, then copy there values of a group, to aggregate, so it can benefit from being contiguous in memory and will be more cache efficient? if so, do we check if groups aren't sorted already? so we can avoid doing allocation and copy?
A: gforce (gsum) assigns to many group results at once; it doesn't gather the groups together. You're describing non-gforce (dogroup.c) which copies to the largest group. See the branch in dogroups.c which knows whether groups are already grouped: it switches to a memcpy. The memcpy is very fast (contiguous, pre-fetch) so it's pretty good already. We must copy because R's DATAPTR is not a pointer we can repoint, it's an offset from SEXP.

R - Order of the data table records from subsetting columns

I am currently learning data.table in R. A few questions that got me confused:
1. Does subsetting columns always preserve the order of records (i.e. rows 1, 2, 3 will stay as rows 1, 2, 3 instead of 1, 3, 2)? Also, does the same conclusion apply to different expressions, such as DB[[1]], DB$V1, etc.?
2. When subsetting multiple columns, I know I need to use something like DB[, .(V1, V2)], but I am confused about the result of DB[, V1, V2]. The code runs and seems to produce a result, but the rows are not in the same order as in the original table. If someone can explain what the latter code means, that would be a great help.
Thanks a lot!
I want to start with a small suggestion: if you post a data-processing question on SO, it is enormously better to include reproducible code in the question, and the expected output if it isn't obvious. You will reach a much bigger audience and gather higher-quality answers. This is common practice under the r tag.
Subsetting preserves order; the underlying storage of the data is column-oriented, unlike a regular SQL database (which is not aware of row order). It works exactly the same as subsetting a vector in base R, just much faster.
Regarding [[ and $, these are just methods for extracting a column from a data.table, and from a list in general; you can use DB[[1]], DB[["V1"]], or DB$V1. They behave differently when the column/list element does not exist.
The third argument inside data.table's [ operator is by, which expects the columns to group over, so DB[, V1, V2] queries column V1 grouped by V2, without using any aggregate function. That is very different from DB[, .(V1, V2)], DB[, c("V1","V2"), with=FALSE], DB[, list(V1, V2)], or DB[, .SD, .SDcols=c("V1","V2")], etc. Most of the API is borrowed from base R functions such as subset() or with().
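For example (a tiny illustrative table, not from the question):
library(data.table)
DB <- data.table(V1 = 1:6, V2 = rep(c("a", "b"), times = 3))
DB[, .(V1, V2)]   # plain column selection: original row order preserved
DB[, V1, V2]      # the third argument is by=, so this is V1 grouped by V2:
                  # rows come back gathered by group, not in the original order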
In the end I would recommend going through the data.table vignettes; there is also my recent, longish post that goes through various data.table examples: Boost Your Data Munging with R.

How to use daply (from plyr) on 2 billion rows using less memory

Does anyone know how one could apply the following function, which converts a 3-column table into a matrix, to a file that has 2 billion rows (with less than 10GB of memory)?
Here x is the 1st, y the 2nd and z the 3rd column.
library(plyr)
daply(a, .(x, y), function(x) x$z)
If you cannot load all the tuples at once
I know this is not the answer you are looking for: use SQLite.
The problem with R is that it must load the entire frame at once. If you don't have enough memory, then it simply can't continue.
SQLite is way smarter than R at doing aggregates. Perhaps the most important feature is that it optimizes the available memory and, if it can, it does not need to read all the elements at once. See this for details on how to do it.
http://www.r-bloggers.com/using-sqlite-in-r/
If SQLite does not support the aggregate you want, you can create it yourself (see user defined functions in SQLite).
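As a rough sketch of that route (using the DBI/RSQLite packages; the file name, the chunked import, and the AVG aggregate are my assumptions, not part of the question): stream the file into an on-disk database in chunks, let SQLite do the GROUP BY, and only pull the small result back into R.
library(DBI)
library(RSQLite)
con <- dbConnect(RSQLite::SQLite(), "big.db")   # on-disk database
infile <- file("big_table.txt", open = "r")     # hypothetical input file with columns x, y, z
chunk_size <- 1e6
first <- TRUE
repeat {
  chunk <- tryCatch(
    read.table(infile, nrows = chunk_size, col.names = c("x", "y", "z")),
    error = function(e) NULL)                   # NULL once the file is exhausted
  if (is.null(chunk) || nrow(chunk) == 0L) break
  dbWriteTable(con, "a", chunk, append = !first)  # write/append each chunk to table 'a'
  first <- FALSE
}
close(infile)
# aggregate inside SQLite; use MAX(z) instead if there is exactly one z per (x, y) pair
res <- dbGetQuery(con, "SELECT x, y, AVG(z) AS z FROM a GROUP BY x, y")
dbDisconnect(con)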
Alternatively, you can try to partition your data (outside R) so you can aggregate it in stages. But that still requires some sort of program that can read and process the files in less than the available memory. Unix/macOS/Linux sort is one of those utilities that can handle more data than fits in memory. It might be useful.

What is the purpose of setting a key in data.table?

I am using data.table and there are many functions which require me to set a key (e.g. X[Y]). As such, I wish to understand what a key does in order to properly set keys in my data tables.
One source I read was ?setkey.
setkey() sorts a data.table and marks it as sorted. The sorted columns are the key. The key can be any columns in any order. The columns are sorted in ascending order always. The table is changed by reference. No copy is made at all, other than temporary working memory as large as one column.
My takeaway here is that a key would "sort" the data.table, resulting in a very similar effect to order(). However, it doesn't explain the purpose of having a key.
The data.table FAQ 3.2 and 3.3 explains:
3.2 I don't have a key on a large table, but grouping is still really quick. Why is that?
data.table uses radix sorting. This is significantly faster than other
sort algorithms. Radix is specifically for integers only, see
?base::sort.list(x,method="radix"). This is also one reason why
setkey() is quick. When no key is set, or we group in a different order
from that of the key, we call it an ad hoc by.
3.3 Why is grouping by columns in the key faster than an ad hoc by?
Because each group is contiguous in RAM, thereby minimising page
fetches, and memory can be copied in bulk (memcpy in C) rather than
looping in C.
From here, I guess that setting a key somehow allows R to use "radix sorting" over other algorithms, and that's why it is faster.
The 10 minute quick start guide also has a guide on keys.
Keys
Let's start by considering data.frame, specifically rownames (or in
English, row names). That is, the multiple names belonging to a single
row. The multiple names belonging to the single row? That is not what
we are used to in a data.frame. We know that each row has at most one
name. A person has at least two names, a first name and a second name.
That is useful to organise a telephone directory, for example, which
is sorted by surname, then first name. However, each row in a
data.frame can only have one name.
A key consists of one or more
columns of rownames, which may be integer, factor, character or some
other class, not simply character. Furthermore, the rows are sorted by
the key. Therefore, a data.table can have at most one key, because it
cannot be sorted in more than one way.
Uniqueness is not enforced,
i.e., duplicate key values are allowed. Since the rows are sorted by
the key, any duplicates in the key will appear consecutively.
The telephone directory was helpful in understanding what a key is, but it seems that a key is no different from having a factor column. Furthermore, it does not explain why a key is needed (especially to use certain functions) or how to choose the column to set as the key. Also, in a data.table with time as a column, setting any other column as the key would probably mess up the time column too, which makes it even more confusing, as I do not know whether I am allowed to set any other column as the key. Can someone enlighten me, please?
In addition to this answer, please refer to the vignettes Secondary indices and auto indexing and Keys and fast binary search based subset as well.
This issue highlights the other vignettes that we plan to write.
I've updated this answer again (Feb 2016) in light of the new on= feature that allows ad-hoc joins as well. See history for earlier (outdated) answers.
What exactly does setkey(DT, a, b) do?
It does two things:
reorders the rows of the data.table DT by the column(s) provided (a, b) by reference, always in increasing order.
marks those columns as key columns by setting an attribute called sorted to DT.
The reordering is both fast (due to data.table's internal radix sorting) and memory efficient (only one extra column of type double is allocated).
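A small illustration of those two effects (toy data):
library(data.table)
DT <- data.table(a = c(2, 1, 2, 1), b = c(4, 3, 2, 1), v = 1:4)
setkey(DT, a, b)     # rows reordered by a, then b, by reference
key(DT)              # c("a", "b"): one key, made up of two columns
attr(DT, "sorted")   # the attribute that marks DT as sorted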
When is setkey() required?
For grouping operations, setkey() was never an absolute requirement. That is, we can perform a cold-by or adhoc-by.
## "cold" by
require(data.table)
DT <- data.table(x=rep(1:5, each=2), y=1:10)
DT[, mean(y), by=x] # no key is set, order of groups preserved in result
However, prior to v1.9.6, joins of the form x[i] required the key to be set on x. With the new on= argument in v1.9.6+, this is no longer true, so setting keys is not an absolute requirement here either.
## joins using < v1.9.6
setkey(X, a) # absolutely required
setkey(Y, a) # not absolutely required as long as 'a' is the first column
X[Y]
## joins using v1.9.6+
X[Y, on="a"]
# or if the column names are x_a and y_a respectively
X[Y, on=c("x_a" = "y_a")]
Note that the on= argument can be specified explicitly even for keyed joins.
The only operation that absolutely requires the key to be set is the foverlaps() function. But we are working on some more features which, when done, would remove this requirement.
So what's the reason for implementing on= argument?
There are quite a few reasons.
It clearly marks the operation as one involving two data.tables. Just writing X[Y] does not distinguish this as well, although it could be clear by naming the variables appropriately.
It also shows immediately, from that line of code alone, which columns the join/subset is being performed on (without having to trace back to the corresponding setkey() line).
In operations where columns are added or updated by reference, on= operations are much more performant, as they don't need the entire data.table to be reordered just to add/update column(s). For example,
## compare
setkey(X, a, b) # why physically reorder X to just add/update a column?
X[Y, col := i.val]
## to
X[Y, col := i.val, on=c("a", "b")]
In the second case, we did not have to reorder. It is not computing the order that is time consuming, but physically reordering the data.table in RAM; by avoiding that, we retain the original order, and it is still performant.
Even otherwise, unless you're performing joins repetitively, there should be no noticeable performance difference between a keyed and an ad hoc join.
This leads to the question, what advantage does keying a data.table have anymore?
Is there an advantage to keying a data.table?
Keying a data.table physically reorders it based on those column(s) in RAM. Computing the order is not usually the time-consuming part; the reordering itself is. However, once we have the data sorted in RAM, the rows belonging to the same group are all contiguous, which is very cache efficient. It is the sortedness that speeds up operations on keyed data.tables.
It is therefore essential to figure out whether the time spent reordering the entire data.table is worth a cache-efficient join/aggregation. Usually, unless there are repetitive grouping/join operations being performed on the same keyed data.table, there should not be a noticeable difference.
In most cases therefore, there shouldn't be a need to set keys anymore. We recommend using on= wherever possible, unless setting key has a dramatic improvement in performance that you'd like to exploit.
Question: What do you think would be the performance like in comparison to a keyed join, if you use setorder() to reorder the data.table and use on=? If you've followed thus far, you should be able to figure it out :-).
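For reference, a sketch of the two approaches being compared (X, Y, a and b are illustrative names; the answer to the question is left as the exercise above):
setorder(X, a, b)          # physically reorder X, but set no key
X[Y, on = c("a", "b")]     # ad hoc join on already-sorted data
setkey(X, a, b)            # physically reorder X *and* mark the key columns
X[Y]                       # keyed join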
A key is basically an index into a dataset, which allows for very fast and efficient sort, filter, and join operations. These are probably the best reasons to use data tables instead of data frames (the syntax for using data tables is also much more user friendly, but that has nothing to do with keys).
If you don't understand indexes, consider this: a phone book is "indexed" by name. So if I want to look up someone's phone number, it's pretty straightforward. But suppose I want to search by phone number (e.g., look up who has a particular phone number)? Unless I can "re-index" the phone book by phone number, it will take a very long time.
Consider the following example: suppose I have a table, ZIP, of all the zip codes in the US (>33,000) along with associated information (city, state, population, median income, etc.). If I want to look up the information for a specific zip code, the search (filter) is about 1000 times faster if I setkey(ZIP, zipcode) first.
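A small sketch of that comparison (ZIP and its columns are made-up illustrations, not real data):
library(data.table)
ZIP <- data.table(zipcode = sprintf("%05d", 1:33000),
                  city    = sample(LETTERS, 33000, replace = TRUE),
                  pop     = sample(1e3:1e6, 33000))
ZIP[zipcode == "01234"]   # ad hoc filter: vector scan
setkey(ZIP, zipcode)      # physically sort by zipcode, once
ZIP[.("01234")]           # keyed filter: binary search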
Another benefit has to do with joins. Suppose I have a list of people and their zip codes in a data table (call it "PPL"), and I want to append information from the ZIP table (e.g. city, state, and so on). The following code will do it:
setkey(ZIP, zipcode)
setkey(PPL, zipcode)
full.info <- PPL[ZIP, nomatch = FALSE]
This is a "join" in the sense that I'm joining the information from 2 tables based in a common field (zipcode). Joins like this on very large tables are extremely slow with data frames, and extremely fast with data tables. In a real-life example I had to do more than 20,000 joins like this on a full table of zip codes. With data tables the script took about 20 min. to run. I didn't even try it with data frames because it would have taken more than 2 weeks.
IMHO you should not just read but study the FAQ and Intro material. It's easier to grasp if you have an actual problem to apply this to.
[Response to @Frank's comment]
Re: sorting vs. indexing - Based on the answer to this question, it appears that setkey(...) does in fact rearrange the rows in the table (i.e. a physical sort), and does not create an index in the database sense. This has some practical implications: for one thing, if you set the key on a table with setkey(...) and then change any of the values in a key column, data.table merely declares the table to be no longer sorted (by turning off the sorted attribute); it does not dynamically re-index to maintain the sort order (as would happen in a database). Also, "removing the key" using setkey(DT, NULL) does not restore the table to its original, unsorted order.
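A quick sketch of that behaviour (details may vary slightly across data.table versions):
library(data.table)
DT <- data.table(a = c(3, 1, 2), b = 1:3)
setkey(DT, a)        # rows are now physically sorted by a
DT[2, a := 10]       # change a value in the key column by reference
key(DT)              # NULL: data.table just drops the key, it does not re-sort
setkey(DT, a)
setkey(DT, NULL)     # removes the key, but the rows stay in their sorted order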
Re: filter vs. join - the practical difference is that filtering extracts a subset from a single dataset, whereas join combines data from two datasets based on a common field. There are many different kinds of join (inner, outer, left). The example above is an inner join (only records with keys common to both tables are returned), and this does have many similarities to filtering.

Out of memory when modifying a big R data.frame

I have a big data frame taking about 900MB ram. Then I tried to modify it like this:
dataframe[[17]][37544]=0
It seems that this makes R use more than 3GB of RAM, and R complains "Error: cannot allocate vector of size 3.0 Mb" (I am on a 32-bit machine).
I found this way is better:
dataframe[37544, 17]=0
but R's footprint still doubled and the command takes quite some time to run.
From a C/C++ background, I am really confused about this behavior. I thought something like dataframe[37544, 17]=0 should be completed in a blink without costing any extra memory (only one cell should be modified). What is R doing for those commands I posted? What is the right way to modify some elements in a data frame then without doubling the memory footprint?
Thanks so much for your help!
Tao
Following up on Joran's suggestion of data.table, here are some links. Your object, at 900MB, is manageable in RAM even in 32-bit R, with no copies at all.
When should I use the := operator in data.table?
Why has data.table defined := rather than overloading <-?
Also, data.table v1.8.0 (not yet on CRAN but stable on R-Forge) has a set() function which provides even faster assignment to elements, as fast as assignment to matrix (appropriate for use inside loops for example). See latest NEWS for more details and example. Also see ?":=" which is linked from ?data.table.
And, here are 12 questions on Stack Overflow with the data.table tag containing the word "reference".
For completeness:
require(data.table)
DT = as.data.table(dataframe)
# say column name 17 is 'Q' (i.e. LETTERS[17])
# then any of the following :
DT[37544, Q:=0] # using column name (often preferred)
DT[37544, 17:=0, with=FALSE] # using column number
col = "Q"
DT[37544, col:=0, with=FALSE] # variable holding name
col = 17
DT[37544, col:=0, with=FALSE] # variable holding number
set(DT,37544L,17L,0) # using set(i,j,value) in v1.8.0
set(DT,37544L,"Q",0)
But, please do see linked questions and the package's documentation to see how := is more general than this simple example; e.g., combining := with binary search in an i join.
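For example, a sketch of ":=" combined with binary search in i (the id column and the value are hypothetical, not part of the example above):
setkey(DT, id)             # suppose DT also has an id column
DT[J("some_id"), Q := 0]   # matching rows found by binary search, updated in place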
Look up 'copy-on-write' in the context of R discussions related to memory. As soon as one part of a (potentially really large) data structure changes, a copy is made.
A useful rule of thumb is that if your largest object is N mb/gb/... large, you need around 3*N of RAM. Such is life with an interpreted system.
Years ago, when I had to handle large amounts of data on 32-bit machines with relatively little RAM (relative to the data volume), I got good use out of early versions of the bigmemory package. It uses the 'external pointer' interface to keep large gobs of memory outside of R. That saves you not only the '3x' factor, but possibly more, as you may get away with non-contiguous memory (which is the other thing R likes).
Data frames are the worst structure you can choose to modify. Due to the quite complex handling of all their features (such as keeping row names in sync, partial matching, etc.), which is done in pure R code (unlike most other objects, which can go straight to C), they tend to force additional copies, as you can't edit them in place. Check R-devel for the detailed discussions on this; it has been discussed at length several times.
The practical rule is to never use data frames for large data, unless you treat them read-only. You will be orders of magnitude more efficient if you either work on vectors or matrices.
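A rough way to see the copying for yourself (tracemem() reports duplications; exactly what you see depends on your R version):
df <- data.frame(x = runif(1e6), y = runif(1e6))
tracemem(df)          # report whenever R duplicates this object
df[1, 1] <- 0         # [<-.data.frame typically triggers one or more copies
x <- runif(1e6)
tracemem(x)
x[1] <- 0             # a plain vector can usually be modified in place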
There is a type of object called ffdf in the ff package, which is basically a data.frame stored on disk. In addition to the other tips above, you can try that.
You can also try the RSQLite package.
