Difference between 'select' and '$' in R

I want to understand the speed difference between select and $ to subset columns in R (whilst appreciating that they do not return exactly the same things, rather both perform the conceptual get-me-a-column operation). I would like to understand when either is most appropriate.
Specifically, under what conditions would the following select statement be faster than the corresponding $ statement?
Syntax is:
select(df, colName1, colName2, ...)
df$colName

In summary, you should use dplyr when speed of development, ease of understanding or ease of maintenance is most important.
Benchmarks below show that the operation takes longer with dplyr than with the base R equivalents.
dplyr returns a different (more complex) object.
Base R $ and similar operations can be faster to execute, but come with additional risks (e.g. partial-matching behaviour), may be harder to read and/or maintain, and return a (minimal) vector object, which might be missing some of the contextual richness of a data frame.
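As a quick illustration of the return-type difference (using the built-in mtcars purely for demonstration):
class(mtcars$cyl)                  # "numeric" -- a bare vector
class(dplyr::select(mtcars, cyl))  # "data.frame" -- still a one-column data frame
class(dplyr::pull(mtcars, cyl))    # "numeric" -- pull() is the vector-returning dplyr verb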
This might also help tease out (if one is wont to avoid looking at the source code of packages) that dplyr is doing a lot of work under the hood to target columns. It's also an unfair test since we get back different things, but all the ops are "give me this column" ops, so read it with that context:
library(dplyr)
microbenchmark::microbenchmark(
base1 = mtcars$cyl, # returns a vector
base2 = mtcars[['cyl', exact = TRUE]], # returns a vector
base2a = mtcars[['cyl', exact = FALSE]], # returns a vector
base3 = mtcars[,"cyl"], # returns a vector
base4 = subset(mtcars, select = cyl), # returns a 1 column data frame
dplyr1 = dplyr::select(mtcars, cyl), # returns a 1 column data frame
dplyr2 = dplyr::select(mtcars, "cyl"), # returns a 1 column data frame
dplyr3 = dplyr::pull(mtcars, cyl), # returns a vector
dplyr4 = dplyr::pull(mtcars, "cyl") # returns a vector
)
## Unit: microseconds
## expr min lq mean median uq max neval
## base1 4.682 6.3860 9.23727 7.7125 10.6050 25.397 100
## base2 4.224 5.9905 9.53136 7.7590 11.1095 27.329 100
## base2a 3.710 5.5380 7.92479 7.0845 10.1045 16.026 100
## base3 6.312 10.9935 13.99914 13.1740 16.2715 37.765 100
## base4 51.084 70.3740 92.03134 76.7350 95.9365 662.395 100
## dplyr1 698.954 742.9615 978.71306 784.8050 1154.6750 3568.188 100
## dplyr2 711.925 749.2365 1076.32244 808.9615 1146.1705 7875.388 100
## dplyr3 64.299 78.3745 126.97205 85.3110 112.1000 2383.731 100
## dplyr4 63.235 73.0450 99.28021 85.1080 114.8465 263.219 100
But what if we have a lot of columns?
# Make a wider version of mtcars
do.call(
cbind.data.frame,
lapply(1:20, function(i) setNames(mtcars, sprintf("%s_%d", colnames(mtcars), i)))
) -> mtcars_manycols
# I randomly chose to get "cyl_4"
microbenchmark::microbenchmark(
base1 = mtcars_manycols$cyl_4, # returns a vector
base2 = mtcars_manycols[['cyl_4', exact = TRUE]], # returns a vector
base2a = mtcars_manycols[['cyl_4', exact = FALSE]], # returns a vector
base3 = mtcars_manycols[,"cyl_4"], # returns a vector
base4 = subset(mtcars_manycols, select = cyl_4), # returns a 1 column data frame
dplyr1 = dplyr::select(mtcars_manycols, cyl_4), # returns a 1 column data frame
dplyr2 = dplyr::select(mtcars_manycols, "cyl_4"), # returns a 1 column data frame
dplyr3 = dplyr::pull(mtcars_manycols, cyl_4), # returns a vector
dplyr4 = dplyr::pull(mtcars_manycols, "cyl_4") # returns a vector
)
## Unit: microseconds
## expr min lq mean median uq max neval
## base1 4.534 6.8535 12.15802 8.7865 13.1775 75.095 100
## base2 4.150 6.5390 11.59937 9.3005 13.2220 73.332 100
## base2a 3.904 5.9755 10.73095 7.5820 11.2715 61.687 100
## base3 6.255 11.5270 16.42439 13.6385 18.6910 70.106 100
## base4 66.175 89.8560 118.37694 99.6480 122.9650 340.653 100
## dplyr1 1970.706 2155.4170 3051.18823 2443.1130 3656.1705 9354.698 100
## dplyr2 1995.165 2169.9520 3191.28939 2554.2680 3765.9420 11550.716 100
## dplyr3 124.295 142.9535 216.89692 166.7115 209.1550 1138.368 100
## dplyr4 127.280 150.0575 195.21398 169.5285 209.0480 488.199 100
For a ton of projects, dplyr is a great choice. Raw speed of execution, however, is very often not a strength of the "tidyverse", but the speed of development and expressiveness usually outweigh the speed difference.
NOTE: dplyr verbs are likely better candidates than subset(), and while I lazily use $, it is also a tad dangerous due to its default partial-matching behaviour, as is [[ ]] without exact = TRUE. A good habit (IMO) to get into is setting options(warnPartialMatchDollar = TRUE) in all your projects where you aren't knowingly counting on this behaviour.
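A minimal sketch of that partial-matching footgun (the column name is made up for illustration):
df <- data.frame(value_total = 1:3)
df$value                      # silently partial-matches value_total: 1 2 3
options(warnPartialMatchDollar = TRUE)
df$value                      # same result, but now with a warning
df[["value", exact = TRUE]]   # NULL -- no partial matching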

They are not the same. If you're looking for the same functionality, you could consider pull() from the same dplyr package.
The dollar sign returns a vector built from the data frame; pull() does the same.
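For example, with mtcars:
identical(dplyr::pull(mtcars, cyl), mtcars$cyl) # TRUE -- both are bare numeric vectors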

select is in the dplyr package, part of the tidyverse. https://dplyr.tidyverse.org/
you might do something like
df %>%
select(colName1, colName2)
This selects those columns from df. These statements are written like verbs (e.g. select, arrange, group_by, etc.), which makes it much easier to work with data.
$ is from base R. It returns just that single column from df.

Related

How to make factor names appear in ifelse statement in R?

I have the following dataset. I want to create a column so that if there is a number in the unid column, then df$identification should say "unidentified"; otherwise it should contain whatever is in the species column. So the final output of df$identification should be x, y, unidentified, unidentified. With this code it shows 1, 2, unidentified, unidentified.
Please note, for other purposes I want to use only the unid column for the !(is.na) part of the ifelse statement and not the species.
unid <- c(NA,NA,1,4)
species <- c("x","y",NA,NA)
df <- data.frame(unid, species)
df$identification <- ifelse(!is.na(unid), "unidentified", df$species)
#Current Output of df$identification:
1,2,unidentified,unidentified
#Needed Output
x,y,unidentified,unidentified
You can coerce the column of class factor to class character in the ifelse.
df$identification <- ifelse(!is.na(unid), "unidentified", as.character(df$species))
df
# unid species identification
#1 NA x x
#2 NA y y
#3 1 <NA> unidentified
#4 4 <NA> unidentified
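The 1 and 2 in the original output are the factor's underlying integer codes leaking through; a tiny illustration with a toy factor:
f <- factor(c("x", "y"))
as.integer(f)    # 1 2 -- the codes that surfaced before the as.character() fix
as.character(f)  # "x" "y" -- the labels we actually want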
Edit.
After the OP accepted the answer, I reminded myself that ifelse is slow and indexing fast, so I tested both using a larger dataset.
First of all, see if both solutions produce the same results:
df$id1 <- ifelse(!is.na(unid), "unidentified", as.character(df$species))
df$id2 <- "unidentified"
df$id2[is.na(unid)] <- species[is.na(unid)]
identical(df$id1, df$id2)
#[1] TRUE
The results are the same.
Now time them both using package microbenchmark.
n <- 1e4
df1 <- data.frame(unid = rep(unid, n), species = rep(species, n))
microbenchmark::microbenchmark(
ifelse = {df1$id1 <- ifelse(!is.na(df1$unid), "unidentified", as.character(df1$species))},
index = {df1$id2 <- "unidentified"
df1$id2[is.na(df1$unid)] <- as.character(df1$species)[is.na(df1$unid)] # index the full column, not the length-4 species vector
}
)
#Unit: nanoseconds
# expr min lq mean median uq max neval cld
# ifelse 12502465 12749881 16080160.39 14365841 14507468.5 85836870 100 c
# index 3243697 3299628 4575818.33 3326692 4983170.0 74526390 100 b
On average, indexing is about three to four times faster here. More than worth the trouble of writing two lines of code instead of just one for ifelse.

How can I add a column with the names of the nth list element to each nth element of the list?

Say I have
library(dplyr)
a <- list(a=tbl_df(cars), b=tbl_df(iris))
How can I add to each element of this list a column name whose values are the name of the named element of the list?
For instance, this how the output should look like for the first element
Source: local data frame [50 x 3]
speed dist name
(dbl) (dbl) (chr)
1 4 2 a
2 4 10 a
3 7 4 a
4 7 22 a
5 8 16 a
6 9 10 a
7 10 18 a
8 10 26 a
9 10 34 a
10 11 17 a
After all this commenting, guess I'll write an answer.
You should use a for loop for this: it's quick to code, quick to execute, readable and straightforward:
for (i in seq_along(a)) a[[i]]$name = names(a)[i]
You could use map or mapply or lapply instead of the for loop. In this case, I would think it will be less readable.
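For instance, a purrr-based sketch of the same loop (assuming purrr is available):
library(purrr)
a <- imap(a, function(x, nm) { x$name <- nm; x }) # imap() passes each element together with its name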
You could also use mutate instead of [ for adding the column. This will be slower:
library(microbenchmark)
library(dplyr)
cars_tbl = tbl_df(cars)
mbm = microbenchmark
mbm(
mutate = {cars_tbl = mutate(cars_tbl, name = 'a')},
base = {cars_tbl['name'] = 'a'}
)
# Unit: microseconds
# expr min lq mean median uq max neval cld
# mutate 240.617 262.4730 293.29001 276.158 299.7255 813.078 100 b
# base 34.971 42.1935 55.46356 53.407 57.3980 226.932 100 a
For such a simple operation, [<- is going to be hard to beat. data.table will probably be faster, but only if the object is already a data.table. If the object is a data.frame rather than a tbl_df, then the mutate is about twice as slow. But these differences are in microseconds. Unless you are repeatedly doing this operation to lists of at least hundreds of thousands of data frames it won't matter.
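For reference, a minimal data.table sketch of the same column-add (assuming data.table is installed):
library(data.table)
cars_dt = as.data.table(cars) # must already be a data.table to benefit
cars_dt[, name := 'a']        # adds the column by reference, without copying the table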
This is not to say dplyr has poor performance - when you are using the grouping operations, relying on the NSE built in to dplyr, it's excellent. This is just a simple case where the simple base solution is easiest and also quickest.
If we increase the size of the data enough so that it takes a noticeable amount of time to do these operations (10 million rows, here), the differences essentially go away:
df = tbl_df(data.frame(x = rep(1, 1e7)))
mbm(
mutate = {df = mutate(df, name = 'a')},
base = {df['name'] = 'a'}
)
# Unit: milliseconds
# expr min lq mean median uq max neval cld
# mutate 58.08095 59.87531 132.3180 105.22507 207.6439 261.8121 100 a
# base 52.09899 53.96386 129.9304 99.96153 203.8581 237.0084 100 a
Implementing with for loops and with Map(), comparing [<- and mutate:
# base for loop
for (i in seq_along(a)) {
a[[i]]$name = names(a)[i]
}
# dplyr in for loop
for (i in seq_along(a)) {
a[[i]] = mutate(a[[i]], name = names(a)[i])
}
# dplyr hiding the loop in Map()
a = Map(function(x, y) mutate(x, name = y), x = a, y = names(a))
We could benchmark these (I did -- see the edit history if you want the results), but the differences are less than 1 millisecond so it shouldn't matter. Go with whatever is easiest for you to read, write, and understand.
All this comes with the caveat that if your eventual goal is to bind these data frames together and you want the name column to see what list element the data came from, then that is implemented directly in dplyr::bind_rows.
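For example, a quick sketch of that route (the .id argument supplies the name column):
combined = dplyr::bind_rows(a, .id = 'name') # stores each list element's name in a "name" column
head(combined)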

Millions of tiny matches in R : need performance

I have a one-million-word vector called WORDS. I also have a list of 9 million objects called SENTENCES. Each object in my list is a sentence, represented by a vector of 10-50 words. Here is an example:
head(WORDS)
[1] "aba" "accra" "ada" "afrika" "afrikan" "afula" "aggamemon"
SENTENCES[[1]]
[1] "how" "to" "interpret" "that" "picture"
I want to convert every sentence of my list into a numeric vector whose elements correspond to the positions of the sentence's words in the big WORDS vector.
Actually, I know how to do it with that command :
convert <- function(sentence){
return(which(WORDS %in% sentence))
}
SENTENCES_NUM <- lapply(SENTENCES, convert)
The problem is that it takes far too long; RStudio blows up even though I have a computer with 16 GB of RAM. So the question is: do you have any ideas to speed up this computation?
fastmatch, a small package by an R core person, hashes the lookups so the initial and especially subsequent searches are faster.
What you are really doing is making a factor with predefined levels common to each sentence. The slow step in his C code is sorting the factor levels, which you can avoid by providing the (unique) list of factor levels to his fast version of the factor function.
If you just want the integer positions, you can easily convert from factor to integer: many do this inadvertently.
You don't actually need a factor at all for what you want, just match. Your code also generates a logical vector, then recalculates positions from it: match just goes straight to the positions.
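Stripped of the benchmarking below, the core replacement for the question's convert() is just this (a sketch; WORDS and SENTENCES as defined in the question):
library(fastmatch)
SENTENCES_NUM <- lapply(SENTENCES, fmatch, table = WORDS) # hashed lookup; the hash table is built on the first call and reused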
library(fastmatch)
library(microbenchmark)
WORDS <- read.table("https://dotnetperls-controls.googlecode.com/files/enable1.txt", stringsAsFactors = FALSE)[[1]]
words_factor <- as.factor(WORDS)
# generate 100 sentences of between 5 and 15 words:
SENTENCES <- lapply(c(1:100), sample, x = WORDS, size = sample(c(5:15), size = 1))
bench_fun <- function(fun)
lapply(SENTENCES, fun)
# poster's slow solution:
hg_convert <- function(sentence)
return(which(WORDS %in% sentence))
jw_convert_match <- function(sentence)
match(sentence, WORDS)
jw_convert_match_factor <- function(sentence)
match(sentence, words_factor)
jw_convert_fastmatch <- function(sentence)
fmatch(sentence, WORDS)
jw_convert_fastmatch_factor <- function(sentence)
fmatch(sentence, words_factor)
message("starting benchmark one")
print(microbenchmark(bench_fun(hg_convert),
bench_fun(jw_convert_match),
bench_fun(jw_convert_match_factor),
bench_fun(jw_convert_fastmatch),
bench_fun(jw_convert_fastmatch_factor),
times = 10))
# now again with big samples
# generating the SENTENCES is quite slow...
SENTENCES <- lapply(c(1:1e6), sample, x = WORDS, size = sample(c(5:15), size = 1))
message("starting benchmark two, compare with factor vs vector of words")
print(microbenchmark(bench_fun(jw_convert_fastmatch),
bench_fun(jw_convert_fastmatch_factor),
times = 10))
I put this on https://gist.github.com/jackwasey/59848d84728c0f55ef11
The results don't format very well, suffice to say, fastmatch with or without factor input is dramatically faster.
# starting benchmark one
Unit: microseconds
expr min lq mean median uq max neval
bench_fun(hg_convert) 665167.953 678451.008 704030.2427 691859.576 738071.699 777176.143 10
bench_fun(jw_convert_match) 878269.025 950580.480 962171.6683 956413.486 990592.691 1014922.639 10
bench_fun(jw_convert_match_factor) 1082116.859 1104331.677 1182310.1228 1184336.810 1198233.436 1436600.764 10
bench_fun(jw_convert_fastmatch) 203.031 220.134 462.1246 289.647 305.070 2196.906 10
bench_fun(jw_convert_fastmatch_factor) 251.474 300.729 1351.6974 317.439 362.127 10604.506 10
# starting benchmark two, compare with factor vs vector of words
Unit: seconds
expr min lq mean median uq max neval
bench_fun(jw_convert_fastmatch) 3.066001 3.134702 3.186347 3.177419 3.212144 3.351648 10
bench_fun(jw_convert_fastmatch_factor) 3.012734 3.149879 3.281194 3.250365 3.498593 3.563907 10
And therefore I wouldn't go to the trouble of a parallel implementation just yet.
Won't be faster, but it is the tidy way of going about things.
library(dplyr)
library(tidyr)
sentence =
  data_frame(word.name = SENTENCES,
             sentence.ID = 1:length(SENTENCES)) %>%   # one row per sentence, list-column of words
  unnest(word.name)                                   # expand to one row per word
word =
  data_frame(word.name = WORDS,
             word.ID = 1:length(WORDS))
sentence__word =
  sentence %>%
  left_join(word)                                     # adds word.ID by matching on word.name

Use Factor Vector to Lookup Value in Data Frame

I have a vector
> head(gbmPred)
[1] COMPLETED DEAD COMPLETED COMPLETED COMPLETED LOW
I also have a data frame
> head(gbmPredProb)
COLLECTION COMPLETED DEAD LOW
1 0.04535981 0.8639282 0.07698963 0.01372232
2 0.19031127 0.6680874 0.11708416 0.02451713
3 0.25004446 0.6789679 0.04827067 0.02271702
4 0.09625138 0.7877128 0.09906595 0.01696983
5 0.15696875 0.7617585 0.04441733 0.03685539
6 0.14157307 0.7690410 0.06057754 0.02880836
I want to create a vector by using the levels in gbmPred to look up the values in gbmPredProb:
0.8639282
0.1170841
0.6789679
0.7877128
0.7617585
0.02880836
Does anyone know how to do this in R? Appreciate the help.
EDIT *** Sorry copy and paste error. Fixed above
The first value .86 matches COMPLETED
the second value .11 matches DEAD
What I am looking for is to loop through the vector gbmPred to get the value (COMPLETED, etc.), then search the gbmPredProb data frame for the value in the column with the same name, at the same row index as the vector.
So, the first value is COMPLETED. Look at gbmPredProb and get .863
The second value of gbmPred is DEAD. Look at gbmPredProb and get .117
The third value of gbmPred is COMPLETED. Look at gbmPredProb and get .678
If you have a bunch of (row, col) pairs that you want to grab out of a matrix, a good way to get them is to index by a 2-column matrix where the first column is all the row numbers of the elements you want and the second column is all the column numbers of the elements you want:
gbmPredProb[cbind(1:length(gbmPred), match(gbmPred, names(gbmPredProb)))]
# [1] 0.86392820 0.11708416 0.67896790 0.78771280 0.76175850
# [6] 0.02880836
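To see the indexing trick in isolation, on a toy matrix (values are arbitrary):
m <- matrix(1:9, nrow = 3)
m[cbind(c(1, 3), c(2, 1))] # picks elements (1,2) and (3,1): returns 4 3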
One advantage of this sort of an approach is that it will be a good deal quicker than a row-by-row approach on larger data frames:
gbmPredProb <- gbmPredProb[rep(1:6, each=1000),] # 6000x4
gbmPred <- rep(gbmPred, each=1000) # Length 6000
josilber <- function(mat, vec) mat[cbind(1:length(vec), match(vec, names(mat)))]
rscriven <- function(mat, vec) sapply(seq_along(vec), function(i) mat[i, as.character(vec[i])])
all.equal(josilber(gbmPredProb, gbmPred), rscriven(gbmPredProb, gbmPred))
# [1] TRUE
library(microbenchmark)
microbenchmark(josilber(gbmPredProb, gbmPred), rscriven(gbmPredProb, gbmPred))
# Unit: microseconds
# expr min lq median uq max neval
# josilber(gbmPredProb, gbmPred) 328.524 398.8545 442.065 512.949 766.082 100
# rscriven(gbmPredProb, gbmPred) 97843.015 111478.4360 117294.079 123901.368 254645.966 100

Difference between sum(), length(which()), and nrow() in R

When attempting to count the observations that meet a specified condition, it appears that each of these functions works, as they derive the same result. But how are these functions operating differently in the background, and in what scenarios would it not be appropriate to swap one for the other?
sum(grade.data$Quiz >= (100*.45))
length(which(grade.data$Quiz >= (100*.45)))
nrow(grade.data[grade.data$Quiz >= (100*.45),])
The middle one will not give misleading answers when there are missing values. Both of the other ones will.
Number 1 sums a logical vector that is coerced to 1's and 0's. If you added na.rm = TRUE it would be valid when NAs are present.
Number 2 determines the length of the integer index vector returned by which(); since which() drops NAs, it is robust to missing values.
Number 3 constructs a subset and then counts the rows. I would expect it to be rather inefficient compared to the other two, as well as having the problem with NA values. If you added & !is.na(grade.data$Quiz) to the logical expression inside [ , ], you would get valid answers.
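A small sketch of that NA behaviour (toy scores, threshold 45):
x <- c(10, NA, 80)
sum(x >= 45)                # NA -- the missing value propagates
sum(x >= 45, na.rm = TRUE)  # 1
length(which(x >= 45))      # 1 -- which() drops the NA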
A fourth method like the third (and also inefficient) without the NA problem would be:
nrow( subset( grade.data, Quiz >= (100*.45) ) )
Let's generate a 100k-row data.frame to see which method is fastest.
grade.data = data.frame(Quiz = sample(100000), age = sample(18:24, 100000, replace = TRUE))
library(data.table)
dt.grade.data = as.data.table(grade.data)
The methods posted here
data.table = function(x) dt.grade.data[,sum(Quiz>=100*.45)]
logical.sum = function(x) sum(grade.data$Quiz >= (100*.45))
logical.counting.table = function(x) table(grade.data$Quiz >= (100*.45))[["TRUE"]]
logical.which = function(x) length(which(grade.data$Quiz >= (100*.45)))
subsetting = function(x) nrow(grade.data[grade.data$Quiz >= (100*.45),])
subset.cmd = function(x) nrow(subset(grade.data, Quiz >= (100*.45) ))
Benchmark
microbenchmark(data.table(), logical.sum(), logical.counting.table(), logical.which(), subsetting(), subset.cmd(), times = 100L)
Unit: microseconds
expr min lq median uq max neval
data.table() 1766.148 2188.8000 2308.267 2469.405 29185.36 100
logical.sum() 739.385 945.4765 993.921 1074.386 10253.67 100
logical.counting.table() 28867.605 30847.0290 31546.796 32725.255 65514.14 100
logical.which() 701.205 1080.9555 1138.635 1228.545 3565.96 100
subsetting() 27376.931 28406.7730 29243.866 30564.371 168034.45 100
subset.cmd() 29004.315 31203.1730 32219.878 33362.003 89801.34 100
It seems that a vectorized logical check is the fastest method. In a smaller data frame (500 rows), data.table is actually much slower than all the other methods.
edit: Apparently, the relative efficiency of logical.sum() and logical.which() depends on the data structure. Using a different Quiz score distribution can make logical.sum() the fastest method. And as expected, data.table selection/subsetting blows data.frame subsetting out of the water.
