Aggregating data frame rows using an input vector

I have this toy data.frame:
df = data.frame(id = c("a","b","c","d"), value = c(2,3,6,5))
and I'd like to aggregate its rows according to this toy vector:
collapsed.ids = c("a,b","c","d")
where the aggregated data.frame should keep max(df$value) of its aggregated rows.
So for this toy example the output would be:
> aggregated.df
id value
1 a,b 3
2 c 6
3 d 5
I should note that my real data.frame is ~150,000 rows

I would use data.table for this.
Something like the following should work:
library(data.table)
DT <- data.table(df, key = "id") # Main data.table
Key <- data.table(ind = collapsed.ids) # your "Key" table
## We need your "Key" table in a long form
Key <- Key[, list(id = unlist(strsplit(ind, ",", fixed = TRUE))), by = ind]
setkey(Key, id) # Set the key to facilitate a merge
## Merge and aggregate in one step
DT[Key][, list(value = max(value)), by = ind]
# ind value
# 1: a,b 3
# 2: c 6
# 3: d 5

You don't need data.table, you can just use base R.
split.ids <- strsplit(collapsed.ids, ",")
split.df <- data.frame(id = unlist(split.ids),
                       joinid = rep(collapsed.ids, sapply(split.ids, length)))
aggregated.df <- aggregate(value ~ joinid, data = merge(df, split.df), max)
Result:
#   joinid value
# 1    a,b     3
# 2      c     6
# 3      d     5
Benchmark
df <- df[rep(1:4, 50000), ] # Make a big data.frame
system.time(...) # of the above code
# user system elapsed
# 1.700 0.154 1.947
EDIT: Apparently Ananda's code runs in 0.039 seconds, so I'm eating crow. But either is acceptable for this size.
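For reference, here is a sketch of how the data.table route from the first answer can be timed on the same enlarged data.frame (no timings claimed; they will vary by machine):
system.time({
  DT <- data.table(df, key = "id")
  Key <- data.table(ind = collapsed.ids)
  Key <- Key[, list(id = unlist(strsplit(ind, ",", fixed = TRUE))), by = ind]
  setkey(Key, id)
  DT[Key][, list(value = max(value)), by = ind]
})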

Related

rbindlist function to keep data frames with 0 rows in output

I have a list of data frames df_list that I would like to convert into a single data frame out_df. This is simple with the function rbindlist, but my issue is that only non-empty data frames are kept in the output data frame.
I know the fill option fills missing columns with NAs by using:
out_df<- rbindlist(df_list, fill=TRUE)
But what I want to do as well is to keep and fill missing rows from the input list. What would be the way to do this?
Thanks in advance.
You can just create a new list which replaces any empty data.table with one with an NA row.
# Create some data
dt1 <- data.table(a = 1:3, b = letters[1:3])
dt2 <- data.table(a = numeric(0), b = character(0))
dt3 <- dt1
l <- list(dt1, dt2, dt3)
One way to do this is through coercing an empty matrix:
l2 <- lapply(l, \(dt) {
  if (nrow(dt) == 0) {
    col_names <- names(dt)
    dt <- matrix(ncol = ncol(dt)) |>
      data.table() |>
      setNames(col_names)
  }
  dt
})
The important thing is to make sure we are not copying the entirety of the data. We can check that by making sure that an unchanged data.table has the same memory address:
tracemem(dt1) == tracemem(l[[1]]) # TRUE
tracemem(l[[1]]) == tracemem(l2[[1]]) # TRUE
rbindlist(l2)
# a b
# 1: 1 a
# 2: 2 b
# 3: 3 c
# 4: NA <NA>
# 5: 1 a
# 6: 2 b
# 7: 3 c
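An alternative sketch, assuming data.table's out-of-range indexing (which pads with an all-NA row) also applies to zero-row tables. Unlike the matrix coercion, this preserves the original column types, and non-empty tables pass through untouched, so no copies are made there either:
l3 <- lapply(l, function(dt) if (nrow(dt) == 0L) dt[1L] else dt)
rbindlist(l3)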

Is there a function in R to avoid using a loop when we look for all matching indexes for all elements of a vector?

I have this for loop which returns the first matching index of every element in the vector,
but it is very slow (nrow(data) > 50,000)
example:
id1 <- c(1,5,8,10)
id2 <- c(5,8,10,1)
data <- data.frame(id1,id2, idx = 1:length(id1))
The result should be:
data$new_id
# [1] 4 1 2 3
data$new_id <- NA
for (i in 1:nrow(data)) {
  data$new_id[i] <- which(data$id2 == data$id1[i])
}
I found that this works for a small data frame, but unfortunately R returns "Error: cannot allocate vector of size 22.2 Gb" on the real data:
A <- outer(data$id1, data$id2, "==")
data <- data %>%
  mutate(new_id = which(t(A)),
         id0 = 0:(nrow(data) - 1),
         new_id = new_id - nrow(data) * id0)
Does another solution exist to do this indexing?
We can use match, which is very fast as a base R function. Here, we are just matching two columns of a dataset without even needing to join the two datasets:
with(data, match(id1, id2))
#[1] 4 1 2 3
To make this faster, use fmatch from fastmatch
library(fastmatch)
with(data, fmatch(id1, id2))
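# [1] 4 1 2 3  (same result as match)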
Benchmarks
set.seed(24)
data1 <- data.frame(id1 = sample(1e7), id2 = sample(1e7))
system.time(with(data1, match(id1, id2)))
# user system elapsed
# 1.635 0.079 1.691
system.time(with(data1, fmatch(id1, id2)))
# user system elapsed
# 1.155 0.062 1.195
system.time({
  data2 <- data.table(id = data1$id1)
  data3 <- data.table(id = data1$id2)
  data2[data3, idx := .I, on = .(id)]
})
# user system elapsed
# 2.306 0.051 2.353
When using large datasets, you could try a data.table join (usually pretty fast). It should be even faster (on large sets) if you set keys first; see the keyed sketch after the output below.
library(data.table)
# make data.tables out of your vectors
dt1 <- data.table(id = id1)
dt2 <- data.table(id = id2)
# update join: write into dt1 the row number in dt2 of the matching id
dt1[dt2, idx := .I, on = .(id)]
# id idx
# 1: 1 4
# 2: 5 1
# 3: 8 2
# 4: 10 3
NB: this only returns the first matching position!
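A keyed variant, as a sketch. Note that setkey() physically sorts a table, so the original row positions have to be stored first; the ord and pos column names below are made up for illustration:
dt1 <- data.table(id = id1, ord = seq_along(id1)) # remember dt1's original order
dt2 <- data.table(id = id2, pos = seq_along(id2)) # remember positions in dt2
setkey(dt1, id) # sorts dt1 by id
setkey(dt2, id) # sorts dt2 by id
dt1[dt2, idx := pos] # keyed update join; no 'on' argument needed
setorder(dt1, ord)   # restore dt1's original row order
dt1[, ord := NULL]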

Create combinations of measurements concatenated using underscore

I have a dataframe df1
ID <- c("A","B","C")
Measurement <- c("Length","Height","Breadth")
df1 <- data.frame(ID,Measurement)
I am trying to create all orderings of the measurements, concatenated with an underscore between them, and put them under the ID column "ALL".
Here is my desired output
ID Measurement
A Length
B Height
C Breadth
ALL Length_Height_Breadth
ALL Length_Breadth_Height
ALL Breadth_Height_Length
ALL Breadth_Length_Height
ALL Height_Length_Breadth
ALL Height_Breadth_Length
Also, when there are duplicate measurements in the "Measurement" column, I want to eliminate the underscore.
For example:
ID <- c("A","B")
Measurement <- c("Length","Length")
df2 <- data.frame(ID,Measurement)
Then I would want the desired output to be
ID Measurement
A Length
B Length
ALL Length
I am trying to do something like this, which is totally wrong:
df1$ID <- paste(df1$Measurement, df1$Measurement, sep="_")
Can someone point me in the right direction to achieving the above outputs?
I would like to see how it is done programmatically instead of using the actual measurement names. I am intending to apply the logic to a larger dataset that has several measurement names and so a general solution would be much appreciated.
We could use the permn function from the combinat package:
library(combinat)
sol_1 <- sapply(permn(unique(df1$Measurement)),
                FUN = function(x) paste(x, collapse = '_'))
rbind.data.frame(df1, data.frame('ID' = 'All', 'Measurement' = sol_1))
# ID Measurement
# 1 A Length
# 2 B Height
# 3 C Breadth
# 4 All Length_Height_Breadth
# 5 All Length_Breadth_Height
# 6 All Breadth_Length_Height
# 7 All Breadth_Height_Length
# 8 All Height_Breadth_Length
# 9 All Height_Length_Breadth
sol_2 <- sapply(permn(unique(df2$Measurement)),
                FUN = function(x) paste(x, collapse = '_'))
rbind.data.frame(df2, data.frame('ID' = 'All', 'Measurement' = sol_2))
# ID Measurement
# 1 A Length
# 2 B Length
# 3 All Length
Giving credit where credit is due: Generating all distinct permutations of a list.
We could also use permutations from the gtools package (HT #joel.wilson):
library(gtools)
unique_meas <- as.character(unique(df1$Measurement))
apply(permutations(length(unique_meas), length(unique_meas), unique_meas),
      1, FUN = function(x) paste(x, collapse = '_'))
# "Breadth_Height_Length" "Breadth_Length_Height"
# "Height_Breadth_Length" "Height_Length_Breadth"
# "Length_Breadth_Height" "Length_Height_Breadth"

Fastest way to filter a data.frame list column contents in R / Rcpp

I have a data.frame:
df <- structure(list(id = 1:3, vars = list("a", c("a", "b", "c"), c("b",
"c"))), .Names = c("id", "vars"), row.names = c(NA, -3L), class = "data.frame")
with a list column (each with a character vector):
> str(df)
'data.frame': 3 obs. of 2 variables:
$ id : int 1 2 3
$ vars:List of 3
..$ : chr "a"
..$ : chr "a" "b" "c"
..$ : chr "b" "c"
I want to filter the data.frame according to setdiff(vars,remove_this)
library(dplyr)
library(tidyr)
res <- df %>% mutate(vars = lapply(df$vars, setdiff, "a"))
which gets me this:
> res
id vars
1 1
2 2 b, c
3 3 b, c
But to drop the character(0) vars I have to do something like:
res %>% unnest(vars) # and then do the equivalent of nest(vars) again after...
Actual datasets:
560K rows and 3800K rows, each with 10 more columns (to carry along).
(This is quite slow, which leads to the question...)
What is the fastest way to do this in R?
Is there a dplyr / data.table / other faster method?
How to do this with Rcpp?
UPDATE/EXTENSION:
can the column modification be done in place rather than by copying the lapply(vars, setdiff, ...) result?
what's the most efficient way to filter out vars == character(0) if it must be a separate step?
Setting aside any algorithmic improvements, the analogous data.table solution is automatically going to be faster because you won't have to copy the entire thing just to add a column:
library(data.table)
dt = as.data.table(df) # or use setDT to convert in place
dt[, newcol := lapply(vars, setdiff, 'a')][sapply(newcol, length) != 0]
# id vars newcol
#1: 2 a,b,c b,c
#2: 3 b,c b,c
You can also delete the original column (with basically 0 cost) by adding [, vars := NULL] at the end. Or you can simply overwrite the initial column if you don't need that info, i.e. dt[, vars := lapply(vars, setdiff, 'a')].
Now as far as algorithmic improvements go, assuming your id values are unique for each vars (and if not, add a new unique identifier), I think this is much faster and automatically takes care of the filtering:
dt[, unlist(vars), by = id][!V1 %in% 'a', .(vars = list(V1)), by = id]
# id vars
#1: 2 b,c
#2: 3 b,c
To carry along the other columns, I think it's easiest to simply merge back:
dt[, othercol := 5:7]
# notice the keyby
dt[, unlist(vars), by = id][!V1 %in% 'a', .(vars = list(V1)), keyby = id][dt, nomatch = 0]
# id vars i.vars othercol
#1: 2 b,c a,b,c 6
#2: 3 b,c b,c 7
Here's another way:
# prep
DT <- data.table(df)
DT[,vstr:=paste0(sort(unlist(vars)),collapse="_"),by=1:nrow(DT)]
setkey(DT,vstr)
get_badkeys <- function(x)
unlist(sapply(1:length(x),function(n) combn(sort(x),n,paste0,collapse="_")))
# choose values to exclude
baduns <- c("a","b")
# subset
DT[!J(get_badkeys(baduns))]
This is fairly fast, but it takes up your key.
Benchmarks. Here's a made-up example:
Candidates:
hannahh <- function(df, baduns) {
  df %>%
    mutate(vars = lapply(.$vars, setdiff, baduns)) %>%
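    # NB: '!!' below is old-style double logical negation (non-zero lengths
    # become TRUE); with rlang-era dplyr, use filter(lengths(vars) > 0) instead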
    filter(!!sapply(vars, length))
}
eddi <- function(df, baduns) {
  dt = as.data.table(df)
  dt[, unlist(vars), by = id][
    !V1 %in% baduns,
    .(vars = list(V1)), keyby = id][dt, nomatch = 0]
}
stevenb <- function(df, baduns) {
  df %>%
    rowwise() %>%
    do(id = .$id, vars = .$vars, newcol = setdiff(.$vars, baduns)) %>%
    mutate(length = length(newcol)) %>%
    ungroup() %>%
    filter(length > 0)
}
frank <- function(df, baduns) {
  DT <- data.table(df)
  DT[, vstr := paste0(sort(unlist(vars)), collapse = "_"), by = 1:nrow(DT)]
  setkey(DT, vstr)
  DT[!J(get_badkeys(baduns))]
}
Simulation:
nvals  <- 4
nbads  <- 2
maxlen <- 4
nobs   <- 1e4
# NB: valset was never defined in the original; any pool of nvals character
# values works, e.g.:
valset <- paste0("val", 1:nvals)
exdf <- data.table(
  id = 1:nobs,
  vars = replicate(nobs, list(sample(valset, sample(maxlen, 1))))
)
setDF(exdf)
baduns <- valset[1:nbads]
Results:
system.time(frank_res <- frank(exdf,baduns))
# user system elapsed
# 0.24 0.00 0.28
system.time(hannahh_res <- hannahh(exdf,baduns))
# 0.42 0.00 0.42
system.time(eddi_res <- eddi(exdf,baduns))
# 0.05 0.00 0.04
system.time(stevenb_res <- stevenb(exdf,baduns))
# 36.27 55.36 93.98
Checks:
identical(sort(frank_res$id),eddi_res$id) # TRUE
identical(unlist(stevenb_res$id),eddi_res$id) # TRUE
identical(unlist(hannahh_res$id),eddi_res$id) # TRUE
Discussion:
For eddi() and hannahh(), the results scarcely change with nvals, nbads and maxlen. In contrast, when baduns has more than 20 elements, frank() becomes incredibly slow (like 20+ sec); it also scales up with nbads and maxlen a little worse than the other two.
Scaling up nobs, eddi()'s lead over hannahh() stays the same, at about 10x. Against frank(), it sometimes shrinks and sometimes stays the same. In the best nobs = 1e5 case for frank(), eddi() is still 3x faster.
If we switch from a valset of characters to something that frank() must coerce to a character for its by-row paste0 operation, both eddi() and hannahh() beat it as nobs grows.
Benchmarks for doing this repeatedly. This is probably obvious, but if you have to do this "many" times (...how many is hard to say), it's better to create the key column than to go through the subsetting for each set of baduns. In the single-run simulation above, eddi() is about 5x as fast as frank(), so I'd go for frank() only if I was doing this subsetting 10+ times.
maxbadlen <- 2
set_o_baduns <- replicate(10,sample(valset,size=sample(maxbadlen,1)))
system.time({
  DT <- data.table(exdf)
  DT[, vstr := paste0(sort(unlist(vars)), collapse = "_"), by = 1:nrow(DT)]
  setkey(DT, vstr)
  for (i in 1:10) DT[!J(get_badkeys(set_o_baduns[[i]]))]
})
# user system elapsed
# 0.29 0.00 0.29
system.time({
  dt = as.data.table(exdf)
  for (i in 1:10)
    dt[, unlist(vars), by = id][
      !V1 %in% set_o_baduns[[i]],
      .(vars = list(V1)), keyby = id][dt, nomatch = 0]
})
# user system elapsed
# 0.39 0.00 0.39
system.time({
  for (i in 1:10) hannahh(exdf, set_o_baduns[[i]])
})
# user system elapsed
# 4.10 0.00 4.13
So, as expected, frank() takes very little time for additional evaluations, while eddi() and hannahh() grow linearly.
Here's another idea:
df %>%
  rowwise() %>%
  do(id = .$id, vars = .$vars, newcol = setdiff(.$vars, "a")) %>%
  mutate(length = length(newcol)) %>%
  ungroup()
Which gives:
# id vars newcol length
#1 1 a 0
#2 2 a, b, c b, c 2
#3 3 b, c b, c 2
You could then filter on length > 0 to keep only non-empty newcol
df %>%
  rowwise() %>%
  do(id = .$id, vars = .$vars, newcol = setdiff(.$vars, "a")) %>%
  mutate(length = length(newcol)) %>%
  ungroup() %>%
  filter(length > 0)
Which gives:
# id vars newcol length
#1 2 a, b, c b, c 2
#2 3 b, c b, c 2
Note: As mentioned by #Arun in the comments, this approach is quite slow. You are better off with the data.table solutions.

How can I delete column from data frame without causing a memory allocation error? [duplicate]

I have a number of columns that I would like to remove from a data frame. I know that we can delete them individually using something like:
df$x <- NULL
But I was hoping to do this with fewer commands.
Also, I know that I could drop columns using integer indexing like this:
df <- df[ -c(1, 3:6, 12) ]
But I am concerned that the relative position of my variables may change.
Given how powerful R is, I figured there might be a better way than dropping each column one by one.
You can use a simple list of names:
DF <- data.frame(
  x = 1:10,
  y = 10:1,
  z = rep(5, 10),
  a = 11:20
)
drops <- c("x","z")
DF[ , !(names(DF) %in% drops)]
Or, alternatively, you can make a list of those to keep and refer to them by name:
keeps <- c("y", "a")
DF[keeps]
EDIT :
For those still not acquainted with the drop argument of the indexing function, if you want to keep one column as a data frame, you do:
keeps <- "y"
DF[ , keeps, drop = FALSE]
drop=TRUE (or not mentioning it) will drop unnecessary dimensions, and hence return a vector with the values of column y.
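A quick illustration with the DF defined above:
DF[, "y"]
# [1] 10  9  8  7  6  5  4  3  2  1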
There's also the subset command, useful if you know which columns you want:
df <- data.frame(a = 1:10, b = 2:11, c = 3:12)
df <- subset(df, select = c(a, c))
UPDATED after comment by #hadley: To drop columns a,c you could do:
df <- subset(df, select = -c(a, c))
within(df, rm(x))
is probably easiest, or for multiple variables:
within(df, rm(x, y))
Or if you're dealing with data.tables (per How do you delete a column by name in data.table?):
dt[, x := NULL] # Deletes column x by reference instantly.
dt[, !"x"] # Selects all but x into a new data.table.
or for multiple variables
dt[, c("x","y") := NULL]
dt[, !c("x", "y")]
You could use %in% like this:
df[, !(colnames(df) %in% c("x","bar","foo"))]
list(NULL) also works:
dat <- mtcars
colnames(dat)
# [1] "mpg" "cyl" "disp" "hp" "drat" "wt" "qsec" "vs" "am" "gear"
# [11] "carb"
dat[,c("mpg","cyl","wt")] <- list(NULL)
colnames(dat)
# [1] "disp" "hp" "drat" "qsec" "vs" "am" "gear" "carb"
If you want to remove the columns by reference and avoid the internal copying associated with data.frames, you can use the data.table package and the function :=.
You can pass a character vector of names to the left-hand side of the := operator, and NULL as the RHS.
library(data.table)
df <- data.frame(a=1:10, b=1:10, c=1:10, d=1:10)
DT <- data.table(df)
# or more simply DT <- data.table(a=1:10, b=1:10, c=1:10, d=1:10) #
DT[, c('a','b') := NULL]
If you want to predefine the names as a character vector outside the call to [, wrap the name of the object in () or {} to force the LHS to be evaluated in the calling scope, not as a name within the scope of DT.
del <- c('a','b')
DT <- data.table(a=1:10, b=1:10, c=1:10, d=1:10)
DT[, (del) := NULL]
DT <- data.table(a=1:10, b=1:10, c=1:10, d=1:10)
DT[, {del} := NULL]
# force or `c` would also work.
You can also use set, which avoids the overhead of [.data.table, and also works for data.frames!
df <- data.frame(a=1:10, b=1:10, c=1:10, d=1:10)
DT <- data.table(df)
# drop `a` from df (no copying involved)
set(df, j = 'a', value = NULL)
# drop `b` from DT (no copying involved)
set(DT, j = 'b', value = NULL)
There is a potentially more powerful strategy based on the fact that grep() will return a numeric vector. If you have a long list of variables, as I do in one of my datasets, with some that end in ".A" and others that end in ".B", and you only want the ones that end in ".A" (along with all the variables that don't match either pattern), do this:
dfrm2 <- dfrm[ , -grep("\\.B$", names(dfrm)) ]
For the case at hand, using Joris Meys example, it might not be as compact, but it would be:
DF <- DF[, -grep( paste("^",drops,"$", sep="", collapse="|"), names(DF) )]
Another dplyr answer.
Use select(-column).
If your variables have some common naming structure, you might try starts_with(). For example
library(dplyr)
df <- data.frame(var1 = rnorm(5), var2 = rnorm(5), var3 = rnorm(5),
                 var4 = rnorm(5), char1 = rnorm(5), char2 = rnorm(5))
df
# var2 char1 var4 var3 char2 var1
#1 -0.4629512 -0.3595079 -0.04763169 0.6398194 0.70996579 0.75879754
#2 0.5489027 0.1572841 -1.65313658 -1.3228020 -1.42785427 0.31168919
#3 -0.1707694 -0.9036500 0.47583030 -0.6636173 0.02116066 0.03983268
df1 <- df %>% select(-starts_with("char"))
df1
# var2 var4 var3 var1
#1 -0.4629512 -0.04763169 0.6398194 0.75879754
#2 0.5489027 -1.65313658 -1.3228020 0.31168919
#3 -0.1707694 0.47583030 -0.6636173 0.03983268
If you want to drop a sequence of variables in the data frame, you can use :. For example if you wanted to drop var2, var3, and all variables in between, you'd just be left with var1:
df2 <- df1 %>% select(-c(var2:var3) )
df2
# var1
#1 0.75879754
#2 0.31168919
#3 0.03983268
Dplyr Solution
I doubt this will get much attention down here, but if you have a list of columns that you want to remove, and you want to do it in a dplyr chain I use one_of() in the select clause:
Here is a simple, reproducible example:
undesired <- c('mpg', 'cyl', 'hp')
mtcars <- mtcars %>%
  select(-one_of(undesired))
Documentation can be found by running ?one_of or here:
http://genomicsclass.github.io/book/pages/dplyr_tutorial.html
Another possibility:
df <- df[, setdiff(names(df), c("a", "c"))]
or
df <- df[, grep('^(a|c)$', names(df), invert=TRUE)]
DF <- data.frame(
  x = 1:10,
  y = 10:1,
  z = rep(5, 10),
  a = 11:20
)
DF
Output:
x y z a
1 1 10 5 11
2 2 9 5 12
3 3 8 5 13
4 4 7 5 14
5 5 6 5 15
6 6 5 5 16
7 7 4 5 17
8 8 3 5 18
9 9 2 5 19
10 10 1 5 20
DF[c("a","x")] <- list(NULL)
Output:
y z
1 10 5
2 9 5
3 8 5
4 7 5
5 6 5
6 5 5
7 4 5
8 3 5
9 2 5
10 1 5
Out of interest, this flags up one of R's weird syntax inconsistencies. For example, given a two-column data frame:
df <- data.frame(x=1, y=2)
This gives a data frame
subset(df, select=-y)
but this gives a vector
df[,-2]
This is all explained in ?[ but it's not exactly expected behaviour. Well at least not to me...
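If you want a data frame back in both cases, drop = FALSE makes the bracket form consistent (a quick illustration):
df[, -2, drop = FALSE]
#   x
# 1 1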
Here is a dplyr way to go about it:
#df[ -c(1,3:6, 12) ] # original
df.cut <- df %>% select(-col.to.drop.1, -col.to.drop.2, ..., -col.to.drop.6) # with dplyr::select()
I like this because it's intuitive to read & understand without annotation and robust to columns changing position within the data frame. It also follows the vectorized idiom using - to remove elements.
I keep thinking there must be a better idiom, but for subtraction of columns by name, I tend to do the following:
df <- data.frame(a=1:10, b=1:10, c=1:10, d=1:10)
# return everything except a and c
df <- df[,-match(c("a","c"),names(df))]
df
There's a function called dropNamed() in Bernd Bischl's BBmisc package that does exactly this.
BBmisc::dropNamed(df, "x")
The advantage is that it avoids repeating the data frame argument and thus is suitable for piping in magrittr (just like the dplyr approaches):
df %>% BBmisc::dropNamed("x")
Another solution if you don't want to use #hadley's above. If "COLUMN_NAME" is the name of the column you want to drop:
df[, -which(names(df) == "COLUMN_NAME")]
(Beware: if no column has that name, which() returns integer(0) and the result selects zero columns rather than dropping one.)
Beyond select(-one_of(drop_col_names)) demonstrated in earlier answers, there are a couple other dplyr options for dropping columns using select() that do not involve defining all the specific column names (using the dplyr starwars sample data for some variety in column names):
library(dplyr)
starwars %>%
  select(-(name:mass)) %>%        # the range of columns from 'name' to 'mass'
  select(-contains('color')) %>%  # any column name that contains 'color'
  select(-starts_with('bi')) %>%  # any column name that starts with 'bi'
  select(-ends_with('er')) %>%    # any column name that ends with 'er'
  select(-matches('^f.+s$')) %>%  # any column name matching the regex pattern
  select_if(~!is.list(.)) %>%     # not by column name but by data type
  head(2)
# A tibble: 2 x 2
homeworld species
<chr> <chr>
1 Tatooine Human
2 Tatooine Droid
If you need to drop a column that may or may not exist in the data frame, here's a slight twist using select_if() that, unlike one_of(), will not throw an "Unknown columns" warning if the column name does not exist. In this example 'bad_column' is not a column in the data frame:
starwars %>%
  select_if(!names(.) %in% c('height', 'mass', 'bad_column'))
Provide the data frame and a string of comma separated names to remove:
remove_features <- function(df, features) {
  rem_vec <- unlist(strsplit(features, ', '))
  res <- df[, !(names(df) %in% rem_vec)]
  return(res)
}
Usage:
remove_features(iris, "Sepal.Length, Petal.Width")
Drop columns by selecting, by name, only the ones you want to keep:
A <- df[, c("Name","Name1","Name2","Name3")]
There are a lot of ways you can do this...
Option-1:
df[ , -which(names(df) %in% c("name1","name2"))]
Option-2:
df[!names(df) %in% c("name1", "name2")]
Option-3:
subset(df, select=-c(name1,name2))
Find the indexes of the columns you want to drop using which, give these indexes a negative sign (* -1), then subset on those values, which will remove them from the data frame. Here is an example:
DF <- data.frame(one=c('a','b'), two=c('c','d'), three=c('e','f'), four=c('g','h'))
DF
#  one two three four
#1   a   c     e    g
#2   b   d     f    h
DF[which(names(DF) %in% c('two','three')) * -1]
#  one four
#1   a    g
#2   b    h
If you have a large data.frame and are low on memory, use [ or rm() inside within() to remove columns of a data.frame, as subset() is currently (R 3.6.2) using more memory (this is besides the manual's hint to use subset() only interactively).
getData <- function() {
  n <- 1e7
  set.seed(7)
  data.frame(a = runif(n), b = runif(n), c = runif(n), d = runif(n))
}
DF <- getData()
tt <- sum(.Internal(gc(FALSE, TRUE, TRUE))[13:14])
DF <- DF[setdiff(names(DF), c("a", "c"))] ##
#DF <- DF[!(names(DF) %in% c("a", "c"))] #Alternative
#DF <- DF[-match(c("a","c"),names(DF))] #Alternative
sum(.Internal(gc(FALSE, FALSE, TRUE))[13:14]) - tt
#0.1 MB are used
DF <- getData()
tt <- sum(.Internal(gc(FALSE, TRUE, TRUE))[13:14])
DF <- subset(DF, select = -c(a, c)) ##
sum(.Internal(gc(FALSE, FALSE, TRUE))[13:14]) - tt
#357 MB are used
DF <- getData()
tt <- sum(.Internal(gc(FALSE, TRUE, TRUE))[13:14])
DF <- within(DF, rm(a, c)) ##
sum(.Internal(gc(FALSE, FALSE, TRUE))[13:14]) - tt
#0.1 MB are used
DF <- getData()
tt <- sum(.Internal(gc(FALSE, TRUE, TRUE))[13:14])
DF[c("a", "c")] <- NULL ##
sum(.Internal(gc(FALSE, FALSE, TRUE))[13:14]) - tt
#0.1 MB are used
Another data.table option which hasn't been posted yet is using the special verb .SD, which stands for subset of data. Together with the .SDcols argument you can select/drop columns by name or index.
require(data.table)
# data
dt = data.table(
  A = LETTERS[1:5],
  B = 1:5,
  C = rep(TRUE, 5)
)
# delete B
dt[, .SD, .SDcols = !'B']
# delete all matches (i.e. all columns)
cols = grep('[A-Z]+', names(dt), value = TRUE)
dt[, .SD, .SDcols = !cols]
A summary of all the options for such a task in data.table can be found here
df <- data.frame(
  a = 1:5,
  b = 6:10,
  c = rep(22, 5),
  d = round(runif(5)*100, 2),
  e = round(runif(5)*100, 2),
  f = round(runif(5)*100, 2),
  g = round(runif(5)*100, 2),
  h = round(runif(5)*100, 2)
)
df
#   a  b  c     d     e     f     g     h
# 1 1  6 22 76.31 39.96 66.62 72.75 73.14
# 2 2  7 22 53.41 94.85 96.02 97.31 85.32
# 3 3  8 22 98.29 38.95 12.61 29.67 88.45
# 4 4  9 22 20.04 53.53 83.07 77.50 94.99
# 5 5 10 22  5.67  0.42 15.07 59.75 31.21
# remove cols d, f, g, h by keeping a, b, c, e
newDf <- df[, c(1:3, 5), drop = TRUE]
newDf
#   a  b  c     e
# 1 1  6 22 39.96
# 2 2  7 22 94.85
# 3 3  8 22 38.95
# 4 4  9 22 53.53
# 5 5 10 22  0.42
Another option using the function fselect from the collapse package. Here is a reproducible example:
DF <- data.frame(
  x = 1:10,
  y = 10:1,
  z = rep(5, 10),
  a = 11:20
)
library(collapse)
fselect(DF, -z)
#> x y a
#> 1 1 10 11
#> 2 2 9 12
#> 3 3 8 13
#> 4 4 7 14
#> 5 5 6 15
#> 6 6 5 16
#> 7 7 4 17
#> 8 8 3 18
#> 9 9 2 19
#> 10 10 1 20
Created on 2022-08-26 with reprex v2.0.2
