Related
I have a vector containing "potential" column names:
col_vector <- c("A", "B", "C")
I also have a data frame, e.g.
library(tidyverse)
df <- tibble(A = 1:2,
B = 1:2)
My goal now is to create all columns mentioned in col_vector that don't yet exist in df.
For the above example, my code below works:
df %>%
mutate(!!sym(setdiff(col_vector, colnames(.))) := NA)
# A tibble: 2 x 3
A B C
<int> <int> <lgl>
1 1 1 NA
2 2 2 NA
The problem is that this code fails as soon as (a) more than one column from col_vector is missing, or (b) no column from col_vector is missing. I thought about some sort of if_else, but I don't know how to make the column creation conditional in that way - preferably in a tidyverse style. I know I could just write a loop over all the missing columns, but I'm wondering if there is a more direct approach.
Example data where code above fails:
df2 <- tibble(A = 1:2)
df3 <- tibble(A = 1:2,
B = 1:2,
C = 1:2)
This should work.
df[,setdiff(col_vector, colnames(df))] <- NA
Solution
This base operation might be simpler than a full-fledged dplyr workflow:
library(tidyverse) # Used here to build 'df' as a tibble; setdiff() itself is also available in base R.
# ...
# Code to generate 'df'.
# ...
# Find the subset of missing names, and create them as columns filled with 'NA'.
df[, setdiff(col_vector, names(df))] <- NA
# View results
df
Results
Given your sample col_vector and df here
col_vector <- c("A", "B", "C")
df <- tibble(A = 1:2, B = 1:2)
this solution should yield the following results:
# A tibble: 2 x 3
A B C
<int> <int> <lgl>
1 1 1 NA
2 2 2 NA
Advantages
An advantage of my solution, over the alternative linked above by @geoff, is that you don't have to hand-code the set of column names, as symbols and strings, within the dplyr workflow.
df %>% mutate(
#####################################
A = ifelse("A" %in% names(.), A, NA),
B = ifelse("B" %in% names(.), B, NA),
C = ifelse("C" %in% names(.), B, NA)
# ...
# etc.
#####################################
)
My solution, by contrast, is more dynamic
##############################
df[, setdiff(col_vector, names(df))] <- NA
##############################
and stays correct if you ever decide to change (or even dynamically calculate!) your variable names midstream, since it determines the setdiff() at runtime.
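If you want to reuse this, here is a minimal, defensive sketch of the same setdiff() idea (the helper name add_missing_cols is mine, not from either answer). It assigns each missing column with [[<-, which behaves the same for base data frames and tibbles, and it is a no-op when nothing is missing, so it covers df2 and df3 from the question:
add_missing_cols <- function(df, cols) {
  for (col in setdiff(cols, names(df))) {
    df[[col]] <- NA # a length-1 NA is recycled to the number of rows
  }
  df
}
add_missing_cols(df2, col_vector) # adds B and C as NA columns
add_missing_cols(df3, col_vector) # nothing missing, returned unchanged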
Note
Incredibly, @AustinGraves posted their answer at precisely the same time (2021-10-25 21:03:05Z) as I posted mine, so both answers qualify as original solutions.
I have a number of columns that I would like to remove from a data frame. I know that we can delete them individually using something like:
df$x <- NULL
But I was hoping to do this with fewer commands.
Also, I know that I could drop columns using integer indexing like this:
df <- df[ -c(1, 3:6, 12) ]
But I am concerned that the relative position of my variables may change.
Given how powerful R is, I figured there might be a better way than dropping each column one by one.
You can use a simple character vector of names:
DF <- data.frame(
x=1:10,
y=10:1,
z=rep(5,10),
a=11:20
)
drops <- c("x","z")
DF[ , !(names(DF) %in% drops)]
Or, alternatively, you can make a list of those to keep and refer to them by name:
keeps <- c("y", "a")
DF[keeps]
EDIT:
For those still not acquainted with the drop argument of the indexing function, if you want to keep one column as a data frame, you do:
keeps <- "y"
DF[ , keeps, drop = FALSE]
drop=TRUE (or not mentioning it) will drop unnecessary dimensions, and hence return a vector with the values of column y.
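To see the difference (my addition, using the keeps defined above):
class(DF[ , keeps, drop = FALSE]) # "data.frame" - keeps the rectangular shape
class(DF[ , keeps])               # "integer"    - dimensions dropped, a plain vector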
There's also the subset command, useful if you know which columns you want:
df <- data.frame(a = 1:10, b = 2:11, c = 3:12)
df <- subset(df, select = c(a, c))
UPDATED after comment by @hadley: To drop columns a, c you could do:
df <- subset(df, select = -c(a, c))
within(df, rm(x))
is probably easiest, or for multiple variables:
within(df, rm(x, y))
Or if you're dealing with data.tables (per How do you delete a column by name in data.table?):
dt[, x := NULL] # Deletes column x by reference instantly.
dt[, !"x"] # Selects all but x into a new data.table.
or for multiple variables
dt[, c("x","y") := NULL]
dt[, !c("x", "y")]
You could use %in% like this:
df[, !(colnames(df) %in% c("x","bar","foo"))]
list(NULL) also works:
dat <- mtcars
colnames(dat)
# [1] "mpg" "cyl" "disp" "hp" "drat" "wt" "qsec" "vs" "am" "gear"
# [11] "carb"
dat[,c("mpg","cyl","wt")] <- list(NULL)
colnames(dat)
# [1] "disp" "hp" "drat" "qsec" "vs" "am" "gear" "carb"
If you want to remove the columns by reference and avoid the internal copying associated with data.frames, then you can use the data.table package and the function :=
You can pass a character vector of names to the left-hand side of the := operator, with NULL as the RHS.
library(data.table)
df <- data.frame(a=1:10, b=1:10, c=1:10, d=1:10)
DT <- data.table(df)
# or more simply DT <- data.table(a=1:10, b=1:10, c=1:10, d=1:10) #
DT[, c('a','b') := NULL]
If you want to predefine the names as a character vector outside the call to [, wrap the name of the object in () or {} to force the LHS to be evaluated in the calling scope, not as a name within the scope of DT.
del <- c('a','b')
DT <- data.table(a=1:10, b=1:10, c=1:10, d=1:10)
DT[, (del) := NULL]
DT <- data.table(a=1:10, b=1:10, c=1:10, d=1:10)
DT[, {del} := NULL]
# force or `c` would also work.
You can also use set, which avoids the overhead of [.data.table, and also works for data.frames!
df <- data.frame(a=1:10, b=1:10, c=1:10, d=1:10)
DT <- data.table(df)
# drop `a` from df (no copying involved)
set(df, j = 'a', value = NULL)
# drop `b` from DT (no copying involved)
set(DT, j = 'b', value = NULL)
There is a potentially more powerful strategy based on the fact that grep() returns a numeric vector. If you have a long list of variables, as I do in one of my datasets, with some variables that end in ".A" and others that end in ".B", and you only want the ones that end in ".A" (along with all the variables that don't match either pattern), do this:
dfrm2 <- dfrm[ , -grep("\\.B$", names(dfrm)) ]
For the case at hand, using Joris Meys' example, it might not be as compact, but it would be:
DF <- DF[, -grep( paste("^",drops,"$", sep="", collapse="|"), names(DF) )]
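One caveat worth adding here (my note, not part of the answer above): if the pattern matches nothing, grep() returns integer(0), and negative indexing with an empty vector selects zero columns rather than all of them. Building a logical mask with grepl() avoids that edge case:
dfrm2 <- dfrm[ , !grepl("\.B$", names(dfrm)) ] # keeps every column when nothing matches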
Another dplyr answer.
Use select(-column).
If your variables have some common naming structure, you might try starts_with(). For example:
library(dplyr)
df <- data.frame(var1 = rnorm(5), var2 = rnorm(5), var3 = rnorm (5),
var4 = rnorm(5), char1 = rnorm(5), char2 = rnorm(5))
df
# var2 char1 var4 var3 char2 var1
#1 -0.4629512 -0.3595079 -0.04763169 0.6398194 0.70996579 0.75879754
#2 0.5489027 0.1572841 -1.65313658 -1.3228020 -1.42785427 0.31168919
#3 -0.1707694 -0.9036500 0.47583030 -0.6636173 0.02116066 0.03983268
df1 <- df %>% select(-starts_with("char"))
df1
# var2 var4 var3 var1
#1 -0.4629512 -0.04763169 0.6398194 0.75879754
#2 0.5489027 -1.65313658 -1.3228020 0.31168919
#3 -0.1707694 0.47583030 -0.6636173 0.03983268
If you want to drop a sequence of variables in the data frame, you can use :. For example, if you wanted to drop var2, var3, and all variables in between, you'd be left with just var1:
df2 <- df1 %>% select(-c(var2:var3) )
df2
# var1
#1 0.75879754
#2 0.31168919
#3 0.03983268
Dplyr Solution
I doubt this will get much attention down here, but if you have a list of columns that you want to remove and you want to do it in a dplyr chain, you can use one_of() inside select().
Here is a simple, reproducible example:
undesired <- c('mpg', 'cyl', 'hp')
mtcars <- mtcars %>%
select(-one_of(undesired))
Documentation can be found by running ?one_of or here:
http://genomicsclass.github.io/book/pages/dplyr_tutorial.html
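Note (my addition, not part of the original answer): in recent versions of dplyr (via tidyselect), one_of() is superseded by all_of() and any_of(). any_of() is handy here because it silently ignores names that are not present, so the call below runs without error even if the columns have already been removed:
mtcars %>% select(-any_of(c(undesired, "not_a_column")))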
Another possibility:
df <- df[, setdiff(names(df), c("a", "c"))]
or
df <- df[, grep('^(a|c)$', names(df), invert=TRUE)]
DF <- data.frame(
x=1:10,
y=10:1,
z=rep(5,10),
a=11:20
)
DF
Output:
x y z a
1 1 10 5 11
2 2 9 5 12
3 3 8 5 13
4 4 7 5 14
5 5 6 5 15
6 6 5 5 16
7 7 4 5 17
8 8 3 5 18
9 9 2 5 19
10 10 1 5 20
DF[c("a","x")] <- list(NULL)
Output:
y z
1 10 5
2 9 5
3 8 5
4 7 5
5 6 5
6 5 5
7 4 5
8 3 5
9 2 5
10 1 5
Out of interest, this flags up one of R's many weird syntax inconsistencies. For example, given a two-column data frame:
df <- data.frame(x=1, y=2)
This gives a data frame
subset(df, select=-y)
but this gives a vector
df[,-2]
This is all explained in ?`[`, but it's not exactly expected behaviour. Well, at least not to me...
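For completeness (my addition): you can keep the data-frame shape with single-bracket indexing by being explicit about drop:
df[ , -2, drop = FALSE] # still a data frame, with only column x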
Here is a dplyr way to go about it:
#df[ -c(1,3:6, 12) ] # original
df.cut <- df %>% select(-col.to.drop.1, -col.to.drop.2, ..., -col.to.drop.6) # with dplyr::select()
I like this because it's intuitive to read & understand without annotation and robust to columns changing position within the data frame. It also follows the vectorized idiom using - to remove elements.
I keep thinking there must be a better idiom, but for subtraction of columns by name, I tend to do the following:
df <- data.frame(a=1:10, b=1:10, c=1:10, d=1:10)
# return everything except a and c
df <- df[,-match(c("a","c"),names(df))]
df
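A caveat with this approach (my note, not from the answer): if one of the requested names is absent, match() returns NA, and mixing NA with negative subscripts is an error. The %in% form shown in the accepted answer degrades gracefully instead:
df <- data.frame(a=1:10, b=1:10, c=1:10, d=1:10)
df[ , !(names(df) %in% c("a", "c", "not_there"))] # 'not_there' is silently ignored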
There's a function called dropNamed() in Bernd Bischl's BBmisc package that does exactly this.
BBmisc::dropNamed(df, "x")
The advantage is that it avoids repeating the data frame argument and thus is suitable for piping in magrittr (just like the dplyr approaches):
df %>% BBmisc::dropNamed("x")
Another solution, if you don't want to use @hadley's above: if "COLUMN_NAME" is the name of the column you want to drop:
df[,-which(names(df) == "COLUMN_NAME")]
Beyond select(-one_of(drop_col_names)) demonstrated in earlier answers, there are a couple other dplyr options for dropping columns using select() that do not involve defining all the specific column names (using the dplyr starwars sample data for some variety in column names):
library(dplyr)
starwars %>%
select(-(name:mass)) %>% # the range of columns from 'name' to 'mass'
select(-contains('color')) %>% # any column name that contains 'color'
select(-starts_with('bi')) %>% # any column name that starts with 'bi'
select(-ends_with('er')) %>% # any column name that ends with 'er'
select(-matches('^f.+s$')) %>% # any column name matching the regex pattern
select_if(~!is.list(.)) %>% # not by column name but by data type
head(2)
# A tibble: 2 x 2
homeworld species
<chr> <chr>
1 Tatooine Human
2 Tatooine Droid
If you need to drop a column that may or may not exist in the data frame, here's a slight twist using select_if() that, unlike one_of(), will not throw an Unknown columns: warning if the column name does not exist. In this example 'bad_column' is not a column in the data frame:
starwars %>%
select_if(!names(.) %in% c('height', 'mass', 'bad_column'))
Provide the data frame and a string of comma-separated names to remove:
remove_features <- function(df, features) {
rem_vec <- unlist(strsplit(features, ', '))
res <- df[,!(names(df) %in% rem_vec)]
return(res)
}
Usage:
remove_features(iris, "Sepal.Length, Petal.Width")
You can also drop columns by name from a data frame by keeping only the ones you want:
A <- df[ , c("Name","Name1","Name2","Name3")]
There are a lot of ways to do this...
Option-1:
df[ , -which(names(df) %in% c("name1","name2"))]
Option-2:
df[!names(df) %in% c("name1", "name2")]
Option-3:
subset(df, select=-c(name1,name2))
Find the indices of the columns you want to drop using which(). Negate these indices (multiply by -1). Then subset with those values, which removes the columns from the data frame. Here is an example:
DF <- data.frame(one=c('a','b'), two=c('c', 'd'), three=c('e', 'f'), four=c('g', 'h'))
DF
# one two three four
#1 a c e g
#2 b d f h
DF[which(names(DF) %in% c('two','three')) *-1]
# one four
#1 a g
#2 b h
If you have a large data.frame and are low on memory, use [, or rm() inside within(), to remove columns of a data.frame, as subset() currently (R 3.6.2) uses more memory - besides the manual's hint that subset() is intended for interactive use.
getData <- function() {
n <- 1e7
set.seed(7)
data.frame(a = runif(n), b = runif(n), c = runif(n), d = runif(n))
}
DF <- getData()
tt <- sum(.Internal(gc(FALSE, TRUE, TRUE))[13:14])
DF <- DF[setdiff(names(DF), c("a", "c"))] ##
#DF <- DF[!(names(DF) %in% c("a", "c"))] #Alternative
#DF <- DF[-match(c("a","c"),names(DF))] #Alternative
sum(.Internal(gc(FALSE, FALSE, TRUE))[13:14]) - tt
#0.1 MB are used
DF <- getData()
tt <- sum(.Internal(gc(FALSE, TRUE, TRUE))[13:14])
DF <- subset(DF, select = -c(a, c)) ##
sum(.Internal(gc(FALSE, FALSE, TRUE))[13:14]) - tt
#357 MB are used
DF <- getData()
tt <- sum(.Internal(gc(FALSE, TRUE, TRUE))[13:14])
DF <- within(DF, rm(a, c)) ##
sum(.Internal(gc(FALSE, FALSE, TRUE))[13:14]) - tt
#0.1 MB are used
DF <- getData()
tt <- sum(.Internal(gc(FALSE, TRUE, TRUE))[13:14])
DF[c("a", "c")] <- NULL ##
sum(.Internal(gc(FALSE, FALSE, TRUE))[13:14]) - tt
#0.1 MB are used
Another data.table option which hasn't been posted yet is the special symbol .SD, which stands for Subset of Data. Together with the .SDcols argument you can select or drop columns by name or index.
require(data.table)
# data
dt = data.table(
A = LETTERS[1:5],
B = 1:5,
C = rep(TRUE, 5)
)
# delete B
dt[ , .SD, .SDcols =! 'B' ]
# delete all matches (i.e. all columns)
cols = grep('[A-Z]+', names(dt), value = TRUE)
dt[ , .SD, .SDcols =! cols ]
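Selecting by position instead of name (my addition) also works with .SDcols:
dt[ , .SD, .SDcols = c(1, 3) ] # keep columns 1 and 3, i.e. drop B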
A summary of all the options for such a task in data.table can be found here
df <- data.frame(
+ a=1:5,
+ b=6:10,
+ c=rep(22,5),
+ d=round(runif(5)*100, 2),
+ e=round(runif(5)*100, 2),
+ f=round(runif(5)*100, 2),
+ g=round(runif(5)*100, 2),
+ h=round(runif(5)*100, 2)
+ )
> df
a b c d e f g h
1 1 6 22 76.31 39.96 66.62 72.75 73.14
2 2 7 22 53.41 94.85 96.02 97.31 85.32
3 3 8 22 98.29 38.95 12.61 29.67 88.45
4 4 9 22 20.04 53.53 83.07 77.50 94.99
5 5 10 22 5.67 0.42 15.07 59.75 31.21
> # remove cols: d f g h
> newDf <- df[, c(1:3, 5), drop=TRUE]
> newDf
a b c e
1 1 6 22 39.96
2 2 7 22 94.85
3 3 8 22 38.95
4 4 9 22 53.53
5 5 10 22 0.42
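The same result can be had by name (my addition), which is robust to columns changing position, as the question asks:
newDf <- df[ , !(names(df) %in% c("d", "f", "g", "h"))]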
Another option is the fselect() function from the collapse package. Here is a reproducible example:
DF <- data.frame(
x=1:10,
y=10:1,
z=rep(5,10),
a=11:20
)
library(collapse)
fselect(DF, -z)
#> x y a
#> 1 1 10 11
#> 2 2 9 12
#> 3 3 8 13
#> 4 4 7 14
#> 5 5 6 15
#> 6 6 5 16
#> 7 7 4 17
#> 8 8 3 18
#> 9 9 2 19
#> 10 10 1 20
Created on 2022-08-26 with reprex v2.0.2