I have a table read in R as follows:
column1 column2
A B
What command should I use to merge the two columns together as follows?
Column 3
A_B
I'm a bit unsure what you mean by "merge", but is this what you mean?
> DF = data.frame(A = LETTERS[1:10], B = LETTERS[11:20])
> DF$C = paste(DF$A, DF$B, sep="_")
> head(DF)
A B C
1 A K A_K
2 B L B_L
3 C M C_M
4 D N D_N
5 E O E_O
6 F P F_P
Or equivalently, as @daroczig points out:
within(DF, C <- paste(A, B, sep='_'))
My personal favourite involves making use of unite() from tidyr:
set.seed(1)
df <- data.frame(colA = sample(LETTERS, 10),
colB = sample(LETTERS, 10))
# packages: magrittr (pipe) + tidyr (unite)
require(magrittr); require(tidyr)
# Unite
df %<>%
unite(ColAandB, colA, colB, remove = FALSE)
Results
> head(df, 3)
ColAandB colA colB
1 G_F G F
2 J_E J E
3 N_Q N Q
Side notes
Personally, I find the remove = TRUE / FALSE functionality of unite very useful. In addition, tidyr fits the dplyr workflow very well and plays well with separate in case you change your mind about the columns being merged. Along the same lines, if NAs are a problem, introducing na.omit into your workflow would enable you to conveniently drop the undesirable rows before creating the desired column.
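For instance, a minimal sketch of undoing the unite above with separate() (the new column names colA2 and colB2 are just illustrative):
library(tidyr)
# Split ColAandB back into two new columns, keeping the united column itself
df %>%
separate(ColAandB, into = c("colA2", "colB2"), sep = "_", remove = FALSE)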
Related
I have a vector containing "potential" column names:
col_vector <- c("A", "B", "C")
I also have a data frame, e.g.
library(tidyverse)
df <- tibble(A = 1:2,
B = 1:2)
My goal now is to create all columns mentioned in col_vector that don't yet exist in df.
For the above example, my code below works:
df %>%
mutate(!!sym(setdiff(col_vector, colnames(.))) := NA)
# A tibble: 2 x 3
A B C
<int> <int> <lgl>
1 1 1 NA
2 2 2 NA
Problem is that this code fails as soon as a) more than one column from col_vector is missing or b) no column from col_vector is missing. I thought about some sort of if_else, but I don't know how to make the column creation conditional in such a way - preferably in a tidyverse way. I know I could just write a loop going through all the missing columns, but I'm wondering if there is a more direct approach.
Example data where code above fails:
df2 <- tibble(A = 1:2)
df3 <- tibble(A = 1:2,
B = 1:2,
C = 1:2)
This should work.
df[,setdiff(col_vector, colnames(df))] <- NA
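For example, applied to df2 above, where two of the columns are missing, the same one-liner should create both at once (a quick sketch; the printed tibble is what I'd expect):
library(tibble)
col_vector <- c("A", "B", "C")
df2 <- tibble(A = 1:2)
# "B" and "C" are both missing, so a single assignment creates both, filled with NA
df2[, setdiff(col_vector, colnames(df2))] <- NA
df2
# A tibble: 2 x 3
A B C
<int> <lgl> <lgl>
1 1 NA NA
2 2 NA NA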
Solution
This base operation might be simpler than a full-fledged dplyr workflow:
library(tidyverse) # For the setdiff() function.
# ...
# Code to generate 'df'.
# ...
# Find the subset of missing names, and create them as columns filled with 'NA'.
df[, setdiff(col_vector, names(df))] <- NA
# View results
df
Results
Given your sample col_vector and df here
col_vector <- c("A", "B", "C")
df <- tibble(A = 1:2, B = 1:2)
this solution should yield the following results:
# A tibble: 2 x 3
A B C
<int> <int> <lgl>
1 1 1 NA
2 2 2 NA
Advantages
An advantage of my solution, over the alternative linked above by @geoff, is that you need not code by hand the set of column names, as symbols and strings, within the dplyr workflow, like this:
df %>% mutate(
#####################################
A = ifelse("A" %in% names(.), A, NA),
B = ifelse("B" %in% names(.), B, NA),
C = ifelse("C" %in% names(.), C, NA)
# ...
# etc.
#####################################
)
My solution, by contrast, stays dynamic
##############################
df[, setdiff(col_vector, names(df))] <- NA
##############################
even if you ever decide to change (or even dynamically calculate!) your variable names midstream, since it determines the setdiff() at runtime.
Note
Incredibly, @AustinGraves posted their answer at precisely the same time (2021-10-25 21:03:05Z) as I posted mine, so both answers qualify as original solutions.
I'm trying to figure out how to replace rows in one dataframe with another by matching the values of one of the columns. Both dataframes have the same column names.
Ex:
df1 <- data.frame(x = c(1,2,3,4), y = c("a", "b", "c", "d"))
df2 <- data.frame(x = c(1,2), y = c("f", "g"))
Is there a way to replace the rows of df1 with the same row in df2 where they share the same x variable? It would look like this.
data.frame(x = c(1,2,3,4), y = c("f","g","c","d"))
I've been working on this for a while and this is the closest I've gotten -
df1[which(df1$x %in% df2$x),]$y <- df2[which(df1$x %in% df2$x),]$y
But it just replaces the values with NA.
Does anyone know how to do this?
We can use match():
inds <- match(df1$x, df2$x)
df1$y[!is.na(inds)] <- df2$y[na.omit(inds)]
df1
# x y
#1 1 f
#2 2 g
#3 3 c
#4 4 d
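For clarity, match() here returns the position of each df1$x value within df2$x, with NA where there is no match, which is what drives the two-step assignment above:
match(df1$x, df2$x)
# [1]  1  2 NA NA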
First off, well done in producing a nice reproducible example that's directly copy-pastable. That always helps, especially with an example of the expected output. Nice one!
You have several options, but let's look at why your solution doesn't quite work:
First of all, I tried copy-pasting your last line into a new session and got the dreaded factor-error:
Warning message:
In `[<-.factor`(`*tmp*`, iseq, value = 1:2) :
invalid factor level, NA generated
If we look at your data frames df1 and df2 with the str function, you will see that they do not contain text but factors. These are not text; in short, they represent categorical data (male vs. female, scores A, B, C, D, and F, etc.) and are really integers that have text labels. So that could be your issue.
Running your code gives a warning because you are trying to import new factors (labels) into df1 that don't exist. And R doesn't know what to do with them, so it just inserts NA-values.
As r2evans answered, he used stringsAsFactors = FALSE to disable converting strings to factors - you can even go as far as disabling it on a session-wide basis using options(stringsAsFactors=FALSE) (and I've heard it will be the default in the forthcoming R 4.0 - yay!).
After disabling stringsAsFactors, your code works - or does it? Try this on for size:
df2 <- df2[c(2,1),]
df1[which(df1$x %in% df2$x),]$y <- df2[which(df1$x %in% df2$x),]$y
What's in df1 now? Not quite right anymore.
In the first line, I swapped the two rows in df2 and lo and behold, the replaced values in df1 were swapped. Why is that?
Let's deconstruct your statement df2[which(df1$x %in% df2$x),]$y
The call df1$x %in% df2$x returns a logical vector (boolean) of which elements in df1$x are found in df2$x - i.e. the first two and not the last two. But it doesn't relate which positions in the first vector correspond to which in the second.
Calling which(df1$x %in% df2$x) then reduces the logical vector to the indices that were TRUE. Again, we do not know which elements correspond to which.
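To make that concrete, here is what those two calls actually return for the original frames:
df1$x %in% df2$x
# [1]  TRUE  TRUE FALSE FALSE
which(df1$x %in% df2$x)
# [1] 1 2
Neither result changes when df2's rows are reordered, which is why the swapped df2 above silently swaps the replacement values.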
For solutions, I would recommend r2evans, as it doesn't rely on extra packages (although data.table or dplyr are two powerful packages to get to know).
In his solution, he uses merge to perform a "full join" which matches rows based on the value, rather than - well, what you did. With transform, he assigns new variables within the context of the data.frame returned from the merge function called in the first argument.
I think what you need here is a "merge" or "join" operation.
(I add stringsAsFactors = FALSE to the frames so that the merge and the later steps work without issue, as factors can sometimes be disruptive.)
Base R:
df1 <- data.frame(x = c(1,2,3,4), y = c("a", "b", "c", "d"), stringsAsFactors = FALSE)
df2 <- data.frame(x = c(1,2), y = c("f", "g"), stringsAsFactors = FALSE)
merge(df1, df2, by = "x", all = TRUE)
# x y.x y.y
# 1 1 a f
# 2 2 b g
# 3 3 c <NA>
# 4 4 d <NA>
transform(merge(df1, df2, by = "x", all = TRUE), y = ifelse(is.na(y.y), y.x, y.y))
# x y.x y.y y
# 1 1 a f f
# 2 2 b g g
# 3 3 c <NA> c
# 4 4 d <NA> d
transform(merge(df1, df2, by = "x", all = TRUE), y = ifelse(is.na(y.y), y.x, y.y), y.x = NULL, y.y = NULL)
# x y
# 1 1 f
# 2 2 g
# 3 3 c
# 4 4 d
Dplyr:
library(dplyr)
full_join(df1, df2, by = "x") %>%
mutate(y = coalesce(y.y, y.x)) %>%
select(-y.x, -y.y)
# x y
# 1 1 f
# 2 2 g
# 3 3 c
# 4 4 d
A join option with data.table: we join on the 'x' column and assign the values of 'y' from the second dataset (i.y) to the first one with :=
library(data.table)
setDT(df1)[df2, y := i.y, on = .(x)]
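For reference, df1 afterwards should contain (assuming the stringsAsFactors = FALSE frames from above):
#    x y
# 1: 1 f
# 2: 2 g
# 3: 3 c
# 4: 4 d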
NOTE: It is better to use stringsAsFactors = FALSE (in R 4.0.0 it is the default anyway), or else we need all the factor levels to be common to both datasets.
I have a data set that has 313 columns, ~52000 rows of information. I need to remove each column that contains the word "PERMISSIONS". I've tried grep and dplyr but I can't seem to get it to work.
I've read the file in,
testSet <- read.csv("/Users/.../data.csv")
Other examples show how to remove columns by name but I don't know how to handle wildcards. Not quite sure where to go from here.
If you want to remove columns whose names contain "PERMISSIONS", you can use the select function in the dplyr package.
library(dplyr)
df <- data.frame("PERMISSIONS" = c(1,2), "Col2" = c(1,4), "Col3" = c(1,2))
  PERMISSIONS Col2 Col3
1           1    1    1
2           2    4    2
df_sub <- select(df, -contains("PERMISSIONS"))
  Col2 Col3
1    1    1
2    4    2
From what I could understand from the question, the OP has a data frame like this:
df <- read.table(text = '
a b c d
e f PERMISSIONS g
h i j k
PERMISSIONS l m n',
stringsAsFactors = F)
The goal is to remove every column that has any 'PERMISSIONS' entry. Assuming that there's no variability in 'PERMISSIONS', this code should work:
# Count how many cells in each column equal 'PERMISSIONS'
cols <- colSums(mapply('==', 'PERMISSIONS', df))
# Keep only the columns with no such entries
new.df <- df[, which(cols == 0)]
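With the read.table example above (default column names V1 to V4), columns V1 and V3 each contain a 'PERMISSIONS' entry, so new.df should keep only V2 and V4:
#   V2 V4
# 1  b  d
# 2  f  g
# 3  i  k
# 4  l  n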
Try this,
New.testSet <- testSet[,!grepl("PERMISSIONS", colnames(testSet))]
EDIT: changed script as per comment.
We can use grepl with negation (!),
New.testSet <- testSet[!grepl("PERMISSIONS",row.names(testSet)),
!grepl("PERMISSIONS", colnames(testSet))]
It looks like these answers only do part of what you want. I think this is what you're looking for. There is probably a better way to write this though.
library(data.table)
df = data.frame("PERMISSIONS" = c(1,2), "Col2" = c("PERMISSIONS","A"), "Col3" = c(1,2))
PERMISSIONS Col2 Col3
1 1 PERMISSIONS 1
2 2 A 2
# Drop columns whose names contain 'PERMISSIONS'
df = df[,!grepl("PERMISSIONS",colnames(df))]
setDT(df)
# Flag cells whose values contain 'PERMISSIONS', column by column
ind = df[, lapply(.SD, function(x) grepl("PERMISSIONS", x, perl=TRUE))]
# Keep only the columns with no matches
df[,which(colSums(ind) == 0), with = FALSE]
Col3
1: 1
2: 2
Here is a simple testing case.
I was planning to split the strings and extract only the first part of each.
library(dplyr)
library(stringr)
test = data.frame(x= c('a b', 'c d'),stringsAsFactors = F)
test
x
1 a b
2 c d
test %>% mutate(y = str_split(x,'\\s+')[[1]][1])
x y
1 a b a
2 c d a
I was expecting something like:
x y
1 a b a
2 c d c
Nowadays there are various packaged functions for splitting columns into pieces. Here you could use the separate() function from the tidyr package. Since you want the first value of a split on the spaces, you can just remove everything after the first space.
tidyr::separate(test, x, into = "y", sep = "\\s.*", remove = FALSE, extra = "drop")
# x y
# 1 a b a
# 2 c d c
str_split returns a list where each element corresponds to an element in the original atomic vector. As such you will need to use lapply or similar to index appropriately
test %>% mutate(y = unlist(lapply(str_split(x,'\\s+'),'[[',1)))
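To see why the original attempt recycled "a" to every row, look at what str_split() returns for the test data: a list with one element per row, so [[1]][1] always picks the first row's first piece.
str_split(test$x, '\\s+')
# [[1]]
# [1] "a" "b"
#
# [[2]]
# [1] "c" "d"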
We can also use sub
library(data.table)
setDT(test)[, y:= sub('\\s+.*', '', x)]
test
# x y
#1: a b a
#2: c d c
How can we select multiple columns using a vector of their numeric indices (position) in data.table?
This is how we would do with a data.frame:
df <- data.frame(a = 1, b = 2, c = 3)
df[ , 2:3]
# b c
# 1 2 3
For versions of data.table >= 1.9.8, the following all just work:
library(data.table)
dt <- data.table(a = 1, b = 2, c = 3)
# select single column by index
dt[, 2]
# b
# 1: 2
# select multiple columns by index
dt[, 2:3]
# b c
# 1: 2 3
# select single column by name
dt[, "a"]
# a
# 1: 1
# select multiple columns by name
dt[, c("a", "b")]
# a b
# 1: 1 2
For versions of data.table < 1.9.8 (for which numerical column selection required the use of with = FALSE), see this previous version of this answer. See also NEWS on v1.9.8, POTENTIALLY BREAKING CHANGES, point 3.
It's a bit verbose, but I've gotten used to using the hidden .SD variable.
b<-data.table(a=1,b=2,c=3,d=4)
b[,.SD,.SDcols=c(1:2)]
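which should return the first two columns:
#    a b
# 1: 1 2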
It's a bit of a hassle, but you don't lose out on other data.table features (I don't think), so you should still be able to use other important functionality like joining tables, etc.
If you want to use column names to select the columns, simply use .(), which is an alias for list():
library(data.table)
dt <- data.table(a = 1:2, b = 2:3, c = 3:4)
dt[ , .(b, c)] # select the columns b and c
# Result:
# b c
# 1: 2 3
# 2: 3 4
From v1.10.2 onwards, you can also use the .. prefix:
dt <- data.table(a=1:2, b=2:3, c=3:4)
keep_cols = c("a", "c")
dt[, ..keep_cols]
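which should select the a and c columns:
#    a c
# 1: 1 3
# 2: 2 4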
@Tom, thank you very much for pointing out this solution.
It works great for me.
I was looking for a way to just exclude one column from printing. Building on the example above, to exclude the second column you can do something like this:
library(data.table)
dt <- data.table(a=1:2, b=2:3, c=3:4)
dt[,.SD,.SDcols=-2]
dt[,.SD,.SDcols=c(1,3)]