Trimming a data frame in R with grep?

My dataframe, dat, has two columns which look like this:
value condition
2 learning/cat
4 learning/dog
1 naming/cat
6 naming/dog
I would like to 'trim' the data frame to only include rows in which condition contains "naming".
I've tried to do this with grep:
dat = dat[grep("naming", dat$condition, value = T)]
which causes the following error:
Error in `[.data.frame`(dat, grep("naming", dat$condition, value = T)) :
undefined columns selected
Can anyone suggest a fix? Any help would be greatly appreciated!

You can split up condition using separate from tidyr:
library(tidyr)
library(dplyr)
df = dat %>% separate(condition, into = c("condition1", "condition2"), sep = "/")
Then just use filter:
only_naming_df = df %>% filter(condition1 == "naming")

The error is easy to fix by adding a comma after the grep() call (and dropping value = T). But I want to have a list of the available options for achieving this task. Below are solutions and comments from others and from me.
Use grep or grepl
grep returns the indices (row numbers), while grepl returns a logical vector (TRUE or FALSE). Notice that when using grep in this case, value = T should not be added, because it would return the matching strings rather than row indices, which is not helpful for subsetting.
dat[grep("naming", dat$condition), ]
dat[grepl("naming", dat$condition), ]
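To make the difference concrete, here is what each call returns on the example data (results shown as comments; a quick illustration, not part of the original answer):
grep("naming", dat$condition)                 # 3 4 -- row indices, usable for subsetting
grep("naming", dat$condition, value = TRUE)   # "naming/cat" "naming/dog" -- strings, not row indices
grepl("naming", dat$condition)                # FALSE FALSE TRUE TRUE -- logical vector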
Functions from dplyr and stringr
str_detect is equivalent to grepl(pattern, x), while str_which is equivalent to grep(pattern, x).
library(dplyr)
library(stringr)
dat %>% filter(str_detect(condition, "naming"))
dat %>% slice(str_which(condition, "naming"))
Data Preparation
# Create the example data frame
dat <- read.table(text = "value condition
2 learning/cat
4 learning/dog
1 naming/cat
6 naming/dog",
header = TRUE, stringsAsFactors = FALSE)
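With this example data, the subsetting approaches above should all return just the two naming rows, e.g.:
dat[grep("naming", dat$condition), ]
#   value  condition
# 3     1 naming/cat
# 4     6 naming/dog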

Related

grepl in multiple columns in R

I'm trying to do a string search and replace across multiple columns in R. My code:
# Get columns of interest
selected_columns <- c(368,370,372,374,376,378,380,382,384,386,388,390,392,394)
#Perform grepl across multiple columns
df[,selected_columns][grepl('apples',df[,selected_columns],ignore.case = TRUE)] <- 'category1'
However, I'm getting the error:
Error: undefined columns selected
Thanks in advance.
grep/grepl works on a vector/matrix and not on a data.frame/list. According to ?grep:
x - a character vector where matches are sought, or an object which can be coerced by as.character to a character vector.
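To see what that coercion does in practice, here is a minimal sketch with a small made-up data frame (the exact printed strings may differ slightly by R version):
mini <- data.frame(a = c("apples", "pears"), b = c("kiwi", "apples"))
as.character(mini)    # roughly: "c(\"apples\", \"pears\")" "c(\"kiwi\", \"apples\")"
grepl("apples", mini) # TRUE TRUE -- one result per column, not per cell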
We can loop over the columns (lapply) and replace the values based on the match
df[, selected_columns] <- lapply(df[, selected_columns],
function(x) replace(x, grepl('apples', x, ignore.case = TRUE), 'category1'))
Or with dplyr
library(dplyr)
library(stringr)
df %>%
mutate_at(selected_columns, ~ replace(., str_detect(., 'apples'), 'category1'))
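mutate_at() still works but has been superseded in recent dplyr releases; on dplyr 1.0 or later the same idea can be written with across() (a sketch under that assumption, reusing selected_columns as column positions):
df %>%
  mutate(across(all_of(selected_columns),
                ~ replace(.x, str_detect(.x, 'apples'), 'category1')))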
Assuming you want to partially match a cell and replace it, you could use rapply() and replace cell contents that have "apples" with "category1" using gsub():
df[selected_columns] <- rapply(df[selected_columns], function(x) gsub("apples", "category1", x), how = "replace")
Just keep in mind the difference between grepl()/gsub() (with and without boundaries in your regex), and %in%/match() when searching for strings.
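For instance, grepl() matches substrings unless the pattern is anchored, while %in% only matches whole strings (a quick illustration):
x <- c("apples", "crab apples", "pears")
grepl("apples", x)     # TRUE  TRUE FALSE -- substring match
grepl("^apples$", x)   # TRUE FALSE FALSE -- anchored, whole-string match
x %in% "apples"        # TRUE FALSE FALSE -- exact match only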

str_extract_all: return all patterns found in string concatenated as vector

I want to extract everything but a pattern and return this concatenated into a string.
I tried to combine str_extract_all together with sapply and cat
x = c("a_1","a_20","a_40","a_30","a_28")
data <- tibble(age = x)
# extracting just the first pattern is easy
data %>%
mutate(age_new = str_extract(age,"[^a_]"))
# combining str_extract_all and sapply doesn't work
data %>%
mutate(age_new = sapply(str_extract_all(x,"[^a_]"),function(x) cat(x,sep="")))
class(str_extract_all(x,"[^a_]"))
sapply(str_extract_all(x,"[^a_]"),function(x) cat(x,sep=""))
Returns NULL instead of concatenated patterns
Instead of cat, we can use paste. Alternatively, with the tidyverse, we can make use of map and str_c (the stringr counterpart of paste):
library(tidyverse)
data %>%
mutate(age_new = map_chr(str_extract_all(x, "[^a_]+"), ~ str_c(.x, collapse="")))
Using the OP's code:
data %>%
mutate(age_new = sapply(str_extract_all(x,"[^a_]"),
function(x) paste(x,collapse="")))
If the intention is to get the numbers
library(readr)
data %>%
mutate(age_new = parse_number(x))
Here is a non-tidyverse solution, just using stringr:
apply(str_extract_all(column, regex_command, simplify = TRUE), 1, paste, collapse = "")
simplify = TRUE makes str_extract_all return a matrix, and apply then iterates over its rows. I got the idea from https://stackoverflow.com/a/4213674/8427463
Example: extract all 'r' in rownames(mtcars) and concatenate as a vector:
library(stringr)
apply(str_extract_all(rownames(mtcars),"r",simplify = TRUE),1,paste,collapse="")

searching fields using grepl in R

I'm trying to use grepl to flag some data that might be interesting in a genetics dataset I have.
An example of the data looks like this
test <- c("AAT,TAA,TGA,A,G", "A,AAT,AAAT,AATAAT", "CA,CAA,CAAA")
pattern <- c("TAA", "G", "CAA")
df <- data.frame(test, pattern)
What I am trying to do is to create a third column, say result that evaluates whether the value in the pattern column is in the test column.
I tried this:
df.result <- df %>% mutate(result = grepl(pattern, test))
But for some reason I get a TRUE, TRUE, FALSE in the result column, which isn't what I'm expecting - I would expect a TRUE, FALSE, TRUE result.
I've played around with things like adding a comma to the end of each field, but that didn't seem to work either.
Would appreciate any help with this!
Thanks,
Steve
Use the apply() function:
df$result <- apply(df, 1, FUN=function(x) grepl(x[2], x[1]))
df
# test pattern result
# 1 AAT,TAA,TGA,A,G TAA TRUE
# 2 A,AAT,AAAT,AATAAT G FALSE
# 3 CA,CAA,CAAA CAA TRUE
The apply function loops through each row of the df separately, feeding grepl with per row information. grepl cannot process a vector with three elements in the pattern argument. The help page says:
If a character vector of length 2 or more is supplied [as pattern], the first element is used with a warning.
Thus, the original command grepl(df$pattern, df$test) compared the first element from pattern (TAA) to the whole vector in test.
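You can see this by calling grepl() directly on the two columns; only the first pattern is used and R warns about it (a sketch of the expected output):
grepl(df$pattern, df$test)
# [1]  TRUE  TRUE FALSE
# Warning message:
# argument 'pattern' has length > 1 and only the first element will be used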
Alternatively, this can be done with mapply:
df$result <- mapply(grepl, df$pattern, df$test)
df$result
#[1] TRUE FALSE TRUE
The stringi package provides string matching functions that are vectorised over both string and pattern;
library(stringi)
df %>% mutate(result = stri_detect_regex(test, pattern))
is one answer to the original question. An answer to the question about avoiding substring matches is
df %>% mutate(result = stri_detect_regex(test, stri_join('(^|,)', pattern, '(,|$)')))
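stringr's str_detect() is likewise vectorised over both string and pattern, so a similar one-liner should work too (a sketch, assuming the columns are character vectors, the default in R >= 4.0):
library(stringr)
df %>% mutate(result = str_detect(test, pattern))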

Delete rows with blank values in one particular column

I am working on a large dataset, with some rows with NAs and others with blanks:
df <- data.frame(ID = c(1:7),
home_pc = c("","CB4 2DT", "NE5 7TH", "BY5 8IB", "DH4 6PB","MP9 7GH","KN4 5GH"),
start_pc = c(NA,"Home", "FC5 7YH","Home", "CB3 5TH", "BV6 5PB",NA),
end_pc = c(NA,"CB5 4FG","Home","","Home","",NA))
How do I remove the NAs and blanks in one go (in the start_pc and end_pc columns)? I have in the past used:
df<- df[-which(is.na(df$start_pc)), ]
... to remove the NAs - is there a similar command to remove the blanks?
It is the same construct - simply test for empty strings in addition to NA:
df[!(is.na(df$start_pc) | df$start_pc==""), ]
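The question asks about start_pc and end_pc in one go; the same test can simply be repeated for the second column (an untested sketch along the same lines):
df[!(is.na(df$start_pc) | df$start_pc == "" |
     is.na(df$end_pc)   | df$end_pc == ""), ]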
Try this:
df <- df[-which(df$start_pc == ""), ]
In fact, looking at your code, you don't need the which; use negation instead, so you can simplify it to:
df <- df[!(df$start_pc == ""), ]
df <- df[!is.na(df$start_pc), ]
And, of course, you can combine these two statements as follows:
df <- df[!(df$start_pc == "" | is.na(df$start_pc)), ]
And simplify it even further with with:
df <- with(df, df[!(start_pc == "" | is.na(start_pc)), ])
You can also test for non-zero string length using nzchar.
df <- with(df, df[nzchar(start_pc) & !is.na(start_pc), ])
Disclaimer: I didn't test any of this code. Please let me know if there are syntax errors anywhere
An elegant solution with dplyr would be:
df %>%
# recode empty strings "" by NAs
na_if("") %>%
# remove NAs
na.omit
Alternative solution can be to remove the rows with blanks in one variable:
df <- subset(df, VAR != "")
An easy approach would be making all the blank cells NA and only keeping complete cases. You might also look for na.omit examples. It is a widely discussed topic.
df[df==""]<-NA
df<-df[complete.cases(df),]
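Note that complete.cases(df) drops rows with an NA in any column; if only start_pc and end_pc matter, the check can be restricted to those columns (a sketch):
df[df == ""] <- NA
df <- df[complete.cases(df[, c("start_pc", "end_pc")]), ]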

