split string each x characters in dataframe - r

I know there are some answers here about splitting a string every nth character, such as this one and this one. However, these are pretty question-specific and mostly relate to a single string rather than to a data frame of multiple strings.
Example data
df <- data.frame(id = 1:2, seq = c('ABCDEFGHI', 'ZABCDJHIA'))
Looks like this:
id seq
1 1 ABCDEFGHI
2 2 ZABCDJHIA
Splitting on every third character
I want to split the string in each row every third character, such that the resulting data frame looks like this:
id 1 2 3
1 ABC DEF GHI
2 ZAB CDJ HIA
What I tried
I have used the splitstackshape package before to split a string on a single character, like so: df %>% cSplit('seq', sep = '', stripWhite = FALSE, type.convert = FALSE). I would love to have a similar function (or perhaps it is possible with cSplit) to split on every third character.

An option would be separate() from tidyr
library(tidyverse)
df %>%
separate(seq, into = paste0("x", 1:3), sep = c(3, 6))
# id x1 x2 x3
#1 1 ABC DEF GHI
#2 2 ZAB CDJ HIA
If we want to create it more generic
n1 <- nchar(as.character(df$seq[1])) - 3
s1 <- seq(3, n1, by = 3)
nm1 <- paste0("x", seq_len(length(s1) + 1))
df %>%
separate(seq, into = nm1, sep = s1)
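If the string lengths are not known in advance, the cut positions can also be computed from the longest value (a sketch, not part of the original answer; shorter strings would simply end up with empty trailing pieces):
len <- max(nchar(as.character(df$seq)))
s2 <- seq(3, len - 1, by = 3)
df %>%
separate(seq, into = paste0("x", seq_len(length(s2) + 1)), sep = s2)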
Or using base R: with strsplit, split the 'seq' column after every third character by passing a regex lookbehind (which returns a list), and then rbind the list elements
df[paste0("x", 1:3)] <- do.call(rbind,
strsplit(as.character(df$seq), "(?<=.{3})", perl = TRUE))
NOTE: It is better to avoid column names that start with non-standard labels such as numbers. For that reason, 'x' is appended at the beginning of the names.

You can also split a string every x characters in base R with read.fwf (Read Fixed Width Format Files), which needs either a file or a connection.
read.fwf(file=textConnection(as.character(df$seq)), widths=c(3,3,3))
V1 V2 V3
1 ABC DEF GHI
2 ZAB CDJ HIA
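The read.fwf approach can be made generic in the same spirit by computing the widths from the string length (a sketch, assuming every string in 'seq' has the same number of characters):
w <- rep(3, nchar(as.character(df$seq[1])) / 3)
cbind(df["id"], read.fwf(file = textConnection(as.character(df$seq)), widths = w))
  id  V1  V2  V3
1  1 ABC DEF GHI
2  2 ZAB CDJ HIA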

Related

How to split and paste a string while mutating a dataframe?

I have a dataframe like this one:
x <- data.frame(filename = c("aa-b-c x", "c-dd-e y"), number=c(1,2))
filename number
1 aa-b-c x 1
2 c-dd-e y 2
I want to mutate the filename column so it looks like this:
filename number
1 c/aa/b 1
2 e/c/dd 2
This works on a single row: paste(str_match(x$filename[1], "(\\w+)-(\\w+)-(\\w+)")[c(4,2,3)], collapse = "/") but it fails inside the mutate. I'm sure I'm missing a simple fix.
One option is to rearrange the components after capturing as a group
library(dplyr)
library(stringr)
x %>%
mutate(filename = str_replace(filename,
"^([a-z]+)-([a-z]+)-([a-z]+)\\s.*", "\\3/\\1/\\2"))
str_match returns a matrix when you give it a character vector of multiple strings. This should work pretty well:
apply(str_match(x$filename, "(\\w+)-(\\w+)-(\\w+)")[, c(4,2,3), drop = FALSE], 1, paste, collapse = "/")
# [1] "c/aa/b" "e/c/dd"
The drop = FALSE is necessary to keep the output a matrix in case there is only one row.
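If the result is needed back inside mutate, as asked in the question, the same expression can be dropped straight into it; a minimal sketch that simply combines the two answers above:
library(dplyr)
library(stringr)
x %>%
mutate(filename = apply(str_match(filename, "(\\w+)-(\\w+)-(\\w+)")[, c(4, 2, 3), drop = FALSE],
1, paste, collapse = "/"))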

How do I change all the character values of a column that starts with specific characters?

I have a dataset with millions of observations.
One of the columns of this dataset uses 4 or 5 characters to classify these observations.
My goal is to merge this classification into smaller groups. For example, I want to replace all the values of the column that START with "AA" (e.g., "AABC" or "AAUCC") with just "A". How can I do this?
To illustrate:
Considering that my data is labeled "f2016" and the column that I'm interested in is "SECT16", I've been using the following code to replace values:
f2016$SECT16[f2016$SECT16 == "AABB"] <- "A"
But I cannot do this to all combinations of letters that I have in the dataset. Is there a way that I can do the same replacement holding the first two letters constant?
Here is another base R solution:
f2016$SECT16[startsWith(f2016$SECT16, "AA")] <- "A"
# SECT16
# 1 A
# 2 A
# 3 ABBBBC
# 4 DDDDE
# 5 BABA
This replaces values that have the specified prefix, in this case "AA". An excerpt from help(startsWith):
startsWith() is equivalent to but much faster than
substring(x, 1, nchar(prefix)) == prefix
or also
grepl("^", x)
where prefix is not to contain special regular expression characters.
Data
f2016 <- data.frame(SECT16 = c("AAABBB", "AAAAAABBBB", "ABBBBC", "DDDDE", "BABA"), stringsAsFactors = F)
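For illustration, the equivalence quoted from the help page can be verified against the data above (a quick sketch):
all(startsWith(f2016$SECT16, "AA") == grepl("^AA", f2016$SECT16))
## [1] TRUE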
We can use grep/grepl
f2016$SECT16[grep("^AA", f2016$SECT16)] <- "A"
#f2016$SECT16[grepl("^AA", f2016$SECT16)] <- "A"
Consider this dataset
df <- data.frame(A = c("ABCD", "AACD", "DASDD", "AABB"), stringsAsFactors = FALSE)
df
# A
#1 ABCD
#2 AACD
#3 DASDD
#4 AABB
df$A[grep("^AA", df$A)] <- "A"
df
# A
#1 ABCD
#2 A
#3 DASDD
#4 A
You can use stringr and dplyr.
Modify all columns:
df <- df %>% mutate_all(function(x) stringr::str_replace(x, "^AA.+", "A"))
Modify specific columns:
df <- df %>% mutate_at(1, function(x) stringr::str_replace(x, "^AA.+", "A"))
Data
df <- data.frame(SECT16 = c("AABC", "AABB"),
SECT17 = c("AADD", "AAEE"))
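On dplyr >= 1.0.0, across() supersedes mutate_all()/mutate_at(); a hedged equivalent of the two calls above (assuming character columns):
# all columns
df %>% mutate(across(everything(), ~ stringr::str_replace(.x, "^AA.+", "A")))
# only the first column
df %>% mutate(across(1, ~ stringr::str_replace(.x, "^AA.+", "A")))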

How to turn a column of strings into a list of factors, filtering values with regex

I have a data frame with a column (A) that contains strings such that each word is separated by a comma (still one string).
Df
A B etc.
"String1, String2, etc." ... etc.
I want to
Turn the observations in column A into a list. The list will contain elements string 1, string 2 etc.
I want to remove all strings that are not 8 characters long, i.e. that do not start with 4 letters and end with 4 digits (I already have the regex for that)
I want to turn all the strings into factors
The end product should look like this
Df
A B etc.
[String1, String2, etc] ... etc.
Doing some testing, I've realised a combination of strsplit() and str_subset fulfills requirements 1 and 2
var = "ABCD1234, ABCDEFGH"
var = str_split(var, ", ")
var = str_subset(var, "^[A-Za-z]{4}\\d{4}$")
# Var = list("ABCD1234")
But I'm having trouble applying this to a dataframe column. So far, this has not worked
df = df %>% mutate(
A = strsplit(A, split = ", ")
A = case_when(
TRUE ~ str_subset(A, "^[A-Za-z]{4}\\d{4}$")
)
)
Could someone help please?
Thanks
We can combine the two steps by first splitting the string on ", " and then using str_subset to keep the strings that follow the pattern.
library(tidyverse)
df %>%
mutate(new = str_split(A, ", "),
new = map(new, str_subset, pattern = "^[A-Za-z]{4}\\d{4}$"))
# A new
#1 ABCD1234, ABCDEFGH ABCD1234
#2 AQD12345, AQWE1
#3 ABCD1234, ABCD5678 ABCD1234, ABCD5678
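If the new list column should also hold factors (the question's third requirement), one hedged option is to add another map() step converting each element:
df %>%
mutate(new = str_split(A, ", "),
new = map(new, str_subset, pattern = "^[A-Za-z]{4}\\d{4}$"),
new = map(new, factor))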
We can do this in base R, as well
df$new <- lapply(strsplit(df$A, ", "), grep,
pattern = "^[A-Za-z]{4}\\d{4}$", value = TRUE)
data
df <- data.frame(A = c("ABCD1234, ABCDEFGH", "AQD12345, AQWE1",
"ABCD1234, ABCD5678"), stringsAsFactors = FALSE)

Extract value between second and third underscore in R

I have the data below in a dataframe column:
X_ABC_123_DF</n>
A_NJU_678_PP</n>
J_HH_99_LL</n>
II_00_777_PPP</n>
I want to extract the value between the second and third underscore for each row in the dataframe; I plan to create a new column and store those values in it. I found one approach on SO, mentioned below, but it doesn't show how to do this in R. I am not sure how to write its regex in R.
^(?:[^_]+_){2}([^_ ]+)
extract word between 2nd underscore and 3rd underscore or space
A few solutions:
df$values = sapply(strsplit(df$V1, "_"), function(x) x[3])
df$values = gsub("(.*_){2}(\\d+)_.+", "\\2", df$V1)
library(dplyr)
library(stringr)
df %>%
mutate(values = str_extract(V1, "\\d+(?=_[a-zA-Z]+.+$)"))
Result:
V1 values
1 X_ABC_123_DF</n> 123
2 A_NJU_678_PP</n> 678
3 J_HH_99_LL</n> 99
4 II_00_777_PPP</n> 777
Data:
df = read.table(text = "X_ABC_123_DF</n>
A_NJU_678_PP</n>
J_HH_99_LL</n>
II_00_777_PPP</n>", stringsAsFactors = FALSE)
1) Assume the input is a data frame df with a single column V1. Read it in using read.table with sep="_" and then pick out the third column. No packages or regular expressions are used. If df$V1 is already character (as opposed to factor) then the as.character could be omitted.
read.table(text = as.character(df$V1), sep = "_")$V3
## [1] 123 678 99 777
2) If the third column is the only one that contains digits (which is the case for the sample data in the question) then it would be sufficient to replace each non-digit with the empty string:
as.numeric(gsub("\\D", "", df$V1))
## [1] 123 678 99 777

How to remove '.' from column names in a dataframe?

My dataframe which I read from a csv file has column names like this
abc.def, ewf.asd.fkl, qqit.vsf.addw.coil
I want to remove the '.' from all the names and convert them to
abcdef, ewfasdfkl, qqitvsfaddwcoil.
I tried using the sub command sub(".","",colnames(dataframe)) but this command took out the first letter of each column name and the column names changed to
bc.def, wf.asd.fkl, qit.vsf.addw.coil
Does anyone know another command to do this? I could change the column names one by one, but I have a lot of files with 30 or more columns in each.
Again, I want to remove the "." from all the colnames. I am trying to do this so I can use "sqldf" commands, which don't deal well with ".".
Thank you for your help
1) sqldf can deal with names having dots in them if you quote the names:
library(sqldf)
d0 <- read.csv(text = "A.B,C.D\n1,2")
sqldf('select "A.B", "C.D" from d0')
giving:
A.B C.D
1 1 2
2) When reading the data using read.table or read.csv use the check.names=FALSE argument.
Compare:
Lines <- "A B,C D
1,2
3,4"
read.csv(text = Lines)
## A.B C.D
## 1 1 2
## 2 3 4
read.csv(text = Lines, check.names = FALSE)
## A B C D
## 1 1 2
## 2 3 4
however, in this example it still leaves a name that would have to be quoted in sqldf since the names have embedded spaces.
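If the embedded spaces are also unwanted, one quick fix (a sketch; d1 is just a throwaway name here) is to swap them for underscores after reading:
d1 <- read.csv(text = Lines, check.names = FALSE)
names(d1) <- gsub(" ", "_", names(d1), fixed = TRUE)
names(d1)
## [1] "A_B" "C_D"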
3) To simply remove the periods, if DF is a data frame:
names(DF) <- gsub(".", "", names(DF), fixed = TRUE)
or it might be nicer to convert the periods to underscores so that it is reversible:
names(DF) <- gsub(".", "_", names(DF), fixed = TRUE)
This last line could be alternatively done like this:
names(DF) <- chartr(".", "_", names(DF))
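For example, applied to the names from the question (a quick check):
DF <- data.frame(abc.def = 1, ewf.asd.fkl = 2, qqit.vsf.addw.coil = 3)
names(DF) <- chartr(".", "_", names(DF))
names(DF)
## [1] "abc_def"            "ewf_asd_fkl"        "qqit_vsf_addw_coil"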
UPDATE dplyr 0.8.0
As of dplyr 0.8, funs() is soft-deprecated; use the formula (~) notation instead.
A dplyr way to do this, using stringr:
library(dplyr)
library(stringr)
data <- data.frame(abc.def = 1, ewf.asd.fkl = 2, qqit.vsf.addw.coil = 3)
renamed_data <- data %>%
rename_all(~str_replace_all(.,"\\.","_")) # note we have to escape the '.' character with \\
Make sure you install the packages with install.packages().
Remember you have to escape the . character with \\ because in regex, which functions like str_replace_all use, . is a wildcard that matches any character.
To replace all the dots in the names you'll need to use gsub, rather than sub, which will only replace the first occurrence.
This should work.
test <- data.frame(abc.def = NA, ewf.asd.fkl = NA, qqit.vsf.addw.coil = NA)
names(test) <- gsub( ".", "", names(test), fixed = TRUE)
test
abcdef ewfasdfkl qqitvsfaddwcoil
1 NA NA NA
You can also try:
names(df) = gsub(pattern = ".", replacement = "", x = names(df), fixed = TRUE)
Note that fixed = TRUE is needed here as well; without it, the "." pattern is treated as a regex wildcard and every character of the names would be removed.
