regex commas not between two numbers - r

I am looking for a regex for gsub to remove all the unwanted commas:
Data:
,,,,,,,12345
12345,1345,1354
123,,,,,,
12345,
,12354
Desired result:
12345
12345,1345,1354
123
12345
12354
This is the progress I have made so far:
(,(?!\d+))
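Running that with perl = TRUE (the default engine does not support lookaheads) removes every comma that is not directly followed by a digit, so a lone comma in front of a number still survives:
x <- c(",,,,,,,12345", "12345,1345,1354", "123,,,,,,", "12345,", ",12354")
gsub("(,(?!\\d+))", "", x, perl = TRUE)
## [1] ",12345"          "12345,1345,1354" "123"             "12345"           ",12354"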

You seem to want to remove all leading and trailing commas.
You may do it with
gsub("^,+|,+$", "", x)
The regex contains two alternatives: ^,+ matches one or more commas at the start of the string and ,+$ matches one or more commas at the end, and gsub replaces these matches with empty strings. In R:
x <- c(",,,,,,,12345","12345,1345,1354","123,,,,,,","12345,",",12354")
gsub("^,+|,+$", "", x)
## [1] "12345" "12345,1345,1354" "123" "12345"
## [5] "12354"

You can also use str_extract from stringr. Thanks to greedy matching, you don't have to specify how many times a digit occurs; the longest match is chosen automatically:
library(dplyr)
library(stringr)
df %>%
  mutate(V1 = str_extract(V1, "\\d.+\\d"))
or if you prefer base R:
df$V1 = regmatches(df$V1, gregexpr("\\d.+\\d", df$V1))
Result:
V1
1 12345
2 12345,1345,1354
3 123
4 12345
5 12354
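One caveat with the base R variant: regmatches with gregexpr returns a list, so V1 becomes a list column; if you prefer a plain character vector (and every row has exactly one match, as here), you can unlist it:
df$V1 = unlist(regmatches(df$V1, gregexpr("\\d.+\\d", df$V1)))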
Data:
df = read.table(text = ",,,,,,,12345
12345,1345,1354
123,,,,,,
12345,
,12354")


Extract multiple matching patterns

I am trying to extract AOB1, AOB2 or AOB3 from the strings below.
df <- data.frame(
  id = c(1, 2, 3),
  string = c("acv-32-AOB1", "osa-122-AOB2", "cds-543-rr-AOB3")
)
> df
id string
1 1 acv-32-AOB1
2 2 osa-122-AOB2
3 3 cds-543-rr-AOB3
Any ideas?
Thanks!
We can use trimws from base R; its whitespace argument is pasted into a regex, so ".*-" trims everything up to and including the last - from the start of each string:
trimws(df$string, whitespace = ".*-")
[1] "AOB1" "AOB2" "AOB3"
Or use sub from base R
sub(".*-", "", df$string)
[1] "AOB1" "AOB2" "AOB3"
Or if we need to extract 'AOB' followed by digits:
library(stringr)
str_extract(df$string, "AOB\\d+")
[1] "AOB1" "AOB2" "AOB3"
You can use regular expressions for this:
.* Match anything
(AOB[1-3]) then match AOB followed by a 1, 2 or 3
\\1 replace the entire string with the matched AOB1-3 slot
gsub(".*(AOB[1-3])", "\\1", df$string)
Here is one more, dear friends:
m <- gregexpr("[a-zA-Z]{3}\\d{1}", df$string)
unlist(regmatches(df$string, m))
[1] "AOB1" "AOB2" "AOB3"
I made a small modification so that your desired pattern can occur anywhere in the string, and you can use the following solution to extract it:
df$res <- gsub("(.*)?([A-Z]{3}\\d)(.*)?", "\\2", df$string, perl = TRUE)
df
id string res
1 1 acv-32-AOB1 AOB1
2 2 osa-122-AOB2 AOB2
3 3 cds-543-rr-AOB3 AOB3

How to remove strings that start with an alphabetic character using gsub in R?

I have collected tweets and I would like to extract the emoji unicode from each tweet. The emoji unicode is in the <U+00001F44D> format, and I have used the gsub function in R to remove all text before and after the emoji:
tweets$text <- gsub(".*(<.*>).*", "\\1", tweets$text)
However, because there may be several emojis per tweet, I have decided to split each column after the character ">".
In some columns, there are strings that are just alphabetic characters and do not start with "<".
My question is: How do I remove the string if it does not start with a "<"?
example:
data$text <- c("<U+000>", "character", "abc <U+000>")
data$text <- gsub(".*(<.*>).*", "\\1", data$text)
The data will still include the "character" string, but I'm trying to remove everything except the emoji unicode.
We can use grep instead of gsub. In base R's default regex engine, \\< matches the empty string at the start of a word, so ^\\< only matches strings whose first character is a word character; with invert = TRUE, grep keeps the elements that do not match, i.e. those that do not begin with a word character:
grep("^\\<", v1, invert = TRUE, value = TRUE)
#[1] "<U+000>"
If we need to extract the emojis and remove the rest of the characters, we can use str_extract from stringr. The pattern matches a < (escaped here as \\<, which is harmless), then one or more characters that are not a > ([^>]+, where the ^ inside the square brackets means "not these characters"), followed by the closing > (again escaped, \\>):
library(stringr)
str_extract(v1, "\\<[^>]+\\>")
#[1] "<U+000>" NA "<U+000>"
If there are multiple matches per string and we need to create multiple columns:
lst1 <- str_extract_all(dat$v2, "\\<[^>]+\\>")
n <- lengths(lst1)
# pad each list element with NA up to the maximum number of matches,
# then bind them row-wise into new columns
dat[paste0("Col", seq_len(max(n)))] <- do.call(rbind,
  lapply(lst1, `length<-`, max(n)))
dat
# v2 Col1 Col2
#1 <U+000> <U+000> <NA>
#2 character <NA> <NA>
#3 abc <U+000> <U+000> <NA>
#4 <U+000> characters <U+000> <U+000> <U+000>
Or using base R
regmatches(v1, regexpr("\\<[^>]+\\>", v1, perl = TRUE))
#[1] "<U+000>" "<U+000>"
data
v1 <- c("<U+000>", "character", "abc <U+000>")
v2 <- c(v1, "<U+000> characters <U+000>")
dat <- data.frame(v2 = v2, stringsAsFactors = FALSE)

How to extract everything after a specific string?

I'd like to extract everything after "-" in vector of strings in R.
For example in :
test = c("Pierre-Pomme","Jean-Poire","Michel-Fraise")
I'd like to get
c("Pomme","Poire","Fraise")
Thanks!
With str_extract: \\b is a zero-length token that matches a word boundary, i.e. the position between a word character and a non-word character (such as the - here) or the start/end of the string:
library(stringr)
str_extract(test, '\\b\\w+$')
# [1] "Pomme" "Poire" "Fraise"
We can also use a backreference with sub. \\1 refers to the string matched by the first capture group (.+), which here is one or more of any character following the last -:
sub('.+-(.+)', '\\1', test)
# [1] "Pomme" "Poire" "Fraise"
This also works with str_replace if that is already loaded:
library(stringr)
str_replace(test, '.+-(.+)', '\\1')
# [1] "Pomme" "Poire" "Fraise"
A third option is strsplit, extracting the second element from each item of the resulting list (similar to word from @akrun's answer):
sapply(strsplit(test, '-'), `[`, 2)
# [1] "Pomme" "Poire" "Fraise"
stringr also has a str_split variant of this:
str_split(test, '-', simplify = TRUE)[,2]
# [1] "Pomme" "Poire" "Fraise"
We can use sub to match all characters up to and including the - (.*-) and replace the match with "":
sub(".*-", "", test)
Or another option is word
library(stringr)
word(test, 2, sep="-")
I think the other answers might be what you're looking for, but if you don't want to lose the original context you can try something like this:
library(tidyverse)
tibble(test) %>%
separate(test, c("first", "last"), remove = F)
This will return a dataframe containing the original strings plus components, which might be more useful down the road:
# A tibble: 3 x 3
test first last
<chr> <chr> <chr>
1 Pierre-Pomme Pierre Pomme
2 Jean-Poire Jean Poire
3 Michel-Fraise Michel Fraise
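By default separate splits on any run of non-alphanumeric characters, which happens to work here; if you would rather be explicit about the delimiter, you can pass the sep argument (a minor variation with the same output):
tibble(test) %>%
  separate(test, c("first", "last"), sep = "-", remove = F)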
For some reason the responses here didn't work for my particular string. I found this answer more helpful (i.e., using stringr's lookbehind syntax): stringr str_extract capture group capturing everything.

stringr: extract words containing a specific word

Consider this simple example
dataframe <- data_frame(text = c('WAFF;WOFF;WIFF200;WIFF12',
                                 'WUFF;WEFF;WIFF2;BIGWIFF'))
> dataframe
# A tibble: 2 x 1
text
<chr>
1 WAFF;WOFF;WIFF200;WIFF12
2 WUFF;WEFF;WIFF2;BIGWIFF
Here I want to extract the words containing WIFF, that is, I want to end up with a data frame like this:
> output
# A tibble: 2 x 1
text
<chr>
1 WIFF200;WIFF12
2 WIFF2;BIGWIFF
I tried to use
dataframe %>%
mutate( mystring = str_extract(text, regex('\bwiff\b', ignore_case=TRUE)))
but this only returns NAs. Any ideas?
Thanks!
A classic, non-regex approach via base R would be,
sapply(strsplit(dataframe$text, ';', fixed = TRUE), function(i)
  paste(grep('WIFF', i, value = TRUE, fixed = TRUE), collapse = ';'))
#[1] "WIFF200;WIFF12" "WIFF2;BIGWIFF"
You seem to want to remove all words containing WIFF and the trailing ; if there is any. Use
> library(stringr)
> dataframe <- data.frame(text = c('WAFF;WOFF;WIFF200;WIFF12', 'WUFF;WEFF;WIFF2;BIGWIFF'))
> dataframe$text <- str_replace_all(dataframe$text, "(?i)\\b(?!\\w*WIFF)\\w+;?", "")
> dataframe
text
1 WIFF200;WIFF12
2 WIFF2;BIGWIFF
The pattern (?i)\\b(?!\\w*WIFF)\\w+;? matches:
(?i) - a case insensitive inline modifier
\\b - a word boundary
(?!\\w*WIFF) - the negative lookahead fails any match where a word contains WIFF anywhere inside it
\\w+ - 1 or more word chars
;? - an optional ; (? matches 1 or 0 occurrences of the pattern it modifies)
If for some reason you want to use str_extract, note that your regex could not work because \bWIFF\b matches a whole word WIFF and nothing else, and there are no such standalone words in your data frame. You may use "(?i)\\b\\w*WIFF\\w*\\b" to match any word with WIFF inside it (case insensitively), use str_extract_all to get multiple occurrences, and do not forget to join the matches back into a single string:
> df <- data.frame(text = c('WAFF;WOFF;WIFF200;WIFF12', 'WUFF;WEFF;WIFF2;BIGWIFF'))
> res <- str_extract_all(df$text, "(?i)\\b\\w*WIFF\\w*\\b")
> res
[[1]]
[1] "WIFF200" "WIFF12"
[[2]]
[1] "WIFF2" "BIGWIFF"
> df$text <- sapply(res, function(s) paste(s, collapse=';'))
> df
text
1 WIFF200;WIFF12
2 WIFF2;BIGWIFF
You may "shrink" the code by placing str_extract_all into the sapply function, I separated them for better visibility.

Remove part of a string

How do I remove part of a string? For example, in ATGAS_1121 I want to remove everything before the _.
Use regular expressions. In this case, you can use gsub:
gsub("^.*?_","_","ATGAS_1121")
[1] "_1121"
This regular expression matches the beginning of the string (^), any character (.) repeated zero or more times (*), and an underscore (_). The ? makes the match "lazy" so that it only matches as far as the first underscore. That match is replaced with just an underscore. See ?regex for more details and references.
You can use a built-in for this, strsplit:
> s = "TGAS_1121"
> s1 = unlist(strsplit(s, split='_', fixed=TRUE))[2]
> s1
[1] "1121"
strsplit returns both pieces of the string, split on the split parameter, as a list. That's probably not what you want, so wrap the call in unlist, then index the resulting vector so that only the second of the two elements is returned.
Finally, the fixed parameter should be set to TRUE to indicate that the split parameter is not a regular expression, but a literal matching character.
If you're a Tidyverse kind of person, here's the stringr solution:
R> library(stringr)
R> strings = c("TGAS_1121", "MGAS_1432", "ATGAS_1121")
R> strings %>% str_replace(".*_", "_")
[1] "_1121" "_1432" "_1121"
# Or:
R> strings %>% str_replace("^[A-Z]*", "")
[1] "_1121" "_1432" "_1121"
Here's the strsplit solution if s is a vector:
> s <- c("TGAS_1121", "MGAS_1432")
> s1 <- sapply(strsplit(s, split='_', fixed=TRUE), function(x) (x[2]))
> s1
[1] "1121" "1432"
Probably the most intuitive solution is to use the stringr function str_remove, which is even easier than str_replace since it needs only a pattern and no replacement argument.
The only tricky part in your example is that you want to keep the underscore, but it's possible: match everything up to the specified pattern with a lookahead, (?=pattern).
See example:
strings = c("TGAS_1121", "MGAS_1432", "ATGAS_1121")
stringr::str_remove(strings, ".+?(?=_)")
[1] "_1121" "_1432" "_1121"
Here is the strsplit solution for a data frame, using the dplyr package:
col1 = c("TGAS_1121", "MGAS_1432", "ATGAS_1121")
col2 = c("T", "M", "A")
df = data.frame(col1, col2)
df
col1 col2
1 TGAS_1121 T
2 MGAS_1432 M
3 ATGAS_1121 A
df <- mutate(df, col1 = as.character(col1))
df2 <- mutate(df, col1 = sapply(strsplit(df$col1, split = '_', fixed = TRUE), function(x) (x[2])))
df2
col1 col2
1 1121 T
2 1432 M
3 1121 A
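For comparison, the same column can be produced with the sub approach shown earlier, applied inside mutate (a sketch):
df2 <- mutate(df, col1 = sub(".*_", "", col1))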
