Regex lines with exactly 4 semicolons - r

I want to filter lines with exactly 4 semicolons in them.
Lines with more or fewer semicolons should not be processed. I'm using regex/grep:
POSITIVE Example:
VES_I.MG;A;97;13;1
NEGATIVE Example:
VES_I.MG;A;97;13;1;2

For something this straightforward, I would actually just suggest counting the semicolons and subsetting based on that numeric vector.
A fast way to do this is with stri_count* from the "stringi" package:
library(stringi)
v <- c("VES_I.MG;A;97;13;1", "VES_I.MG;A;97;13;1;2") ## An example vector
stri_count_fixed(v, ";") ## How many semicolons?
# [1] 4 5
v[stri_count_fixed(v, ";") == 4] ## Just keep when count == 4
# [1] "VES_I.MG;A;97;13;1"

^(?=([^;]*;){4}[^;]*$).*$
You can try this with grep -P if your grep supports it. See the demo: http://regex101.com/r/lZ5mN8/22
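The same lookahead also works from R if you enable Perl-style regexes; a quick sketch:
v <- c("VES_I.MG;A;97;13;1", "VES_I.MG;A;97;13;1;2")
grep("^(?=([^;]*;){4}[^;]*$)", v, value = TRUE, perl = TRUE)
# [1] "VES_I.MG;A;97;13;1"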

[EDIT: Fixed stupid bug...]
The following will work with grep or any regex engine:
^[^;]*;[^;]*;[^;]*;[^;]*;[^;]*$
When used in a command line, make sure you put it inside quotes (" on Windows; either kind on *nix) so that special characters aren't interpreted by the shell.

If you have awk available, you can also try:
awk -F';' 'NF==5' file
just replace the 5 with n + 1, where n is your target count (for example, the 4 in your question).
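The same NF-style field counting can be mimicked in R with strsplit; a sketch (note that, unlike awk, strsplit drops a trailing empty field, which doesn't matter for these samples):
v <- c("VES_I.MG;A;97;13;1", "VES_I.MG;A;97;13;1;2")
v[lengths(strsplit(v, ";", fixed = TRUE)) == 5]  ## 5 fields = 4 semicolons
# [1] "VES_I.MG;A;97;13;1"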

You don't need lookaheads here, and you don't need to enable the perl=TRUE parameter.
> v <- c("VES_I.MG;A;97;13;1", "VES_I.MG;A;97;13;1;2")
> grep("^(?:[^;]*;){4}[^;]*$", v)
[1] 1
> grep("^(?:[^;]*;){4}[^;]*$", v, value=TRUE)
[1] "VES_I.MG;A;97;13;1"

To match exactly four semicolons in a line, grep using the regex ^([^;]*;){4}[^;]*$:
grep -P "^([^;]*;){4}[^;]*$" ./input.txt

This could be done without regular expressions by using count.fields. The first line gives the counts, the second reads in the lines and reduces them to those with 5 fields, and the final line parses the fields out and converts the result to a data frame with 5 columns.
cnt <- count.fields("myfile.dat", sep = ";")
L <- readLines("myfile.dat")[cnt == 5]
read.table(text = L, sep = ";")
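A quick check with the two sample lines from the question (writing them to myfile.dat first):
writeLines(c("VES_I.MG;A;97;13;1", "VES_I.MG;A;97;13;1;2"), "myfile.dat")
cnt <- count.fields("myfile.dat", sep = ";")
read.table(text = readLines("myfile.dat")[cnt == 5], sep = ";")
#         V1 V2 V3 V4 V5
# 1 VES_I.MG  A 97 13  1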

Related

Remove first "." from values in R

I have a dataset with different values in R. Some values are like 11.474 and others like 1.034.496 in the same column. I would like to change the values with two dots from 1.034.496 to 1034.496. Is there anyone who could help me please?
Thanks for the help!
Use gsub with Perl regexes:
df <- data.frame(a = c('11.474', '1.034.496', '1.234.034.496'))
df$a = gsub('[.](?=.*[.])', '', df$a, perl = TRUE)
print(df)
## a
## 1 11.474
## 2 1034.496
## 3 1234034.496
Here, [.](?=.*[.]) is a literal dot (which has to be escaped as \. or put into a character class: [.]), followed by a positive lookahead, (?=PATTERN), requiring another literal dot later in the string; as a result, every dot except the last one is removed.
I guess there must be other, smarter regex approaches than the one below, but here is my attempt:
> ifelse(lengths(gregexpr("\\.", v)) > 1, sub("\\.", "", v), v)
[1] "11.474" "1034.496"
where
v <- c("11.474","1.034.496")

Regexpr not working as expected

For the following string <10.16;13.05) I want to match only the first number (sometimes the first number does not exist, i.e. <;13.05)). I used the following regular expression:
grep("[0-9]+\\.*[0-9]*(?=;)","<10.16;13.05)",value=T,perl=T)
However, the result is not "10.16" but "<10.16;13.05)". Could anyone please help me with this one? Thanks.
You could also use strsplit here with minimal regex, i.e.
x <- '<10.16;13.05)'
as.numeric(gsub('<(.*)', '\\1', unlist(strsplit(x, ';', fixed = TRUE))[1]))
#[1] 10.16
x <- '<;13.05)'
as.numeric(gsub('<(.*)', '\\1', unlist(strsplit(x, ';', fixed = TRUE))[1]))
#[1] NA
I believe you are using the wrong regex function. grep just tells you whether the pattern was found; it does not extract it.
Try instead
regmatches("<10.16;13.05)", regexpr("\\d*\\.\\d*", "<10.16;13.05)"))

Using regular expression in string replacement

I have a broken csv file that I am attempting to read into R and repair using a regular expression.
The reason it is broken is that it contains some fields which include a comma but are not wrapped in double quotes. So I have to use a regular expression to find these fields and wrap them in double quotes.
Here is an example of the data source:
DataField1,DataField2,Price
ID1,Value1,
ID2,Value2,$500.00
ID3,Value3,$1,250.00
So you can see that in the third row, the Price field contains a comma but it is not wrapped in double quotes. This breaks the read.table function.
My approach is to use readLines and str_replace_all to wrap the prices containing commas in double quotes, but I am not good at regular expressions and am stuck.
vector <- readLines(file)
vector_temp <- str_replace_all(vector, ",\\$[0-9]+,\\d{3}\\.\\d{2}", ",\"\\$[0-9]+,\\d{3}\\.\\d{2}\"")
I want the output to be:
DataField1,DataField2,Price
ID1,Value1,
ID2,Value2,$500.00
ID3,Value3,"$1,250.00"
With this format, I can read into R.
Appreciate any help!
lines <- readLines(textConnection(object="DataField1,DataField2,Price
ID1,Value1,
ID2,Value2,$500.00
ID3,Value3,$1,250.00"))
library(stringi)
library(tidyverse)
stri_split_regex(lines, ",", n=3, simplify=TRUE) %>%
as_data_frame() %>%
docxtractr::assign_colnames(1)
## DataField1 DataField2 Price
## 1 ID1 Value1
## 2 ID2 Value2 $500.00
## 3 ID3 Value3 $1,250.00
from there you can readr::write_csv() or write.csv()
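A sketch of that last step, reusing lines from above; write.csv quotes character fields by default, so the comma-bearing price comes out wrapped in double quotes as desired:
m  <- stri_split_regex(lines, ",", n = 3, simplify = TRUE)
df <- setNames(as.data.frame(m[-1, ], stringsAsFactors = FALSE), m[1, ])
write.csv(df, "fixed.csv", row.names = FALSE)
## "DataField1","DataField2","Price"
## "ID1","Value1",""
## "ID2","Value2","$500.00"
## "ID3","Value3","$1,250.00"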
The extra facilities in the stringi or stringr packages do not seem needed; gsub is perfectly suited for this. You just need to understand capture groups (paired parentheses; brackets to Brits) and the double-backslash-n convention for referring to capture-group matches in the replacement argument:
txt <- "DataField1,DataField2,Price, extra
ID1,Value1, ,
ID2,Value2,$500.00,
ID3,Value3,$1,250.00, o"
vector <- gsub("([$][0-9]{1,3}([,]([0-9]{3})){0,10}([.][0-9]{0,2}))", "\"\\1\"", readLines(textConnection(txt)))
> read.csv(text=vector)
DataField1 DataField2 Price extra
1 ID1 Value1
2 ID2 Value2 $500.00
3 ID3 Value3 $1,250.00 o
You are putting quotes around a specific sequence: digits, possibly followed by repeated groups of a comma plus three digits, and a possible period with up to two digits. There might be earlier SO questions about formatting as "currency".
Here are some solutions:
1) read.pattern This uses read.pattern in the gsubfn package to read in a file (assumed to be called sc.csv) such that the capture groups, i.e. the parenthesized portions, of the pattern are the fields. This will read in the file and process it all in one step so it is not necessary to use readLines first.
The ^(.*?), that begins the pattern matches everything from the start until the first comma. Then (.*?), matches up to the next comma, and finally (.*)$ matches everything else to the end. Normally * is greedy, i.e. it matches as much as it can, but the question mark after it makes it ungreedy (lazy). We need to specify perl=TRUE so that perl regular expressions are used, since by default gsubfn uses tcl regular expressions based on Henry Spencer's regex parser, which does not support *?. If you would rather have character columns instead of factor columns, add the as.is=TRUE argument to read.pattern.
The final line of code removes the $ and , characters from the Price column and converts it to numeric. (Omit this line if you actually want it formatted.)
library(gsubfn)
DF <- read.pattern("sc.csv", pattern = "^(.*?),(.*?),(.*)$", perl = TRUE, header = TRUE)
DF$Price <- as.numeric(gsub("[$,]", "", DF$Price)) ##
giving:
> DF
DataField1 DataField2 Price
1 ID1 Value1 NA
2 ID2 Value2 500
3 ID3 Value3 1250
2) sub This uses a very simple regular expression (just a single-character match) and no packages. Using vector as defined in the question, it replaces the first two commas with semicolons. The result can then be read in using sep = ";":
read.table(text = sub(",", ";", sub(",", ";", vector)), header = TRUE, sep = ";")
Add the line marked ## in (1) if you want numeric prices.
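A quick check, with vector holding the four raw lines from the question:
vector <- c("DataField1,DataField2,Price", "ID1,Value1,",
            "ID2,Value2,$500.00", "ID3,Value3,$1,250.00")
read.table(text = sub(",", ";", sub(",", ";", vector)), header = TRUE, sep = ";")
#   DataField1 DataField2     Price
# 1        ID1     Value1
# 2        ID2     Value2   $500.00
# 3        ID3     Value3 $1,250.00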

Finding number of r's in the vector (Both R and r) before the first u

rquote <- "R's internals are irrefutably intriguing"
chars <- strsplit(rquote, split = "")[[1]]
In the above code, we need to find the number of r's (R and r) in rquote.
You could use substrings.
## find position of first 'u'
u1 <- regexpr("u", rquote, fixed = TRUE)
## get count of all 'r' or 'R' before 'u1'
lengths(gregexpr("r", substr(rquote, 1, u1), ignore.case = TRUE))
# [1] 5
This follows what you ask for in the title of the post. If you want the count of all the "r", case insensitive, then simplify the above to
lengths(gregexpr("r", rquote, ignore.case = TRUE))
# [1] 6
Then there's always stringi
library(stringi)
## count before first 'u'
stri_count_regex(stri_sub(rquote, 1, stri_locate_first_regex(rquote, "u")[,1]), "r|R")
# [1] 5
## count all R or r
stri_count_regex(rquote, "r|R")
# [1] 6
To get the number of R's before the first u, you need to make an intermediate step. (You probably don't need to. I'm sure akrun knows some incredibly cool regular expression to get the job done, but it won't be as easy to understand as this).
rquote <- "R's internals are irrefutably intriguing"
before_u <- gsub("u[[:print:]]+$", "", rquote)
length(stringr::str_extract_all(before_u, "(R|r)")[[1]])
You may try this,
> length(str_extract_all(rquote, '[Rr]')[[1]])
[1] 6
To get the count of all r's before the first u
> length(str_extract_all(rquote, perl('u.*(*SKIP)(*F)|[Rr]'))[[1]])
[1] 5
EDIT: Just saw "before the first u". In that case, we can get the position of the first 'u' from either which or match.
Then use grepl on 'chars' up to that position (ind) to get a logical index of 'r' matches with ignore.case=TRUE, and sum it, using the strsplit output from the OP's code.
ind <- which(chars=='u')[1]
Or
ind <- match('u', chars)
sum(grepl('r', chars[seq(ind)], ignore.case=TRUE))
#[1] 5
Or we can use two substitutions on the original string ('rquote'). The first removes the characters from the first u to the end of the string (u.*$), and the second matches every character except R or r ([^Rr]) and replaces it with ''. We can then use nchar to count the characters remaining.
nchar(gsub('[^Rr]', '', sub('u.*$', '', rquote)))
#[1] 5
Or if we want to count the 'r' in the entire string, gregexpr to get the position of matching characters from the original string ('rquote') and get the length
length(gregexpr('[rR]', rquote)[[1]])
#[1] 6

Remove all text before colon

I have a file containing a certain number of lines. Each line looks like this:
TF_list_to_test10004/Nus_k0.345_t0.1_e0.1.adj:PKMYT1
I would like to remove everything before the ":" character in order to retain only PKMYT1, which is a gene name.
Since I'm not an expert in regex scripting, can anyone help me do this using Unix (sed or awk) or in R?
Here are two ways of doing it in R:
foo <- "TF_list_to_test10004/Nus_k0.345_t0.1_e0.1.adj:PKMYT1"
# Remove all before and up to ":":
gsub(".*:","",foo)
# Extract everything after ":":
regmatches(foo,gregexpr("(?<=:).*",foo,perl=TRUE))
A simple regular expression used with gsub():
x <- "TF_list_to_test10004/Nus_k0.345_t0.1_e0.1.adj:PKMYT1"
gsub(".*:", "", x)
"PKMYT1"
See ?regex or ?gsub for more help.
There are certainly more than 2 ways in R. Here's another.
unlist(lapply(strsplit(foo, ':', fixed = TRUE), '[', 2))
If the string has a constant length, I imagine substr would be faster than this or the regex methods.
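A sketch of that idea: locate the colon once with regexpr and take the substring after it (no constant length required):
foo <- "TF_list_to_test10004/Nus_k0.345_t0.1_e0.1.adj:PKMYT1"
substring(foo, regexpr(":", foo, fixed = TRUE) + 1)  ## everything after the colon
# [1] "PKMYT1"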
Using sed:
sed 's/.*://' < your_input_file > output_file
This will replace anything followed by a colon with nothing, so it'll remove everything up to and including the last colon on each line (because * is greedy by default).
As per Josh O'Brien's comment, if you wanted to only replace up to and including the first colon, do this:
sed "s/[^:]*://"
That will match anything that isn't a colon, followed by one colon, and replace with nothing.
Note that both of these patterns stop at the first match on each line. If you want the replacement to happen for every match on a line, add the 'g' (global) option to the end of the command.
Also note that on Linux (but not on OS X) you can edit a file in place with -i, e.g.:
sed -i 's/.*://' your_file
Solution using str_remove from the stringr package:
str_remove("TF_list_to_test10004/Nus_k0.345_t0.1_e0.1.adj:PKMYT1", ".*:")
[1] "PKMYT1"
You can use awk like this:
awk -F: '{print $2}' /your/file
A very simple move that I missed in the top response by @Sacha Epskamp was to use the same gsub approach, in this case to keep everything before the ":" (instead of removing it):
foo <- "TF_list_to_test10004/Nus_k0.345_t0.1_e0.1.adj:PKMYT1"
# 1st, as in that answer, remove everything before and up to ":":
gsub(".*:","",foo)
# 2nd, to keep everything before and up to ":":
gsub(":.*","",foo)
Basically the same thing; just swap which side of the ":" the .* sits on in the gsub pattern. Hope it helps.
If you have GNU coreutils available, use cut:
cut -d: -f2 infile
I was working on a similar issue. John's and Josh O'Brien's advice did the trick. I started with this tibble:
library(dplyr)
my_tibble <- tibble(Col1=c("ABC:Content","BCDE:MoreContent","FG:Content:with:colons"))
It looks like:
| Col1
1 | ABC:Content
2 | BCDE:MoreContent
3 | FG:Content:with:colons
I needed to create this tibble:
| Col1 | Col2 | Col3
1 | ABC:Content | ABC | Content
2 | BCDE:MoreContent | BCDE | MoreContent
3 | FG:Content:with:colons| FG | Content:with:colons
And did so with this code (R version 3.4.2).
my_tibble2 <- mutate(my_tibble,
                     Col2 = unlist(lapply(strsplit(Col1, ':', fixed = TRUE), '[', 1)),
                     Col3 = gsub("^[^:]*:", "", Col1))
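As an aside, tidyr::separate can do the same split in one call; extra = "merge" keeps everything after the first colon together (a sketch, assuming the tidyr package is installed):
library(tidyr)
separate(my_tibble, Col1, into = c("Col2", "Col3"), sep = ":",
         extra = "merge", remove = FALSE)
#   Col1                   Col2  Col3
# 1 ABC:Content            ABC   Content
# 2 BCDE:MoreContent       BCDE  MoreContent
# 3 FG:Content:with:colons FG    Content:with:colons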
Below are 2 equivalent solutions:
The first uses perl's -a autosplit feature to split each line into fields on :, populate the @F fields array, and print the 2nd field, $F[1] (fields are counted starting from 0):
perl -F: -lane 'print $F[1]' file
The second uses a regular expression to substitute (s///) everything from ^ (the beginning of the line) through .*: (any characters ending with a colon) with nothing:
perl -pe 's/^.*://' file
