Splitting a file name into name, extension - R

I have the name of a file like this: name1.csv. I would like to extract two substrings from this string: one that stores the name1 part in one variable, and another that stores the extension, csv, without the dot, in a second variable.
I have been searching for something like Java's indexOf that would allow this kind of manipulation, but I have not found anything at all.
Any help?

Use strsplit:
R> strsplit("name1.csv", "\\.")[[1]]
[1] "name1" "csv"
R>
Note that you a) need to escape the dot (as it is a metacharacter for regular expressions) and b) deal with the fact that strsplit() returns a list of which typically only the first element is of interest.
A more general solution involves regular expressions where you can extract the matches.
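For instance, a minimal sketch with regexec() and regmatches(), capturing both parts at once (the greedy .* makes the split happen at the last dot):
fname <- "name1.csv"
m <- regexec("^(.*)\\.([^.]+)$", fname)
regmatches(fname, m)[[1]][-1]
[1] "name1" "csv"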
For the special case of filenames you also have:
R> library(tools) # unless already loaded, comes with base R
R> file_ext("name1.csv")
[1] "csv"
R>
and
R> file_path_sans_ext("name1.csv")
[1] "name1"
R>
as these are such common tasks (cf. basename in shell, etc.).
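These also compose with basename() if you start from a full path (the path here is just an illustration):
R> file_path_sans_ext(basename("/tmp/data/name1.csv"))
[1] "name1"
R> file_ext("/tmp/data/name1.csv")
[1] "csv"
R>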

Use strsplit():
http://stat.ethz.ch/R-manual/R-devel/library/base/html/strsplit.html
Example:
> strsplit('name1.csv', '[.]')[[1]]
[1] "name1" "csv"
Note that the second argument is a regular expression, which is why you can't just pass a single dot (it would be interpreted as "any character").
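You can see the problem with an unescaped dot directly: every character counts as a delimiter, leaving only empty strings:
> strsplit('name1.csv', '.')[[1]]
[1] "" "" "" "" "" "" "" "" ""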

Using regular expressions, you can do this, for example:
regmatches(x='name1.csv',gregexpr('[.]','name1.csv'),invert=TRUE)
[[1]]
[1] "name1" "csv"

Related

Extract substring using regular expression in R

I am new to regular expressions and have read http://www.gastonsanchez.com/Handling_and_Processing_Strings_in_R.pdf. I know similar questions have been posted previously, but I still had a difficult time trying to figure out my case.
I have a vector of string filenames and want to extract a substring from each to save as new filenames. The filenames follow the pattern below:
\w_\w_(substring to extract)_\d_\d_Month_Date_Year_Hour_Min_Sec_(AM or PM)
For example, given ABC_DG_MS-15-0452-268_206_281_12_1_2017_1_53_11_PM and ABC_RE_SP56-01_A_206_281_12_1_2017_1_52_34_AM, the substrings would be MS-15-0452-268 and SP56-01_A.
I used
map(strsplit(filenames, '_'),3)
but it failed, because the substring to extract can itself contain _.
I turned to regular expressions for more advanced matching and came up with this:
gsub("^[^\n]+_\\d_\\d_\\d_\\d_(AM | PM)$", "", filenames)
but still did not get what I needed.
You may use
filenames <- c('ABC_DG_MS-15-0452-268_206_281_12_1_2017_1_53_11_PM', 'ABC_RE_SP56-01_A_206_281_12_1_2017_1_52_34_AM')
gsub('^(?:[^_]+_){2}(.+?)_\\d+.*', '\\1', filenames)
Which yields
[1] "MS-15-0452-268" "SP56-01_A"
The pattern here is
^               # start of the string
(?:[^_]+_){2}   # one or more characters other than _, then _, twice
(.+?)           # capture anything, lazily
_\\d+           # until an _ followed by digits
.*              # consume the rest of the string
This pattern is replaced by the first captured group and hence the filename in question.
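If you prefer extracting to replacing, a similar sketch with regexec() and regmatches() (using perl = TRUE, and assuming the substring is always followed by the two numeric fields):
m <- regexec('^(?:[^_]+_){2}(.+?)_\\d+_\\d+_', filenames, perl = TRUE)
sapply(regmatches(filenames, m), `[`, 2)
[1] "MS-15-0452-268" "SP56-01_A"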
Call me a hack. But if that is guaranteed to be the format of all my strings, then I would just use strsplit to hack the name apart, then only keep what I wanted:
string <- 'ABC_DG_MS-15-0452-268_206_281_12_1_2017_1_53_11_PM'
string_bits <- strsplit(string, '_')[[1]]
file_name <- string_bits[3]
file_name
[1] "MS-15-0452-268"
And if you had a vector of many file names, you could drop the explicit [[1]] and use sapply() to get the third element of every split, as sketched below:
sapply(strsplit(strings, '_'), '[[', 3)
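For example, with the two filenames from the question (note the catch: the second substring contains an underscore of its own, so this hack truncates it, which is exactly why the regex answer above is safer):
strings <- c('ABC_DG_MS-15-0452-268_206_281_12_1_2017_1_53_11_PM',
             'ABC_RE_SP56-01_A_206_281_12_1_2017_1_52_34_AM')
sapply(strsplit(strings, '_'), '[[', 3)
[1] "MS-15-0452-268" "SP56-01"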

Split string WITHOUT regex

I'm sure I used to know this, and I'm sure it is covered somewhere, but since I can't find any Google/SO hits for this title search, there probably should be one.
I want to split a string without using regex, e.g.
str = "abcx*defx*ghi"
Of course we can use stringr::str_split or strsplit with the pattern 'x[*]', but how can we suppress regex matching entirely?
The argument fixed=TRUE can be useful in this instance
strsplit(str, "x*", fixed=TRUE)[[1]]
#[1] "abc" "def" "ghi"
Since the question also mentions a stringr::str_split, a stringr way might be of help, too.
You may use str_split with fixed(<YOUR_DELIMITER_STRING_HERE>, ignore_case = FALSE) or coll(pattern, ignore_case = FALSE, locale = "en", ...). See the stringr docs:
fixed: Compare literal bytes in the string. This is very fast, but not usually what you want for non-ASCII character sets.
coll: Compare strings respecting standard collation rules
See the following R demo:
> str_split(str, fixed("x*"))
[[1]]
[1] "abc" "def" "ghi"
Collations are better illustrated with a letter that can have two representations:
> x <- c("Str1\u00e1Str2", "Str3a\u0301Str4")
> str_split(x, fixed("\u00e1"), simplify=TRUE)
     [,1]        [,2]  
[1,] "Str1"      "Str2"
[2,] "Str3áStr4" ""
> str_split(x, coll("\u00e1"), simplify=TRUE)
     [,1]   [,2]  
[1,] "Str1" "Str2"
[2,] "Str3" "Str4"
A note about fixed():
fixed(x) only matches the exact sequence of bytes specified by x. This is a very limited “pattern”, but the restriction can make matching much faster. Beware using fixed() with non-English data. It is problematic because there are often multiple ways of representing the same character. For example, there are two ways to define “á”: either as a single character or as an “a” plus an accent.
...
coll(x) looks for a match to x using human-language collation rules, and is particularly important if you want to do case insensitive matching. Collation rules differ around the world, so you’ll also need to supply a locale parameter.
Simply wrap the regex inside fixed() to stop it being treated as a regex inside stringr::str_split()
Example
Normally, stringr::str_split() treats the pattern as a regular expression, so certain characters carry special meaning, which can cause errors if the resulting pattern is not a valid regex, e.g.:
library(stringr)
str_split("abcdefg[[[klmnop", "[[[")
Error in stri_split_regex(string, pattern, n = n, simplify = simplify, :
Missing closing bracket on a bracket expression. (U_REGEX_MISSING_CLOSE_BRACKET)
But if we simply wrap the pattern we are splitting by inside fixed(), it is treated as a string literal rather than a regular expression:
str_split("abcdefg[[[klmnop", fixed("[[["))
[[1]]
[1] "abcdefg" "klmnop"

Prevent grep in R from treating "." as a letter

I have a character vector that contains text similar to the following:
text <- c("ABc.def.xYz", "ge", "lmo.qrstu")
I would like to remove everything up to and including the last .:
> "xYz" "ge" "qrstu"
However, the grep function seems to be treating . as a letter:
pattern <- "([A-Z]|[a-z])+$"
grep(pattern, text, value = T)
> "ABc.def.xYz" "ge" "lmo.qrstu"
The pattern works elsewhere, such as on regexpal.
How can I get grep to behave as expected?
grep is for finding the pattern: it returns the indices of the vector elements that match a pattern, or, if value=TRUE is specified, the matching values themselves. From the description, it seems that you want to remove a substring rather than return a subset of the initial vector.
If you need to remove the substring, you can use sub
sub('.*\\.', '', text)
#[1] "xYz" "ge" "qrstu"
As the first argument, we match the pattern '.*\\.'. It matches any characters (.*) followed by a dot (\\.). The \\ is needed to escape the . so it is treated as a literal dot instead of "any character". Because .* is greedy, the match extends to the last . in the string. We replace the matched pattern with '' as the replacement argument and thereby remove that substring.
grep doesn't do any replacements. It searches for matches and returns the indices (or the value if you specify value=T) that give a match. The results you're getting are just saying that those meet your criteria at some point in the string. If you added something that doesn't meet the criteria anywhere into your text vector (for example: "9", "#$%23", ...) then it wouldn't return those when you called grep on it.
If you want it just to return the matched portion you should look at the regmatches function. However for your purposes it seems like sub or gsub should do what you want.
gsub(".*\\.", "", text)
I would suggest reading the help page for regexs ?regex. The wikipedia page is a decent read as well but note that R's regexs are a little different than some others. https://en.wikipedia.org/wiki/Regular_expression
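For completeness, a sketch of the regmatches route mentioned above, using the question's own pattern, which returns only the matched portion:
text <- c("ABc.def.xYz", "ge", "lmo.qrstu")
m <- regexpr("([A-Z]|[a-z])+$", text)
regmatches(text, m)
[1] "xYz" "ge" "qrstu"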
You may try the str_extract function from the stringr package.
str_extract(text, "[^.]*$")
This matches the run of non-dot characters at the end of each string.
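For example, on the vector from the question:
library(stringr)
text <- c("ABc.def.xYz", "ge", "lmo.qrstu")
str_extract(text, "[^.]*$")
[1] "xYz" "ge" "qrstu"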
Your pattern does work; the problem is that grep does something different from what you think it does.
Let's first use your pattern with str_extract_all from the package stringr.
library(stringr)
str_extract_all(text, pattern ="([A-Z]|[a-z])+$")
[[1]]
[1] "xYz"
[[2]]
[1] "ge"
[[3]]
[1] "qrstu"
Notice that the results came as you expected!
The problem you are having is that grep will give you the complete element that matches your regular expression, not only the matching part of the element. For example, below grep returns the first element because it matches "a":
grep(pattern = "a", x = c("abcdef", "bcdf"), value = TRUE)
[1] "abcdef"

Extract website links from a text in R

I have multiple texts that each may consist references to one or more web links. for example:
text1= "s#1212a as www.abcd.com asasa11".
How do I extract:
"www.abcd.com"
from this text in R? In other words I am looking to extract patterns that start with www and end with .com
regmatches. This approach uses regexpr/gregexpr and regmatches. I expanded the test data to include more examples.
text1 <- c("s#1212a www.abcd.com www.cats.com",
"www.boo.com",
"asdf",
"blargwww.test.comasdf")
# Regular expressions take some practice.
# check out ?regex or the wikipedia page on regular expressions
# for more info on creating them yourself.
pattern <- "www\\..*?\\.com"
# Get information about where the pattern matches text1
m <- gregexpr(pattern, text1)
# Extract the matches from text1
regmatches(text1, m)
Which gives
> regmatches(text1, m) ##
[[1]]
[1] "www.abcd.com" "www.cats.com"
[[2]]
[1] "www.boo.com"
[[3]]
character(0)
[[4]]
[1] "www.test.com"
Notice that it returns a list. If we want a vector, we can just use unlist on the result. This is because we used gregexpr, which implies there could be multiple matches in each string. If we know there is at most one match, we could use regexpr instead:
> m <- regexpr(pattern, text1)
> regmatches(text1, m)
[1] "www.abcd.com" "www.boo.com" "www.test.com"
Notice, however, that this returns all results as a vector and returns only a single result from each string (note that www.cats.com isn't in the results). On the whole, though, I think either of these two methods is preferable to the gsub method, because that way returns the entire input if no match is found. For example, take a look:
> gsub(text1, pattern=".*(www\\..*?\\.com).*", replace="\\1")
[1] "www.abcd.com" "www.boo.com" "asdf" "www.test.com"
And that's even after modifying the pattern to be a little more robust. We still get 'asdf' in the results even though it clearly doesn't match the pattern.
Shameless silly self-promotion: regmatches was introduced with R 2.14, so if you're stuck with an earlier version of R you might be out of luck, unless you're able to install the future2.14 package from my GitHub repo, which backports some functions introduced in 2.14 to earlier versions of R.
strapplyc. An alternative which gives the same result as ## above is:
library(gsubfn)
strapplyc(text1, pattern)
The regular expression. Here is some explanation on how to decipher it:
pattern <- "www\\..*?\\.com"
Explanation:
www matches the www portion
\\. We need to escape an actual 'dot' character using \\ because a plain . represents "any character" in regular expressions.
.*? The . represents any character, the * tells it to match 0 or more times, and the ? following the * tells it not to be greedy. Otherwise "asdf www.cats.com www.dogs.com asdf" would match all of "www.cats.com www.dogs.com" as a single match instead of recognizing that there are two matches in there (see the sketch after this list).
\\. Once again we need to escape an actual dot character
com This part matches the ending 'com' that we want to match
Putting it all together it says: start with www. then match any characters until you reach the first ".com"
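Here is that non-greedy point in action on a made-up two-link string:
x <- "asdf www.cats.com www.dogs.com asdf"
regmatches(x, gregexpr("www\\..*\\.com", x))[[1]]   # greedy: one long match
[1] "www.cats.com www.dogs.com"
regmatches(x, gregexpr("www\\..*?\\.com", x))[[1]]  # non-greedy: two matches
[1] "www.cats.com" "www.dogs.com"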
Check out the gsub function:
x = "s#1212a as www.abcd.com asasa11"
gsub(x=x, pattern=".*(www.*com).*", replace="\\1")
The basic idea is to surround the text you want to retain in parentheses, then replace the entire line with it. The replace parameter of gsub, "\\1", refers to what was captured in the parentheses.
The solutions here are great and in base R. For those who want a quick solution, you can use qdap's genXtract. This function takes left and right element(s) and extracts everything in between. Setting with = TRUE includes those elements:
text1 <- c("s#1212a www.abcd.com www.cats.com",
"www.boo.com",
"asdf",
"http://www.talkstats.com/ and http://stackoverflow.com/",
"blargwww.test.comasdf")
library(qdap)
genXtract(text1, "www.", ".com", with=TRUE)
## > genXtract(text1, "www.", ".com", with=TRUE)
## $`www. : .com1`
## [1] "www.abcd.com" "www.cats.com"
##
## $`www. : .com2`
## [1] "www.boo.com"
##
## $`www. : .com3`
## character(0)
##
## $`www. : .com4`
## [1] "www.talkstats.com"
##
## $`www. : .com5`
## [1] "www.test.com"
PS: if you look at the code for the function, it is a wrapper for Dason's solution.

Splitting strings in R and extracting information from lists

I have the following row names in my data:
column_01.1
column_01.2
column_01.3
column_02.1
column_02.2
I can split these rownames with the following command:
strsplit(rownames(my_data),split= "\\.")
and get the list:
[[1]]
[1] "column_01" "1"
[[2]]
[1] "column_01" "2"
[[3]]
[1] "column_01" "3"
...
But since I want characters out of the first part and completely discard the second part, like this:
column_01
column_01
column_01
column_02
column_02
I have run out of tricks for extracting only this part of the information. I've tried some options with unlist() and as.data.frame(), but no luck. Is there an easier way to split the strings? I do not want to use as.character(substring(rownames(my_data), 1, 9)), as the location of the "." can change (even though it would work for this example).
You can map [ to get the first elements:
sapply(strsplit(rownames(my_data),split= "\\."),'[',1)
...or (better) use regular expressions:
gsub('\\..*$','',rownames(my_data))
(translation: find all matches of (dot-character, something, end-of-string) and replace with empty string)
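Applied to the row names from the question:
rn <- c("column_01.1", "column_01.2", "column_01.3", "column_02.1", "column_02.2")
gsub('\\..*$', '', rn)
[1] "column_01" "column_01" "column_01" "column_02" "column_02"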
Since I like the stringr package, I thought I'd throw this out there:
str_replace(rownames(my_data), "(^column_.+)\\.\\d+", "\\1")
(I'm not great with regex, so the ^ might be better outside the parentheses)
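A quick check of the pattern on a couple of the row names:
library(stringr)
str_replace(c("column_01.1", "column_02.2"), "(^column_.+)\\.\\d+", "\\1")
[1] "column_01" "column_02"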
