I have a list of file names such as:
"A/B/file.jpeg"
"A/C/file2.jpeg"
"B/C/file3.jpeg"
and a few variations of these.
My question is: how can I insert "new" (or any characters) into each of these file names right after the second "/", so that the length of the string/name doesn't matter, only that the insertion goes after the second "/"?
Results would ideally be:
"A/B/newfile.jpeg"
"A/B/newfile2.jpeg" etc.
Thanks!
Another possible solution, based on stringr::str_replace:
library(stringr)
l <- c("A/B/file.jpeg", "A/B/file2.jpeg", "A/B/file3.jpeg")
str_replace(l, "\\/(?=file)", "\\/new")
#> [1] "A/B/newfile.jpeg" "A/B/newfile2.jpeg" "A/B/newfile3.jpeg"
Using gsub.
gsub('(file)', 'new\\1', x)
# [1] "A/B/newfile.jpeg" "A/C/newfile2.jpeg" "B/C/newfile3.jpeg"
Data:
x <- c("A/B/file.jpeg", "A/C/file2.jpeg", "B/C/file3.jpeg")
I'd like to extract the characters 120497 and 120542 from the vector below, so that I end up with something like c("120497", "120542"). I think I could do this by extracting everything after "-t" and before ".html".
data<-c("mies-are-going-straight-to-hell-t120497.html?sid=0e4851bc16db",
"oss-on-wall-street-cryptocurrency-t120542.html?sid=1c1328efb1e39b40123679e173f184a1")
Thanks!
str_extract(data, "\\d+(?=\\.html)")
[1] "120497" "120542"
If we take these numbers to be the first digit sequences in each string, then:
sub(".*?(\\d+).*", "\\1", data)
[1] "120497" "120542"
I would like to shorten each string in the vector below by cutting it off after the first 3-digit number.
a <- c("MTH314PHY410","LB471LB472","PHY472CHM141")
I would like for it to look something like
a <- c("MTH314","LB471","PHY472")
I have tried something like
b <- gsub("[100-999].*","",a)
but it returns c("MTH","LB","PHY") without the first number
A possible solution, based on stringr::str_remove:
library(stringr)
a <- c("MTH314PHY410","LB471LB472","PHY472CHM141")
str_remove(a, "(?<=\\d{3}).*")
#> [1] "MTH314" "LB471" "PHY472"
c("MTH314PHY410","LB471LB472","PHY472CHM141") %>%
stringr::str_extract('.+?\\d{3}')
[1] "MTH314" "LB471" "PHY472"
I have a dataframe with a column of URLs from which I want to remove everything after the first question mark. Some URLs have no question mark, and I want these to remain unchanged. In short, I want to strip off all the tracking. This is a sample URL.
https://www.dummy.com/2017/11/29/four-questions-we-have-about-stuff/?utm_source=exacttarget&utm_medium=newsletter&utm_term=dummydotcom-dummycomnewsletter&utm_content=na-readblog-blogpost&utm_campaign=dummy
This is the result I'm looking for.
https://www.dummy.com/2017/11/29/four-questions-we-have-about-stuff/
Assuming your dataframe is called df and it has a column in it named url:
df$url <- sub('\\?.*', '', df$url)
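Because sub() leaves non-matching strings untouched, URLs without a question mark come through unchanged. A quick check on made-up URLs (the names here are just for illustration):
urls <- c("https://www.dummy.com/post/?utm_source=a&utm_medium=b",
          "https://www.dummy.com/post-without-tracking/")
sub('\\?.*', '', urls)
# [1] "https://www.dummy.com/post/"
# [2] "https://www.dummy.com/post-without-tracking/"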
With strsplit:
url <- "https://www.dummy.com/2017/11/29/four-questions-we-have-about-stuff/?utm_source=exacttarget&utm_medium=newsletter&utm_term=dummydotcom-dummycomnewsletter&utm_content=na-readblog-blogpost&utm_campaign=dummy"
result <- strsplit(url, "\\?")[[1]][1]
Output:
> result
[1] "https://www.dummy.com/2017/11/29/four-questions-we-have-about-stuff/"
And here is an example of using it on a vector rather than a single string:
strings <- c("here?string", "another?string", "stringnoquestion", "one?more")
> sapply(strsplit(strings, "\\?"), function(x) x[1])
[1] "here" "another" "stringnoquestion" "one"
strsplit returns a list because it is written to work on vectors as well as single strings. So in the first example, the [[1]] accesses the first element of the list, and the [1] then accesses the first element of that vector: the part of the URL before the ?.
Here is the first example broken out in to steps:
# Returns a list of length one
> strsplit(url, "\\?")
[[1]]
[1] "https://www.dummy.com/2017/11/29/four-questions-we-have-about-stuff/"
[2] "utm_source=exacttarget&utm_medium=newsletter&utm_term=dummydotcom-dummycomnewsletter&utm_content=na-readblog-blogpost&utm_campaign=dummy"
# Each element of the list is a vector
> strsplit(url, "\\?")[[1]]
[1] "https://www.dummy.com/2017/11/29/four-questions-we-have-about-stuff/"
[2] "utm_source=exacttarget&utm_medium=newsletter&utm_term=dummydotcom-dummycomnewsletter&utm_content=na-readblog-blogpost&utm_campaign=dummy"
# The first element of that vector
> strsplit(url, "\\?")[[1]][1]
[1] "https://www.dummy.com/2017/11/29/four-questions-we-have-about-stuff/"
I have a dataframe df with some URLs. There are subcategories between the slashes in the URLs that I want to extract with stringr and str_extract.
My data looks like
Text    URL
Hello   www.facebook.com/group1/bla/exy/1234
Test    www.facebook.com/group2/fssas/eda/1234
Text    www.facebook.com/group-sdja/sdsds/adeds/23234
Texter  www.facebook.com/blablabla/sdksds/sdsad
I now want to extract everything between .com/ and the next /.
I tried suburlpattern <- "^.com//{1,20}//$"
and df$categories <- str_extract(df$URL, suburlpattern)
But I only end up with NA in df$categories
Any idea what I am doing wrong here? Is it my regex code?
Any help is highly appreciated! Many thanks beforehand.
If you want to use str_extract, you need a regex whose whole match is exactly the value you want, and for that you need a (?<=[.]com/) lookbehind:
(?<=[.]com/)[^/]+
Details:
(?<=[.]com/) - the current location must be preceded by the .com/ substring
[^/]+ - matches 1 or more characters other than /.
R demo:
> URL = c("www.facebook.com/group1/bla/exy/1234", "www.facebook.com/group2/fssas/eda/1234","www.facebook.com/group-sdja/sdsds/adeds/23234", "www.facebook.com/blablabla/sdksds/sdsad")
> df <- data.frame(URL)
> library(stringr)
> res <- str_extract(df$URL, "(?<=[.]com/)[^/]+")
> res
[1] "group1" "group2" "group-sdja" "blablabla"
This will return everything between the first and second forward slashes:
library(stringr)
str_match("www.facebook.com/blablabla/sdksds/sdsad", "^[^/]+/(.+?)/")[2]
[1] "blablabla"
This works
library(stringr)
data <- c("www.facebook.com/group1/bla/exy/1234",
"www.facebook.com/group2/fssas/eda/1234",
"www.facebook.com/group-sdja/sdsds/adeds/23234",
"www.facebook.com/blablabla/sdksds/sdsad")
suburlpattern <- "/(.*?)/"
categories <- str_extract(data, suburlpattern)
str_sub(categories, start = 2, end = -2)
Results:
[1] "group1" "group2" "group-sdja" "blablabla"
This will only get you what's between the first and second slashes, but that seems to be what you want.
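For completeness, base R sub() can do the same without stringr, using the data vector from this answer:
# Keep only the piece between the first and second "/".
sub("^[^/]+/([^/]+)/.*", "\\1", data)
# [1] "group1"     "group2"     "group-sdja" "blablabla"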
I have a question about the use of gsub. The rownames of my data share the same partial names. See below:
> rownames(test)
[1] "U2OS.EV.2.7.9" "U2OS.PIM.2.7.9" "U2OS.WDR.2.7.9" "U2OS.MYC.2.7.9"
[5] "U2OS.OBX.2.7.9" "U2OS.EV.18.6.9" "U2O2.PIM.18.6.9" "U2OS.WDR.18.6.9"
[9] "U2OS.MYC.18.6.9" "U2OS.OBX.18.6.9" "X1.U2OS...OBX" "X2.U2OS...MYC"
[13] "X3.U2OS...WDR82" "X4.U2OS...PIM" "X5.U2OS...EV" "exp1.U2OS.EV"
[17] "exp1.U2OS.MYC" "EXP1.U20S..PIM1" "EXP1.U2OS.WDR82" "EXP1.U20S.OBX"
[21] "EXP2.U2OS.EV" "EXP2.U2OS.MYC" "EXP2.U2OS.PIM1" "EXP2.U2OS.WDR82"
[25] "EXP2.U2OS.OBX"
In a previous question, I asked whether there is a way to give rownames that share a partial name the same name; see: Replacing rownames of data frame by a sub-string.
The accepted answer is a very nice solution; gsub is used in this way:
transfecties = gsub(".*(MYC|EV|PIM|WDR|OBX).*", "\\1", rownames(test))
Now I have another problem: the program through which I run R (Galaxy) doesn't recognize the | character. My question is: is there another way to get the same result without using |?
Thanks!
If you don't want to use the "|" character, you can try something like:
Rnames <-
c( "U2OS.EV.2.7.9", "U2OS.PIM.2.7.9", "U2OS.WDR.2.7.9", "U2OS.MYC.2.7.9" ,
"U2OS.OBX.2.7.9" , "U2OS.EV.18.6.9" ,"U2O2.PIM.18.6.9" ,"U2OS.WDR.18.6.9" )
Rlevels <- c("MYC","EV","PIM","WDR","OBX")
tmp <- sapply(Rlevels,grepl,Rnames)
apply(tmp,1,function(i)colnames(tmp)[i])
[1] "EV" "PIM" "WDR" "MYC" "OBX" "EV" "PIM" "WDR"
But I would seriously consider mentioning this to the Galaxy team, as it seems rather awkward not to be able to use the symbol for OR...
I wouldn't recommend doing this in general in R, as it is far less efficient than the solution @csgillespie provided, but an alternative is to loop over the various strings you want to match and do the replacement for each string separately, i.e. search for "MYC" and replace only in those rownames that match "MYC".
Here is an example using the x data from @csgillespie's answer:
x <- c("U2OS.EV.2.7.9", "U2OS.PIM.2.7.9", "U2OS.WDR.2.7.9", "U2OS.MYC.2.7.9",
"U2OS.OBX.2.7.9", "U2OS.EV.18.6.9", "U2O2.PIM.18.6.9","U2OS.WDR.18.6.9",
"U2OS.MYC.18.6.9","U2OS.OBX.18.6.9", "X1.U2OS...OBX","X2.U2OS...MYC")
Copy the data so we have something to compare with later (this is just for the example):
x2 <- x
Then create a list of strings you want to match on:
matches <- c("MYC","EV","PIM","WDR","OBX")
Then we loop over the values in matches and do three things (marked ## 1 to ## 3 in the code):
Create the regular expression by pasting together the current match string i with the other bits of the regular expression we want to use,
Using grepl() we get a logical indicator for those elements of x that contain the string i
We then use the same style gsub() call as you were already shown, but use only the elements of x2 that matched the string, and replace only those elements.
The loop is:
for(i in matches) {
rgexp <- paste(".*(", i, ").*", sep = "") ## 1
ind <- grepl(rgexp, x) ## 2
x2[ind] <- gsub(rgexp, "\\1", x2[ind]) ## 3
}
x2
Which gives:
> x2
[1] "EV" "PIM" "WDR" "MYC" "OBX" "EV" "PIM" "WDR" "MYC" "OBX" "OBX" "MYC"