Change column in dataframe based on regex in R

I have a large dataframe with a column displaying different profiles:
PROFILE NTHREADS TIME
profAsuffix 1 3.12
profAanother 2 1.9
profAyetanother 3
...
profBsuffix 1 4.1
profBanother 1 3.9
...
I want to rename everything matching the profA* pattern to a single name (profA), and do the same with profB*. Until now, I have been doing it like this:
data$PROFILE <- as.factor(data$PROFILE)
levels(data$PROFILE)[levels(data$PROFILE)=="profAsuffix"] <- "profA"
levels(data$PROFILE)[levels(data$PROFILE)=="profAanother"] <- "profA"
levels(data$PROFILE)[levels(data$PROFILE)=="profAyetanother"] <- "profA"
And so on. But this time I have too many different suffixes, so I wonder if I can use grepl or a similar approach to do the same thing.

We can use sub:
data$PROFILE <- sub("^([a-z]+[A-B]).*", "\\1", data$PROFILE)
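As a quick check, here is the same call applied to the sample PROFILE values from the question (a minimal sketch):

```r
# Sample PROFILE values from the question
profiles <- c("profAsuffix", "profAanother", "profAyetanother",
              "profBsuffix", "profBanother")

# Capture the leading lowercase letters plus the A/B marker,
# then drop everything after the captured group
sub("^([a-z]+[AB]).*", "\\1", profiles)
#> [1] "profA" "profA" "profA" "profB" "profB"
```

Because sub works on plain character vectors, there is no need to convert PROFILE to a factor first.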

Related

How to deselect many variables without removing specific variables in dplyr

Say there is a data frame that has a structure like this:
df <- data.frame(x.1 = rnorm(n = 100),
                 x.2 = rnorm(n = 100),
                 x.3 = rnorm(n = 100),
                 x.special = rnorm(n = 100),
                 x.y.z = rnorm(n = 100))
Inspecting the head, we get this output:
x.1 x.2 x.3 x.special x.y.z
1 1.01014580 -1.4047666 1.50374721 -0.8339784 -0.0831983
2 0.44307253 -0.4695634 -0.71951820 1.5758893 1.2163749
3 -0.87051845 0.1793721 -0.26838489 -1.0477929 -1.0813926
4 -0.28491936 0.4186763 -0.07494088 -0.2177471 0.3490200
5 -0.03769566 -0.3656822 0.12478667 -0.7975811 -0.4481193
6 -0.83808036 0.6842561 0.71231627 -0.3348798 1.7418141
Suppose I want to remove all the numbered variables but keep the x.special and x.y.z variables. I know that I can easily deselect with:
df %>%
  select(-x.1,
         -x.2,
         -x.3)
However, for something like 50 or 100 variables this quickly becomes cumbersome. Similarly, I know I can pick patterns like so:
df %>%
  select(-contains("x."))
But this of course removes everything, because the special variables also contain "x." in their names. Is there a more intelligent way of picking these variables? I feel like there should be an option for detecting the numeric part of the name.
# use regex to remove these columns...
colsBool <- !grepl(x=names(df), pattern="\\d")
Result:
> head(df[, colsBool])
x.special x.y.z
1 1.1145156 -0.4911891
2 0.7059937 0.4500111
3 -0.6566422 1.6085353
4 -0.6322514 -0.8017260
5 0.4785106 0.6014765
6 -0.8508830 -0.5078307
Regular expressions are your best friend in this situation.
For instance, if you wanted to remove only columns whose names end with a number, use !grepl(pattern = "\\d$", ...); the $ at the end of the expression anchors the match, so only names ending in a digit are matched. The ! in front of the grepl() expression negates the result, turning each TRUE into FALSE and vice versa.

Apply a particular function in all files of a folder using R

I have developed an R function named DNAdupstability for some biological analysis. It takes a FASTA file (.fasta/.txt) as input and returns a dataframe in this format:
Sequence Position8 Position9 Position10 Position11 Position12 Position13
1 1 -1.473571 -1.473571 -1.462143 -1.412143 -1.412143 -1.371429
Position14 Position15 Position16 Position17 Position18 Position19 Position20
1 -1.372143 -1.4 -1.428571 -1.439286 -1.430714 -1.420714 -1.397143
This is a random dataframe, and it continues to n positions depending on the input sequence. I have a folder named Random_fasta containing 1333 fasta sequences of equal length but different content. DNAdupstability gives the desired outcome (the dataframe above) for a single fasta sequence from Random_fasta, but now I want to run the same analysis on the other 1332 sequences and form a combined dataframe in a similar format for all of them:
Sequence Position8 Position9 Position10 Position11 Position12 Position13
1 1 -1.434286 -1.434286 -1.446429 -1.435714 -1.445714 -1.509286
2 2 -1.522143 -1.492143 -1.463571 -1.435714 -1.492857 -1.544286
3 3 -1.232857 -1.265000 -1.333571 -1.328571 -1.330000 -1.329286
4 4 -1.799286 -1.799286 -1.799286 -1.799286 -1.730714 -1.735714
5 5 -1.547143 -1.507143 -1.535714 -1.530714 -1.478571 -1.450714
Position14 Position15 Position16 Position17 Position18 Position19 Position20
1 -1.452143 -1.402143 -1.390000 -1.457143 -1.509286 -1.498571 -1.458571
2 -1.544286 -1.544286 -1.544286 -1.544286 -1.601429 -1.715000 -1.755000
3 -1.340000 -1.328571 -1.333571 -1.344286 -1.384286 -1.446429 -1.486429
4 -1.667143 -1.605000 -1.536429 -1.486429 -1.536429 -1.605000 -1.600000
5 -1.450714 -1.450714 -1.412143 -1.372143 -1.434286 -1.531429 -1.615000
So that I could calculate the position-wise mean, which will then be used for visualization with ggplot2. Is there any way to apply the same function to all the files in the folder using R and get the desired combined dataframe? Any help will be greatly appreciated!
One option is to recursively list all the files under the main folder with list.files, apply the custom function by looping over the files, and combine the results into a single data.frame with do.call(rbind, ...):
files <- list.files('path/to/your/folder', recursive = TRUE,
                    pattern = "\\.txt$", full.names = TRUE)
lst1 <- lapply(files, DNAdupstability)
out <- do.call(rbind, lst1)
Or we can use map_dfr from purrr to combine all the output from the list into a single data.frame:
library(purrr)
out <- map_dfr(files, DNAdupstability)
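Note that in purrr 1.0.0 and later, map_dfr() is superseded; the recommended spelling is map() followed by list_rbind(), assuming (as here) that each call returns a data frame. A sketch, reusing files and DNAdupstability from above:

```r
library(purrr)

# purrr >= 1.0.0: map() + list_rbind() supersedes map_dfr()
lst1 <- map(files, DNAdupstability)
out  <- list_rbind(lst1)
```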

Extract and match sets from list of filenames

I have a dataset of 4000+ images. For the purpose of figuring out the code, I moved a small subset of them to another folder.
The files look like this:
folder
[1] "r01c01f01p01-ch3.tiff" "r01c01f01p01-ch4.tiff" "r01c01f02p01-ch1.tiff"
[4] "r01c01f03p01-ch2.tiff" "r01c01f03p01-ch3.tiff" "r01c01f04p01-ch2.tiff"
[7] "r01c01f04p01-ch4.tiff" "r01c01f05p01-ch1.tiff" "r01c01f05p01-ch2.tiff"
[10] "r01c01f06p01-ch2.tiff" "r01c01f06p01-ch4.tiff" "r01c01f09p01-ch3.tiff"
[13] "r01c01f09p01-ch4.tiff" "r01c01f10p01-ch1.tiff" "r01c01f10p01-ch4.tiff"
[16] "r01c01f11p01-ch1.tiff" "r01c01f11p01-ch2.tiff" "r01c01f11p01-ch3.tiff"
[19] "r01c01f11p01-ch4.tiff" "r01c02f10p01-ch1.tiff" "r01c02f10p01-ch2.tiff"
[22] "r01c02f10p01-ch3.tiff" "r01c02f10p01-ch4.tiff"
I cannot remove the name prior to the -ch# as that information is important. What I want to do, however, is to filter this list of images, and return only sets (ie: r01c02f10p01) which have all four ch values (ch1-4).
I was originally thinking that we could approach the issue along the lines of this:
ch1 <- dir(path="/Desktop/cp/complete//", pattern="ch1")
ch2 <- dir(path="/Desktop/cp/complete//", pattern="ch2")
ch3 <- dir(path="/Desktop/cp/complete//", pattern="ch3")
ch4 <- dir(path="/Desktop/cp/complete//", pattern="ch4")
I then thought about applying these lists with the file.remove function, similar to this:
final2 <- dir(path="/Desktop/cp1/Images//", pattern="ch5")
file.remove(folder,final2)
However, creating a new variable for each ch value fragments the file list, and I am unsure how to use these variables to tell whether an individual set has all four ch values, so I can't meaningfully filter my images. I'm kind of at a loss, as the other sources I've seen have issues that don't quite match this problem.
Earlier, I was able to remove all images with ch5 from my image set like this. I thought this might help in filtering down to only images with ch1-ch4, but I'm not sure how to proceed.
##Create folder variable which has all image files
folder <- list.files(getwd())
##Create final2 variable which has all image files ending in ch5
final2 <- dir(path="/Desktop/cp1/Images//", pattern="ch5")
##Remove final2 from folder
file.remove(folder,final2)
To summarize: I expect to filter files from a random assortment without complete ch values (ie: maybe only ch1 and ch2, or ch3 and ch4, or ch1, ch2, ch3, and ch4), to an assortment which only contains files which have a complete set (four files with ch1, ch2, ch3, and ch4).
Starting with a vector of filenames like the one you would get from list.files or similar, you can build a data frame of filenames and use a regex to extract the alphanumeric part at the beginning and the number that follows "-ch". Then check that every element of the expected set (I put this in ch_set, though you may need to construct it differently) occurs in each group's set of channel values.
# assume this is the vector of file names that comes from list.files
# or something comparable
files <- c("r01c01f01p01-ch3.tiff", "r01c01f01p01-ch4.tiff", "r01c01f02p01-ch1.tiff", "r01c01f03p01-ch2.tiff", "r01c01f03p01-ch3.tiff", "r01c01f04p01-ch2.tiff", "r01c01f04p01-ch4.tiff", "r01c01f05p01-ch1.tiff", "r01c01f05p01-ch2.tiff", "r01c01f06p01-ch2.tiff", "r01c01f06p01-ch4.tiff", "r01c01f09p01-ch3.tiff", "r01c01f09p01-ch4.tiff", "r01c01f10p01-ch1.tiff", "r01c01f10p01-ch4.tiff", "r01c01f11p01-ch1.tiff", "r01c01f11p01-ch2.tiff", "r01c01f11p01-ch3.tiff", "r01c01f11p01-ch4.tiff", "r01c02f10p01-ch1.tiff", "r01c02f10p01-ch2.tiff", "r01c02f10p01-ch3.tiff", "r01c02f10p01-ch4.tiff")
library(dplyr)
ch_set <- 1:4
files_to_keep <- data.frame(filename = files, stringsAsFactors = FALSE) %>%
  tidyr::extract(filename, into = c("group", "ch"),
                 regex = "(^[\\w\\d]+)\\-ch(\\d)", remove = FALSE) %>%
  mutate(ch = as.numeric(ch)) %>%
  group_by(group) %>%
  filter(all(ch_set %in% ch))
files_to_keep
#> # A tibble: 8 x 3
#> # Groups: group [2]
#> filename group ch
#> <chr> <chr> <dbl>
#> 1 r01c01f11p01-ch1.tiff r01c01f11p01 1
#> 2 r01c01f11p01-ch2.tiff r01c01f11p01 2
#> 3 r01c01f11p01-ch3.tiff r01c01f11p01 3
#> 4 r01c01f11p01-ch4.tiff r01c01f11p01 4
#> 5 r01c02f10p01-ch1.tiff r01c02f10p01 1
#> 6 r01c02f10p01-ch2.tiff r01c02f10p01 2
#> 7 r01c02f10p01-ch3.tiff r01c02f10p01 3
#> 8 r01c02f10p01-ch4.tiff r01c02f10p01 4
Now that you have a dataframe of the complete groups, just pull the matching filenames back out:
files_to_keep$filename
#> [1] "r01c01f11p01-ch1.tiff" "r01c01f11p01-ch2.tiff" "r01c01f11p01-ch3.tiff"
#> [4] "r01c01f11p01-ch4.tiff" "r01c02f10p01-ch1.tiff" "r01c02f10p01-ch2.tiff"
#> [7] "r01c02f10p01-ch3.tiff" "r01c02f10p01-ch4.tiff"
One thing to note: this worked even without the mutate line converting ch to numeric, i.e. when comparing the character versions of those numbers against the numeric ones, because %in% coerces to matching types under the hood. That didn't seem totally safe if you need to scale this, so I converted them to matching types explicitly.
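If you would rather avoid the dplyr/tidyr dependency, the same filter can be sketched in base R with ave(), reusing the files vector from above (the regexes assume the naming scheme shown in the question):

```r
files <- c("r01c01f01p01-ch3.tiff", "r01c01f11p01-ch1.tiff",
           "r01c01f11p01-ch2.tiff", "r01c01f11p01-ch3.tiff",
           "r01c01f11p01-ch4.tiff")

# Group key: everything before "-ch"; channel: the digit after "-ch"
group <- sub("-ch\\d\\.tiff$", "", files)
ch    <- as.numeric(sub(".*-ch(\\d)\\.tiff$", "\\1", files))

# Keep files whose group contains all four channels
keep <- as.logical(ave(ch, group, FUN = function(x) all(1:4 %in% x)))
files[keep]
#> [1] "r01c01f11p01-ch1.tiff" "r01c01f11p01-ch2.tiff"
#> [3] "r01c01f11p01-ch3.tiff" "r01c01f11p01-ch4.tiff"
```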

Substring (variable length) values in entire column of dataframe

I have looked for this tirelessly with no luck. I come from a Java background and am new to R. (On a side note, I am loving R, but disliking its string operations as well as the documentation - maybe that's just a Java bias.)
Anyhow, I have a dataframe with a single column composed of latitude and longitude numbers separated by colons, e.g. ROAD:_:-87.4968190989999:38.7414455360001
I would like to create 2 new data frames where each will have the separate lat and long numbers.
I have successfully written a piece of code where I use for loops (but I know this is inefficient - and that there has to be another way)
Here is a snippet of the inefficient code:
length <- length(fromLatLong)
for (i in 1:length) {
  fromLat[i] <- strsplit(fromLatLong[i], ":")[[1]][4]
}
for (i in 1:length) {
  fromLong[i] <- strsplit(fromLatLong[i], ":")[[1]][3]
}
for (i in 1:length) {
  toLat[i] <- strsplit(toLatLong[i], ":")[[1]][4]
}
for (i in 1:length) {
  toLong[i] <- strsplit(toLatLong[i], ":")[[1]][3]
}
Here is how I tried to optimize it using mutate, but I only get the first value copied to all rows:
fromLat = mutate(fromLatLong, FROM_NODE_ID = (strsplit(as.character(fromLatLong$FROM_NODE_ID),":")[[1]][4]))
fromLong = mutate(fromLatLong, FROM_NODE_ID = (strsplit(fromLatLong$FROM_NODE_ID,":")[[1]][3]))
toLat = mutate(toLatLong, TO_NODE_ID = (strsplit(toLatLong$TO_NODE_ID,":")[[1]][4]))
toLong = mutate(toLatLong, TO_NODE_ID = (strsplit(toLatLong$TO_NODE_ID,":")[[1]][3]))
And here is the result:
FROM_NODE_ID
1 38.7414455360001
2 38.7414455360001
3 38.7414455360001
4 38.7414455360001
5 38.7414455360001
6 38.7414455360001
7 38.7414455360001
8 38.7414455360001
9 38.7414455360001
I would appreciate your help on this. Thanks.
You can use the map_chr function of the purrr package. For instance:
fromLat = mutate(fromLatLong, FROM_NODE_ID = map_chr(FROM_NODE_ID, ~ strsplit(as.character(.x),":")[[1]][4]))
The following expression will produce a data frame with each of the colon-delimited components as a separate column. You can then break this up into separate data frames or do whatever else you want with it.
as.data.frame(t(matrix(unlist(strsplit(fromLatLong$coords, ":", fixed=TRUE), recursive=FALSE), nrow=4)),stringsAsFactors=FALSE)
(Assuming the column name of your values in the data frame is coords.)
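An alternative sketch, if you are already in the tidyverse, is tidyr::separate(), which splits one delimited column into several in a single call (again assuming the column is named coords; the output column names here are made up for illustration):

```r
library(tidyr)

df <- data.frame(coords = "ROAD:_:-87.4968190989999:38.7414455360001",
                 stringsAsFactors = FALSE)

# Split on ":" into four columns; convert = TRUE turns the
# lat/long pieces into numerics automatically
separate(df, coords, into = c("type", "sep", "long", "lat"),
         sep = ":", convert = TRUE)
```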

Extracting outputs from lapply to a dataframe

I have some R code which performs some data extraction operation on all files in the current directory, using the following code:
files <- list.files(".", pattern="*.tts")
results <- lapply(files, data_for_time, "17/06/2006 12:00:00")
The output from lapply is the following (extracted using dput()) - basically a list full of vectors:
list(c("amer", "14.5"), c("appl", "14.2"), c("brec", "13.1"),
c("camb", "13.5"), c("camo", "30.1"), c("cari", "13.8"),
c("chio", "21.1"), c("dung", "9.4"), c("east", "11.8"), c("exmo",
"12.1"), c("farb", "14.7"), c("hard", "15.6"), c("herm",
"24.3"), c("hero", "13.3"), c("hert", "11.8"), c("hung",
"26"), c("lizr", "14"), c("maid", "30.4"), c("mart", "8.8"
), c("newb", "14.7"), c("newl", "14.3"), c("oxfr", "13.9"
), c("padt", "10.3"), c("pbil", "13.6"), c("pmtg", "11.1"
), c("pmth", "11.7"), c("pool", "14.6"), c("prae", "11.9"
), c("ral2", "12.2"), c("sano", "15.3"), c("scil", "36.2"
), c("sham", "12.9"), c("stra", "30.9"), c("stro", "14.7"
), c("taut", "13.7"), c("tedd", "22.3"), c("wari", "12.7"
), c("weiw", "13.6"), c("weyb", "8.4"))
However, I would like to then deal with this output as a dataframe with two columns: one for the alphabetic code ("amer", "appl" etc) and one for the number (14.5, 14.2 etc).
Unfortunately, as.data.frame doesn't seem to work with this input of nested vectors inside a list. How should I go about converting this? Do I need to change the way that my function data_for_time returns its values? At the moment it just returns c(name, value). Or is there a nice way to convert from this sort of output to a dataframe?
Try this if results were your list:
> as.data.frame(do.call(rbind, results))
V1 V2
1 amer 14.5
2 appl 14.2
3 brec 13.1
4 camb 13.5
...
One option might be to use the ldply function from the plyr package, which will stitch things back into a data frame for you.
A trivial example of its use:
library(plyr)
ldply(1:10, .fun = function(x) { c(runif(1), "a") })
V1 V2
1 0.406373084755614 a
2 0.456838687881827 a
3 0.681300171650946 a
4 0.294320539338514 a
5 0.811559669673443 a
6 0.340881009353325 a
7 0.134072444401681 a
8 0.00850683846510947 a
9 0.326008745934814 a
10 0.90791508089751 a
But note that if you're mixing variable types with c(), you will probably want to alter your function to return data.frame(name = name, value = value) instead of c(name, value). Otherwise everything will be coerced to character (as it is in my example above).
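To illustrate that point: if data_for_time returned a one-row data frame instead of c(name, value), the numeric column would keep its type when the rows are bound. A sketch (the function body here is a made-up stand-in for the real extraction logic):

```r
# Hypothetical version of data_for_time returning a one-row data frame
data_for_time <- function(file, time) {
  # ... real extraction logic elided ...
  data.frame(name = "amer", value = 14.5, stringsAsFactors = FALSE)
}

results <- lapply(c("a.tts", "b.tts"), data_for_time, "17/06/2006 12:00:00")
out <- do.call(rbind, results)

# value stays numeric instead of being coerced to character
is.numeric(out$value)
#> [1] TRUE
```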
inp <- list(c("amer", "14.5"), c("appl", "14.2"), .... # did not see need to copy all
data.frame(first = sapply(inp, "[", 1),
           second = as.numeric(sapply(inp, "[", 2)))
first second
1 amer 14.5
2 appl 14.2
3 brec 13.1
4 camb 13.5
5 camo 30.1
6 cari 13.8
snipped output
Because forNelton took the response I was in the process of giving, and Joran took the only other reasonable response I could think of, and since I'm supposed to be writing a paper, here's a ridiculous answer:
#I named your list LIST
LIST2 <- LIST[[1]]
lapply(2:length(LIST), function(i) {LIST2 <<- rbind(LIST2, LIST[[i]])})
data.frame(LIST2)
