Optimize calls to mutate and summarise in R

I have this R script:
rm(list = ls())
library(tidyr)
suppressWarnings(library(dplyr))
outFile = "zFinal.lua"
cat("\014\n")
cat(file = outFile, sep = "")
filea <- read.csv("csva.csv", strip.white = TRUE)
fileb <- read.csv("csvb.csv", strip.white = TRUE, sep = ";", header=FALSE)
df <-
  merge(filea, fileb, by.x = c(3), by.y = c(1)) %>%
  subset(select = c(1, 3, 6, 2)) %>%
  arrange(ColA, ColB, V2) %>%
  group_by(ColA) %>%
  mutate(V2 = paste0('"', V2, "#", ColB, '"')) %>%
  summarise(ID = paste(V2, collapse = ", ", sep = ";")) %>%
  mutate(ID = paste0('["', ColA, '"] = {', ID, '},')) %>%
  mutate(ID = paste0('\t\t', ID))
df <- df[c("ID")]
cat("\n\tmyTable = {\n", file = outFile, append = TRUE, sep = "\n")
write.table(df, append = TRUE, file = outFile, sep = ",", quote = FALSE, row.names = FALSE, col.names = FALSE)
cat("\n\t}", file = outFile, append = TRUE, sep = "\n")
# Done
cat("\nDONE.", sep = "\n")
As you can see, this script opens csva.csv and csvb.csv.
This is csva.csv:
ID,ColA,ColB,ColC,ColD
2,3,100,1,1
3,7,300,1,1
5,7,200,1,1
11,22,900,1,1
14,27,500,1,1
16,30,400,1,1
20,36,900,1,1
23,39,800,1,1
24,42,700,1,1
29,49,800,1,1
45,3,200,1,1
And this is csvb.csv:
100;file1
200;file2
300;file3
400;file4
This is the output file that my script and the csv files produce:
myTable = {
["3"] = {"file1#100", "file2#200"},
["7"] = {"file2#200", "file3#300"},
["30"] = {"file4#400"},
}
This output file is exactly what I want. It's perfect.
Here is what the script does. I'm not sure I can explain this very well, so if I don't do a good job, please skip this section.
For each line in csva.csv, if ColB (csva) contains a number that also appears in column 1 (csvb), then the output file should contain a line like this:
["3"] = {"file1#100", "file2#200"},
So, in the above example, the first line of csva contains 3 in ColA, and ColB for that line is 100. In csvb, column 1 contains 100 and column 2 contains file1, which the script combines into "file1#100".
Because csva contains another 3 in ColA (the last line), that row is also processed and output to the same line.
Ok so my script runs very well indeed and produces perfect output. The problem is it takes too long to run. csva and csvb in my question here are only a few lines long so the output is instant.
However, the data I have to work with in the real world is much larger: csva is over 300,000 lines and csvb is over 900,000 lines, so the script takes far too long to run to be feasible.
From commenting out lines gradually, the slowdown seems to be in mutate and summarise: without those lines, the script runs in about 30 seconds; with them, it takes hours.
I'm not too advanced with R, so how can I make my script run faster, perhaps by improving my syntax or using faster alternatives to mutate and summarise?

Here is a more compact version of your code in base R that should offer a performance boost; it replaces the merge/mutate/summarise pipeline with vectorized match(), paste0(), and tapply() calls.
(Edited to match the data provided by wibeasley.)
# Look up each ColB of ds_a in ds_b (a vectorized join)
ds_a$file_name <- ds_b$file_name[match(ds_a$ColB, ds_b$ColB)]
# Keep only the matched rows and drop the unused 4th column
ds_a <- ds_a[!is.na(ds_a$file_name), -4]
ds_a <- ds_a[order(ds_a$ColB), ]
ds_a$file_name <- paste0('"', ds_a$file_name, "#", ds_a$ColB, '"')
# Collapse the entries within each ColA group
res <- tapply(ds_a$file_name, ds_a$ColA, FUN = paste, collapse = ", ")
res <- paste0("\t\t[\"", names(res), "\"] = {", res, "},", collapse = "\n")
cat("\n\tmyTable = {", res, "\t}", sep = "\n\n")
Outputting:
myTable = {
["3"] = {"file1#100", "file2#200"},
["7"] = {"file2#200", "file3#300"},
["30"] = {"file4#400"},
}

Here's a dplyr approach that closely follows yours. The real differences are that rows and columns are dropped from the object as soon as possible so there's less baggage to move around.
I'm making some guesses about what will actually help with the large datasets. Please report back what the before & after durations are. I like how you said which calls were taking the longest; reporting the new bottlenecks would help too.
If this isn't fast enough, the next easiest move is probably to sqldf (which uses SQLite under the covers) or data.table; a rough sqldf sketch follows the result below. Both require learning a different syntax (unless you already know SQL), but could be worth your time in the long run.
# Pretend this info is being read from a file
str_a <-
"ID,ColA,ColB,ColC,ColD
2,3,100,1,1
3,7,300,1,1
5,7,200,1,1
11,22,900,1,1
14,27,500,1,1
16,30,400,1,1
20,36,900,1,1
23,39,800,1,1
24,42,700,1,1
29,49,800,1,1
45,3,200,1,1"
str_b <-
"100;file1
200;file2
300;file3
400;file4"
# Declare the desired columns and their data types.
# Include only the columns needed. Use the smaller 'integer' data type where possible.
col_types_a <- readr::cols_only(
`ID` = readr::col_integer(),
`ColA` = readr::col_integer(),
`ColB` = readr::col_integer(),
`ColC` = readr::col_integer()
# `ColD` = readr::col_integer() # Exclude columns never used
)
col_types_b <- readr::cols_only(
`ColB` = readr::col_integer(),
`file_name` = readr::col_character()
)
# Read the file into a tibble
ds_a <- readr::read_csv(str_a, col_types = col_types_a)
ds_b <- readr::read_delim(str_b, delim = ";", col_names = c("ColB", "file_name"), col_types = col_types_b)
ds_a %>%
dplyr::select( # Quickly drop as many columns as possible; avoid reading if possible
ID,
ColB,
ColA
) %>%
dplyr::left_join(ds_b, by = "ColB") %>% # Join the two datasets
tidyr::drop_na(file_name) %>% # Dump the records you'll never use
dplyr::mutate( # Create the hybrid column
entry = paste0('"', file_name, "#", ColB, '"')
) %>%
dplyr::select( # Dump the unneeded columns
-ID,
-file_name
) %>%
dplyr::group_by(ColA) %>% # Create a bunch of subdatasets
dplyr::arrange(ColB, entry) %>% # Sorting inside the group is usually faster?
dplyr::summarise(
entry = paste(entry, collapse = ", ", sep = ";")
) %>%
dplyr::ungroup() %>% # Stack all the subsets on top of each other
dplyr::mutate( # Mush the two columns
entry = paste0('\t\t["', ColA, '"] = {', entry, '},')
) %>%
dplyr::pull(entry) %>% # Isolate the desired vector
paste(collapse = "\n") %>% # Combine all the elements into one.
cat()
result:
["3"] = {"file1#100", "file2#200"},
["7"] = {"file2#200", "file3#300"},
["30"] = {"file4#400"},

You could try loading your table as a data.table instead. Usually data.tables are faster in their operations than data.frames:
library(data.table)
filea <- fread("csva.csv")
Just check that it is still a data.table before you reach the mutate call (just print it; you will see the obvious difference from a data.frame).
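For what it's worth, here is a rough sketch of the whole pipeline in data.table syntax. It is untested against the real files, and the column names are taken from the question:
library(data.table)
filea <- fread("csva.csv")
fileb <- fread("csvb.csv", sep = ";", header = FALSE,
               col.names = c("ColB", "file_name"))
# Inner join on ColB (nomatch = 0 drops unmatched rows), then sort
res <- fileb[filea, on = "ColB", nomatch = 0]
setorder(res, ColA, ColB)
# Collapse the quoted entries within each ColA group
res <- res[, .(entry = paste0('"', file_name, "#", ColB, '"', collapse = ", ")),
           by = ColA]
cat("\n\tmyTable = {\n",
    paste0('\t\t["', res$ColA, '"] = {', res$entry, '},', collapse = "\n"),
    "\n\t}\n", sep = "")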

Here's another solution that leverages data.table's performance while still staying within your dplyr knowledge. I'm not sure there's much room to improve on a 10-second runtime, but theoretically this could help larger datasets, where the cost of creating the indexes is amortized over a longer stretch of execution.
The dtplyr package translates the dplyr verbs (which are familiar to you) to data.table syntax under the hood. It leverages keys, which should improve performance, especially for joining and grouping.
The dtplyr::lazy_dt feature might help optimize the dplyr-to-data.table translation.
Finally, vroom replaces readr, mostly out of curiosity; it's independent of the other changes, and it sounds like reading the files has never been a bottleneck.
col_types_a <- vroom::cols_only(
`ID` = vroom::col_integer(),
`ColA` = vroom::col_integer(),
`ColB` = vroom::col_integer(),
`ColC` = vroom::col_integer()
# `ColD` = vroom::col_integer() # Leave out this column b/c it's never used
)
col_types_b <- vroom::cols_only(
`ColB` = vroom::col_integer(),
`file_name` = vroom::col_character()
)
ds_a <- vroom::vroom(str_a, col_types = col_types_a)
ds_b <- vroom::vroom(str_b, delim = ";", col_names = c("ColB", "file_name"), col_types = col_types_b)
# ds_a <- data.table::setDT(ds_a, key = c("ColB", "ColA"))
# ds_b <- data.table::setDT(ds_b, key = "ColB")
ds_a <- dtplyr::lazy_dt(ds_a, key_by = c("ColB", "ColA")) # New line 1
ds_b <- dtplyr::lazy_dt(ds_b, key_by = "ColB") # New line 2
ds_a %>%
dplyr::select( # Quickly drop as many columns as possible; avoid reading if possible
ID,
ColB,
ColA
) %>%
dplyr::inner_join(ds_b, by = "ColB") %>% # New line 3 (replaces left join)
# tidyr::drop_na(file_name) %>% # Remove this line
# dplyr::filter(!is.na(file_name)) %>% # Alternative w/ left join
dplyr::mutate(
entry = paste0('"', file_name, "#", ColB, '"')
) %>%
dplyr::select( # Dump the unneeded columns
-ID,
-file_name
) %>%
dplyr::group_by(ColA) %>%
dplyr::arrange(ColB, entry) %>% # Sort inside the group usually helps
dplyr::summarise(
entry = paste(entry, collapse = ", ", sep=";")
) %>%
dplyr::ungroup() %>%
dplyr::mutate(
entry = paste0('\t\t["', ColA, '"] = {', entry, '},')
) %>%
dplyr::pull(entry) %>% # Isolate the desired vector
paste(collapse = "\n") %>%
cat()
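As a side note, if you are curious what data.table code dtplyr generates, you can end the lazy pipeline with show_query() instead of pull(); it prints the translated expression without executing the whole thing. For example:
ds_a %>%
  dplyr::select(ID, ColB, ColA) %>%
  dplyr::inner_join(ds_b, by = "ColB") %>%
  dplyr::show_query()  # prints the data.table translation of the pipeline so far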

Related

iterate reading/mutating csv files in R with purrr

I have a folder of csv files in R that I will need to loop through, clean, and add columns to based on information in the file name. I am trying to use purrr, and this is what I have done so far.
# get file names
files_names <- list.files("data/", recursive = TRUE, full.names = TRUE)
# inspect
files_names
[1] "data/BOC_All_ATMImage_(Aug 2020).txt" "data/BOC_All_ATMImage_(Aug 2021).txt" "data/BOC_All_ATMImage_(Feb 2021).txt"
[4] "data/BOC_All_ATMImage_(May 2021).txt" "data/BOC_All_ATMImage_(Nov 2020).txt" "data/BOC_All_ATMImage_(Nov 2021).txt"
# extract month/year inside brackets and convert to snakecase
# this will be used later to create column names
column_names <- files_names %>%
str_extract(., "(?<=\\().*?(?=\\))") %>%
str_to_lower() %>%
str_replace(., " ", "_")
column_names
[1] "aug_2020" "aug_2021" "feb_2021" "may_2021" "nov_2020" "nov_2021"
Now loop through the csv files: read each one, do some data cleaning, and create columns:
mc_data <-
map(files_names,
~ read_csv(.x, guess_max = 50000) %>%
janitor::clean_names() %>%
mutate(month_year = str_extract(.x, "(?<=\\().*?(?=\\))"),
date_dmy = paste0(day, "-", month_year),
date = dmy(date_dmy),
fsa = str_sub(postal_code, start = 1, end=3),
?? = 1) %>%
select(-date_dmy),
.id = "group"
)
I need to mutate one more column, and that column has to be named based on the extracted column_names. I currently have this as ?? in the fake code above. column_names follows the same order as the file paths, so the idea is to do it in one loop and save each dataset after it has been cleaned.
We can use glue syntax and map2. Perhaps:
mc_data <-
map2(files_names, column_names,
~ read_csv(.x, guess_max = 50000) %>%
janitor::clean_names() %>%
mutate(month_year = str_extract(.x, "(?<=\\().*?(?=\\))"),
date_dmy = paste0(day, "-", month_year),
date = dmy(date_dmy),
fsa = str_sub(postal_code, start = 1, end=3),
'{.y}' := 1) %>%
select(-date_dmy)
) # note: unlike map_dfr()/map2_dfr(), map2() has no .id argument, so it was dropped
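To also save each dataset after it has been cleaned, as you mentioned, a walk2() over the results and the names should work. A sketch, with a purely illustrative output path:
library(purrr)
library(readr)
# Write each cleaned data frame out under its snake_case month/year name
walk2(mc_data, column_names,
      ~ write_csv(.x, paste0("data/clean_", .y, ".csv")))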

Clever way to avoid for loop in R

I have a data file that follows roughly this format:
HEADER:001,v1,v2,v3...,v10
v1,v2,v3,STATUS,v5...v6
.
.
.
HEADER:006,v1,v2,v3...v10
HEADER:012,v1,v2,v3...v10
v1,v2,v3,STATUS,v5...v6
v1,v2,v3,STATUS,v5...v6
.
.
.
etc
where each block or chunk of data leads off with a comma separated line that includes the header and a unique (not necessarily sequential) number, and then there may be 0 or more lines that are identified by the STATUS keyword in the body of the chunk.
I am reading this block in using readLines and then splitting it into header lines and status lines to be read in as CSV separately, since they have a different number of variables:
datablocks <- readLines(filename, skipNul = T)
headers <- datablocks[grepl("HEADER", datablocks, useBytes = T)]
headers <- read.csv(text=headers, header= F, stringsAsFactors = F)
statuses <- datablocks[grepl("STATUS", datablocks, useBytes = T)]
statuses <- read.csv(text=statuses, header= F, stringsAsFactors = F)
Eventually, I would like to inner join this data, so that the variables from the header are included in each status line:
all <- headers %>% inner_join(statuses, by = c("ID" = "ID"))
But I need a way to add the unique ID of the header to each status line below it, until the next header. The only way I can think of doing this is with a for loop that runs over the initial full text datablock:
header_id <- NA
for(i in seq(1:length(datablocks))) {
is_header_line <- str_extract(datablocks[i], "HEADER:([^,]*)")
if(!is.na(is_header_line)) {
header_id <- is_header_line
}
datablocks[i] <- paste(datablocks[i], header_id, sep=",")
}
This works fine, but it's ugly, and not very... R-ish. I can't think of a way to vectorize this operation, since it needs to keep an external variable.
Am I missing something obvious here?
Edit
If the input looks literally like this
HEADER:001,a0,b0,c0,d0
e0,f0,g0,STATUS,h0,i0,j0,k0,l0,m0
HEADER:006,a1,b1,c1,d1
HEADER:012,a2,b2,c2,d2
e1,f1,g1,STATUS,h1,i1,j1,k1,l1,m1
e2,f2,g2,STATUS,h2,i2,j2,k2,l2,m2
The output should look like this:
e0,f0,g0,h0,i0,j0,k0,l0,m0,a0,b0,c0,d0,001
e1,f1,g1,h1,i1,j1,k1,l1,m1,a2,b2,c2,d2,012
e2,f2,g2,h2,i2,j2,k2,l2,m2,a2,b2,c2,d2,012
So there needs to be a column propagated from the parent (HEADER) to the children (STATUS) to inner join on.
EDIT:
Thanks for the clarification. The specific input and output makes it dramatically easier to avoid misunderstandings.
Here I use tidyr::separate to separate out the header label from the "a0,b0,c0,d0" part, and tidyr::fill to propagate header info down into the following status rows.
library(tidyverse)
read_table(col_names = "text",
"HEADER:001,a0,b0,c0,d0
e0,f0,g0,STATUS,h0,i0,j0,k0,l0,m0
HEADER:006,a1,b1,c1,d1
HEADER:012,a2,b2,c2,d2
e1,f1,g1,STATUS,h1,i1,j1,k1,l1,m1
e2,f2,g2,STATUS,h2,i2,j2,k2,l2,m2") %>%
mutate(status_row = str_detect(text, "STATUS"),
header_row = str_detect(text, "HEADER"),
header = if_else(header_row, str_remove(text, "HEADER:"), NA_character_)) %>%
separate(header, c("header", "stub"), sep = ",", extra = "merge") %>%
fill(header, stub) %>%
filter(status_row) %>%
mutate(output = paste(str_remove(text, "STATUS,"), stub, header, sep = ",")) %>%
select(output)
Result
# A tibble: 3 x 1
output
<chr>
1 e0,f0,g0,h0,i0,j0,k0,l0,m0,a0,b0,c0,d0,001
2 e1,f1,g1,h1,i1,j1,k1,l1,m1,a2,b2,c2,d2,012
3 e2,f2,g2,h2,i2,j2,k2,l2,m2,a2,b2,c2,d2,012
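For completeness, the "external variable" can also be avoided in base R with the classic cumsum() trick: a running count of HEADER lines gives every line a group index, which maps each STATUS line back to its most recent header. A sketch using the datablocks and statuses objects from the question:
is_header <- grepl("HEADER", datablocks, useBytes = TRUE)
is_status <- grepl("STATUS", datablocks, useBytes = TRUE)
grp <- cumsum(is_header)  # index of the latest header, for every line
header_ids <- sub("HEADER:([^,]*).*", "\\1", datablocks[is_header])
# Attach the ID of the most recent header to each status row
statuses$header_id <- header_ids[grp[is_status]]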

How do I avoid 'NA' values when coercing a .tsv column into numeric via as.numeric?

I have a dataframe with several columns from a .tsv file and want to transform one of them into the 'numeric' type for analysis. However, I keep getting the 'NAs introduced by coercion' warning and do not know exactly why. There is some unnecessary info at the beginning of another column, which is pretty much the only formatting I did.
Originally, I thought the file might have added some extra tabs or spaces, which is why I tried to delete these via giving sub() as an argument.
I should also mention that I get the NA warnings even when I do not replace the values and run the dataframe as is:
library(tidyverse)
data_2018 <- read_tsv('teina230.tsv')
data_1995 <- read_csv('OECD_1995.csv')
#get rid of long colname & select only columns containing %GDP
clean_data_2018 <- data_2018 %>%
select('na_item,sector,unit,geo','2018Q1','2018Q2','2018Q3','2018Q4') %>%
rename(country = 'na_item,sector,unit,geo')
clean_data_2018 <- clean_data_2018[grep("PC_GDP", clean_data_2018$'country'), ]
#remove unnecessary info
clean_data_2018 <- clean_data_2018 %>%
mutate(country=gsub('\\GD,S13,PC_GDP,','',country))
clean_data_2018 <- clean_data_2018 %>%
mutate(
'2018Q1'=as.numeric(sub("", "", '2018Q1', fixed = TRUE)),
'2018Q2'=as.numeric(sub(" ", "", '2018Q2', fixed = TRUE)),
'2018Q3'=as.numeric(sub(" ", "", '2018Q3', fixed = TRUE)),
'2018Q4'=as.numeric(sub(" ", "", '2018Q4', fixed = TRUE))
)
Is there another way to get around the problem and convert the column without replacing all the values with 'NA'?
Thanks guys :)
Thanks for the hint, @divibisan!
Renaming the columns via rename() actually solved the problem. Here is the code that finally worked:
library(tidyverse)
data_2018 <- read_tsv('teina230.tsv')
#get rid of long colname & select only columns containing %GDP
clean_data_2018 <- data_2018 %>%
select('na_item,sector,unit,geo','2018Q1','2018Q2','2018Q3','2018Q4') %>%
rename(country = 'na_item,sector,unit,geo',
quarter_1 = '2018Q1',
quarter_2 = '2018Q2',
quarter_3 = '2018Q3',
quarter_4 = '2018Q4')
clean_data_2018 <- clean_data_2018[grep("PC_GDP", clean_data_2018$'country'), ]
#remove unnecessary info
clean_data_2018 <- clean_data_2018 %>%
mutate(country=gsub('\\GD,S13,PC_GDP,','',country))
clean_data_2018 <- clean_data_2018 %>%
mutate(
quarter_1 = as.numeric(quarter_1),
quarter_2 = as.numeric(quarter_2),
quarter_3 = as.numeric(quarter_3),
quarter_4 = as.numeric(quarter_4)
)
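For what it's worth, the root cause of the NAs is that quoted column names on the right-hand side of mutate() are just string literals: as.numeric('2018Q1') coerces the text '2018Q1' itself and returns NA. Renaming avoids the issue, but backticks would also have referenced the original columns directly, e.g.:
# Backticks refer to the column; quotes would create a literal string
# (this assumes the original, un-renamed clean_data_2018)
clean_data_2018 <- clean_data_2018 %>%
  mutate(`2018Q1` = as.numeric(`2018Q1`))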

How to add column to multiple data frames based on information from another data frame

Apologies if this question is simple/been answered elsewhere - I have looked but as a newbie I can't seem to find what I need.
I have a data frame (Length) which contains a unique value for each file, which I need to add to the corresponding files.
View(Length)
File_name Transcript_length
1 sample15.fasta.out_alternative.out_contig.copynumber.csv 89229486
2 sample16.fasta.out_alternative.out_contig.copynumber.csv 70908644
3 sample2.fasta.out_alternative.out_contig.copynumber.csv 56017470
4 sample28.fasta.out_alternative.out_contig.copynumber.csv 94888762
5 sample30.fasta.out_alternative.out_contig.copynumber.csv 106260465
6 sample31.fasta.out_alternative.out_contig.copynumber.csv 91189772
I have then imported and begun to manipulate these copynumber.csv files, but I need to add a new column which contains the value corresponding to each file name.
Attempt 1:
#import copynumber data
import2 <- list.files(pattern="*copynumber.csv", full.names = TRUE)
list2env(
lapply(setNames(import2, make.names(gsub("$", "", import2))),
read.csv, sep = ""),
envir = .GlobalEnv)
CN_files <- lapply(import2, read.csv, sep = "")
names(CN_files) <- gsub("$", "", import2)
#then manipulate
for (f in 1:length(CN_files)) {
  names(CN_files[[f]]) <- c("Family", "Element", "Length", "Fragments", "Copies", "Solo_LTR", "Total_Bp", "Cover")
}
How do I then add the transcript length values to a new column, based on the specific copynumber.csv file, as given by the earlier 'Length' data frame?
Any help greatly appreciated. Again, I am new to this, so feel free to give more general advice on how to word an R question, etc.
I have worked out how to do it outside of the loop like so:
CN_files[[1]] <- CN_files[[1]] %>% mutate(bp = Length$Transcript_length[1])
CN_files[[2]] <- (CN_files[[2]] %>% mutate(bp = Length$Transcript_length[2]))
CN_files[[3]] <- (CN_files[[3]] %>% mutate(bp = Length$Transcript_length[3]))
CN_files[[4]] <- (CN_files[[4]] %>% mutate(bp = Length$Transcript_length[4]))
CN_files[[5]] <- (CN_files[[5]] %>% mutate(bp = Length$Transcript_length[5]))
CN_files[[6]] <- (CN_files[[6]] %>% mutate(bp = Length$Transcript_length[6]))
CN_files[[7]] <- (CN_files[[7]] %>% mutate(bp = Length$Transcript_length[7]))
CN_files[[8]] <- (CN_files[[8]] %>% mutate(bp = Length$Transcript_length[8]))
CN_files[[9]] <- (CN_files[[9]] %>% mutate(bp = Length$Transcript_length[9]))
Nevertheless, this seems quite awkward and inefficient, so again, if anyone has any tips on how to approach this better, it will be greatly appreciated! (A more compact version is sketched below.)
Note: it was known that the order of the files within the list was the same as in the 'Length' data frame.
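Since the order is known to match, one way to collapse those nine statements is map2() (or base R's Map()) over the list and the length column. A sketch assuming the setup above:
library(dplyr)
library(purrr)
# Add each file's transcript length as a 'bp' column, pairing the two inputs by position
CN_files <- map2(CN_files, Length$Transcript_length,
                 ~ mutate(.x, bp = .y))
If the order were ever not guaranteed, you could first reorder Length by matching Length$File_name against the names of CN_files.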

recognizing items in values list in R in for loop

I am having some trouble getting R to recognize items in my Values list (in RStudio) in a function call (just referring to it as a generic function here). Here's an example: the following works just fine if I type it in directly:
result <- some_function(cnv.chr1.S1, cnv.chr1.S2, cnv.chr1.S3)
because cnv.chr1.S1, cnv.chr1.S2, and cnv.chr1.S3 are objects (specifically GRanges objects) that I've created previously.
But as I'm looping over different chromosomes and there are really many more than 3 samples (S1, S2, S3), I've tried the following (simplified here)
chrom <- paste("chr", 1:1, sep = "")
sample.names <- paste("S", 1:3, sep = "")
for (thischrom in chrom)
{
for (sample in sample.names)
{
a <- some_function(list(paste(paste("cnv", thischrom, sep = "."), sample.names, sep = ".")))
}
}
However, it doesn't work, because
paste(paste("cnv", thischrom, sep = "."), sample.names, sep = ".")
just creates a character vector of names that match the names of the objects in my Values list. How do I get R to access the appropriate objects in my Values list?
Thanks for any thoughts you might have!
Steve
Are you looking for something like this?
library(dplyr)
chrom <- paste("chr", 1:1, sep = "")
sample.names <- paste("S", 1:2, sep = "")
cnv.chr1.S1 <- c(1, 2)
cnv.chr1.S2 <- c(2, 3)
result <-
  tibble(chrom = chrom) %>%
  merge(tibble(sample.names = sample.names)) %>%  # cross join: each chrom x each sample
  rowwise() %>%
  mutate(object =
           paste("cnv", chrom, sample.names, sep = ".") %>%  # build the object's name
           parse(text = .) %>%                               # parse it into an expression
           eval() %>%                                        # evaluate to fetch the object
           list())                                           # wrap so it fits in one cell
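A simpler route than parse()/eval() for fetching objects by a constructed name is get(), or mget() for several at once, assuming the objects live in the global environment:
# Build the names, then fetch the matching objects as a named list
obj_names <- paste("cnv", "chr1", sample.names, sep = ".")
cnv_list <- mget(obj_names, envir = .GlobalEnv)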
