Reduce unnecessary repeated reading from files in a nested for loop in R

I'm writing some R code to handle pairs of files, an Excel file and a csv (Imotions.txt). I need to extract a column from the Excel file and merge it into the csv, pair by pair. Below is my abbreviated script. As written it runs in quadratic time: the body of the nested for loop executes 4 times instead of once per pair.
Basically, is there a general way to think about running some code over a paired set of files that I can translate to this and other languages?
excel_files <- list.files(pattern = ".xlsx", full.names = TRUE)
imotion_files <- list.files(pattern = "Imotions.txt", full.names = TRUE)

for (imotion_file in imotion_files) {
  for (excel_file in excel_files) {
    filename <- sub("_Imotions.txt", "", imotion_file)
    raw_data <- extract_raw_data(imotion_file)
    event_data <- extract_event_data(imotion_file)

    # convert times to milliseconds
    latency_ms <- as.data.frame(
      sapply(
        df_col_only_ones$latency,
        convert_to_ms,
        raw_data_first_timestamp = raw_data_first_timestamp
      )
    )

    # read in paradigm data
    paradigm_data <- read_excel(path = excel_file, range = "H30:H328")
    merged <- bind_cols(latency_ms, paradigm_data)

    print(paste("writing = ", filename))
    write.table(
      merged,
      file = paste(filename, "_EVENT", ".txt", sep = ""),
      sep = "\t",
      col.names = TRUE,
      row.names = FALSE,
      quote = FALSE
    )
  }
}

It is not entirely clear what some of the operations do. Here is an option using tidyverse:
library(dplyr)
library(tidyr)
library(purrr)
library(stringr)
library(readxl)

out <- crossing(excel_files, imotion_files) %>%
  mutate(
    filename = str_remove(imotion_files, "_Imotions.txt"),
    raw_data = map(imotion_files, extract_raw_data),
    event_data = map(imotion_files, extract_event_data),
    paradigm_data = map(excel_files, ~
      read_excel(.x, range = "H30:H328") %>%
        bind_cols(latency_ms, .))
  )
Based on the OP's code, latency_ms can be created once outside the loop and used while binding the columns.
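If the merged tables also need to be written out, one possibility (a sketch, not part of the original answer; it assumes out was built as above, so paradigm_data already holds the bound columns) is purrr::walk2:
# walk2 is used for its side effect: one tab-separated file per row of out
walk2(out$paradigm_data, out$filename, function(tbl, fname) {
  write.table(tbl,
              file = paste0(fname, "_EVENT.txt"),
              sep = "\t", col.names = TRUE,
              row.names = FALSE, quote = FALSE)
})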

Based on the naming of raw_data_first_timestamp, I'm assuming it's created by the extract_raw_data function - otherwise you can move the latency_ms outside the loop entirely, as akrun mentioned.
If you don't want to use tidyverse, see the modified version of your code at bottom. Notice that the loops have been broken out to cut down on duplicated actions.
Some general tips to improve efficiency when working with loops:
- Before attempting to improve nested loop efficiencies, consider whether the loops can be broken out so that data from earlier loops is stored for use in later loops. This can also be done with nested loops and variables tracking whether data has already been set, but it's usually simpler to break the loops out and remove the need for the tracking variables.
- Create variables and call functions before the loop where possible (see the sketch after these tips). Depending on the language and/or compiler (if one is used), variable creation outside loops may not help with efficiency, but it's still good practice.
- Variables and functions which must be created or called inside loops should be handled in the highest scope - or the outermost loop - possible.
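As a toy R illustration of hoisting loop-invariant work (an addition with hypothetical file names, separate from the answer's code below): the lookup table is read once before the loop rather than once per iteration.
# hypothetical lookup file, read once rather than inside the loop
lookup <- read.csv("lookup.csv")

for (f in c("a.csv", "b.csv")) {  # hypothetical input files
  df <- read.csv(f)
  # all loop-invariant data comes from the prebuilt lookup table
  df$label <- lookup$label[match(df$id, lookup$id)]
  write.csv(df, paste0(sub("\\.csv$", "", f), "_labelled.csv"), row.names = FALSE)
}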
Disclaimer - I have never used R, so there may be syntax errors.
excel_files <- list.files(pattern = ".xlsx", full.names = TRUE)
imotion_files <- list.files(pattern = "Imotions.txt", full.names = TRUE)

# read in paradigm data once, before the main loop
paradigm_data_list <- vector("list", length(excel_files))
for (i in seq_along(excel_files)) {
  paradigm_data_list[[i]] <- read_excel(path = excel_files[[i]], range = "H30:H328")
}

for (imotion_file in imotion_files) {
  filename <- sub("_Imotions.txt", "", imotion_file)
  raw_data <- extract_raw_data(imotion_file)
  event_data <- extract_event_data(imotion_file)

  # convert times to milliseconds
  latency_ms <- as.data.frame(
    sapply(
      df_col_only_ones$latency,
      convert_to_ms,
      raw_data_first_timestamp = raw_data_first_timestamp
    )
  )

  for (paradigm_data in paradigm_data_list) {
    merged <- bind_cols(latency_ms, paradigm_data)
    print(paste("writing = ", filename))
    write.table(
      merged,
      file = paste(filename, "_EVENT", ".txt", sep = ""),
      sep = "\t",
      col.names = TRUE,
      row.names = FALSE,
      quote = FALSE
    )
  }
}
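If the files are actually matched one-to-one by base name, a different restructuring avoids the cross product entirely (a sketch, not from either answer; it assumes sorting both vectors yields the intended pairing):
excel_files <- sort(excel_files)
imotion_files <- sort(imotion_files)
stopifnot(length(excel_files) == length(imotion_files))

# one iteration per pair: each Excel file is read exactly once,
# alongside its matching Imotions file
for (i in seq_along(imotion_files)) {
  imotion_file <- imotion_files[[i]]
  excel_file <- excel_files[[i]]
  # ... body from the question, run once per pair ...
}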

Related

How to create a for loop to open, mutate and save .csv files using R?

I have several .csv files that have to be reformatted and saved again using an R script.
The function needed to do the changes and the reformatting of the files is already established and works perfectly fine. But as there are always lots of documents to change, I would like to have a for loop so that I don't have to adapt my code for every single document. Unfortunately, I don't have any experience with loops in R so far.
My code looks like this at the moment:
setwd("C:/users/Desktop/Raw/.")
df <- read.csv("A1.csv", sep= ",")
new_df <- wrap_frame(df, nr = 61, rownames = "", unique_names = FALSE)
write.csv(new_df, "C:/users/Desktop/Data/A1.csv", row.names = FALSE)
The original .csv files are always named the same way: a letter (A to Z) followed by a number from 1 to 12. The number of .csv files to change may vary, but their names always follow the rules mentioned above.
I would be very grateful, if somebody could help me with this issue!
You can get a vector with the names of all files in your folder (assuming this folder contains no files other than those you want to edit) with
setwd( "C:/users/Desktop/Raw/" )
files <- Sys.glob( "*.csv" )
and then process them one by one with
for (i in files) {
  df <- read.csv(i)
  new_df <- wrap_frame(df, nr = 61, rownames = "", unique_names = FALSE)
  write.csv(new_df, paste("C:/users/Desktop/Data/", i, sep = ""),
            row.names = FALSE)
}
Try out:
# vector of file names
my.files <- paste0(c(outer(LETTERS, 1:12, FUN = "paste0")), ".csv")

# for loop
for (i in seq_along(my.files)) {
  df <- read.csv(my.files[i], sep = ",")                                 # open
  new_df <- wrap_frame(df, nr = 61, rownames = "", unique_names = FALSE) # mutate
  write.csv(new_df, paste0("C:/users/Desktop/Data/", my.files[i]),
            row.names = FALSE)                                           # save
}
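One caveat (an addition, not in the original answer): if some letter/number combinations don't exist on disk, read.csv will error on them, so it can help to keep only the generated names that are actually present:
# drop generated names that don't correspond to an existing file
my.files <- my.files[file.exists(my.files)]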

Using lapply to apply a function over read-in list of files and saving output as new list of files

I'm quite new at R and a bit stuck on what I feel is likely a common operation. I have a number of files (57, with ~1.5 billion rows cumulatively, by 6 columns) that I need to perform basic functions on. I'm able to read these files in and perform the calculations I need no problem, but I'm tripping up on the final output. I envision the function working on one file at a time, outputting the worked file and moving on to the next.
After the calculations I would like to output 57 new .txt files, each named after the file the input data came from. So far I'm able to perform the calculations on smaller test datasets and spit out one appended .txt file, but this isn't what I want as the final output.
# list filenames
files <- list.files(path = , pattern = "*.txt", full.names = TRUE, recursive = FALSE)

# begin looping process
loop_output = lapply(files,
  function(x) {
    # load 'x' file in
    DF <- read.table(x, header = FALSE, sep = "\t")
    # call calculated height average a name
    R_ref = 1647.038203
    # add column names to .las data
    colnames(DF) <- c("X", "Y", "Z", "I", "A", "FC")
    # calculate return
    DF$R_calc <- (R_ref - DF$Z) / cos(DF$A * pi / 180)
    # calculate intensity
    DF$Ir_calc <- DF$I * (DF$R_calc^2 / R_ref^2)
    # output new .txt with calculated columns
    write.table(DF, file = , row.names = FALSE, col.names = FALSE,
                append = TRUE, fileEncoding = "UTF-8")
  })
My latest code endeavors have been to mess around with the initial lapply/sapply function, like so:
# begin looping process
loop_output = sapply(names(files),
  function(x) {
As well as the output line:
# output new .csv with calculated columns
write.table(DF, file = paste0(names(DF), "txt", sep = "."),
            row.names = FALSE, col.names = FALSE, append = TRUE,
            fileEncoding = "UTF-8")
From what I've been reading, the file-naming step in the write.table output may be one of the keys I don't have fully aligned yet with the rest of the script. I've been viewing a lot of other questions that I felt were applicable:
Using lapply to apply a function over list of data frames and saving output to files with different names
Write list of data.frames to separate CSV files with lapply
but with no luck. I deeply appreciate any insights or paths toward the right direction on inputting x number of files, performing the same function on each, then outputting the same x number of files. Thank you.
The reason the output is directed to the same file is probably that file = paste0(names(DF), "txt", sep=".") returns the same value for every iteration. That is, DF must have the same column names in every iteration, therefore names(DF) will be the same, and paste0(names(DF), "txt", sep=".") will be the same. Along with the append = TRUE option, the result is that all output is written to the same file. (As an aside, paste0 has no sep argument, so sep = "." is treated as just another string to paste, leaving a trailing dot on each name.)
Inside the anonymous function, x is the name of the input file. Instead of using names(DF) as a basis for the output file name you could do some transformation of this character string.
Example:
Given
x <- "/foo/raw_data.csv"
Inside the function you could do something like this
infile <- x
outfile <- file.path(dirname(infile), gsub('raw', 'clean', basename(infile)))
outfile
[1] "/foo/clean_data.csv"
Then use the new name for output, with append = FALSE (unless you need it to be true)
write.table(DF, file = outfile, row.names = FALSE, col.names = FALSE, append = FALSE, fileEncoding = "UTF-8")
Using your code, this is the general idea:
library(purrr)

# list filenames
files <- list.files(path = , pattern = "*.txt", full.names = TRUE, recursive = FALSE)

# call calculated height average a name
R_ref <- 1647.038203

dfTransform <- function(file) {
  colnames(file) <- c("X", "Y", "Z", "I", "A", "FC")
  # calculate return
  file$R_calc <- (R_ref - file$Z) / cos(file$A * pi / 180)
  # calculate intensity
  file$Ir_calc <- file$I * (file$R_calc^2 / R_ref^2)
  return(file)
}

# derive one output name per input file instead of reusing names(DF)
outfiles <- paste0(tools::file_path_sans_ext(files), "_out.txt")

files %>%
  map(read.table, header = FALSE, sep = "\t") %>%
  map(dfTransform) %>%
  map2(outfiles, ~ write.table(.x, file = .y, row.names = FALSE,
                               col.names = FALSE, fileEncoding = "UTF-8"))
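Note that map2 here still returns a list (of NULLs, since write.table is called only for its side effect); purrr::walk2 takes the same arguments, expresses that intent more directly, and returns its input invisibly.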

Write results sequentially in a loop in R

I have a bunch of single files to which I need to apply a test. I need to find a way to automatically write the results for each file into a file. Here is what I do:
library(ape)
library(xts)   # for as.xts()

stud_files <- list.files("path/dir/data", full.names = T)
for (f in stud_files) {
  df <- read.table(f, header = TRUE, sep = ";")
  df_xts <- as.xts(df$cola, order.by = as.Date(df$colb, "%m/%d/%Y"))
  pet <- testa(df_xts)
  res <- data.frame(estimate = pet$estimate,
                    p.value = pet$p.value,
                    logi = pet$alternative)
  write.dna(res, file = "res_testa.xls", format = "sequential")
}
This loop works well, except for the last command, which aims to write the results of each file consecutively: it saves only the last result. And the results are saved as a string, not as a table as I defined above (a data.frame). Any ideas? Thanks in advance.
Check help(write.dna).
write.dna(x, file, format = "interleaved", append = FALSE,
          nbcol = 6, colsep = " ", colw = 10, indent = NULL,
          blocksep = 1)

append: a logical; if TRUE the data are appended to the file without
erasing the data possibly existing in the file, otherwise the file (if
it exists) is overwritten (FALSE, the default).
Set append = TRUE and you should be all set.
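Concretely, only the write call inside the loop changes:
# append instead of overwriting, so each file's results accumulate
write.dna(res, file = "res_testa.xls", format = "sequential", append = TRUE)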
As some of the comments point out, however, you are probably better off generating your table, and then writing it all at once to a file. Unless you have billions of files, you likely won't run out of memory.
Here is how I would approach this.
library(ape)
library(xts)   # for as.xts()
library(data.table)

stud_files <- list.files("path/dir/data", full.names = T)

sumfunc <- function(f) {
  df <- read.table(f, header = TRUE, sep = ";")
  df_xts <- as.xts(df$cola, order.by = as.Date(df$colb, "%m/%d/%Y"))
  pet <- testa(df_xts)
  res <- data.table(estimate = pet$estimate,
                    p.value = pet$p.value,
                    logi = pet$alternative)
  return(res)
}

lres <- lapply(stud_files, sumfunc)
dat <- rbindlist(lres)
write.table(dat,
            file = "res_testa.csv",
            sep = ",",
            quote = FALSE,
            row.names = FALSE)
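If you also want to record which input file each row came from (an addition, not part of the original answer), rbindlist can label rows by list name:
# name the list by source file, then let rbindlist add an id column
names(lres) <- basename(stud_files)
dat <- rbindlist(lres, idcol = "source_file")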

How do I loop through a number of variables and perform operations on them in R?

Suppose that I have 30 tsv files of Twitter data, say for Google, Facebook, LinkedIn, etc. I want to perform a set of operations on all of them, and was wondering if I can do so using a loop.
Specifically, I know that I can create variables using a loop, such as
index = c("fb", "goog", "lkdn")
for (i in 1:length(index)) {
  file_name = paste(index[i], ".data", sep = "")
  assign(file_name, read.delim(paste(index[i], "-tweets.tsv", sep = ""),
                               header = T, stringsAsFactors = F))
}
But how do I perform operations on all these data files in the loop? For example, if I want to order the data files using data[order(data[,4]), ], how do I make sure that the data file name changes in each iteration of the loop? Thanks!
Build a function that does all of the operations you need it to do and then create a loop calling that function instead. If you insist on using assign to create lots of variables (not a great practice, for this very reason), then try something like:
files <- dir("path/to/files", pattern = "*.tsv")

fileFunction <- function(x) {
  df <- read.delim(x, sep = "\t", header = T, stringsAsFactors = F)
  df <- df[order(df[, 4]), ]
  return(df)
}

for (a in files) {
  assign(a, fileFunction(a))
}
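If the separate variables aren't strictly required, a list holds the same data without assign (a sketch reusing the fileFunction above):
# one named list entry per file; access e.g. dfs[["goog-tweets.tsv"]]
dfs <- setNames(lapply(files, fileFunction), files)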

R: Dynamically create a variable name

I'm looking to create multiple data frames using a for loop and then stitch them together with merge().
I'm able to create my data frames using assign(paste(), blah). But then, in the same for loop, I need to delete the first column of each of these data frames.
Here's the relevant bits of my code:
for (j in 1:3) {
  # This is to create each data frame -- this works
  assign(paste(platform, j, "df", sep = "_"),
         read.csv(file = paste(masterfilename, extension, sep = "."),
                  header = FALSE, skip = 1, nrows = 100))
  # This is to delete the first column -- this does not work
  assign(paste(platform, j, "df$V1", sep = "_"), NULL)
}
In the first situation I'm assigning my variables to a data frame, so they inherit that type. But in the second situation, I'm assigning it to NULL.
Does anyone have any suggestions on how I can work this out? Also, is there a more elegant solution than assign(), which seems to bog down my code? Thanks,
n.i.
assign can be used to build variable names, but "name$V1" isn't a variable name. The $ is an operator in R, so you're trying to build a function call, and you can't do that with assign. In fact, in this case it's best to avoid assign completely. You don't need to create a bunch of different variables. If your data.frames are related, just keep them in a list.
mydfs <- lapply(1:3, function(j) {
  df <- read.csv(file = paste(masterfilename, extension, sep = "."),
                 header = FALSE, skip = 1, nrows = 100)
  df$V1 <- NULL
  df
})
Now you can access them with mydfs[[1]], mydfs[[2]], etc. And you can run functions over all the data sets with any of the *apply family of functions.
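For instance, to run the same check on every frame in the list:
# number of rows of each data frame, returned as a vector
sapply(mydfs, nrow)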
As #joran pointed out in his comment, the proper way of doing this would be using a list. But if you want to stick to assign you can replace your second statement with
assign(paste(platform, j, "df", sep = "_"),
       get(paste(platform, j, "df", sep = "_"))[
         2:length(get(paste(platform, j, "df", sep = "_")))])
If you wanted to use a list instead, your code to read the data frames would look like
dfs <- replicate(3,
                 read.csv(file = paste(masterfilename, extension, sep = "."),
                          header = FALSE, skip = 1, nrows = 100),
                 simplify = FALSE)
Note you can use replicate because your call to read.csv does not depend on j in the loop. Then you can remove the first column of each
dfs <- lapply(dfs, function(d) d[-1])
Or, combining everything in one command
dfs <- replicate(3,
                 read.csv(file = paste(masterfilename, extension, sep = "."),
                          header = FALSE, skip = 1, nrows = 100)[-1],
                 simplify = FALSE)
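And since the original goal was to stitch the frames together with merge(), Reduce can chain that over the list (a sketch; the join columns depend on your data, so you may need a wrapper supplying a by = argument):
# pairwise merge across the whole list of data frames
merged <- Reduce(merge, dfs)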
