R: Add column to several files in folder and save

I am trying to apply the same function to every csv file (identical structure) in a folder: add two new columns derived from the 'old' columns by adding 0.05 to each, then save each file as csv under its original name in the same folder. This should be easy, and there are several examples here for doing it, mostly using lapply, but I keep running into this error:
Error in `$<-.data.frame`(`*tmp*`, "LAT", value = numeric(0)) : replacement has 0 rows, data has 3
This is my code:
my_files <- list.files(path="C:/PATH", pattern=".csv", full.names=T, recursive=FALSE)

add_col <- function(my_files) {
  mpa <- read.csv(my_files, header=T)
  mpa$LAT <- mpa$lat_bin + 0.05
  mpa$LON <- mpa$lon_bin + 0.05
  return(mpa)
  write.csv(mpa,
            append = FALSE,
            quote = FALSE,
            sep = ",",
            row.names = FALSE,
            col.names = TRUE)
}
I am unsure how best to do this for a large number of files.
Here is some sample code to create files like mine:
Df1 <- data.frame(lat_bin = c(50,40,70,6,8,4),lon_bin = (c(1,5,2,4,9,11)))
Df2 <- data.frame(lat_bin = c(66, 77, 82, 65, 88, 43),lon_bin = (c(2,3,4,5,11,51)))
Df3 <- data.frame(lat_bin = c(43,46,55,67,1,11),lon_bin = (c(7,6,5,9,11,15)))
write.csv(Df1, "data_1.csv", row.names=F)
write.csv(Df2, "data_2.csv", row.names=F)
write.csv(Df3, "data_3.csv", row.names=F)

Simply change the function so it receives one file, and pass the entire vector of files to lapply. For background: lapply is perhaps the most popular of the apply family of functions; it takes a list/vector input and returns a list of equal length, with each input element passed to the function.
Specifically, res here is a list of data frames, one per file in my_files, each with the new columns. Two fixes to your function: your write.csv call was missing a file name, and it sat after return(mpa), so it was never reached. The version below saves new csv files with a _new suffix (the double backslash escapes the period, a special character in regex, and $ anchors the match at the end of the name), and it drops the append, sep and col.names arguments, which write.csv sets itself and ignores with a warning.
my_files <- list.files(path="C:/PATH", pattern="\\.csv$", full.names=T,
                       recursive=FALSE)

add_col <- function(one_file) {
  mpa <- read.csv(one_file, header=T)
  mpa$LAT <- mpa$lat_bin + 0.05
  mpa$LON <- mpa$lon_bin + 0.05
  write.csv(mpa,
            file = sub("\\.csv$", "_new.csv", one_file),
            quote = FALSE,
            row.names = FALSE)
  return(mpa)
}
res <- lapply(my_files, function(i) add_col(i)) # LONGER VERSION
res <- lapply(my_files, add_col) # SHORTER VERSION
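
If you would rather overwrite each file under its original name, as described in the question, point file at the input path itself. A minimal variant (destructive, so test it on copies first):

add_col_inplace <- function(one_file) {
  mpa <- read.csv(one_file, header = TRUE)
  mpa$LAT <- mpa$lat_bin + 0.05
  mpa$LON <- mpa$lon_bin + 0.05
  write.csv(mpa, file = one_file, quote = FALSE, row.names = FALSE) # overwrites the original
  return(mpa)
}
res <- lapply(my_files, add_col_inplace)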


Loop through files in R and select rows by string

I have a large number of CSV files. I need to extract relevant data from each file, and compile all of the relevant data into a new file.
I have been copying/pasting the code below and changing the relevant details (e.g., file name) to repeat the same process for many CSV files. After that, I use cbind()/write.xlsx() to combine all of the relevant data and write it to an Excel file. I need a more efficient method to accomplish this task.
How can I:
incorporate a loop that imports a large number of CSV files (to replace #1 below)
select relevant rows based on a string instead of entering specific row numbers (to replace #2 below)
combine all of the relevant data into a single data frame with each file's data in one column
library(tidyr)
# 1 - import raw data files
file1 <- read.csv ("1.csv", header = FALSE, sep = "\n")
# 2 - select relevant rows
file1 <- as.data.frame(file1[c(41:155),])
colnames(file1) <- c("file1")
#separate components of each line from raw csv file / isolate data
temp1 <- separate(file1, file1, into = c("Text", "IntNum", "Data", sep = "\\s"))
temp1 <- temp1$Data
temp1 <- as.data.frame(temp1)
If the number of relevant rows in each file is the same, you could do it like this. Option 1 shows a solution using a loop, option 2 shows a solution using sapply.
In a first step I generate three csv-files to make the code reproducible. The start row in each file is defined by "start", the end row by "end". I then get a list with the names of these files with dir().
#make csv-files, target vector always same length (3)
set.seed(1)
for (i in 1:3) {
  df <- data.frame(x = c(rep(0, sample(1:10, 1)), "start",
                         paste0("dat", i),
                         "end", rep(0, sample(1:10, 1))))
  write.csv(df, file = paste0("file", i, ".csv"), quote = FALSE, row.names = FALSE)
}
#get list of file names
allFiles <- dir(pattern = glob2rx("*.csv"))
Option 1 - loop
For the loop you could first initialize a result data frame ("outDF") with the number of columns set to the number of csv-files and the number of rows set to the length of the target vector in each file ("start" to "end"). You can then loop over the files and fill the data frame. The start and end rows can be indexed using which().
#initialise result data frame
outDF <- data.frame(matrix(nrow = 3, ncol = length(allFiles),
                           dimnames = list(NULL, allFiles)))
#loop over csv files
for (iFile in allFiles) {
  idat <- read.csv(iFile, stringsAsFactors = FALSE) #read csv
  outDF[, iFile] <- idat[which(idat$x == "start"):which(idat$x == "end"),]
}
Option 2 - sapply
Instead of a loop you could use sapply with a custom function to extract the relevant rows in each file. This returns a matrix which you could then transform into a dataframe.
out <- sapply(allFiles, FUN = function(x) {
  idat <- read.csv(x, stringsAsFactors = FALSE)
  return(idat[which(idat$x == "start"):which(idat$x == "end"),])
})
outDF <- as.data.frame(out)
If the number of rows between "start" and "end" differs between files, the above options won't work. In this case you could build the result with lapply() (similar to option 2) to get a list whose elements have different lengths, pad the shorter elements with NAs, and then transform the result into a data frame again.
#make csv-files with target vector of different lengths (3:12)
set.seed(1)
for (i in 1:3) {
  df <- data.frame(x = c(rep(0, sample(1:10, 1)), "start",
                         rep(paste0("dat", i), sample(1:10, 1)),
                         "end", rep(0, sample(1:10, 1))))
  write.csv(df, file = paste0("file", i, ".csv"), quote = FALSE, row.names = FALSE)
}
#lapply
out <- lapply(allFiles, FUN = function(x) {
  idat <- read.csv(x, stringsAsFactors = FALSE)
  return(idat[which(idat$x == "start"):which(idat$x == "end"),])
})
out <- lapply(out, `length<-`, max(lengths(out)))
outDF <- as.data.frame(do.call(cbind, out))
names(outDF) <- allFiles
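The `length<-` call is the padding step: assigning a longer length to a vector fills the new positions with NA. A toy illustration:

lst <- list(1:2, 1:4, 1:3)
lapply(lst, `length<-`, max(lengths(lst))) # every element padded with NA to length 4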

How to create a for loop to open, mutate and save .csv files using R?

I have several .csv files that have to be reformatted and saved again using an R script.
The function needed to make the changes and reformat the files is already established and works perfectly fine. But as there are always lots of documents to change, I would like a for loop so that I don't have to adapt my code for every single document. Unfortunately, I have no experience with loops in R so far.
My code looks like this at the moment:
setwd("C:/users/Desktop/Raw/.")
df <- read.csv("A1.csv", sep= ",")
new_df <- wrap_frame(df, nr = 61, rownames = "", unique_names = FALSE)
write.csv(new_df, "C:/users/Desktop/Data/A1.csv", row.names = FALSE)
The original .csv files are always named the same way: a letter (A to Z) followed by a number from 1 to 12. The number of .csv files to change may vary, but their names always follow the rules just mentioned.
I would be very grateful, if somebody could help me with this issue!
You can get a vector with all filenames that exist in your folder (assuming this folder contains no files other than those you want to edit) with
setwd( "C:/users/Desktop/Raw/" )
files <- Sys.glob( "*.csv" )
and then process them one by one with
for (i in files)
{
  df <- read.csv(i)
  new_df <- wrap_frame(df, nr = 61, rownames = "", unique_names = FALSE)
  write.csv(new_df, paste("C:/users/Desktop/Data/", i, sep = ""), row.names = FALSE)
}
Try out:
# vector of file names
my.files <- paste0(c(outer(LETTERS, 1:12, FUN = "paste0")),
                   ".csv")
# for loop
for (i in seq_along(my.files)) {
  df <- read.csv(my.files[i], sep = ",") # open
  new_df <- wrap_frame(df, nr = 61, rownames = "", unique_names = FALSE) # mutate
  write.csv(new_df, paste0("C:/users/Desktop/Data/", my.files[i]),
            row.names = FALSE) # save
}
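One caveat when constructing the names instead of listing the directory: my.files contains all 312 letter-number combinations, and read.csv will stop with an error on any name that does not exist. A small guard before the loop, assuming the working directory is the Raw folder:

# keep only the constructed names that actually exist on disk
my.files <- my.files[file.exists(my.files)]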

How can I quickly find all the files in a directory that are missing a first row?

I have a folder of files in .csv format. They have blank lines in them that are necessary (a blank line indicates the absence of a measure from a LiDAR unit, which is good and needs to stay in). But occasionally the first row is empty, which throws off the code and the package, and everything aborts.
Right now I have to open each .csv and see if the first line is empty.
I would like to do one of the following, but am at a loss how to:
1) write code that quickly scans through all of the files in the directory and tells me which ones are missing the first line
2) be able to skip the empty lines that occur only at the beginning, the number of which can vary (sometimes more than one line is empty)
3) have code that cycles through all of the .csv files and inserts a dummy first line of numbers so the files all import without a problem.
Thanks!
Here's a bit of code that does 1 and 2 above. I'm not sure why you'd want to insert dummy line(s) given the ability to do 1 and 2; it's straightforward to do, but usually it's not a good idea to modify raw data files.
# Create some test files
cat("x,y", "1,2", sep="\n", file = "blank0.csv")
cat("", "x,y", "1,2", sep="\n", file = "blank1.csv")
cat("", "", "x,y", "1,2", sep="\n", file = "blank2.csv")
files <- list.files(pattern = "\\.csv$", full.names = TRUE)
for(i in seq_along(files)) {
  filedata <- readLines(files[i])
  lines_to_skip <- min(which(filedata != "")) - 1
  cat(i, files[i], lines_to_skip, "\n")
  x <- read.csv(files[i], skip = lines_to_skip)
}
This prints
1 ./blank0.csv 0
2 ./blank1.csv 1
3 ./blank2.csv 2
and reads in each dataset correctly.
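If you nevertheless want option 3, here is a minimal sketch that prepends a dummy line when the first line is blank, writing to renamed copies rather than modifying the originals (the _fixed suffix is just an illustration):

for (f in list.files(pattern = "\\.csv$")) {
  filedata <- readLines(f)
  if (length(filedata) > 0 && filedata[1] == "") {
    # dummy first line; match the number of columns in your data
    writeLines(c("0,0", filedata), sub("\\.csv$", "_fixed.csv", f))
  }
}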
I believe that the two functions that follow can do what you want/need.
First, a function to determine which files have a blank second line.
second_blank <- function(path = ".", pattern = "\\.csv"){
  fls <- list.files(path = path, pattern = pattern)
  second <- sapply(fls, function(f) readLines(f, n = 2)[2])
  which(nchar(gsub(",", "", second)) == 0)
}
Then, a function to read in the files with such lines, one at a time. Note that I assume the first line is the column header and that at least the second line is left blank. There is a dots argument, ..., for you to pass other arguments to read.table, such as stringsAsFactors = FALSE.
skip_blank <- function(file, ...){
  header <- readLines(file, n = 1)
  header <- strsplit(header, ",")[[1]]
  count <- 1L
  while(TRUE){
    txt <- scan(file, what = "character", skip = count, nlines = 1,
                blank.lines.skip = FALSE, quiet = TRUE)
    if(length(txt) > 0 && any(nchar(gsub(",", "", txt)) > 0)) break
    count <- count + 1L
  }
  dat <- read.table(file, skip = count, header = FALSE, sep = ",", dec = ".", fill = TRUE, ...)
  names(dat) <- header
  dat
}
Now, an example usage.
second_blank(pattern = "csv") # a first run as an example usage
inx <- second_blank() # this will be needed later
fl_names <- list.files(pattern = "\\.csv") # get all the CSV files
df_list <- lapply(fl_names[inx], skip_blank) # read the problem ones
names(df_list) <- fl_names[inx] # tidy up the result list
df_list

Using lapply to apply a function over read-in list of files and saving output as new list of files

I'm quite new at R and a bit stuck on what I feel is likely a common operation. I have a number of files (57, with ~1.5 billion rows cumulatively, by 6 columns) that I need to perform basic functions on. I can read these files in and perform the calculations I need without problems, but I'm tripping up on the final output. I envision the function working on one file at a time, outputting the worked file and moving on to the next.
After the calculations I would like to output 57 new .txt files, each named after the file its input data came from. So far I can perform the calculations on smaller test datasets and spit out one appended .txt file, but this isn't what I want as the final output.
#list filenames
files <- list.files(path=, pattern="*.txt", full.names=TRUE, recursive=FALSE)
#begin looping process
loop_output = lapply(files, function(x) {
  #Load 'x' file in
  DF <- read.table(x, header = FALSE, sep = "\t")
  #Call calculated height average a name
  R_ref = 1647.038203
  #Add column names to .las data
  colnames(DF) <- c("X","Y","Z","I","A","FC")
  #Calculate return
  DF$R_calc <- (R_ref - DF$Z)/cos(DF$A*pi/180)
  #Calculate intensity
  DF$Ir_calc <- DF$I * (DF$R_calc^2/R_ref^2)
  #Output new .txt with calculated columns
  write.table(DF, file=, row.names = FALSE, col.names = FALSE, append = TRUE, fileEncoding = "UTF-8")
})
My latest code endeavors have been to mess around with the initial lapply/sapply call, like so:
#begin looping process
loop_output = sapply(names(files),
function(x) {
As well as the output line:
#Output new .csv with calculated columns
write.table(DF, file=paste0(names(DF), "txt", sep="."),
row.names = FALSE, col.names = FALSE, append = TRUE,fileEncoding = "UTF-8")
From what I've been reading, the file naming during the write.table output may be one of the pieces I don't have fully aligned with the rest of the script. I've viewed a lot of other questions that seemed applicable:
Using lapply to apply a function over list of data frames and saving output to files with different names
Write list of data.frames to separate CSV files with lapply
with no luck. I deeply appreciate any insights or paths toward the right direction for inputting x number of files, performing the same function on each, and then outputting the same x number of files. Thank you.
The reason the output is directed to the same file is probably that file = paste0(names(DF), "txt", sep=".") returns the same value for every iteration. That is, DF must have the same column names in every iteration, therefore names(DF) will be the same, and paste0(names(DF), "txt", sep=".") will be the same. Along with the append = TRUE option the result is that all output is written to the same file.
Inside the anonymous function, x is the name of the input file. Instead of using names(DF) as a basis for the output file name you could do some transformation of this character string.
An example:
Given
x <- "/foo/raw_data.csv"
Inside the function you could do something like this
infile <- x
outfile <- file.path(dirname(infile), gsub('raw', 'clean', basename(infile)))
outfile
[1] "/foo/clean_data.csv"
Then use the new name for output, with append = FALSE (unless you need it to be true)
write.table(DF, file = outfile, row.names = FALSE, col.names = FALSE, append = FALSE, fileEncoding = "UTF-8")
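If you prefer to keep the input name and just append a suffix, base R's tools::file_path_sans_ext is handy. A sketch (the _calc suffix is only an illustration):

infile <- "/foo/raw_data.txt"
outfile <- file.path(dirname(infile),
                     paste0(tools::file_path_sans_ext(basename(infile)), "_calc.txt"))
outfile
[1] "/foo/raw_data_calc.txt"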
Using your code, this is the general idea:
require(purrr)
#list filenames
files <- list.files(path=, pattern="*.txt", full.names=TRUE, recursive=FALSE)
#Call calculated height average a name
R_ref= 1647.038203
dfTransform <- function(file){
  colnames(file) <- c("X","Y","Z","I","A","FC")
  #Calculate return
  file$R_calc <- (R_ref - file$Z)/cos(file$A*pi/180)
  #Calculate intensity
  file$Ir_calc <- file$I * (file$R_calc^2/R_ref^2)
  return(file)
}
# one output file per input file; the _calc suffix is just one naming choice
out_names <- sub("\\.txt$", "_calc.txt", files)
output <- files %>% map(read.table, header = FALSE, sep = "\t") %>%
  map(dfTransform) %>%
  walk2(out_names, write.table,
        row.names = FALSE, col.names = FALSE, fileEncoding = "UTF-8")

R script for extracting rows from several text files

I have 900 text files in my directory.
Each file consists of data in the following format:
667869 667869.000000
580083 580083.000000
316133 316133.000000
11065 11065.000000
I would like to extract the fourth row from each text file and store the values in an array. Any suggestions are welcome.
This sounds more like a StackOverflow question, similar to
Importing multiple .csv files into R
You can try something like:
setwd("/path/to/files")
files <- list.files(path = getwd(), recursive = FALSE)
head(files)
myfiles = lapply(files, function(x) read.csv(file = x, header = TRUE))
mydata = lapply(myfiles, FUN = function(df){df[4,]})
str(mydata)
do.call(rbind, mydata)
A lazy answer is:
array <- c()
for (file in dir()) {
  row4 <- read.table(file,
                     header = FALSE,
                     row.names = NULL,
                     skip = 3,   # skip the first 3 rows
                     nrows = 1,  # read only the next row after skipping
                     sep = "\t") # change the separator if it is not "\t"
  array <- cbind(array, row4)
}
You can additionally keep the names of the files:
colnames(array) <- dir()
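As a compact alternative, scan can jump straight to the fourth line of each file. A sketch assuming whitespace-separated numeric columns, keeping the first value per file:

vals <- vapply(dir(), function(f)
  scan(f, skip = 3, nlines = 1, quiet = TRUE)[1],
  numeric(1))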
