How do I create a loop in a file path? - r

UPDATE
Thanks for the suggestions. This is how far I got, but I still can't figure out how to get the loop to work within the file path name.
setwd("//tsclient/C/Users/xxx")
folders <- list.files("TEST")
--> This gives me a list of my folder names
for(f in folders){
setwd("//tsclient/C/xxx/[f]")
files[f] <- list.files("//tsclient/C/Users/xxx/TEST/[f]", pattern="*.TXT")
mergedfile[f] <- do.call(rbind, lapply(files[f], read.table))
write.table(mergedfile[f], "//tsclient/C/Users/xxx/[f].txt", sep="\t")
}
I have around 100 folders, each containing multiple txt files. I want to create 1 merged file per folder and save that elsewhere. However, I do not want to manually adapt the folder name in my code for each folder.
I created the following code to load in all files from a single folder (which works) and merge these files.
setwd("//tsclient/C/xxx")
files <- list.files("//tsclient/C/Users/foldername", pattern="*.TXT")
file.list <- lapply(files, read.table)
setattr(file.list, "names", files)
masterfilesales <- rbindlist(file.list, idcol="id")[, id := substr(id,1,4)]
write.table(masterfilesales, "//tsclient/C/Users/xxx/datasets/foldername.txt", sep="\t")
If I wanted to do this manually, I would have to adapt "foldername" every time. The folder names are numeric: 100 values between 2500 and 5000 (always 4 digits).
I looked into repeat loops, but I couldn't get one to work inside a file path.
If anyone could point me in the right direction, I would be very grateful.
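For reference, a minimal sketch of the loop the update is reaching for, assuming the folders live under //tsclient/C/Users/xxx/TEST and the merged files go to //tsclient/C/Users/xxx/datasets (both paths taken from the code above). The key point is that the loop variable has to be spliced into the path with file.path() or paste0(); "[f]" inside a quoted string is taken literally and is never substituted:

library(data.table)

base.dir <- "//tsclient/C/Users/xxx/TEST"      # folder of folders, from the question
out.dir  <- "//tsclient/C/Users/xxx/datasets"  # destination, from the question

for (f in list.files(base.dir)) {
  # file.path() splices the loop variable into the path
  txt.files <- list.files(file.path(base.dir, f), pattern = "\\.TXT$",
                          full.names = TRUE)
  merged <- rbindlist(lapply(txt.files, read.table))
  # one merged file per folder, named after the folder
  write.table(merged, file.path(out.dir, paste0(f, ".txt")), sep = "\t")
}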

Related

reading multiple csv files using data.table doesn't work when given files path, possible bug?

I want to read multiple csv files, taking only two columns from each. So my code is this:
library(data.table)
files <- list.files(pattern="C:\\Users\\XYZ\\PROJECT\\NAME\\venv\\RawCSV_firstBatch\\*.csv")
temp <- lapply(files, function(x) fread(x, select = c("screenNames", "retweetUserScreenName")))
data <- rbindlist(temp)
This yields character(0). However, when I move those csv files out to where my script is and change files to this:
files <- list.files(pattern="*.csv")
#....
My dir() output is this:
[1] "adjaceny_list.R" "cleanusrnms_firstbatch"
[3] "RawCSV_firstBatch" "username_cutter.py"
everything gets read. Could you help me track down what exactly is going on? The folder that contains these csv files is in the same directory as the script, so even with pattern="RawCSV_firstBatch\\*.csv" I get the same problem.
EDIT:
I also tried:
files <- list.files(path="C:\\Users\\XYZ\\PROJECT\\NAME\\venv\\RawCSV_firstBatch\\",pattern="*.csv")
#and
files <- list.files(pattern="C:/Users/XYZ/PROJECT/NAME/venv/RawCSV_firstBatch/*.csv")
Both yielded an empty result.
#NelsonGon mentioned a workaround:

Do something like list.files("./path/folder", pattern="*.csv$"), using .. or . as required (not sure about using the actual path). You can also utilise ~.

So that works, thank you. (Sorry, there's a 2-day limit before I can tick this as the answer.)
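What trips this up: the pattern argument of list.files() is a regular expression matched against the file names only, never against a path, so a full path in pattern can never match anything. A glob like *.csv is not a proper regex either (it happens to be tolerated, but "\\.csv$" is the correct form). The directory belongs in path, and full.names = TRUE returns paths that fread() can open from any working directory. A minimal sketch under those assumptions:

library(data.table)

# path = the directory to search; pattern = a regex on the file names themselves
files <- list.files(path = "C:/Users/XYZ/PROJECT/NAME/venv/RawCSV_firstBatch",
                    pattern = "\\.csv$", full.names = TRUE)
temp <- lapply(files, function(x)
  fread(x, select = c("screenNames", "retweetUserScreenName")))
data <- rbindlist(temp)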

R script to open folders then identify a file, rename it, and read it

I have recently learned to code in R, and I sort of manage to handle the data within files, but I can't get R to manipulate the files themselves. Here is my problem:
I'd like to open successively, in my working directory "Laurent/R", the 3 folders that are within it ("gene_1", "gene_2", "gene_3").
In each folder, I want one specific .csv file (the one containing the word "Cq") to be renamed "gene_x_Cq", and then to move these three renamed files into a new folder (is that necessary?).
I then want to be able to open these three .csv files successively (with read.csv, I suppose) to manipulate the data within them.
I've looked at functions like list.files, unlist, and file.rename; I'm sure they are appropriate, but I can't figure out how to use them in my case.
Can anyone help? (I use a Mac.)
Thanks
Laurent
Here's a potential solution. If you don't understand something, just shout out and ask!
setwd("Your own file path/Laurent")
library(stringr)
# list all .csv files
csvfiles <- list.files(recursive = T, pattern = "\\.csv")
csvfiles
# Pick out files that have cq in them, ensuring that you ignore uppercase/lowercase
cq.files <- csvfiles[str_detect(csvfiles, fixed("cq", ignore_case = T))]
# Get gene number for both files - using "2" here because gene folder is at the second level in the file path
gene.nb <- str_sub(word(cq.files, 2, 2, sep = "/"), 6, 6)
gene.nb
# create a new folder to place new files into
dir.create("R/genefiles")
# This will copy files, not move them. To move them, use file.rename - but be careful, I'd try file.copy first.
copied <- file.copy(cq.files,
                    paste0("R/genefiles/gene_", gene.nb, "_", "Cq", ".csv"))
# Now to work with all files in the new folder
library(purrr)
genefiles <- list.files("R/genefiles", full.names = T)
# This will bring in all data into one dataframe. If you want them brought in as separate dataframes,
# use something like gene1 <- read.csv("R/genefiles/gene_1_Cq.csv")
files <- map_dfr(genefiles, read.csv)
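If separate data frames per gene are preferred instead, a named list keeps them apart without cluttering the workspace; a small sketch reusing genefiles from above:

# read each file into its own element of a named list
gene.data <- setNames(lapply(genefiles, read.csv),
                      tools::file_path_sans_ext(basename(genefiles)))
gene.data$gene_1_Cq  # each gene's data stays its own data frame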

How do I read through multiple files in different folders and store them separately based on the folder from which they've been retrieved?

The main idea is that I now have two folders/paths on my local machine. In each folder, I have multiple csv files I want to read into R. However, instead of appending them all into one file, I want all folder1 files in file1 and all folder2 files in file2. I only know how to append them all together, not how to append them into two separate files. Below is my code so far.
library(data.table)  # for fread()
dirs <- list("path/folder1", "path/folder2")
data <- list()
for (dir in dirs) {
  ## read in the list of files in each folder
  flist <- list.files(path = dir, pattern = "\\.csv$")
  ## a second for loop to read through what's inside each folder
  for (file in flist) {
    message("working on ", file)
    indata <- fread(file.path(dir, file))  # file.path() adds the "/" separator
    data <- rbind(data, indata)
  }
}
So far, data keeps everything in one object. How do I make it save them into two separate files?
The quickest option I can think of is to use data[[dir]] to make each directory's data its own element of the data list. Then you can access them with data$`path/folder1` etc.
dirs <- list("path/folder1", "path/folder2")
data <- list()
for (dir in dirs) {
  ## read in the list of files in each folder
  flist <- list.files(path = dir, pattern = "\\.csv$")
  ## a second for loop to read through what's inside each folder
  for (file in flist) {
    message("working on ", file)
    indata <- fread(file.path(dir, file))
    data[[dir]] <- rbind(data[[dir]], indata)
  }
}
(However, it might be much nicer (and faster) to use lapply instead of for loops)
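For reference, a hedged sketch of that lapply-style version, assuming the same dirs and that fread comes from data.table; it returns one combined table per directory in a named list:

library(data.table)

dirs <- c("path/folder1", "path/folder2")
# one combined table per directory; simplify = FALSE keeps a named list
data <- sapply(dirs, function(dir) {
  flist <- list.files(path = dir, pattern = "\\.csv$", full.names = TRUE)
  rbindlist(lapply(flist, fread))
}, simplify = FALSE)
# access as data$`path/folder1` or data[["path/folder2"]]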
You could assign the read-in files to new R objects named by folder number. I changed list() to c() for dirs for easier indexing with assign(), and moved data <- list() into the first loop so it is reset after each folder is completed.
dirs <- c("path/folder1", "path/folder2")
for (dir in 1:length(dirs)) {
  ## read in the list of files in each folder
  flist <- list.files(path = dirs[dir], pattern = "\\.csv$")
  data <- list()
  ## a second for loop to read through what's inside each folder
  for (file in flist) {
    message("working on ", file)
    indata <- read.csv(file.path(dirs[dir], file))
    data <- rbind(data, indata)
    assign(paste0("data_", dir), data)
  }
}
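As a small follow-up, the data_1, data_2, ... objects created with assign() can be gathered back into one named list with base R's mget():

# collect data_1 and data_2 back into a single named list
all_data <- mget(paste0("data_", seq_along(dirs)))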

To stack up results in one masterfile in R

Using this script, I have created a specific folder for each csv file and then saved all my further analysis results in that folder. The folder and the csv file share the same name. The csv files are stored in the main/master directory.
Now, I have created a csv file in each of these folders which contains a list of all the fitted values.
I would now like to do the following:
1. Set the working directory to the particular filename.
2. Read the fitted-values file.
3. Add a row/column stating the name of the site/unique ID.
4. Add it to the masterfile stored in the main directory, with a title specifying the site name/filename (it can be stacked by rows or by columns, it doesn't really matter).
5. Come back to the main directory to pick the next file.
6. Repeat the loop.
Using merge(), rbind(), or cbind() combines all the data under one set of column names. I want to keep all the sites separate for comparison at a later stage.
This is what I'm using at the moment and I'm lost on how to proceed further.
setwd( "path") # main directory
path <-"path" # need this for convenience while switching back to main directory
# import all files and create a character type array
files <- list.files(path=path, pattern="*.csv")
for(i in seq(1, length(files), by = 1)){
fileName <- read.csv(files[i]) # repeat to set the required working directory
base <- strsplit(files[i], ".csv")[[1]] # getting the filename
setwd(file.path(path, base)) # setting the working directory to the same filename
master <- read.csv(paste(base,"_fiited_values curve.csv"))
# read the fitted value csv file for the site and store it in a list
}
I want to construct a for loop to make one master file with the files in different directories. I do not want to merge all under one column name.
For example, if I have 50 similar csv files, each with two columns of data, I would like one csv file that accommodates all of them in their original format, side by side rather than appended to the existing rows/columns; I will then have 100 columns of data.
Please tell me what further information I can provide.
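A sketch of one way to finish the loop the question starts, assuming the layout described above (each site's folder sits in the main directory and holds a fitted-values csv, with the file name built exactly as in the question's code), that every site's file has the same number of rows (which cbind() requires), and a hypothetical masterfile.csv output name:

path  <- "path"                     # main directory, as in the question
files <- list.files(path = path, pattern = "\\.csv$")
sites <- sub("\\.csv$", "", files)  # folder names match the csv names

fitted.list <- lapply(sites, function(base) {
  # file name built exactly as in the question's code
  f <- read.csv(file.path(path, base, paste(base, "_fiited_values curve.csv")))
  # prefix column names with the site ID so every site keeps its own columns
  setNames(f, paste(base, names(f), sep = "_"))
})
master <- do.call(cbind, fitted.list)  # 50 sites x 2 columns = 100 columns
write.csv(master, file.path(path, "masterfile.csv"), row.names = FALSE)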
For reading a group of files from a number of different directories, with pathnames patha, pathb, and pathc:
paths = c('patha','pathb','pathc')
files = unlist(sapply(paths, function(path) list.files(path,pattern = "*.csv", full.names = TRUE)))
listContainingAllFiles = lapply(files, read.csv)
If you want to be really quick about it, you can grab fread from data.table:
library(data.table)
listContainingAllFiles = lapply(files, fread)
Either way this will give you a list of all objects, kept separate. If you want to join them together vertically/horizontally, then:
do.call(rbind, listContainingAllFiles)
do.call(cbind, listContainingAllFiles)
EDIT: Note that the latter makes no sense unless your rows actually correspond to one another. It makes far more sense to just create a field tracking which location the data came from.
If you want to include the names of the files as the way of determining sample location (I don't see where you're getting this info from in your example), then you want to do this as you read in the files, so:
listContainingAllFiles = lapply(files,
                                function(file) data.frame(filename = file,
                                                          read.csv(file)))
Then later you can split that column to get your details (assuming, of course, that you have a standard naming convention), as in the sketch below.
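For instance, assuming a hypothetical "<site>_<rest>.csv" naming convention, the location can be recovered from that column like so:

combined <- do.call(rbind, listContainingAllFiles)
# keep everything before the first underscore of the bare file name
combined$site <- sub("_.*$", "", basename(as.character(combined$filename)))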

How to use R to Iterate through Subfolders and bind CSV files of the same ID?

I am stuck. I need a way to iterate through a bunch of subfolders in a directory, pull out 4 .csv files, bind the contents of those 4 .csv files, then write out the new .csv to a new directory using the name of the initial subfolder as the name of the new .csv.
I know R can do this, but I am stuck at how to iterate across the subfolders and bind the csv files together. My obstacle is that each subfolder contains the same 4 .csv files using the same 8-digit ids. For example, subfolder A contains 09061234.csv, 09061345.csv, 09061456.csv, and 09061560.csv; subfolder B contains 09061234.csv, 09061345.csv, 09061456.csv, and 09061560.csv; (...). There are 42 subfolders, and hence 168 csv files with the same names. I want to compact the files down to 42.
I can use list.files to retrieve all the subfolders. But then what?
##Get files from directory
TF <- "H:/working/TC/TMS/Counts/June09"
##List subfolders
SF <- list.files(TF)
##List of file names inside the folders
FN <- list.files(file.path(TF, SF))
#Returns a list of 168 filenames

###?????###
#How to iterate through each subfolder, read each 8-digit integer id file,
#bind them all together into one single csv,
#then write to a new directory using
#the name of the subfolder as the name of the new csv?
There is probably a way to do this easily, but I am a noob with R. Something involving functions, paste, and write.table perhaps? Any hints/help/suggestions are greatly appreciated. Thanks!
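For reference, a minimal sketch of the per-subfolder merge the question describes (one output csv per subfolder, named after it), assuming the four files in each subfolder share the same columns and reusing the output directory that appears later in this thread:

TF  <- "H:/working/TC/TMS/Counts/June09"
out <- "H:/working/TC/TMS/June09Output"

for (sf in list.dirs(TF, full.names = FALSE, recursive = FALSE)) {
  csvs   <- list.files(file.path(TF, sf), pattern = "\\.csv$", full.names = TRUE)
  merged <- do.call(rbind, lapply(csvs, read.csv))
  # the merged file takes the name of the subfolder it came from
  write.csv(merged, file.path(out, paste0(sf, ".csv")), row.names = FALSE)
}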
You can use the recursive=T option for list.files:
lapply(c('1234', '1345', '1456', '1560'), function(x) {
  sources.files <- list.files(path = TF,
                              recursive = T,
                              pattern = paste('0906', x, '\\.csv$', sep = ''),
                              full.names = T)
  ## read all files with this id and bind them
  dat <- do.call(rbind, lapply(sources.files, read.csv))
  ### write the aggregated file for this id
  write.csv(dat, paste('agg', x, '.csv', sep = ''), row.names = FALSE)
})
After some tweaking of agstudy's code, I came up with the solution I was ultimately after. There were a couple of missing pieces that are more due to the nature of my specific problem, so I am leaving agstudy's answer as "accepted".
Turns out a function really wasn't needed, at least not for now. If I need to perform this same task again, I will turn it into a function.
Also, for my instance, I needed to handle any non-csv files that might live in the subfolders; restricting the file pattern to names ending in ".csv" makes R skip any files that are not comma-separated.
Code:
##Define the list of file ids to search for##
x <- c('1234', '1345', '1456', '1560')
##Define directory path##
TF <- "H:/working/TC/TMS/Counts/June09"
##List subfolder files whose names start with "0906", contain one of the ids,
##and end in ".csv" -- this skips over the non-csv (e.g. .xls) files in each folder##
sources.files <- list.files(TF, recursive = T, full.names = T,
                            pattern = paste0("0906(", paste(x, collapse = "|"), ")\\.csv$"))
dat <- do.call(rbind, lapply(sources.files, read.csv))
write.table(dat, file = "H:/working/TC/TMS/June09Output/June09Batched.csv",
            row.names = FALSE, sep = ",")
