R - Use Glob Pattern to Extract text files from multiple directories

I'm trying to work out a way of extracting text files from multiple directories using fs::dir_ls and vroom.
The directory structure is essentially M:/instrument/project/experiment/measurements/time_stamp/raw_data/files*.txt.
Ideally, I want to be able to define the path down to the experiment level and then let the pattern take care of the rest, for example:
fs::dir_ls(path = "M:/instrument/project/", glob = "experiment_*/2021-04-11*/raw_data/files*.txt", invert = TRUE, recurse = TRUE)
so that I'm reading in all the .txt files across multiple experiment directories in one go. However, when I try this approach it returns all the files from the project level rather than only those described by the pattern.
I've looked through the other SO questions on the topic covered here: Pattern matching using a wildcard, R list files with multiple conditions, list.files pattern argument in R, extended regular expression use, and grep using a character vector with multiple patterns, but haven't been able to apply them to my particular problem.
Any help is appreciated, I realise the answer is likely staring me in the face, I just need help seeing it.
Thanks

You can try the following with list.files (note that pattern takes a regular expression rather than a glob, so 'arpe19*' would match "arpe1" followed by any number of 9s; '^arpe19.*\\.txt$' is the intended pattern):
files <- list.files('M:/Operetta/LED_Wound/operetta_export/plate_variability[540]/robot_seed_wide_plate_1[1614]/2021-05-10T113438+0100[1764]/SC_data', pattern = '^arpe19.*\\.txt$')
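Coming back to the fs::dir_ls call from the question, here is a minimal, self-contained sketch (toy directory names; assumes the fs package is installed). Two changes from the call in the question: the glob is matched against the full path, so it needs a leading "*/" to cover the path prefix, and invert = TRUE is dropped, since it returns the files that do NOT match the glob.

```r
library(fs)

# Build a toy tree mirroring project/experiment/time_stamp/raw_data
root <- path(tempdir(), "project")
dir_create(path(root, "experiment_1", "2021-04-11T1200", "raw_data"))
dir_create(path(root, "experiment_2", "2021-04-11T1300", "raw_data"))
file_create(path(root, "experiment_1", "2021-04-11T1200", "raw_data", "files_a.txt"))
file_create(path(root, "experiment_2", "2021-04-11T1300", "raw_data", "files_b.txt"))
file_create(path(root, "stray.txt"))  # at the project level; should not match

# glob is matched against the full path, hence the leading "*/"
txt_files <- dir_ls(root,
                    recurse = TRUE,
                    glob = "*/experiment_*/2021-04-11*/raw_data/files*.txt")
length(txt_files)  # 2
```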

Related

Using R to read all files in a specific format and with specific extension

I want to read all the files in xlsx format whose names start with the string "csmom". I have used the list.files function, but I do not know how to combine the two patterns. Please see the code: I want to read all the files starting with the csmom string, and they should all be in .xlsx format.
master1<-list.files(path = "C:/Users/Admin/Documents/csmomentum funal",pattern="^csmom")
master2<-list.files(path = "C:/Users/Admin/Documents/csmomentum funal",pattern="\\.xlsx$")
@jay.sf's solution works for creating a single regular expression that pulls out exactly the files you want.
However, generally speaking if you want to cross two lists to find the subset of elements that are contained in both (in your case the files that satisfy both conditions), you can use intersect().
intersect(master1, master2)
This will show you all the files that satisfy both pattern 1 and pattern 2.
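To make intersect() concrete, here is a toy illustration with made-up file names standing in for the two list.files() results:

```r
# Made-up file names standing in for the directory contents
all_files <- c("csmom_jan.xlsx", "csmom_feb.xlsx", "other.xlsx", "csmom_notes.txt")

master1 <- grep("^csmom", all_files, value = TRUE)    # starts with csmom
master2 <- grep("\\.xlsx$", all_files, value = TRUE)  # ends with .xlsx

intersect(master1, master2)
#> [1] "csmom_jan.xlsx" "csmom_feb.xlsx"

# the equivalent single-pattern approach combines both conditions in one regex
grep("^csmom.*\\.xlsx$", all_files, value = TRUE)
```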

Reading multiple csv files from a folder with R using regex

I wish to use R to read multiple csv files from a single folder. If I wanted to read every csv file I could use:
list.files(folder, pattern="*.csv")
See, for example, these questions:
Reading multiple csv files from a folder into a single dataframe in R
Importing multiple .csv files into R
However, I only wish to read one of four subsets of the files at a time. Below is an example grouping of four files each for three models.
JS.N_Nov6_2017_model220_N200.csv
JS.N_Nov6_2017_model221_N200.csv
JS.N_Nov6_2017_model222_N200.csv
my.IDs.alt_Nov6_2017_model220_N200.csv
my.IDs.alt_Nov6_2017_model221_N200.csv
my.IDs.alt_Nov6_2017_model222_N200.csv
parms_Nov6_2017_model220_N200.csv
parms_Nov6_2017_model221_N200.csv
parms_Nov6_2017_model222_N200.csv
supN_Nov6_2017_model220_N200.csv
supN_Nov6_2017_model221_N200.csv
supN_Nov6_2017_model222_N200.csv
If I only wish to read, for example, the parms files I try the following, which does not work:
list.files(folder, pattern="parm*.csv")
I am assuming that I may need to use regex to read a given group of the four groups present, but I do not know.
How can I read each of the four groups separately?
EDIT
I am unsure whether I would have been able to obtain the solution from answers to this question:
Listing all files matching a full-path pattern in R
I may have had to spend a fair bit of time brushing up on regex to apply those answers to my problem. The answer provided below by Mako212 is outstanding.
A quick REGEX 101 explanation:
For the case of matching the beginning and end of the string, which is all you need to do here, the following principles apply to match files that are .csv and start with parm:
list.files(folder, pattern="^parm.*?\\.csv")
^ asserts we're at the beginning of the string, so ^parm means match parm, but only if it's at the beginning of the string.
.*? means match anything up until the next part of the pattern matches. In this case, match until we see a period \\.
. means match any character in REGEX, so we need to escape it with \\ to match the literal . (note that in R you need the double escape \\; in other languages a single escape \ is sufficient).
Finally, csv means match csv after the .. If we were going to be really thorough, we might use \\.csv$, using the $ to indicate the end of the string. You'd need the dollar sign if you had other files with an extension like .csv2: \\.csv would match .csv2, whereas \\.csv$ would not.
In your case, you could simply replace parm in the REGEX pattern with JS, my, or supN to select one of your other file types.
Finally, if you wanted to match a subset of your total file list, you could use the | logical "or" operator:
list.files(folder, pattern = "^(parm|JS|supN).*?\\.csv")
which would return all the file names except the ones that start with my.
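Applied to the file names listed in the question (one model shown per group for brevity), the patterns behave like this:

```r
files <- c("JS.N_Nov6_2017_model220_N200.csv",
           "my.IDs.alt_Nov6_2017_model220_N200.csv",
           "parms_Nov6_2017_model220_N200.csv",
           "supN_Nov6_2017_model220_N200.csv")

# select one group
grep("^parm.*?\\.csv", files, value = TRUE)
#> [1] "parms_Nov6_2017_model220_N200.csv"

# select three of the four groups with alternation
grep("^(parm|JS|supN).*?\\.csv", files, value = TRUE)  # everything except my.IDs.alt
```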
The list.files statement shown in the question is using globs but list.files accepts regular expressions, not globs.
Sys.glob: to use globs, use Sys.glob like this:
olddir <- setwd(folder)
parm <- lapply(Sys.glob("parm*.csv"), read.csv)
parm is now a list of data frames read in from those files.
glob2rx: note that the glob2rx function can be used to convert globs to regular expressions:
parm <- lapply(list.files(folder, pattern = glob2rx("parm*.csv")), read.csv)
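For instance, glob2rx translates the glob into an anchored regex, so the wildcard can't accidentally match extra text before or after:

```r
# glob2rx escapes the dot, turns * into .*, and anchors both ends
glob2rx("parm*.csv")
#> [1] "^parm.*\\.csv$"
```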

Using index number of file in directory

I'm using the list.files function in R. I know how to tell it to access all files in a directory, such as:
list.files("directory", full.names=TRUE)
But I don't really know how to subset the directory. If I just want list.files to list the 2nd, 5th, and 6th files in the directory, is there a way to tell list.files to only list those files? I've been thinking about whether it's possible to use the files' indices within the directory but I can't figure out how to do it. It's okay if I can only do this with consecutive files (such as 1:3) but non-consecutive would be even better.
The context of the question is that this is for a problem for a class, so I'm not worried about the files in the directory changing or being deleted.
If you store the result of list.files in an object, say object, you can see that it is just an atomic vector of class character (nothing more, nothing less!). You can subset it with regex-based functions (like grep or grepl), with the regular subsetting operator [, or (most importantly) by combining both techniques.
For your example:
object[c(2,5,6)]
or exclude with:
object[-c(2,5,6)]
or if you want to find all names that start with the shuttle string with:
object[grepl("^shuttle", object)]
or with the following code if you want to find all .csv files:
object[grepl("\\.csv$", object)]
The possibilities are huge.
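A toy illustration of all three techniques, with made-up file names:

```r
object <- c("data1.csv", "shuttle1.csv", "notes.txt",
            "readme.md", "shuttle2.csv", "log.csv")

# subset by position (2nd, 5th and 6th entries)
object[c(2, 5, 6)]
#> [1] "shuttle1.csv" "shuttle2.csv" "log.csv"

# exclude by position
object[-c(2, 5, 6)]
#> [1] "data1.csv" "notes.txt" "readme.md"

# subset by regex: names starting with "shuttle"
object[grepl("^shuttle", object)]
#> [1] "shuttle1.csv" "shuttle2.csv"

# subset by regex: .csv files
object[grepl("\\.csv$", object)]
```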

Using R to list all files with a specified extension

I'm very new to R and am working on updating an R script to iterate through a series of .dbf tables created using ArcGIS and produce a series of graphs.
I have a directory, C:\Scratch, that will contain all of my .dbf files. However, when ArcGIS creates these tables, it also includes a .dbf.xml file. I want to remove these .dbf.xml files from my file list and thus my iteration. I've tried searching and experimenting with regular expressions to no avail. This is the basic expression I'm using (Excluding all of the various experimentation):
files <- list.files(pattern = "dbf")
Can anyone give me some direction?
files <- list.files(pattern = "\\.dbf$")
$ at the end means the end of the string. "dbf$" will work too, but adding \\. (. is a special character in regular expressions, so you need to escape it) ensures that you match only files with the extension .dbf (in case you have e.g. .adbf files).
Try this which uses globs rather than regular expressions so it will only pick out the file names that end in .dbf
filenames <- Sys.glob("*.dbf")
Peg the pattern to find "\\.dbf" at the end of the string using the $ character:
list.files(pattern = "\\.dbf$")
Gives you the list of files with full path:
Sys.glob(file.path(file_dir, "*.dbf")) ## file_dir = directory containing the files
I am not very good in using sophisticated regular expressions, so I'd do such task in the following way:
files <- list.files()
dbf.files <- files[-grep(".xml", files, fixed=TRUE)]
The first line just lists all files from the working dir. The second one drops everything containing ".xml" (grep returns the indices of such strings in the files vector; subsetting with negative indices removes the corresponding entries).
The fixed argument for grep is just my whim, as I usually want it to perform crude pattern matching without Perl-style fancy regexes, which may cause surprises for me.
I'm aware that such solution simply reflects drawbacks in my education, but for a novice it may be useful =) at least it's easy.
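One caveat with the negative-grep approach, sketched on made-up names: if no element matches, grep() returns integer(0), and subsetting with -integer(0) drops everything instead of nothing.

```r
files <- c("roads.dbf", "roads.dbf.xml", "rivers.dbf")
files[-grep(".xml", files, fixed = TRUE)]
#> [1] "roads.dbf" "rivers.dbf"

# but with no .xml entries present at all...
clean <- c("roads.dbf", "rivers.dbf")
clean[-grep(".xml", clean, fixed = TRUE)]
#> character(0)   # empty, not the full vector!

# grepl with negation handles the no-match case safely
clean[!grepl(".xml", clean, fixed = TRUE)]
#> [1] "roads.dbf" "rivers.dbf"
```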

using R to copy files

As part of a larger task performed in R run under windows, I would like to copy selected files between directories. Is it possible to give within R a command like cp patha/filea*.csv pathb (notice the wildcard, for extra spice)?
I don't think there is a direct way (shy of shelling-out), but something like the following usually works for me.
flist <- list.files("patha", "^filea.+[.]csv$", full.names = TRUE)
file.copy(flist, "pathb")
Notes:
I purposely decomposed in two steps, they can be combined.
Note the regular expression: R uses true regexes, and it also separates the file pattern from the path, in two separate arguments.
note the ^ and $ (beginning/end of string) in the regex -- this is a common gotcha: they are implicit in wildcard-type patterns but must be written explicitly in regexes (lest file names which match the wildcard pattern but also start and/or end with additional text be selected as well).
In the Windows world, people will typically add the ignore.case = TRUE argument to list.files, in order to emulate the fact that directory searches are case insensitive with this OS.
R's glob2rx() function provides a convenient way to convert wildcard patterns to regular expressions. For example fpattern = glob2rx('filea*.csv') returns a different but equivalent regex.
You can:
- use system() to fire off a command as if it were on the shell, including globbing
- use list.files() aka dir() to do the globbing / regex matching yourself and then copy the files individually
- use file.copy on individual files, as shown in mjv's answer
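The list.files / file.copy route can be sketched self-contained, using temporary directories in place of patha and pathb (file names are made up):

```r
patha <- file.path(tempdir(), "patha")
pathb <- file.path(tempdir(), "pathb")
dir.create(patha, showWarnings = FALSE)
dir.create(pathb, showWarnings = FALSE)
file.create(file.path(patha, c("filea1.csv", "filea2.csv", "other.csv")))

# anchored regex: only names starting with "filea" and ending in ".csv"
flist <- list.files(patha, "^filea.+[.]csv$", full.names = TRUE)
file.copy(flist, pathb)

list.files(pathb)
#> [1] "filea1.csv" "filea2.csv"
```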
