Uploading multiple pictures to R

I'm trying to upload multiple images to do some machine learning in R. I can upload a single image just fine, but when I try to upload multiple images using either lapply or a for loop, I get the following error: "Error in wrap.url(file, load.image.internal) : File not found". I've checked that the files exist, that my working directory is set correctly, and that R recognizes both the directory and the files. No matter what I change, the error is always the same, and it makes no difference whether I give an absolute path or a path relative to the working directory. I've asked many people for help with no success. I've posted my code using a for loop and lapply below. I'm still relatively new to R, so if there is something I'm missing I'd greatly appreciate knowing. Also, I'm using imager here to load the files.
# Attempt 1: for loop
eggs2015 <- list()
file_list <- list.files(path = "~/Grad School/Thesis Work/Machine Learning R/a2015_experimental_clustering_R/*.jpg", pattern = "*.jpg", full.names = TRUE)
for (i in 1:length(file_list)){
  Path <- paste0("a2015_experimental_clustering_R", file_list[i])
  eggs2015 <- c(eggs2015, list(load.image(Path)))
}
names(eggs2015) <- file_list

# Attempt 2: lapply
eggs2015 <- list.files(path = "~/Grad School/Thesis Work/Machine Learning R/2015_experimental_clustering_R", pattern = ".jpg", all.files = TRUE, full.names = TRUE)
eggs2015 <- lapply(list, FUN = load.image("~/Grad School/Thesis Work/Machine Learning R/a2015_experimental_clustering_R/*.jpg"))
eggs2015 <- as.data.frame(eggs2015)

Personally, for this kind of operation, I prefer sapply so I can identify images by their original file names later on (if needed):
FilesToRead <- list.files(path = "~/Grad School/Thesis Work/Machine Learning R/2015_experimental_clustering_R", pattern = ".jpg", all.files = TRUE, full.names = TRUE)
ListOfImages <- sapply(FilesToRead, FUN = load.image, simplify = FALSE, USE.NAMES = TRUE)
This should work and give you a list whose elements are your images, with the file paths as names.
Or, using lapply (sapply is just a wrapper around lapply):
ListOfImages <- lapply(FilesToRead, FUN = load.image)
As you can see, your code just needed a little tweaking.
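For reference, here is a minimal fix to the original for loop (a sketch, assuming the .jpg files sit directly in the a2015_experimental_clustering_R folder): drop the wildcard from path, let list.files() return full paths, and pass those straight to load.image():
library(imager)
file_list <- list.files(path = "~/Grad School/Thesis Work/Machine Learning R/a2015_experimental_clustering_R",
                        pattern = "\\.jpg$", full.names = TRUE)
eggs2015 <- list()
for (i in seq_along(file_list)) {
  # file_list[i] is already a full path, so no extra paste0() is needed
  eggs2015[[i]] <- load.image(file_list[i])
}
names(eggs2015) <- basename(file_list)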
Hope it helps.

Related

Error in file(file, "rt") : cannot open the connection - Unsure of what to do

I am currently working through Coursera's R Programming course and have hit a bit of a snag with this assignment. I have been getting various errors (that I'm not totally sure I've nailed down), but this is a new one, and no matter what I do I can't seem to shake it.
Whenever I run the below code it comes back with
Error in file(file, "rt") : cannot open the connection
pollutantmean <- function (directory, pollutant, id){
  files <- list.files(path = directory, "/", full.names = TRUE)
  dat <- data.frame()
  dat <- sapply(file = directory, "/", read.csv)
  mean(dat["pollutant"], na.rm = TRUE)
}
I have tried numerous solutions posted here on SO for this issue, but none of them has worked. I made sure the working directory is set to the folder with all of the CSV files, and I can see all of the files in the file pane. I have also moved that working directory around a few times, since some of the suggestions were to put it on the desktop, etc., but none of that has worked. I am currently running RStudio as an admin, but that does not seem to have done anything, and I have also modified the permissions on the specdata folder to make sure there are no weird restrictions there. Any help is appreciated.
Here are two possible implementations:
# list all files in "directory", read them, combine them, then take the mean of the "pollutant" column
pollutantmean_1 <- function (directory){
  files <- list.files(path = directory, full.names = TRUE)
  dat <- lapply(files, read.csv)
  dat <- data.table::rbindlist(dat) |> as.data.frame()
  mean(dat[, 'pollutant'], na.rm = TRUE)
}

# list all files in "directory", read them, take the mean of the "pollutant" column for each file, and return the means
pollutantmean_2 <- function (directory){
  files <- list.files(path = directory, full.names = TRUE)
  dat <- lapply(files, read.csv)
  pollutant_means <- sapply(dat, function(x) mean(x[, 'pollutant'], na.rm = TRUE))
  names(pollutant_means) <- basename(files)
  pollutant_means
}
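Both can then be called the same way (a hypothetical example, assuming the course's CSV files live in a "specdata" folder under the working directory):
pollutantmean_1("specdata")   # one overall mean of the "pollutant" column
pollutantmean_2("specdata")   # one mean per file, named by file name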

How to load multiple csv files into separate objects (dataframes) in R based on filename?

I know how to load a whole folder of .csv files quite easily using:
csv_files = list.files(pattern ="*.csv")
myfiles = lapply(csv_files, read.delim, header = FALSE)
From there I can easily iterate over 'myfiles' and do whatever I wish. The problem I have is that this simply loads all the .csv files in the working directory.
What I would like to do is be able to assign the files to objects in the script based on the filename.
Say, for example, in one directory I have the files; file001, file002, file003 and exfile001, exfile002, exfile003.
I want to be able to load them in such a way that
file_object <- file...
exfile_object <- exfile...
So that when I execute the script, it does whatever I've programmed it to do for file_object (assigned as file001 in this example) and exfile_object (assigned as exfile001 in this example), then continues in this way for the rest of the files in the directory (e.g. file002, exfile002, file003, exfile003).
I know how to do it in MATLAB, but am just getting to grips with R.
I thought perhaps getting them into separate lists using the list.files function might work by just changing the working directory in the script, but that seems messy and would involve rewriting things in my case...
Thanks!
Solution for anyone curious...
files <- list.files(pattern = ".*csv")
for (file in 1:length(files)) {
  # build "file001", "exfile001", etc. from the loop counter
  file_name <- paste0("file00", file)
  ex_file_name <- paste0("exfile00", file)
  file_object <- read.csv(file = paste0(file_name, ".csv"), fileEncoding = "UTF-8-BOM")
  exfile_object <- read.csv(file = paste0(ex_file_name, ".csv"), fileEncoding = "UTF-8-BOM")
}
Essentially, build the filename within the loop, then pass it to read.csv on each iteration.
If your list of frames, myfiles, is named using this:
names(myfiles) <- gsub(".csv", "", csv_files)
then you can do
list2env(myfiles, globalenv())
to convert those individual frames to separate objects in the global environment.
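Putting those two answers together, a minimal end-to-end sketch (assuming the .csv files sit in the working directory):
csv_files <- list.files(pattern = "\\.csv$")
myfiles <- lapply(csv_files, read.delim, header = FALSE)
# name each frame after its source file, minus the extension
names(myfiles) <- gsub("\\.csv$", "", csv_files)
# promote each named frame to its own object in the global environment
list2env(myfiles, globalenv())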

Specifying pathname in map_dfr

The structure of my directory is as follows:
Extant_Data -> Data -> Raw -> course_enrollment
                           -> frpm
I have a few different functions to read in some text files and Excel files respectively.
read_fun = function(path){
  test = read.delim(path, sep = "\t", header = TRUE, fill = TRUE, colClasses = c(rep("character", 23)))
  test
}
read_fun_frpm = function(path){
  test = read_excel(path, sheet = 2, col_names = frpm_names)
}
I feed this into map_dfr so that the function reads in each of the files and rowbinds them.
allfiles = list.files(path = "Extant_Data/Data/Raw/course_enrollment",
                      pattern = "CourseEnrollment.txt",
                      full.names = FALSE,
                      recursive = T)
# Rowbind all the course enrollment data
# !!! BUT I HAVE set the working directory to a subdirectory so that it finds those files
setwd("/Extant_Data/Data/Raw/course_enrollment")
course_combined <- map_dfr(allfiles, read_fun)

allfiles = list.files(path = "Extant_Data/Data/Raw/frpm/post12",
                      pattern = "frpm*",
                      full.names = FALSE,
                      recursive = T)
# Rowbind all the frpm data
# !!! I have to change the directory AGAIN
setwd("Extant_Data/Data/Raw/frpm/post12")
frpm_combined <- map_dfr(allfiles, read_fun_frpm)
As mentioned in the comments, I have to keep changing the working directory so that map_dfr can locate the files. I don't think this is best practice; how might I work around it so I don't have to keep changing the directory? Any suggestions appreciated. Sorry, it's hard to provide a reproducible example.
Note: This throws an error.
frpm_combined <- map_dfr(allfiles,read_fun_frpm('Extant_Data/Data/Raw/frpm/post12'))
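One common way around the setwd() shuffle is to have list.files() return complete paths with full.names = TRUE, so map_dfr() hands each reader a path that already resolves from the project root. A minimal sketch under that assumption, reusing the read_fun defined above:
library(purrr)
allfiles <- list.files(path = "Extant_Data/Data/Raw/course_enrollment",
                       pattern = "CourseEnrollment.txt",
                       full.names = TRUE,   # complete paths, so no setwd() is needed
                       recursive = TRUE)
course_combined <- map_dfr(allfiles, read_fun)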

R rename files keeping part of original name

I'm trying to rename all the files in a folder (about 7,000 of them) with just a portion of their original name.
The initial fips code is a 4- or 5-digit code that identifies counties and is different for every file in the folder. The rest of the original name is the state_county_lat_lon of each file.
For example:
Original name:
"5081_Illinois_Jefferson_-88.9255_38.3024_-88.75_38.25.wth"
"7083_Illinois_Jersey_-90.3424_39.0953_-90.25_39.25.wth"
"11085_Illinois_Jo_Daviess_-90.196_42.3686_-90.25_42.25.wth"
"13087_Illinois_Johnson_-88.8788_37.4559_-88.75_37.25.wth"
"17089_Illinois_Kane_-88.4342_41.9418_-88.25_41.75.wth"
And I need them renamed with just the initial code (fips):
"5081.wth"
"7083.wth"
"11085.wth"
"13087.wth"
"17089.wth"
I've tried using the list.files and file.rename functions, but I do not know how to pick the code out of the full name. Some kind of "wildcard" could work, but I don't know how to apply one properly, because the names all follow the same pattern but differ in content.
This is what I've tried this far:
setwd("C:/Users/xxx")
Files <- list.files(path = "C:/Users/xxx", pattern = "fips_*.wth" all.files = TRUE)
newName <- paste("fips",".wth", sep = "")
for (x in length(Files)) {
file.rename(nFiles,newName)}
I've also tried with the sub function as follows:
setwd("C:/Users/xxxx")
Files <- list.files(path = "C:/Users/xxxx", all.files = TRUE)
for (x in length(Files)) {
  sub("_*", ".wth", Files)
}
but get: Error in as.character(x) : cannot coerce type 'closure' to vector of type 'character'
OR
setwd("C:/Users/xxxx")
Files <- list.files(path = "C:/Users/xxxx", all.files = TRUE)
for (x in length(Files)) {
  sub("^(\\d+)_.*", "\\1.wth", file)
}
which runs without errors but does nothing to the file names.
I could use any help.
Thanks
Here is my example.
Prepare some data to use:
dir.create("test_dir")
data_sets <- c("5081_Illinois_Jefferson_-88.9255_38.3024_-88.75_38.25.wth",
               "7083_Illinois_Jersey_-90.3424_39.0953_-90.25_39.25.wth",
               "11085_Illinois_Jo_Daviess_-90.196_42.3686_-90.25_42.25.wth",
               "13087_Illinois_Johnson_-88.8788_37.4559_-88.75_37.25.wth",
               "17089_Illinois_Kane_-88.4342_41.9418_-88.25_41.75.wth")
setwd("test_dir")
file.create(data_sets)
Rename the files:
Files <- list.files(all.files = TRUE, pattern = ".wth")
# capture the leading digits and keep only them plus the extension
newName <- sub("^(\\d+)_.*", "\\1.wth", Files)
file.rename(Files, newName)
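Listing the folder afterwards should show only the fips-based names (the expected result, given the five files created above):
list.files(pattern = "\\.wth$")
# [1] "11085.wth" "13087.wth" "17089.wth" "5081.wth"  "7083.wth"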

Looping through folders in R, searching for a file, and then concatenating data

I have a base folder with many folders inside it. I want to go into each folder, find a file named table_amzn.csv (if it exists), read all of those files into R, and put them into a single dataframe one after another. I have verified that all the files have the same columns. I know how to read CSVs into R, but how can I loop over all the folders within the base folder and concatenate the data?
This can also be done straightforwardly in base R:
## change `dir` to whatever your 'base folder' actually is
dir <- '~/base_folder'
ff <- list.files(dir, pattern = "table_amzn.csv", recursive = TRUE, full.names = TRUE)
out <- do.call(rbind, lapply(ff, read.csv))
In the event that your columns are the same but for whatever reason (a typo, etc.) have different column names, you could modify the above like so:
out <- do.call(rbind, lapply(ff, read.csv, header = FALSE, skip = 1))
names(out) <- c('stub1', 'stub2') # whatever they should be
Here is an implementation that was recently added to the package rio:
devtools::install_github("leeper/rio")
library(rio)
files <- list.files(pattern = "table_amzn.csv", recursive = TRUE, full.names = TRUE)
df <- import_list(files, rbind = TRUE)
This will load all the objects in files into a single data.frame object. Alternatively, if you call it with rbind = FALSE, a list of data.frames is returned.
