I'm new to RStudio and wasn't aware of this portal's T&C, so I was blocked from asking questions for 5 days.
I have code for importing multiple files from a directory into R.
Sometimes it runs fine, and sometimes it fails with the error below.
I've tried to find a solution but haven't found one yet.
library(data.table)
t = setwd("/home/dp/vishan/olp_data/19164/1/")
files <- file.info(list.files(path = t,pattern = "", full.names=TRUE))
files = rownames(files)[files$size > 0]
temp <- lapply(files, fread, sep=",")
Error:
Error in FUN(X[[i]], ...) :
'input' can not be a directory name, but must be a single character string containing a file name, a command, full path to a file, a URL starting 'http[s]://', 'ftp[s]://' or 'file://', or the input data itself.
Thanks in advance!
try using
files <- file.info(list.files(path = t,pattern = "", full.names=TRUE))
files <- subset(files, !isdir & size > 0)
temp <- lapply(rownames(files), fread, sep=',')
since list.files also returns directories. The data.frame you create in files can easily be subset on the isdir column, which indicates whether each entry is a directory or a file.
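If the files share the same columns, the resulting list can then be stacked into a single table; a minimal sketch, assuming every file has an identical layout:
combined <- rbindlist(temp) # one data.table holding the rows of every file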
I am working under some unfortunate circumstances and need to read in a csv file nested inside two zip folders. What I mean by this is that the file path looks something like this:
//path/folder1.zip/folder2.zip/wanttoread.csv
I tried mimicking the slick work in this answer: Extract certain files from .zip, but have had no luck so far. Specifically, when I ran something similar on my end, I got an error message reading
Error in fread(x, sep = ",", header = TRUE, stringsAsFactors = FALSE) :
embedded nul in string:
followed by a bunch of encoded nonsense.
Any ideas on how to handle this problem? Thanks in advance!
Here's an approach using tempdir():
temp <- tempdir(check = TRUE) # Create temporary directory to extract into
unzip("folder1.zip", exdir = temp) # Unzip outer archive to temp directory
unzip(file.path(temp, "folder2.zip"), # Use file.path to generate the path to the inner archive
      exdir = file.path(temp, "temp2")) # Extract to a subfolder inside temp
# This covers the case when the outer archive might also have a file named wanttoread.csv
list.files(file.path(temp, "temp2")) # We can see the .csv file is now there
#[1] "wanttoread.csv"
read.csv(file.path(temp, "temp2", "wanttoread.csv")) # Read it in
# Var1 Var2
#1 Hello obewanjacobi
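If you want to tidy up right away, the extracted copies can be deleted; tempdir() itself is removed when the R session ends, but you can do it sooner:
unlink(file.path(temp, "temp2"), recursive = TRUE) # remove the extracted inner-archive contents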
I am using this answer to load in a folder of Excel files:
# Get the list of files
#----------------------------#
folder <- "path/to/files"
fileList <- dir(folder, recursive=TRUE) # grep through these, if you are not loading them all
# use platform appropriate separator
files <- paste(folder, fileList, sep=.Platform$file.sep)
So far, so good.
# Load them in
#----------------------------#
# Method 1:
invisible(sapply(files, source, local=TRUE))
#-- OR --#
# Method 2:
sapply(files, function(f) eval(parse(text=f)))
But the source function (Method 1) gives me the error:
Error in source("C:/Users/Username/filename.xlsx") :
C:/Users/filename :1:3: unexpected input
1: PK
^
For Method 2, I get the error:
Error in parse(text = f) : <text>:1:3: unexpected '/'
1: C:/
^
EDIT: I tried circumventing the issue by setting the working directory to the directory of the folder, but that did not help.
Any ideas why this happens?
EDIT 2: It works when doing the following (from How can I read multiple (excel) files into R?):
setwd("...")
library(readxl)
file.list <- list.files(pattern='*.xlsx')
df.list <- lapply(file.list, read_excel)
just to provide a proper answer outside of the comment section...
If your goal is to read many Excel files, you shouldn't use source.
source is dedicated to running external R code.
To read many Excel files you can use the following code with the support of one of these libraries: readxl, openxlsx, or tidyxl (with unpivotr).
filelist <- dir(folder, recursive = TRUE, full.names = TRUE, pattern = ".xlsx$|.xls$", ignore.case = TRUE)
l_df <- lapply(filelist, readxl::read_excel)
Note that we are using dir to list the full paths (full.names = TRUE) of all the files that end with .xlsx or .xls (pattern = ".xlsx$|.xls$"), in upper or lower case (ignore.case = TRUE), in folder and all its subfolders (recursive = TRUE).
readxl is integrated with the tidyverse. It is pretty easy to use and is most likely what you're looking for.
Personally, I advise using openxlsx if you need to write (rather than read) customized Excel files with many specific features.
tidyxl is the best package I've seen for reading Excel files, but it can be rather complicated to use. However, it is really careful about type preservation.
With the support of unpivotr, it lets you handle complicated Excel structures, for example sheets with multiple header rows and multiple left index columns.
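If the workbooks share the same column layout, the resulting list can be collapsed into a single data frame; a minimal sketch, assuming dplyr is installed and every file has identical columns:
names(l_df) <- basename(filelist) # label each element with its file name
df_all <- dplyr::bind_rows(l_df, .id = "source_file") # stack the pieces, recording each row's origin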
I am trying to read a number of Excel files into R with read.xlsx from the xlsx package, but when I do so I get the following error:
Error in loadWorkbook(file) : Cannot find id100.xlsx
First I list the files in the directory:
> files <- list.files(datDir, pattern = ".xlsx")
Then I use read.xlsx to read them all in:
for (i in seq_along(files)) {
assign(paste("id", i, sep = "."), read.xlsx(files[i],1,as.data.frame=TRUE,
header=FALSE, stringsAsFactors=FALSE, na.strings=" "))
}
I checked to see if the file was even in the list and it is:
> files
[1] "id100.xlsx" "id101.xlsx" etc...
> files[1]
[1] "id100.xlsx"
I have used this code many times before today and for some reason it is just not working. I keep getting that error. Does anyone have any suggestions?
Thanks!
If your working directory is different from datDir, you should use full.names=TRUE, like this:
files <- list.files(datDir, pattern = ".xlsx", full.names=TRUE)
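As a side note, pattern takes a regular expression, so anchoring it avoids matching stray names that merely contain "xlsx"; a small variant that also builds the paths manually with file.path:
files <- list.files(datDir, pattern = "\\.xlsx$")
paths <- file.path(datDir, files) # platform-appropriate separator, same effect as full.names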
I am trying and failing to write a process that will download a .zip archive, extract a particular Excel file from that archive, and load that Excel file into my R workspace without ever writing any of those files (the .zip or the .xls) to my hard drive.
I have written a version of this process that works for zipped .csvs, but it doesn't work for .xls. Here's how that version goes, using one of the URLs I'm targeting in my current project, with readWorksheetFromFile() in place of read.csv() at the appropriate moment:
library(XLConnect)
waed.old.link <- "http://eventdata.parusanalytics.com/data.dir/pitf.world.19950101-20121231.xls.zip"
waed.old.file <- "pitf.world.19950101-20121231.xls"
tmp <- tempfile()
download.file(waed.old.link, tmp)
tmp2 <- tempfile()
tmp2 <- unz(tmp, waed.old.file)
WAED.old <- readWorksheetFromFile(tmp2, sheet = 1, startRow = 3, startCol = 1, endCol = 73)
unlink(tmp)
unlink(tmp2)
And here's what pops up after line 8, the one that tries to ingest the spreadsheet as WAED.old:
Error in path.expand(filename) : invalid 'path' argument
I also tried read_excel() at that step and got the same result:
> WAED.old <- read_excel(tmp2, skip = 2)
Error in file.exists(path) : invalid 'file' argument
I gather that this has something to do with pointing readWorksheetFromFile() at a connection rather than a file, but I'm not sure that's right, and I don't know how to fix it if it is. I searched Stack Overflow and the web for an answer but couldn't find one right on point. I'd really appreciate some help.
As you say, it is because unz returns a connection object for the file within the zip (but does not explicitly unzip that file), while readWorksheetFromFile expects a path to a file.
Use unzip to explicitly unzip the file.
tmp2 <- unzip(zipfile=tmp, files = waed.old.file, exdir=tempdir())
# readWorksheetFromFile(tmp2, ...)
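Put together with the download step, the flow might look like this; a sketch that keeps the original variable names (note that the files do touch disk, but only inside the session's temporary directory):
library(XLConnect)
waed.old.link <- "http://eventdata.parusanalytics.com/data.dir/pitf.world.19950101-20121231.xls.zip"
waed.old.file <- "pitf.world.19950101-20121231.xls"
tmp <- tempfile(fileext = ".zip")
download.file(waed.old.link, tmp, mode = "wb") # binary mode avoids corrupting the archive on Windows
tmp2 <- unzip(zipfile = tmp, files = waed.old.file, exdir = tempdir()) # returns the path to the extracted .xls
WAED.old <- readWorksheetFromFile(tmp2, sheet = 1, startRow = 3, startCol = 1, endCol = 73)
unlink(tmp) # the extracted copy disappears with the session's temp directory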
Using Mac OS 10.10.3
RStudio Version 0.98.1103
My working directory contains 332 .csv files, and I set it correctly. Here's the code:
pollutantmean <- function(directory, pollutant, id = 1:332) {
all_files <- list.files(directory, full.names = T)
dat <- data.frame()
for(i in id) {
dat <- rbind(dat, read.csv(all_files[i]))
}
mean(dat[, pollutant], na.rm = TRUE)
}
Part of the assignment is to get the mean of the first 10 numeric values of a pollutant. To do this, I call the function as follows (where "specdata" is the directory with the 332 .csv files):
pollutantmean(specdata, "Nitrate", 1:10)
The error messages I get are:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message: In file(file, "rt") : cannot open file 'NA': No such file or directory
Like many students who have posed questions here, I'm new to programming and to R, and I'm still far from getting any results when calling my function. There are many questions and answers about this Coursera assignment on Stack Overflow, but my review of those exchanges hasn't turned up the bug in my code.
Does anyone have a suggestion for how to fix it?
In addition to the other answers, you can try this:
all_files <- list.files(directory, pattern = "\\.csv$", full.names = TRUE)
to avoid selecting any other kind of file.
or even this strange one
all_files <- paste(directory, "\\", sprintf("%03d", id), ".csv", sep="")
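A more portable variant of the same idea uses file.path, so the separator matches the platform (this assumes the files really are named 001.csv through 332.csv):
all_files <- file.path(directory, sprintf("%03d.csv", id)) # builds one path per requested id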
I take the time to answer since the question comes back at every Coursera session.
First, be careful with the typo: do call pollutantmean("specdata", "Nitrate", 1:10)
instead of pollutantmean(specdata, "Nitrate", 1:10).
Then your working directory should be the parent directory of "specdata" (for example, if your path was /dev/specdata, your working directory should have been /dev).
You can get the current working directory with getwd() and set the new one with setwd() (careful there, the path would be relative to the current working directory).
Add a line after all_files <- list.files(directory, full.names = TRUE) (it's a bad habit to use T instead of TRUE):
print(all_files)
Then call your function again so you can see the content of that object, and check where you are working with getwd().
Modify your line no. 5 to dat <- rbind(dat, read.csv(all_files[i], comment.char = ""))
This will bind the data of all the csv files into the 'dat' data frame.
Based on the information provided, it seems there are not 332 files in the directory you specify: if you try to access an out-of-bounds index of a vector, an NA is returned, hence the error "cannot open file 'NA'". This suggests that the path you are using (which is not provided) points to a directory that does not contain the csv files. Some suggestions (pulled together in the sketch below):
Check that the directory you are providing is accurate. Simply do a list.files to see what files exist in the directory you are using.
Use the pattern argument of list.files to be sure you only read the csv files.
Loop over the files using the length of the vector returned by list.files, rather than hard-coding it.
Add a sanity check that you are reading all the files by printing out each file name, or by returning a list containing both the results and the file names.
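A minimal sketch implementing those suggestions, assuming the csv files share a common layout and that id indexes into the sorted file list:
pollutantmean <- function(directory, pollutant, id = 1:332) {
  all_files <- list.files(directory, pattern = "\\.csv$", full.names = TRUE)
  print(all_files) # sanity check: see exactly which files will be read
  stopifnot(max(id) <= length(all_files)) # fail early instead of trying to open file 'NA'
  dat <- do.call(rbind, lapply(all_files[id], read.csv))
  mean(dat[[pollutant]], na.rm = TRUE)
}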