Reading .xlsx file into R

I'm trying to read an Excel file into R. It's the following file in my current working directory:
> list.files()
[1] "Keuren_Op_Afspraak.xlsx"
I installed XLConnect and am doing the following:
library(XLConnect)
demoExcelFile <- system.file("Keuren_Op_Afspraak.xlsx", package = "XLConnect")
wb <- loadWorkbook(demoExcelFile)
But this gives me the error:
Error: FileNotFoundException (Java): File '' could not be found - you may specify to automatically create the file if not existing.
But I don't understand where this is coming from. Any thoughts?

I prefer using the readxl package. Its parser is implemented in C/C++, so it is fast, and it also seems to handle large files better. The command would be:
library(readxl)
wb <- read_excel("Keuren_Op_Afspraak.xlsx")
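If the workbook has several sheets, readxl can also list them and read a specific one; a small sketch with the same file:
excel_sheets("Keuren_Op_Afspraak.xlsx")                 # list the sheet names
wb <- read_excel("Keuren_Op_Afspraak.xlsx", sheet = 1)  # or sheet = "SheetName"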

You can also use the xlsx package.
library(xlsx)
wb <- read.xlsx("Keuren_Op_Afspraak.xlsx", sheetIndex = 1)
Edit (@Verena):
You can also use read.xlsx2, which is much faster:
wb <- read.xlsx2("Keuren_Op_Afspraak.xlsx", sheetIndex = 1)
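One caveat: read.xlsx2 reads every column as character by default; pass colClasses to get typed columns. A minimal sketch, where the two column types are assumptions about the file:
wb <- read.xlsx2("Keuren_Op_Afspraak.xlsx", sheetIndex = 1,
                 colClasses = c("character", "numeric"))  # assumed column types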

You have to change your code this way:
library(XLConnect)
demoExcelFile <- "Keuren_Op_Afspraak.xlsx"
wb <- loadWorkbook(demoExcelFile)
You probably took the example from here:
http://www.inside-r.org/packages/cran/XLConnect/docs/loadWorkbook
This line
system.file("demoFiles/mtcars.xlsx", package = "XLConnect")
is a way to get sample files that are part of a package. If you download the zip file of XLConnect and look into the folder structure, you will see that there is a folder demoFiles that contains mtcars.xlsx. The parameter package = "XLConnect" tells the function to look for the file in this package.
If you run that line at the R console, it returns the absolute path to the file:
"C:/Users/Expecto/Documents/R/win-library/3.1/XLConnect/demoFiles/mtcars.xlsx"
To use loadWorkbook you simply need to pass the relative or absolute filepath.
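For completeness, loadWorkbook only opens the workbook; to get a data frame out of it you still call readWorksheet. A minimal sketch:
library(XLConnect)
wb <- loadWorkbook("Keuren_Op_Afspraak.xlsx")
df <- readWorksheet(wb, sheet = 1)  # read the first worksheet into a data frame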

Related

Not able to use the readImage function in R

I have downloaded the latest version of R, am using RStudio, and am trying to convert a pgm image into a csv file using the readImage function.
However, any time I do
img <- readImage(file)
where file is the filepath
I get
Error in readImage(file) : could not find function "readImage"
Is there some other package I need to download, or am I using it wrong?
You can use the magick package to read pgm files.
First, you need to do:
install.packages("magick")
Now you call
library(magick)
In my case, I have a pgm file in my R home directory, so I make the file path with:
file <- path.expand("~/cat.pgm")
Now I can read the image and convert it into a matrix of RGB strings by doing:
img <- image_read(file)
ras <- as.raster(img)
mat <- as.matrix(ras)
To write this to csv format, I can do:
write.csv(mat, "cat.csv", row.names = FALSE)
So now I have the image saved as a csv file. To read this back in, and prove it works, I can do:
cat_csv <- read.csv("cat.csv")
cat_ras <- as.raster(as.matrix(cat_csv))
plot(cat_ras)
Note though that the csv file is very large (9 MB), which is one of the reasons it is rarely a good idea to store an image as csv.
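If the goal is just a copy on disk rather than csv specifically, writing the image back out with magick is far more compact; a one-line sketch:
image_write(img, path = "cat.png", format = "png")  # lossless and much smaller than the csv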

Parsing issue, unexpected character when loading a folder

I am using this answer to load in a folder of Excel Files:
# Get the list of files
#----------------------------#
folder <- "path/to/files"
fileList <- dir(folder, recursive=TRUE) # grep through these, if you are not loading them all
# use platform appropriate separator
files <- paste(folder, fileList, sep=.Platform$file.sep)
So far, so good.
# Load them in
#----------------------------#
# Method 1:
invisible(sapply(files, source, local=TRUE))
#-- OR --#
# Method 2:
sapply(files, function(f) eval(parse(text=f)))
But the source function (Method 1) gives me the error:
Error in source("C:/Users/Username/filename.xlsx") :
C:/Users/filename :1:3: unexpected input
1: PK
^
For Method 2 I get the error:
Error in parse(text = f) : <text>:1:3: unexpected '/'
1: C:/
^
EDIT: I tried circumventing the issue by setting the working directory to the directory of the folder, but that did not help.
Any ideas why this happens?
EDIT 2: It works when doing the following:
How can I read multiple (excel) files into R?
setwd("...")
library(readxl)
file.list <- list.files(pattern='*.xlsx')
df.list <- lapply(file.list, read_excel)
Just to provide a proper answer outside of the comment section...
If your goal is to read many Excel files, you shouldn't use source.
source is meant to run external R code.
If you need to read many Excel files you can use the following code and the support of one of these libraries: readxl, openxlsx, tidyxl (with unpivotr).
filelist <- dir(folder, recursive = TRUE, full.names = TRUE, pattern = ".xlsx$|.xls$", ignore.case = TRUE)
l_df <- lapply(filelist, readxl::read_excel)
Note that we are using dir to list the full paths (full.names = TRUE) of all the files that end with .xlsx, .xls (pattern = ".xlsx$|.xls$"), .XLSX, .XLS (ignore.case = TRUE) in the directory folder and all its subfolders (recursive = TRUE).
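If the workbooks share the same columns, one possible follow-up (assuming dplyr is available) is to stack the list into a single data frame, keeping track of which file each row came from:
names(l_df) <- basename(filelist)
df_all <- dplyr::bind_rows(l_df, .id = "source_file")  # source_file records each row's origin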
readxl is integrated with tidyverse. It is pretty easy to use. It is most likely what you're looking for.
Personally, I advise using openxlsx if you need to write (rather than read) customized Excel files with many specific features.
tidyxl is the best package I've seen for reading Excel files, but it can be rather complicated to use. However, it is very careful about type preservation.
With the support of unpivotr it allows you to handle complicated Excel structures.
For example, when you find multiple headers and multiple left index columns.
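As a rough illustration of what tidyxl returns (file name borrowed from the first question above), xlsx_cells gives one row per cell, with its position and type preserved:
library(tidyxl)
cells <- xlsx_cells("Keuren_Op_Afspraak.xlsx")
# each cell keeps its sheet, position, and typed value
head(cells[, c("sheet", "row", "col", "data_type", "character", "numeric")])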

Can't open .biom file for Phyloseq tree plotting

After trying to read a biom file:
rich_dense_biom <- system.file("extdata", "D:\sample_otutable.biom", package = "phyloseq")
myData <- import_biom(rich_dense_biom, treefilename, refseqfilename,
                      parseFunction = parse_taxonomy_greengenes)
the following errors show up:
Error in read_biom(biom_file = BIOMfilename) :
Both attempts to read input file:
either as JSON (BIOM-v1) or HDF5 (BIOM-v2).
Check file path, file name, file itself, then try again.
Are you sure D:\sample_otutable.biom really exists? And is a system file?
In R for Windows, it is at least safer (if not required?) to separate file paths with \\
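For example, all of these point at the same file:
"D:\\sample_otutable.biom"                 # escaped backslashes
"D:/sample_otutable.biom"                  # forward slashes also work on Windows
file.path("D:", "sample_otutable.biom")    # builds the path portably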
This works for me
library("devtools")
install_github("biom", "joey711")
library(biom)
biom.file <-
"C:\\Users\\Mark Miller\\Documents\\R\\win-library\\3.3\\biom\\extdata\\min_dense_otu_table.biom"
my.data <- import_biom(BIOMfilename = biom.file)

R read.xlsx gives me java.io.FileNotFoundException

I am trying to use the R package xlsx to load a file available at this URL:
http://www.plosgenetics.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pgen.1002236.s019
library(xlsx)
filename="/home/avilella/00x/mobile.element.insertions.1000g.journal.pgen.1002236.s019.xlsx"
system(paste("ls -l",filename))
-rw-rw-r-- 1 avilella avilella 2372143 2011-12-11 16:36 /home/avilella/00x/mobile.element.insertions.1000g.journal.pgen.1002236.s019.xlsx
Once downloaded, I try to load it in R using read.xlsx or read.xlsx2:
file <- system.file("mobile.element.insertions.1000g", filename, package = "xlsx")
res <- read.xlsx2(file, 1) # read first sheet
But I get an error:
Error in .jnew("java/io/FileInputStream", file) :
java.io.FileNotFoundException: (No such file or directory)
Any ideas?
1) xlsx package. Try using file.choose, which will allow you to interactively navigate to the file and thereby eliminate the possibility of misidentifying it:
fn <- file.choose()
DF <- read.xlsx(fn, 1)
2) gdata package. If the above still does not work, then you might try read.xls in the gdata package. It uses a Perl program rather than Java. It can read both xls and xlsx files and can read data right off the net (downloading it into a temporary file and reading it from there in a manner that is transparent to the user):
library(gdata)
URL <- "http://www.plosgenetics.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pgen.1002236.s019"
DF <- read.xls(URL)
?read.xls in gdata has more info.

Using R to download zipped data file, extract, and import data

@EZGraphs on Twitter writes:
"Lots of online csvs are zipped. Is there a way to download, unzip the archive, and load the data to a data.frame using R? #Rstats"
I was also trying to do this today, but ended up just downloading the zip file manually.
I tried something like:
fileName <- "http://www.newcl.org/data/zipfiles/a1.zip"
con1 <- unz(fileName, filename="a1.dat", open = "r")
but I feel as if I'm a long way off.
Any thoughts?
Zip archives are actually more of a 'filesystem' with content metadata etc. See help(unzip) for details. So to do what you sketch out above you need to:
Create a temporary file name (e.g. with tempfile())
Use download.file() to fetch the file into the temporary file
Use unz() to extract the target file from the temporary file
Remove the temporary file via unlink()
which in code (thanks for the basic example, but this is simpler) looks like:
temp <- tempfile()
download.file("http://www.newcl.org/data/zipfiles/a1.zip",temp)
data <- read.table(unz(temp, "a1.dat"))
unlink(temp)
Compressed (.z), gzipped (.gz), or bzip2ed (.bz2) files are just the single compressed file, and those you can read directly from a connection. So get the data provider to use that instead :)
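For example, if the provider shipped a gzipped file instead (file name hypothetical), a single call would do:
data <- read.table(gzfile("a1.dat.gz"))  # read.table decompresses through the connection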
Just for the record, I tried translating Dirk's answer into code :-P
temp <- tempfile()
download.file("http://www.newcl.org/data/zipfiles/a1.zip",temp)
con <- unz(temp, "a1.dat")
data <- matrix(scan(con),ncol=4,byrow=TRUE)
unlink(temp)
I used the CRAN package downloader, found at http://cran.r-project.org/web/packages/downloader/index.html. Much easier.
library(downloader)
download(url, destfile = "dataset.zip", mode = "wb")
unzip("dataset.zip", exdir = "./")
For Mac (and I assume Linux)...
If the zip archive contains a single file, you can use the bash command funzip, in conjunction with fread from the data.table package:
library(data.table)
dt <- fread("curl http://www.newcl.org/data/zipfiles/a1.zip | funzip")
In cases where the archive contains multiple files, you can use tar instead to extract a specific file to stdout:
dt <- fread("curl http://www.newcl.org/data/zipfiles/a1.zip | tar -xf- --to-stdout *a1.dat")
Here is an example that works for files which cannot be read in with the read.table function. This example reads a .xls file.
url <-"https://www1.toronto.ca/City_Of_Toronto/Information_Technology/Open_Data/Data_Sets/Assets/Files/fire_stns.zip"
temp <- tempfile()
temp2 <- tempfile()
download.file(url, temp)
unzip(zipfile = temp, exdir = temp2)
data <- read_xls(file.path(temp2, "fire station x_y.xls"))
unlink(c(temp, temp2))
To do this using data.table, I found that the following works. Unfortunately, the link does not work anymore, so I used a link for another data set.
library(data.table)
temp <- tempfile()
download.file("https://www.bls.gov/tus/special.requests/atusact_0315.zip", temp)
timeUse <- fread(unzip(temp, files = "atusact_0315.dat"))
unlink(temp)
I know this is possible in a single line since you can pass bash scripts to fread, but I am not sure how to download a .zip file, extract, and pass a single file from that to fread.
Using library(archive) one can also read a particular csv file within the archive, without having to unzip it first:
library(archive)
library(readr)
dt <- read_csv(archive_read("http://www.newcl.org/data/zipfiles/a1.zip", file = 1), col_types = cols())
which I find more convenient and faster.
It supports all major archive formats and is quite a bit faster than the base R untar or unz: tar, ZIP, 7-zip, RAR, CAB, gzip, bzip2, compress, lzma, xz, and uuencoded files.
To unzip everything one can use archive_extract("http://www.newcl.org/data/zipfiles/a1.zip", dir=XXX)
This works on all platforms and, given the superior performance, for me it would be the preferred option.
Try this code. It works for me:
unzip(zipfile="<directory and filename>",
exdir="<directory where the content will be extracted>")
Example:
unzip(zipfile="./data/Data.zip",exdir="./data")
The rio package would be very suitable for this: it uses the file extension to determine what kind of file it is, so it works with a large variety of file types. I've also used unzip() to list the file names within the zip file, so it's not necessary to specify the file name(s) manually.
library(rio)
# create a temporary directory
td <- tempdir()
# create a temporary file
tf <- tempfile(tmpdir=td, fileext=".zip")
# download file from internet into temporary location
download.file("http://download.companieshouse.gov.uk/BasicCompanyData-part1.zip", tf)
# list zip archive
file_names <- unzip(tf, list=TRUE)
# extract files from zip file
unzip(tf, exdir=td, overwrite=TRUE)
# use when zip file has only one file
data <- import(file.path(td, file_names$Name[1]))
# use when zip file has multiple files
data_multiple <- lapply(file_names$Name, function(x) import(file.path(td, x)))
# delete the files and directories
unlink(td, recursive = TRUE)
I found that the following worked for me. These steps come from BTD's YouTube video, Managing Zipfile's in R:
zip.url <- "url_address.zip"
dir <- getwd()
zip.file <- "file_name.zip"
zip.combine <- file.path(dir, zip.file)
download.file(zip.url, destfile = zip.combine)
unzip(zip.file)
