R read.xlsx gives me java.io.FileNotFoundException

I am trying to use the R package xlsx to load a file available at this URL:
http://www.plosgenetics.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pgen.1002236.s019
library(xlsx)
filename="/home/avilella/00x/mobile.element.insertions.1000g.journal.pgen.1002236.s019.xlsx"
system(paste("ls -l",filename))
-rw-rw-r-- 1 avilella avilella 2372143 2011-12-11 16:36 /home/avilella/00x/mobile.element.insertions.1000g.journal.pgen.1002236.s019.xlsx
Once downloaded, I try to load it in R using read.xlsx or read.xlsx2:
file <- system.file("mobile.element.insertions.1000g", filename, package = "xlsx")
res <- read.xlsx2(file, 1) # read first sheet
But I get an error:
Error in .jnew("java/io/FileInputStream", file) :
java.io.FileNotFoundException: (No such file or directory)
Any ideas?

1) xlsx package. Try using file.choose, which lets you interactively navigate to the file and thereby eliminates the possibility of misidentifying it:
fn <- file.choose()
DF <- read.xlsx(fn, 1)
2) gdata package. If the above still does not work, try read.xls from the gdata package. It uses a Perl program rather than Java, can read both xls and xlsx files, and can read data right off the net (downloading it into a temporary file and reading it from there in a manner that is transparent to the user):
library(gdata)
URL <- "http://www.plosgenetics.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pgen.1002236.s019"
DF <- read.xls(URL)
?read.xls in gdata has more info.
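Note that the root cause in the question is the system.file call: system.file only locates files shipped inside an installed package and returns an empty string when nothing matches, which is why the Java exception shows no file name. A minimal sketch of the fix, passing the downloaded path straight to the reader:
library(xlsx)
filename <- "/home/avilella/00x/mobile.element.insertions.1000g.journal.pgen.1002236.s019.xlsx"
res <- read.xlsx2(filename, 1)  # pass the path directly; system.file is not needed here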

Related

Read in feather file directly from GitHub in R

How can I read in a .feather file from the web (e.g. GitHub) in R? I can read formats such as .csv or .dta from GitHub directly as raw files:
# CSV
coursedata <- read.csv(file = 'https://raw.githubusercontent.com/MarcoKuehne/seminars_in_applied_economics/main/Data/GF_2020.csv')
# DTA
library(haven)
soep <- read_dta("https://github.com/MarcoKuehne/seminars_in_applied_economics/blob/main/Data/soep_lebensz_en.dta?raw=true")
But the same approach fails for arrow and read_feather.
library(arrow)
digital <- read_feather("https://github.com/MarcoKuehne/seminars_in_applied_economics/blob/main/Data/Digital_Literacy_EN.feather?raw=true")
Is there a direct way or a nested command? Or am I required to download the file manually or programmatically as a temporary file?
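One workaround sketch, given that the direct-URL approach fails above: download the file to a temporary location first (mode = "wb" matters because feather is a binary format):
library(arrow)
url <- "https://github.com/MarcoKuehne/seminars_in_applied_economics/blob/main/Data/Digital_Literacy_EN.feather?raw=true"
tf <- tempfile(fileext = ".feather")
download.file(url, tf, mode = "wb")  # binary download
digital <- read_feather(tf)
unlink(tf)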

Can't open .biom file for Phyloseq tree plotting

After trying to read a biom file:
rich_dense_biom <- system.file("extdata", "D:\sample_otutable.biom", package = "phyloseq")
myData <- import_biom(rich_dense_biom, treefilename, refseqfilename, parseFunction = parse_taxonomy_greengenes)
the following errors show up:
Error in read_biom(biom_file = BIOMfilename) :
Both attempts to read input file:
either as JSON (BIOM-v1) or HDF5 (BIOM-v2).
Check file path, file name, file itself, then try again.
Are you sure D:\sample_otutable.biom really exists? And is it a system file? system.file() only locates files that ship inside an installed package, so it cannot be used to point at an arbitrary path on disk. In R for Windows it is also safer (if not required) to separate file paths with \\.
This works for me:
library("devtools")
install_github("joey711/biom")
library(biom)
biom.file <- "C:\\Users\\Mark Miller\\Documents\\R\\win-library\\3.3\\biom\\extdata\\min_dense_otu_table.biom"
my.data <- import_biom(BIOMfilename = biom.file)
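Forward slashes also work on Windows in R, which sidesteps the escaping issue entirely; a minimal sketch, assuming the file actually exists at that path:
library(phyloseq)
biom.file <- "D:/sample_otutable.biom"  # forward slashes avoid backslash escaping on Windows
my.data <- import_biom(BIOMfilename = biom.file)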

Reading an .xlsx file into R

I'm trying to read an Excel file into R. It concerns the following file in my cwd:
> list.files()
[1] "Keuren_Op_Afspraak.xlsx"
I installed XLConnect and am doing the following:
library(XLConnect)
demoExcelFile <- system.file("Keuren_Op_Afspraak.xlsx", package = "XLConnect")
wb <- loadWorkbook(demoExcelFile)
But this gives me the error:
Error: FileNotFoundException (Java): File '' could not be found - you may specify to automatically create the file if not existing.
But I dont understand where this is coming from. Any thoughts?
I prefer using the readxl package. Its parser is implemented in compiled C/C++ code, so it is fast, and it also seems to handle large files better. The command would be:
library(readxl)
wb <- read_excel("Keuren_Op_Afspraak.xlsx")
You can also use the xlsx package:
library(xlsx)
wb <- read.xlsx("Keuren_Op_Afspraak.xlsx", sheetIndex = 1)
Edit: as @Verena points out, read.xlsx2 is much faster:
wb <- read.xlsx2("Keuren_Op_Afspraak.xlsx", sheetIndex = 1)
You have to change your code this way:
library(XLConnect)
demoExcelFile <- "Keuren_Op_Afspraak.xlsx"
wb <- loadWorkbook(demoExcelFile)
You probably took the example from here:
http://www.inside-r.org/packages/cran/XLConnect/docs/loadWorkbook
This line
system.file("demoFiles/mtcars.xlsx", package = "XLConnect")
is a way to get sample files that are part of a package. If you download the zip file of XLConnect and look at the folder structure, you will see a folder demoFiles that contains mtcars.xlsx. The parameter package = "XLConnect" tells the method to look for the file in this package.
If you type it into the command line it returns the absolute path to the file:
"C:/Users/Expecto/Documents/R/win-library/3.1/XLConnect/demoFiles/mtcars.xlsx"
To use loadWorkbook with your own file, you simply need to pass its relative or absolute file path.
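For completeness, a short sketch of actually reading the data once the workbook loads (the sheet index is an assumption):
library(XLConnect)
wb <- loadWorkbook("Keuren_Op_Afspraak.xlsx")  # relative to the working directory
df <- readWorksheet(wb, sheet = 1)  # read the first worksheet into a data.frame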

R Reading in a zip data file without unzipping it

I have a very large zip file and I am trying to read it into R without unzipping it, like so:
temp <- tempfile("Sales", fileext=c("zip"))
data <- read.table(unz(temp, "Sales.dat"), nrows=10, header=T, quote="\"", sep=",")
Error in open.connection(file, "rt") : cannot open the connection
In addition: Warning message:
In open.connection(file, "rt") :
cannot open zip file 'C:\Users\xxx\AppData\Local\Temp\RtmpyAM9jH\Sales13041760345azip'
If your zip file is called Sales.zip and contains only a file called Sales.dat, I think you can simply do the following (assuming the file is in your working directory):
data <- read.table(unz("Sales.zip", "Sales.dat"), nrows=10, header=T, quote="\"", sep=",")
The methods of the readr package also support compressed files if the file suffix indicates the nature of the file; that is, files ending in .gz, .bz2, .xz, or .zip will be automatically uncompressed.
require(readr)
myData <- read_csv("foo.txt.gz")
read.table can also open compressed files directly, but note that this auto-detection covers gzip, bzip2, and xz files rather than zip archives, so for a .zip you still want unz:
data <- read.table("Sales.dat.gz", nrows=10, header=T, quote="\"", sep=",")
See this post
This should work just fine if the file inside the archive is Sales.csv (note that unzip does extract the file to disk first):
data <- readr::read_csv(unzip("Sales.zip", "Sales.csv"))
To check the file names without extracting anything, this works:
unzip("Sales.zip", list = TRUE)
If you have zcat installed on your system (which is the case for Linux, macOS, and Cygwin) you could also use:
zipfile<-"test.zip"
myData <- read.delim(pipe(paste("zcat", zipfile)))
This solution also has the advantage that no temporary files are created.
The gzfile function, together with read_csv or read.table, can read compressed files:
library(readr)
df <- read_csv(gzfile("file.csv.gz"))
# base R alternative, no extra packages needed
df <- read.table(gzfile("file.csv.gz"), header = TRUE, sep = ",")
read_csv from the readr package can read compressed files even without using gzfile function.
library(readr)
df = read_csv("file.csv.gz")
read_csv is recommended because it is faster than read.table
In this expression you are missing a dot:
temp <- tempfile("Sales", fileext=c("zip"))
It should be:
temp <- tempfile("Sales", fileext=c(".zip"))
Note also that tempfile() only generates a file name; you still have to download or write Sales.zip to that path before unz() can read from it.
For remote zipped files:
library(data.table)
samhsa2015 <- fread("curl https://www.opr.princeton.edu/workshops/Downloads/2020Jan_LatentClassAnalysisPratt_samhsa_2015F.zip | funzip")
Answer from here: https://stackoverflow.com/a/37824192/12387385

Using R to download zipped data file, extract, and import data

@EZGraphs on Twitter writes:
"Lots of online csvs are zipped. Is there a way to download, unzip the archive, and load the data to a data.frame using R? #Rstats"
I was also trying to do this today, but ended up just downloading the zip file manually.
I tried something like:
fileName <- "http://www.newcl.org/data/zipfiles/a1.zip"
con1 <- unz(fileName, filename="a1.dat", open = "r")
but I feel as if I'm a long way off.
Any thoughts?
Zip archives are actually more of a 'filesystem' with content metadata etc. See help(unzip) for details. So to do what you sketch out above you need to:
Create a temp file name (e.g. with tempfile())
Use download.file() to fetch the file into the temp file
Use unz() to extract the target file from the temp file
Remove the temp file via unlink()
which in code (thanks for the basic example, but this is simpler) looks like:
temp <- tempfile()
download.file("http://www.newcl.org/data/zipfiles/a1.zip",temp)
data <- read.table(unz(temp, "a1.dat"))
unlink(temp)
Compressed (.z), gzipped (.gz), or bzip2ed (.bz2) files are just the single file and can be read directly from a connection. So get the data provider to use that instead :)
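To illustrate that last point, a gzipped file can be read straight off the net through a connection (a sketch; the URL is hypothetical):
con <- gzcon(url("https://example.com/a1.dat.gz"))  # gzcon() decompresses the stream
data <- read.table(con)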
Just for the record, I tried translating Dirk's answer into code :-P
temp <- tempfile()
download.file("http://www.newcl.org/data/zipfiles/a1.zip",temp)
con <- unz(temp, "a1.dat")
data <- matrix(scan(con),ncol=4,byrow=TRUE)
unlink(temp)
I used the CRAN package "downloader" found at http://cran.r-project.org/web/packages/downloader/index.html . Much easier.
download(url, destfile = "dataset.zip", mode = "wb")
unzip("dataset.zip", exdir = "./")
For Mac (and I assume Linux)...
If the zip archive contains a single file, you can use the bash command funzip, in conjunction with fread from the data.table package:
library(data.table)
dt <- fread("curl http://www.newcl.org/data/zipfiles/a1.zip | funzip")
In cases where the archive contains multiple files, you can use tar instead to extract a specific file to stdout:
dt <- fread("curl http://www.newcl.org/data/zipfiles/a1.zip | tar -xf- --to-stdout *a1.dat")
Here is an example that works for files which cannot be read in with the read.table function. This example reads a .xls file:
library(readxl)
url <- "https://www1.toronto.ca/City_Of_Toronto/Information_Technology/Open_Data/Data_Sets/Assets/Files/fire_stns.zip"
temp <- tempfile()
temp2 <- tempfile()
download.file(url, temp)
unzip(zipfile = temp, exdir = temp2)
data <- read_xls(file.path(temp2, "fire station x_y.xls"))
unlink(c(temp, temp2))
To do this using data.table, I found that the following works. Unfortunately, the link does not work anymore, so I used a link for another data set.
library(data.table)
temp <- tempfile()
download.file("https://www.bls.gov/tus/special.requests/atusact_0315.zip", temp)
timeUse <- fread(unzip(temp, files = "atusact_0315.dat"))
unlink(temp)  # delete the temp file itself; rm(temp) would only remove the R variable
I know this should be possible in a single line, since you can pass shell commands to fread, but I am not sure how to download a .zip file, extract it, and pass a single file from that to fread.
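One way to do it in a single line (a sketch; it assumes curl and funzip are available on your system, and funzip only decompresses the first member of the archive):
library(data.table)
dt <- fread(cmd = "curl -s https://www.bls.gov/tus/special.requests/atusact_0315.zip | funzip")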
Using library(archive) one can also read a particular csv file inside the archive, without having to unzip it first:
library(archive)
library(readr)
read_csv(archive_read("http://www.newcl.org/data/zipfiles/a1.zip", file = 1), col_types = cols())
which I find more convenient and faster.
It also supports all major archive formats (tar, ZIP, 7-zip, RAR, CAB, gzip, bzip2, compress, lzma, xz, and uuencoded files) and is quite a bit faster than the base R untar or unz.
To unzip everything one can use archive_extract("http://www.newcl.org/data/zipfiles/a1.zip", dir = XXX)
This works on all platforms and, given the superior performance, for me it is the preferred option.
Try this code. It works for me:
unzip(zipfile = "<directory and filename>", exdir = "<directory where the content will be extracted>")
Example:
unzip(zipfile="./data/Data.zip",exdir="./data")
The rio package would be very suitable for this - its import() uses the file extension of a file name to determine what kind of file it is, so it will work with a large variety of file types. I've also used unzip() to list the file names within the zip file, so it's not necessary to specify the file name(s) manually.
library(rio)
# create a temporary directory
td <- tempdir()
# create a temporary file
tf <- tempfile(tmpdir=td, fileext=".zip")
# download file from internet into temporary location
download.file("http://download.companieshouse.gov.uk/BasicCompanyData-part1.zip", tf)
# list zip archive
file_names <- unzip(tf, list=TRUE)
# extract files from zip file
unzip(tf, exdir=td, overwrite=TRUE)
# use when zip file has only one file
data <- import(file.path(td, file_names$Name[1]))
# use when zip file has multiple files
data_multiple <- lapply(file_names$Name, function(x) import(file.path(td, x)))
# delete the files and directories
unlink(td, recursive = TRUE)
I found that the following worked for me. These steps come from BTD's YouTube video, Managing Zipfile's in R:
zip.url <- "url_address.zip"
dir <- getwd()
zip.file <- "file_name.zip"
zip.combine <- as.character(paste(dir, zip.file, sep = "/"))
download.file(zip.url, destfile = zip.combine, mode = "wb")
unzip(zip.file)
