reading a directory of large compressed zip files - r

I was wondering if y'all could help me.
I'm trying to read data from a directory containing 332 compressed zips all with about 1000 lines (or more) of data into a data frame.
I have set the working directory to the folder containing the zips. I figured that way, when I use the dir() function, I could do something like:
d1 <- read.csv(dir()[1:332])
to read all the data, and then find the mean of, say, "columnA" in that data frame with mean(d1$columnA).
So far this has not worked, though. Any ideas?
I'm operating in R with a Mac OSX.
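For anyone landing on this: read.csv() takes a single file, not a vector of 332 names, and it cannot open a .zip directly. A minimal sketch of one approach, assuming each zip in the working directory contains a single CSV with the same base name (the inner-file naming and "columnA" are placeholders from the question, not known facts about the data):

```r
## List the zips, read the CSV inside each via an unz() connection,
## and row-bind everything into one data frame.
files <- dir(pattern = "\\.zip$")
read_one <- function(z) {
  inner <- sub("\\.zip$", ".csv", z)  # assumed name of the CSV inside the zip
  read.csv(unz(z, inner))
}
d1 <- do.call(rbind, lapply(files, read_one))
mean(d1$columnA, na.rm = TRUE)
```

unz() opens a connection to one member of a zip, so nothing has to be unpacked to disk.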

Related

Vemco Acoustic Telemetry Data (vrl files) in R

Does anyone know a good way to read .vrl files from Vemco acoustic telemetry receivers directly into R as an object? Converting .vrl files to .csv files in the program VUE prior to analyzing the data in R seems like a waste of time if there is a way to bring them in directly. My internet searches have not turned up anything that worked for me.
I figured out a way using glatos to convert all the .vrl files to .csv and then reading the .csv files in and binding them. Note that glatos has to be installed from GitHub.
First, convert all .vrl files to .csv files using vrl2csv(). The help page has info on finding the path for vueExePath.
library(glatos)
vrl2csv(vrl = "VRLFileInput",outDir = "VRLFilesToCSV", vueExePath = "C:/Program Files (x86)/VEMCO/VUE")
The following pulls in all the .csv files in the output folder from vrl2csv() and rbinds them together. I had to add the paste0() call to build the full file path for each .csv in the list (list.files(..., full.names = TRUE) would do the same).
library(data.table)
AllDetections <- do.call(
  rbind,
  lapply(paste0("VRLFilesToCSV/", list.files(path = "VRLFilesToCSV")), read.csv)
)

Merging Two csv files with different column lengths with R

I am trying to merge two files in R in an attempt to compute the correlation. One file is located at http://richardtwatson.com/data/SolarRadiationAthens.csv and the other at http://richardtwatson.com/data/electricityprices.csv
Currently my code looks as follows:
library(dplyr)
data1<-read.csv("C:/Users/nldru/Downloads/SolarRadiationAthens.csv")
data2 <- read.csv("C:/Users/nldru/Downloads/electricityprices.csv")
n <- merge(data1,data2)
I have the files stored locally just for ease of access. They are being read in properly, but for some reason when I merge, the variable n receives no data, just the headers of the columns from the csv files. I have experimented with inner_join to no avail, as well as pulling the files directly from the URLs linked above with read_delim(), but can't seem to get it to work. Any help or tips are much appreciated.
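A likely culprit, sketched with toy data (the column names and values below are made up, not taken from the linked files): merge() joins on every column name the two data frames share, so if the shared key values never match exactly, for example timestamps written in two different formats, the result has zero rows and only the headers survive.

```r
## Two frames sharing a "ts" column whose values never match exactly.
a <- data.frame(ts = c("2013-01-01 00:00", "2013-01-01 01:00"),
                rad = c(0.1, 0.2))
b <- data.frame(ts = c("2013-01-01 00:00:00", "2013-01-01 01:00:00"),
                price = c(30, 31))
nrow(merge(a, b))            # 0 rows: no key values in common
b$ts <- substr(b$ts, 1, 16)  # normalise both to "YYYY-MM-DD HH:MM"
nrow(merge(a, b))            # now 2 rows
```

Checking intersect(names(data1), names(data2)) shows which columns merge() is joining on; if the key columns look right, compare a few actual values from each side.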

R and zipped files

I have about ~1000 tar.gz files (about 2 GB/file compressed) each containing bunch of large .tsv (tab separated) files e.g. 1.tsv, 2.tsv, 3.tsv, 4.tsv etc.
I want to work in R on a subset of the .tsv files (say 1.tsv, 2.tsv) without extracting the .tar.gz files, in order to save space/time.
I tried looking around but couldn't find a library or a routine to stream the tar.gz files through memory and extract data from them on the fly. Other languages have efficient ways of doing this, and I would be surprised if it couldn't be done in R.
Does anyone know of a way to accomplish this in R? Any help is greatly appreciated! Note: unzipping/untarring the whole archive is not an option. I want to extract the relevant fields and save them in a data.frame without fully extracting the file.
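One base-R compromise, sketched below: untar() can extract only the named members, so only the wanted .tsv files ever touch the disk, one at a time (the archive and member names here are placeholders). This is not true in-memory streaming, but it avoids unpacking the full ~2 GB per archive.

```r
## Extract a single member to a temp dir, read it, then delete it.
read_member <- function(tarball, member) {
  td <- tempdir()
  untar(tarball, files = member, exdir = td)
  on.exit(unlink(file.path(td, member)))  # clean up after reading
  read.delim(file.path(td, member))
}
## usage: x1 <- read_member("archive1.tar.gz", "1.tsv")
```

Each call costs one pass over the archive, so for many members per tarball it may be cheaper to pass several names in files= at once.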

Reading data from zip files located in zip files with R

I'd like to use R to extract data from zip files located inside other zip files (i.e. perform some zip-file inception).
An example "directory" of one of my datapoints looks like this:
C:\ZipMother.zip\ZipChild.zip\index.txt
My goal is to read in the "index.txt" from each ZipChild.zip. The issue is that I have 324 ZipMother.zip files with an average of 2000 ZipChild.zip files, therefore unzipping the ZipMother.zip files is a concern due to memory constraints (the ZipMother files are about 600 megabytes on average).
With the unzip() function, I can successfully get the file paths of each ZipChild located in the ZipMother, but I cannot use it to list the files located inside the ZipChild archives.
Therefore,
unzip("./ZipMother.zip",list=TRUE)
works just fine, but...
unzip("./ZipMother.zip/ZipChild.zip",list=TRUE)
gives me the following error
Error in unzip("./ZipMother.zip/ZipChild.zip", list = TRUE) :
zip file './ZipMother.zip/ZipChild.zip' cannot be opened
Is there any way to use unzip or another method to extract the data from the ZipChild files?
Once I get this to work, I plan on using the ldply function to compile the index.txt files into a dataset.
Any input is very much appreciated. Thank you!
A reproducible example (i.e. a link to a zip file with the appropriate structure) would be useful, but how about:
tmpd <- tempdir()
## extract just the child zip into a temp dir
unzip("./ZipMother.zip", files = "ZipChild.zip", exdir = tmpd)
ff <- file.path(tmpd, "ZipChild.zip")
index <- unzip(ff, list = TRUE)  # list the child's contents
unlink(ff)                       # clean up the extracted child
This could obviously be packaged into a function for convenience.
It could be slow, but it means you never have to unpack more than one child at a time ...
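Packaged up, it might look like the sketch below, assuming (as in the question) that each child zip contains an index.txt; read.table's arguments are a guess and may need adjusting for the real files:

```r
## Extract one child zip at a time, read index.txt from it via an
## unz() connection, then delete the extracted child.
read_child_index <- function(mother, child) {
  tmpd <- tempdir()
  unzip(mother, files = child, exdir = tmpd)
  ff <- file.path(tmpd, child)
  on.exit(unlink(ff))  # never keep more than one child on disk
  read.table(unz(ff, "index.txt"), header = TRUE)
}
## usage (paths from the question):
## children   <- unzip("./ZipMother.zip", list = TRUE)$Name
## index_list <- lapply(children, read_child_index, mother = "./ZipMother.zip")
```

From there, plyr::ldply (or do.call(rbind, ...) on the list) compiles the index.txt files into one dataset, as planned in the question.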

How to get R to read in files from multiple subdirectories under one large directory?

I am trying to get started with writing my first R code. I have searched for this answer but I am not quite sure what I've found is what I'm looking for exactly. I know how to get R to read in multiple files in the same subdirectory, but I'm not quite sure how to get it to read in one specific file from multiple subdirectories.
For instance, I have a main directory containing a series of trajectory replicates, each replicate in its own subdirectory. The breakdown is as follows:
"Main Dir" -> "SubDir1" -> "ReplicateDirs 1-6"
From each ReplicateDir I want R to pull the RMSD.dat table (file) to read from. All of the RMSD.dat files have identical names; they are just in different directories and of course contain different data.
I could move all the files to one folder but this doesn't seem like the most efficient way to attack this problem.
If anyone could enlighten me, I'd appreciate it.
Thanks
This should work; of course, change "Main Dir" to your directory:
dat.files <- list.files(path = "Main Dir",
                        recursive = TRUE,
                        pattern = "RMSD.dat",
                        full.names = TRUE)
If you want to read the files into a data set, you could use a function like the one below:
readDatFile <- function(f) {
  dat.fl <- read.csv(f)  # you may have to change read.csv to match your data type
}
And apply to the list of files:
data.files <- sapply(dat.files, readDatFile)
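Putting the pieces together, one variant worth noting (the lapply swap and the anchored regex are my additions, not part of the answer above): sapply() may simplify its result unpredictably, so lapply() plus rbind yields one predictable data frame, and escaping the dot stops pattern from matching stray names like "RMSDxdat":

```r
## List every RMSD.dat under Main Dir, read each, and row-bind them.
dat.files <- list.files(path = "Main Dir",
                        recursive = TRUE,
                        pattern = "RMSD\\.dat$",
                        full.names = TRUE)
all.dat <- do.call(rbind, lapply(dat.files, read.csv))
```
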
