How to download multiple gzip files in R?

I have to download a lot of files in .gz format (one file is ~40 MB, 40k rows).
The files contain data from many countries, and I would like to keep only the data for France -> fr (limiting the number of rows).
I am trying to automate this process, but I have problems with unpacking.
The data is on a webpage, and I'm interested in the data from the whole folder.
My plan is:
create a tempfile
download the .gz file into the tempfile
unpack it, read it, and keep only the selected rows
save the result as a new file and repeat for the next file
I would like to ask if this way of thinking is correct (the code below will go inside a for loop).
temp <- tempfile()
temp1 <- "C:/Users/tdo/Desktop/data/test.txt"
# example file
download.file("https://dumps.wikimedia.org/other/pageviews/2018/2018-06/pageviews-20180601-000000.gz", temp)
unzip(files = temp, exdir = temp1)  # this is where the unpacking fails
data <- read.table(..)
data <- data[data$name == 'fr', ]
write.table(...)
This is how I created the links:
library(rvest)
library(dplyr)
dumpList <- read_html("https://dumps.wikimedia.org/other/pageviews/2018/2018-04/")
links <- data_frame(filename = html_attr(html_nodes(dumpList, "a"), "href")) %>%
  filter(grepl(x = filename, "pageviews")) %>%  # data by project
  mutate(link = paste0("https://dumps.wikimedia.org/other/pageviews/2018/2018-04/", filename))
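Putting the pieces together, the loop over the scraped links could look like the sketch below. Note that unzip() only handles .zip archives; a .gz file is a gzip stream, so it can be read through gzfile() with no separate unpacking step. Treating column V1 as the project code is an assumption about the dump layout:

# Sketch: one pass over the scraped links, reading each dump via gzfile()
for (i in seq_len(nrow(links))) {
  temp <- tempfile(fileext = ".gz")
  download.file(links$link[i], temp, mode = "wb")
  # read straight through a gzip connection; the dumps have no header row
  data <- read.table(gzfile(temp), header = FALSE, sep = " ",
                     quote = "", stringsAsFactors = FALSE)
  fr <- data[data$V1 == "fr", ]  # V1 assumed to hold the project code
  write.table(fr, file.path("C:/Users/tdo/Desktop/data",
                            sub("\\.gz$", ".txt", links$filename[i])),
              row.names = FALSE)
  unlink(temp)  # tidy up the tempfile
}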

Why not read the gzipped files directly? I don't see the need to unpack the archives locally if all you want to do is subset/filter the data and store it in new local files.
I recommend readr::read_table2 to read a gzipped file directly.
Here is a minimal example:
# List of files to download:
# url is the link, target the local filename
lst.files <- list(
  list(url = "https://dumps.wikimedia.org/other/pageviews/2018/2018-06/pageviews-20180601-000000.gz",
       target = "pageviews-20180601-000000.gz"))

# Download gzipped files (only if the file does not exist yet)
lapply(lst.files, function(x)
  if (!file.exists(x$target)) download.file(x$url, x$target))

# Open files
library(readr)
lst <- lapply(lst.files, function(x) {
  df <- read_table2(x$target)
  # Filter/subset entries
  # Write to file with write_delim
})
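A sketch of how the two placeholder comments might be filled in; the labels passed to col_names are assumptions, since the pageview dumps ship without a header row:

lst <- lapply(lst.files, function(x) {
  # col_names are assumed labels; the dump itself has no header
  df <- read_table2(x$target,
                    col_names = c("project", "page", "views", "bytes"))
  df <- df[df$project == "fr", ]  # keep only the French rows
  write_delim(df, sub("\\.gz$", ".txt", x$target))
  df
})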

Related

Read multiple “.xlsx” files

I am trying to read multiple Excel files stored under different folders with R.
Here is my solution:
library(readxl)
setwd("D:/data")
filename <- list.files(getwd(), full.names = TRUE)
# Four folders "epdata1" "epdata2" "epdata3" "epdata4" are inside the folder "data"
dataname <- list.files(filename, pattern = "*.xlsx$", full.names = TRUE)
# Every folder inside "data" contains five Excel files
datalist <- lapply(dataname, read_xlsx)
Error: `path` does not exist: 'D:/data/epidata1/出院舱随访1.xlsx'
But read_xlsx runs successfully on its own:
read_xlsx("D:/data/epidata1/出院舱随访1.xlsx")
All the files are there in the "data" folder, so why does R fail to read these Excel files?
Your help will be much appreciated!
I don't see any reason why your code shouldn't work. Make sure your folder names are correct: in your comments you write "epdata1", but your error says "epidata1".
I tried it with some csv and mixed xlsx files.
This is what I would come up with to find the error/typo:
library(readxl)
pp <- function(...) { print(paste(...)) }
main <- function() {
  # finding / setting up the main data folder
  # you may change this to your needs
  main_dir <- paste0(getwd(), "/data/")
  pp("working directory:", main_dir)
  pp("Found following folders:")
  pp(list.files(main_dir, full.names = FALSE))
  data_folders <- list.files(main_dir, full.names = TRUE)
  pp("Found these files in folders:", list.files(data_folders, full.names = TRUE))
  pp("Filtering *.xlsx files:", list.files(data_folders, pattern = "\\.xlsx$", full.names = TRUE))
  files <- list.files(data_folders, pattern = "\\.xlsx$", full.names = TRUE)
  datalist <- lapply(files, read_xlsx)
  print(datalist)
}
main()
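Once the path typo is resolved, it can also help to name the resulting list after the source files so each data frame stays identifiable (a small sketch, reusing the files vector from above):

# Sketch: label each data frame with the workbook it came from
datalist <- setNames(lapply(files, read_xlsx), basename(files))
names(datalist)  # e.g. "出院舱随访1.xlsx", ...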

Loop function for reading csv files and storing them in a list

I have a folder containing approximately 10 subfolders, each storing 4 .csv files. Each subfolder corresponds to a weather station, and each file in a subfolder contains temperature data for a different period
(e.g. station134_2000_2005.csv, station134_2006_2011.csv, station134_2012_2018.csv, etc.).
I wrote a loop that opens each folder and rbinds all the data into one data frame, but that is not very handy for my work.
I need a loop that rbinds the 4 files from each subfolder into a data frame and stores each result in a different "slot" of a list, or, if it is easier, exports each station's combined csv data (namely each subfolder) from the loop as its own data frame.
The code I wrote, which opens all files in all folders and creates one big (rbind-ed) data frame, is:
directory <- list.files()  # the names of the subfolders
stations <- data.frame()   # to store all the rbind-ed csv files
library(plyr)
for (i in directory) {
  periexomena <- list.files(i, full.names = TRUE, pattern = "\\.csv$")
  for (f in periexomena) {
    data_files <- read.csv(f, stringsAsFactors = FALSE, sep = ";", dec = ",")
    stations <- rbind.fill(data_files, stations)
  }
}
Does anyone know how I can get a list holding each subfolder's four rbind-ed csv files in a separate slot, or how I can modify the above code to export the data from each subfolder as its own data frame?
Try:
slotted <- lapply(setNames(nm = directory), function(D) {
  alldat <- lapply(list.files(D, pattern = "\\.csv$", full.names = TRUE),
                   function(fn) {
                     message(fn)
                     # read.csv2 defaults to sep = ";" and dec = ","
                     read.csv2(fn, stringsAsFactors = FALSE)
                   })
  # stringsAsFactors = FALSE became the default in R 4.0.0
  do.call(rbind.fill, alldat)
})
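Each slot is then addressable by its subfolder name thanks to setNames(), and if plyr is not loaded, dplyr::bind_rows is a drop-in for do.call(rbind.fill, alldat) that likewise pads missing columns with NA. For example:

# access one station's combined data by its subfolder name
head(slotted[[directory[1]]])
# without plyr (assumes dplyr is installed):
# do.call(rbind.fill, alldat)  could become  dplyr::bind_rows(alldat)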

How do I apply the same action to all Excel Files in the directory?

I need to reshape the data stored in Excel files and save it as new .csv files. I figured out which specific actions should be done, but I can't understand how to use lapply.
All Excel files have the same structure. Each of the .csv files should carry the name of its original file.
## the original actions successfully performed on a single file
library(readxl)
library("reshape2")
DataSource <- read_excel("File1.xlsx", sheet = "Sheet10")
DataShaped <- melt(subset(DataSource[-(1), ], select = -c(ng)), id.vars = c("itemname", "week"))
write.csv2(DataShaped, "C:/Users/Ol/Desktop/Meta/File1.csv")
## my attempt to apply it to the rest of the files in the directory
lapply(Files, function(i) {
  write.csv2((melt(subset(read_excel(i, sheet = "Sheet10")[-(1), ],
                          select = -c(ng)),
                   id.vars = c("itemname", "week"))))
})
R returns the result to the console but doesn't create any files, although the output resembles a .csv structure.
Could anybody explain what I am doing wrong? I'm new to R and would be really grateful for help.
Answer
Thanks to the prompt answer from @Parfait the code is working! So glad. Here it is:
library(readxl)
library(reshape2)
Files <- list.files(full.names = TRUE)
lapply(Files, function(i) {
  write.csv2(
    melt(subset(read_excel(i, sheet = "Decomp_Val")[-(1), ],
                select = -c(ng)),
         id.vars = c("itemname", "week")),
    file = paste0(sub(".xlsx", ".csv", i)))
})
It reads an Excel file in the directory, drops the first row (but keeps the headers) and the column named "ng", melts the data by the id variables "itemname" and "week", and writes the result as a .csv to the working directory under the name of the original file. And then: rinse and repeat.
Simply pass an actual file path to write.csv2. Otherwise, as noted in the docs (?write.csv), the default value of the file argument is the empty string "":
file: either a character string naming a file or a connection open for writing. "" indicates output to the console.
Below, the Excel file stem is concatenated to the specified directory path with a .csv extension:
path <- "C:/Users/Ol/Desktop/Meta/"
lapply(Files, function(i) {
  write.csv2(
    melt(subset(read_excel(i, sheet = "Sheet10")[-(1), ],
                select = -c(ng)),
         id.vars = c("itemname", "week")),
    file = paste0(path, sub(".xlsx", ".csv", i))
  )
})
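One caveat worth hedging: if Files was built with full.names = TRUE, each i already carries a directory prefix, so paste0(path, ...) would produce a nested path. A variant with basename() and an anchored pattern avoids that (a sketch, otherwise identical to the above):

lapply(Files, function(i) {
  write.csv2(
    melt(subset(read_excel(i, sheet = "Sheet10")[-(1), ],
                select = -c(ng)),
         id.vars = c("itemname", "week")),
    # basename() strips the directory; the anchored regex only matches the extension
    file = paste0(path, sub("\\.xlsx$", ".csv", basename(i)))
  )
})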

Read the file created/modified last in different directories in R

I want to read the CSV file modified (or created) most recently in each of several directories and then append them to a single pre-existing data frame (df_total).
I have two kinds of directories to read. Some hold the file directly:
A:/LogIIS/FOLDER01/files.csv
Others hold subfolders with several files.csv, as in the example below:
A:/LogIIS/FOLDER02/FOLDER_A/files.csv
A:/LogIIS/FOLDER02/FOLDER_B/files.csv
A:/LogIIS/FOLDER02/FOLDER_C/files.csv
A:/LogIIS/FOLDER03/FOLDER_A/files.csv
A:/LogIIS/FOLDER03/FOLDER_B/files.csv
A:/LogIIS/FOLDER03/FOLDER_C/files.csv
A:/LogIIS/FOLDER03/FOLDER_D/files.csv
Something like this...
# get a vector of all filenames
files <- list.files(path = "A:/LogIIS", pattern = "files.csv", full.names = TRUE, recursive = TRUE)
# get the directory names of these (for grouping)
dirs <- dirname(files)
# find the last file in each directory (i.e. the latest modified time)
lastfiles <- tapply(files, dirs, function(v) v[which.max(file.mtime(v))])
You can then loop through these and read them in.
If you just want the latest file overall, this will be files[which.max(file.mtime(files))].
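The read-and-append step could then look like this (a sketch, assuming the CSVs share a common column layout):

# read each directory's most recent file and append the result to df_total
df_total <- rbind(df_total, do.call(rbind, lapply(lastfiles, read.csv)))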
Here is a tidyverse-friendly solution:
library(tidyverse)
list.files("data/", full.names = TRUE) %>%
  enframe(name = NULL) %>%
  bind_cols(pmap_df(., file.info)) %>%
  filter(mtime == max(mtime)) %>%
  pull(value)
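To append that latest file to the pre-existing df_total, the pipe can keep going (a sketch; read_csv and bind_rows come with the tidyverse):

df_total <- list.files("data/", full.names = TRUE) %>%
  enframe(name = NULL) %>%
  bind_cols(pmap_df(., file.info)) %>%
  filter(mtime == max(mtime)) %>%
  pull(value) %>%
  read_csv() %>%          # read the single most recent file
  bind_rows(df_total, .)  # and stack it onto df_total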
Consider creating a data frame of files, as file.info maintains OS file-system metadata per path, such as the created time:
setwd("A:/LogIIS")
files <- list.files(getwd(), full.names = TRUE, recursive = TRUE)
# DATAFRAME OF FILE, DIR, AND METADATA
filesdf <- cbind(file = files,
                 dir = dirname(files),
                 data.frame(file.info(files), row.names = NULL),
                 stringsAsFactors = FALSE)
# SORT BY DIR AND CREATED TIME (DESC)
filesdf <- with(filesdf, filesdf[order(dir, -xtfrm(ctime)),])
# AGGREGATE LATEST FILE PER DIR
latestfiles <- aggregate(.~dir, filesdf, FUN=function(i) head(i)[[1]])
# LOOP THROUGH LATEST FILE VECTOR FOR IMPORT
df_total <- do.call(rbind, lapply(latestfiles$file, read.csv))
Here is a pipe-friendly way to get the most recent file in a folder. It uses an anonymous function, which in my view is slightly more readable than a one-liner. file.mtime is faster than file.info(fpath)$ctime.
dir(path = "your_path_goes_here", full.names = TRUE) %>%  # on Windows, use pattern = "^your_pattern"
  (function(fpath) {
    ftime <- file.mtime(fpath)       # file.info(fpath)$ctime for file CREATED time
    return(fpath[which.max(ftime)])  # returns the most recent file path
  })
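Reading the file the pipe returns is then one more step, e.g. (a sketch):

library(magrittr)

# feed the most recent file straight into read.csv
latest_df <- dir(path = "your_path_goes_here", full.names = TRUE) %>%
  (function(fpath) fpath[which.max(file.mtime(fpath))]) %>%
  read.csv()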

I have a zip folder which contains 332 csv files. I have to first unzip it using R and then save the contents to a directory. How do I do that?

I have tried:
read.zip(file = "C:/Users/dm/Downloads/rprog-data-specdata.zip")
and:
l = list.files("C:/Users/dm/Downloads/rprog-data-specdata")
read.csv(l[1:332])
But neither is working.
Unless you really want them all extracted, you don't have to. You can read them all in directly from the archive:
# your zip archive
zipped_csvs <- "rprog-data-specdata.zip"
# get a data.frame of the file info in the zip
fils <- unzip(zipped_csvs, list = TRUE)
# read them all into a list (or you can read individual ones)
dats <- lapply(fils$Name, function(x) {
read.csv(unzip(zipped_csvs, x), stringsAsFactors=FALSE)
})
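Note that unzip(zipped_csvs, x) inside the loop does extract each member to the working directory before reading it. If nothing should be written to disk at all, an unz() connection streams a member straight out of the archive (a sketch):

# unz() opens a read connection to a single member without extracting it
dats <- lapply(fils$Name, function(x) {
  read.csv(unz(zipped_csvs, x), stringsAsFactors = FALSE)
})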
