I am trying to create a loop which creates a number of files for a VCS Station that are named according to their station name. Below is the code that does this for one station, and I am trying to turn it into a loop so I can do it for 68 stations. (i.e. if I was copying and pasting I would be replacing the P205187 with a different station name, say P205200). I have the individual station names (e.g. P205187) in a dataframe called VCS.Sites. Can anyone point me in the right direction? New R user here and I am stuck!
P205187 <- VCSrawdata[VCSrawdata$Network_ID=="P205187",] #create a file for VCS station P205187
#clean up after subset
P205187$Network_ID <- factor(P205187$Network_ID)
# create annual file for VCS station P205187
P205187_annual <- group_by(P205187,Year,DESCRIPTION)
P205187_annual <- summarise(P205187_annual,Sum_Annual = sum(Value), Mean_Annual = mean(Value), CountDays=n())
# create monthly file for VCS station P205187
P205187_monthly <- group_by(P205187,Year, Month,DESCRIPTION)
P205187_monthly <- summarise(P205187_monthly,Sum_Monthly = sum(Value),Mean_monthly = mean(Value),CountDays=n())
You could do it quite well with an lapply loop. Something like this:
list_of_ids <- c("P205187", "P205200") # replace with your 68 station names, e.g. the id column of VCS.Sites
monthly <- function(id){
  station <- VCSrawdata[VCSrawdata$Network_ID == id, ] # subset the data for this station
  # clean up after subset
  station$Network_ID <- factor(station$Network_ID)
  # create annual summary for this station
  station_annual <- group_by(station, Year, DESCRIPTION)
  station_annual <- summarise(station_annual, Sum_Annual = sum(Value), Mean_Annual = mean(Value), CountDays = n())
  # create monthly summary for this station
  station_monthly <- group_by(station, Year, Month, DESCRIPTION)
  station_monthly <- summarise(station_monthly, Sum_Monthly = sum(Value), Mean_monthly = mean(Value), CountDays = n())
  return(station_monthly)
}
monthlies <- lapply(list_of_ids, monthly)
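If you also want the results named by station, and written out as one file per station (the file-name pattern below is just an example, not something from the question), you can follow up with:
# give the results their station names so they can be looked up by id
names(monthlies) <- list_of_ids
monthlies[["P205187"]]
# and, if "files" means csv files on disk, write one per station
for (id in names(monthlies)) {
  write.csv(monthlies[[id]], paste0(id, "_monthly.csv"), row.names = FALSE)
}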
It sounds like this is for writing csvs. We can use group_walk from dplyr to loop through all of the stations and write the csvs.
library(dplyr)
VCSrawdata %>%
  group_by(Network_ID) %>%
  group_walk(~ {
    .x %>%
      group_by(Year, DESCRIPTION) %>%
      summarize(sum_annual = sum(Value),
                mean_annual = mean(Value),
                countDays = n()) %>%
      write.csv(file = paste0(.y$Network_ID, "_annual.csv"))
    .x %>%
      group_by(Year, Month, DESCRIPTION) %>%
      summarize(sum_month = sum(Value),
                mean_month = mean(Value),
                countDays = n()) %>%
      write.csv(file = paste0(.y$Network_ID, "_month.csv"))
  })
Things to note:
The .x refers to the data of the current group, i.e. the rows of VCSrawdata for one Network_ID (the grouping column itself is removed from .x).
The .y refers to the grouping keys; in this case it is a one-row tibble containing only Network_ID.
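If you also want the station id as a column inside each csv (and not only in the file name), one option is to add it back from .y before writing. A sketch, assuming dplyr >= 1.0 for mutate(.before = ) and summarize(.groups = ):
library(dplyr)

VCSrawdata %>%
  group_by(Network_ID) %>%
  group_walk(~ {
    .x %>%
      group_by(Year, DESCRIPTION) %>%
      summarize(sum_annual = sum(Value),
                mean_annual = mean(Value),
                countDays = n(), .groups = "drop") %>%
      mutate(Network_ID = .y$Network_ID, .before = 1) %>% # put the station id back in
      write.csv(file = paste0(.y$Network_ID, "_annual.csv"), row.names = FALSE)
  })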
Generalize your process into a function, then pass the station names as a parameter in a loop or apply function to iterate through the stations. With this approach you avoid flooding the global environment with many separate objects and instead work with a single named list, which is easier to organize and serialize.
library(dplyr)

summarize_stations <- function(station_name) {
  tmp_df <- VCSrawdata[VCSrawdata$Network_ID == station_name, ]
  tmp_df$Network_ID <- factor(tmp_df$Network_ID)
  # create annual summary for VCS station
  tmp_annual <- summarise(group_by(tmp_df, Year, DESCRIPTION),
                          Sum_Annual = sum(Value),
                          Mean_Annual = mean(Value),
                          CountDays = n())
  # create monthly summary for VCS station
  tmp_monthly <- summarise(group_by(tmp_df, Year, Month, DESCRIPTION),
                           Sum_Monthly = sum(Value),
                           Mean_Monthly = mean(Value),
                           CountDays = n())
  # RETURN NAMED LIST OF BOTH AGGREGATIONS
  return(list(annual = tmp_annual, monthly = tmp_monthly))
}
station_list <- sapply(VCS.Sites$station_names, summarize_stations, simplify=FALSE)
# ACCESS UNDERLYING ELEMENTS
station_list$P205187$annual
station_list$P205187$monthly
...
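Since the question talks about creating files, the named list can also be written out to disk in one pass; the file-name pattern below is an assumption on my part:
# write two csvs per station, named after the station id
for (nm in names(station_list)) {
  write.csv(station_list[[nm]]$annual,  paste0(nm, "_annual.csv"),  row.names = FALSE)
  write.csv(station_list[[nm]]$monthly, paste0(nm, "_monthly.csv"), row.names = FALSE)
}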
You can even use by() (the object-oriented wrapper to tapply) to subset VCSrawdata by Network_ID (assuming it includes all the stations you need). To do so, slightly adjust the function to receive a data frame as its parameter, which lets you skip the subset line.
summarize_stations <- function(tmp_df) {
# REMOVE SUBSET LINE
# tmp_df <- VCSrawdata[VCSrawdata$Network_ID=="P205187",]
...keep same code as above
}
station_list <- by(VCSrawdata, VCSrawdata$Network_ID, FUN=summarize_stations)
# ACCESS UNDERLYING ELEMENTS
station_list$P205187$annual
station_list$P205187$monthly
...
Related
I have different datasets for different waves. Each wave has its own prefix for both the dataset and the variable names. I'm trying to import and append all the data files, keeping only the subset of variables I need. Currently I'm doing this:
var_list <- c("pidp", "jbsat", "jbhrs", "jbnssec8_dv", "panssec8_dv", "manssec8_dv", "paedqf", "maedqf", "qfhigh", "age_dv",
"sex_dv", "psu", "strata", "employ", "jbhas", "jboff", "jbsem", "jbstat", "jbterm1", "jbterm2", "pjbptft", "fimnet_dv",
"fimngrs_dv", "fimnlabnet_dv", "seearnnet_dv", "fimnmisc_dv", "fimnprben_dv", "fimninvent_dv", "fimnpen_dv", "fimnsben_dv",
"hhtype_dv", "livesp_dv", "nch14resp", "nmpsp_dv", "tenure_dv", "urban_dv", "jbsat", "health", "sf1", "scghqa",
"scghqb", "scghqc", "scghqd", "scgqhe", "scgqhf", "scghqg", "scghqi", "scghqj", "scghqh", "scghql", "sclsat1",
"sclsat2", "sclsat3", "sclsat4", "indscus_lw", "indscub_xw")
Then import the data for the first wave, selecting these variables and remove the wave-prefix:
longfile <- read_dta(file=paste0(dir, "ukhls_w1/a_indresp.dta")) %>%
select(matches(var_list)) %>%
rename_at(vars(starts_with("a_")), ~str_replace(.,"a_", "")) %>% #remove the wave prefix
mutate(wave = 1)
At this point, I would simply use the following loop:
for (wn in 2:10) {
wl <- paste0(letters[wn],"_")
wave_data <- read_dta(paste0(dir, "ukhls_w", wn, "/", wl, "indresp.dta")) %>%
select(matches(var_list)) %>%
rename_at(vars(starts_with(wl)), ~str_replace(.,wl, "")) %>% # remove prefix wave
mutate(wave = wn)
longfile <- rbind(longfile, wave_data)
}
However, the problem is that some variable names match more than one column in the files for the subsequent waves. For example, in the second wave there is a variable named "nxtjbhrs", which gets included because it matches "jbhrs". This then creates an error in the rbind, as the number of columns differs.
How can I select the exact matches in this case? Or force the appending of the dataset?
Thanks for your support!
select(setdiff(names(.), var_list))
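Note that setdiff(names(.), var_list) returns the column names that are not in var_list. If the aim is to keep only exact matches, one option (an assumption on my part, using tidyselect's any_of() and reusing the loop objects from the question) is to strip the wave prefix first and then select by exact name; bind_rows() then also tolerates waves with differing columns:
library(haven)   # read_dta()
library(dplyr)
library(stringr)

wave_data <- read_dta(paste0(dir, "ukhls_w", wn, "/", wl, "indresp.dta")) %>%
  rename_at(vars(starts_with(wl)), ~ str_replace(., wl, "")) %>% # remove the wave prefix first
  select(any_of(var_list)) %>%  # exact name matches only; names missing in a wave are skipped
  mutate(wave = wn)

longfile <- bind_rows(longfile, wave_data) # missing columns become NA instead of breaking rbind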
I have around 400,000 objects that each have an id and a specific status. Every day (365 days a year) a status file is generated as follows:
Example of a status file:
n = 400000
df <- data.frame(id = sample(200000000000:299999999999, size = n, replace = FALSE),
status = sample( seq(100, 900, 10), size = n, replace = TRUE))
time <- paste0(Sys.Date(), "_", format(Sys.time(), "%H%M"))
writexl::write_xlsx(df, paste0("status_", time, ".xlsx"))
Now, my question is: How can I most efficiently put these files together so that I can analyse the development of certain status over the time? To save some memory I didn't create a date variable within each status file as it would be the same for all cases (the date is just in the filename).
One status file is around 6 or 7 MB, so over a year this would sum up to around 2.5 GB.
The easiest way would be to list the files, then read them in, and then join them.
So you could do something like this:
dir <- "path to directory"
files <- list.files(dir, full.names = TRUE)
# can change based on file type (if you convert to csv, you can use data.table::fread)
data <- purrr::map(files, readxl::read_excel)
# you'll probably want the date of each
dates <- dir %>% list.files() %>% stringr::str_remove(".xlsx") %>% stringr::str_remove("status_")
data <- purrr::map2(files, data, ~ mutate(.x, status = rep_len(.y, nrow(.x))
data_join <- purrr::reduce(data, rbind)
# you might want to convert the dates into date-times, here's an example
data_join <- data_join %>% separate(dates, into = c("date", "time"), by = "_") %>% mutate(date = lubridate::ymd(date), time = lubridate::hm(time)) %>% mutate(date_time = date + time)
This might not be the fastest solution in R, but is one of the fastest to get you running from the excel files.
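Building on the comment in the code above: if reading speed becomes an issue, converting the files to csv and using data.table::fread is usually much faster than reading xlsx. A minimal sketch (the directory path and file pattern are assumptions):
library(data.table)

csv_files <- list.files("path to directory", pattern = "^status_.*\\.csv$", full.names = TRUE)

# read every csv and tag each row with its source file name,
# which carries the date-time stamp
data_list <- lapply(csv_files, fread)
names(data_list) <- basename(csv_files)
data_join <- rbindlist(data_list, idcol = "file")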
I am a beginner user in R and have been compiling a code to create a custom function to execute a specific task on some data that I possess. The custom function is structured to identify missing data in a csv file and patch this using the mean value. Thereafter, I want to summarize the data by year and month and export this as a csv file. I have multiple csv files that are sitting in a folder and would like to perform this task on each of these files. Thus far, I am able to get the code to perform the task at hand but don't know how to write a unique output for each csv file that has been processed and save these to a new folder. I would also like to retain the original file name in the processed output but have the words "_processed" appended to it. Additionally, any suggestions on how this code can be improved are most welcome. Thanks in advance.
# Load all packages required by the script
library(tidyverse) # data science package
library(lubridate) # work with dates
library(dplyr) # data manipulation (filter, summarize, mutate)
library(ggplot2) # graphics
library(gridExtra) # tile several plots next to each other
library(scales)
# Set the working directory #
setwd("H:/Shaeden_Post_Doc/Genus_Exchange/GEE_Data/MODIS_Product_Data_Raw/Cold_Temperate_Moist")
#create a function to summarize data by year and month
#patch missing values using the average
summarize_by_month = function(df){
# counting unique, missing and mean values in the ET column
df %>% summarise(n = n_distinct(ET),
na = sum(is.na(ET)),
med = mean(ET, na.rm = TRUE))
# assign mean values to the missing data and modify the dataframe
df = df %>%
mutate(ET = replace(ET,is.na(ET),mean(ET, na.rm = TRUE)))
df
#separate data into year, month and day
df$date = as.Date(df$date,format="%Y/%m/%d")
#summarize by year and month
df %>%
mutate(year = format(date, "%Y"), month = format(date, "%m")) %>%
group_by(year, month) %>%
summarise(mean_monthly = mean(ET))
}
#import all files and execute custom function for each
file_list = list.files(pattern="AET", full.names=TRUE)
file_list
my_AET_files = lapply(file_list, read_csv)
monthly_AET = lapply(my_AET_files, summarize_by_month)
monthly_AET
A link to the sample datasets is provided below
https://drive.google.com/drive/folders/1pLHt-vT87lxzW2We-AS1PwVcne3ALP2d?usp=sharing
You can read the data, manipulate it, and write the csv all in the same function:
library(dplyr)
summarize_by_month = function(file) {
df <- readr::read_csv(file)
# assign mean values to the missing data and modify the dataframe
df = df %>% mutate(ET = replace(ET,is.na(ET),mean(ET, na.rm = TRUE)))
#separate data into year, month and day
df$date = as.Date(df$date,format="%Y/%m/%d")
#summarize by year and month
new_df <- df %>%
mutate(year = format(date, "%Y"), month = format(date, "%m")) %>%
group_by(year, month) %>%
summarise(mean_monthly = mean(ET))
write.csv(new_df, sprintf('output_folder/%s_processed.csv',
tools::file_path_sans_ext(basename(file))), row.names = FALSE)
}
monthly_AET = lapply(file_list, summarize_by_month)
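One small prerequisite: write.csv() will not create the destination folder, so (matching the folder name used in the sprintf() call above) create it once before running the lapply():
# create the output folder next to the working directory, if it does not exist yet
dir.create("output_folder", showWarnings = FALSE)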
path<-"your_peferred_path/" #set a path to were you want to save the files
x<-list.files(pattern= "your_pattern") # create a list of your file names
name<-str_sub(x, start=xL, end=yL) #x & y being the part of the name you want to keep
for (i in 1:length(monthly_AET)){
write_excel_csv(monthly_AET[i], paste0(path, name, "_processed.csv")) # paste0 allows to create custom names from variables and static strings
}
note: this is only an assumption and may have to be tweaked to suit your needs
I am building a Shiny application in which a large ggplot2 fortified dataframe needs to be calculated over and over again, using a large amount of external source files. I am searching for the fastest and most efficient way to do this. In the following paragraph I will delve a little bit more into the subject and the code I have so far and also provide the input data to enable your kind assistance.
I am using the Helsinki Region Travel Time Matrix 2018, a dataset provided by Digital Geography Lab, a research group in the University of Helsinki. This data uses a generalised map of Helsinki capital region, in 250 x 250 meter cells (in my code grid_f), to calculate travel times between all cells in the map (grid ids are called YKR_ID, n=13231) by public transport, private car, bicycle and by foot. The calculations are stored in delimited .txt files, one text file for all the travel times to a specific cell id. The data is available for download at this website, under "Download the data". NB, the unzipped data is 13.8 GB in size.
Here is a selection from a text file in the dataset:
from_id;to_id;walk_t;walk_d;bike_s_t;bike_f_t;bike_d;pt_r_tt;pt_r_t;pt_r_d;pt_m_tt;pt_m_t;pt_m_d;car_r_t;car_r_d;car_m_t;car_m_d;car_sl_t
5785640;5785640;0;0;-1;-1;-1;0;0;0;0;0;0;-1;0;-1;0;-1
5785641;5785640;48;3353;51;32;11590;48;48;3353;48;48;3353;22;985;21;985;16
5785642;5785640;50;3471;51;32;11590;50;50;3471;50;50;3471;22;12167;21;12167;16
5785643;5785640;54;3764;41;26;9333;54;54;3764;54;54;3764;22;10372;21;10370;16
5787544;5785640;38;2658;10;7;1758;38;38;2658;38;38;2658;7;2183;7;2183;6
My interest is to visualise (with ggplot2) this 250x250m Helsinki region map for one travel mode, the private car, using any of the possible 13231 cell ids, repeatedly if the user wants. Because of this it is important that the dataframe fetch is as fast and efficient as possible. For this question, let's concentrate on the fetching and processing of the data from the external files and use only one specific id value.
In a nutshell, after I have produced a ggplot2::fortify() version of the 250 x 250 meter grid spatial dataset grid_f, I need to:
scan through all the 13231 Travel Time Matrix 2018 text files,
pick only the relevant columns (from_id, to_id, car_r_t, car_m_t, car_sl_t) in each file,
pick the relevant row using from_id (in this case, origin_id <- "5985086") in each file,
join the resulting row to the fortified spatial data grid_f.
My code is as follows:
# Libraries
library(ggplot2)
library(dplyr)
library(rgdal)
library(data.table)
library(sf)
library(sp)
# File paths. ttm_path is the folder which contains the unchanged Travel
# Time Matrix 2018 data from the research group's home page
ttm_path <- "HelsinkiTravelTimeMatrix2018"
gridpath <- "MetropAccess_YKR_grid_EurefFIN.shp"
#### Import grid cells
# use this CRS information throughout the app
app_crs <- sp::CRS("+init=epsg:3067")
# Read grid shapefile and transform
grid_f <- rgdal::readOGR(gridpath, stringsAsFactors = TRUE) %>%
sp::spTransform(., app_crs) %>%
# preserve grid dataframe data in the fortify
{dplyr::left_join(ggplot2::fortify(.),
as.data.frame(.) %>%
dplyr::mutate(id = as.character(dplyr::row_number() - 1)))} %>%
dplyr::select(-c(x, y))
The code above this point is meant to run only once. The code below, more or less, would be run over and over with different origin_ids.
#### Fetch TTM18 data
origin_id <- "5985086"
origin_id_num <- as.numeric(origin_id)
# column positions of columns from_id, to_id, car_r_t, car_m_t, car_sl_t
col_range <- c(1, 2, 14, 16, 18)
# grid_f as data.table version
dt_grid <- as.data.table(grid_f)
# Get filepaths of all of the TTM18 data. Remove metadata textfile filepath.
all_files <- list.files(path = ttm_path,
pattern = ".txt$",
recursive = TRUE,
full.names = TRUE)
all_files <- all_files[-length(all_files)]
# lapply function
TTM18_fetch <- function(x, col_range, origin_id) {
res <- fread(x, select = col_range)
res <- subset(res, from_id == origin_id)
return(res)
}
# The part of the code that needs to be fast and efficient
result <-
lapply(all_files, FUN = TTM18_fetch, col_range, origin_id_num) %>%
data.table::rbindlist(., fill = TRUE) %>%
data.table::merge.data.table(dt_grid, ., by.x = "YKR_ID", by.y = "to_id")
The dataframe result should have 66155 rows of 12 variables, five rows for each 250x250 meter grid cell. The columns are YKR_ID, long, lat, order, hole, piece, id, group, from_id, car_r_t, car_m_t, car_sl_t.
My current lapply() and data.table::fread() solution takes about 2-3 minutes to complete. I think this is already a good achievement, but I can't help but think there are better and faster ways to complete this. So far, I have tried these alternatives to what I now have:
A conventional for loop: that was obviously a slow solution
I tried to teach myself more about vectorised functions in R, but that did not lead anywhere. Used this link
Tried to dabble with with() unsuccessfully using this SO question, inspired by this SO question
Looked into package parallel but ended up not utilising that because of the Windows environment I am using
Tried to find alternative ways to solve this with apply() and sapply() but nothing noteworthy came out of that.
As to why I didn't do all this to the data before ggplot2::fortify, I simply found it troublesome to work with a SpatialPolygonsDataFrame.
Thank you for your time.
Whenever I'm trying to figure out how to improve the performance of my R functions, I generally use the following approach. First, I look for any function calls that may be unnecessary, or places where multiple function calls can be simplified into one. Then, I look for the parts of my code that incur the greatest time penalty by benchmarking each part separately. This is easily done with the microbenchmark package.
For example, we can ask whether we get better performance with or without piping (e.g. %>%).
# hint... piping adds a small overhead
library(magrittr)
library(microbenchmark)
microbenchmark(
pipe = iris %>% subset(Species=='setosa'),
no_pipe = subset(iris, Species=='setosa'),
times = 200)
Unit: microseconds
expr min lq mean median uq max neval cld
pipe 157.518 196.739 308.1328 229.6775 312.6565 2473.582 200 b
no_pipe 84.894 116.386 145.4039 126.1950 139.4100 612.492 200 a
Here we find that subsetting the data.frame without piping takes roughly half the time to execute!
Next, I determine the net time penalty for each place I benchmarked by multiplying the execution time by the total number of times it needs to be executed. For example, saving ~100 microseconds per file adds up to roughly 1.3 seconds across the 13231 files here. For the areas with the greatest net time penalty, I try to replace them with faster functions and/or reduce the total number of times they need to be executed.
TLDR
In your case, you can speed things up by using the fst package, although you would need to convert your delimited text files to fst files first.
# before
TTM18_fetch <- function(x, col_range, origin_id) {
res <- data.table::fread(x, select = col_range)
res <- subset(res, from_id == origin_id)
return(res)
}
# after (NB x needs to be a fst file)
col_range <- c('from_id', 'to_id', 'car_r_t', 'car_m_t', 'car_sl_t')
TTM18_fetch <- function(x, col_range, origin_id) {
res <- fst::read_fst(path = x,
columns = col_range,
as.data.table = TRUE)[from_id==origin_id]
return(res)
}
To convert your text files to fst
library(data.table)
library(fst)
ttm_path <- 'REPLACE THIS'
new_ttm_path <- 'REPLACE THIS'
# Get filepaths of all of the TTM18 data. Remove metadata textfile filepath.
all_files <- list.files(path = ttm_path,
pattern = ".txt$",
recursive = TRUE,
full.names = TRUE)
all_files <- all_files[!grepl('[Mm]eta', all_files)]
# creating new file paths and names for fst files
file_names <- list.files(path = ttm_path,
pattern = ".txt$",
recursive = TRUE)
file_names <- file_names[!grepl('[Mm]eta', file_names)]
file_names <- gsub(pattern = '\\.txt$',
                   replacement = '.fst',
                   x = file_names)
file_names <- file.path(new_ttm_path, file_names)
# csv to fst conversion
library(progress) # a progress bar to help you keep track of things
pb <- progress_bar$new(
format = " :what [:bar] :percent eta: :eta",
clear = FALSE, total = length(file_names), width = 60)
# an index file to store from_id file locations
from_id_paths <- data.table(from_id = numeric(),
file_path = character())
for(i in seq_along(file_names)){
  pb$tick(tokens = list(what = 'reading'))
  tmp <- data.table::fread(all_files[i], key = 'from_id')
  pb$tick(0, tokens = list(what = 'writing'))  # update the label without advancing the bar
  fst::write_fst(tmp,
                 compress = 50, # less compressed files read faster
                 path = file_names[i])
  pb$tick(0, tokens = list(what = 'indexing'))
  from_id_paths <- rbind(from_id_paths,
                         data.table(from_id = unique(tmp$from_id),
                                    file_path = file_names[i]))
}
setkey(from_id_paths, from_id)
write_fst(from_id_paths,
path = file.path(new_ttm_path, 'from_id_index.fst'),
compress = 0)
This would be the replacement
library(fst)
library(data.table)
new_ttm_path <- 'REPLACE THIS'
#### Fetch TTM18 data
origin_id <- "5985086"
origin_id_num <- as.numeric(origin_id)
# column positions of columns from_id, to_id, car_r_t, car_m_t, car_sl_t
col_range <- c('from_id', 'to_id', 'car_r_t', 'car_m_t', 'car_sl_t')
# grid_f as data.table version
dt_grid <- as.data.table(grid_f)
nescessary_files <- read_fst(path = file.path(new_ttm_path,
'from_id_index.fst'),
as.data.table = TRUE
)[from_id==origin_id,file_path]
TTM18_fetch <- function(x, col_range, origin_id) {
res <- fst::read_fst(path = x,
columns = col_range,
as.data.table = TRUE)[from_id==origin_id]
return(res)
}
result <- rbindlist(lapply(nescessary_files, FUN = TTM18_fetch, col_range, origin_id_num),
fill = TRUE)
result <- data.table::merge.data.table(dt_grid, result, by.x = "YKR_ID", by.y = "to_id")
I'm new-ish to R and am having some trouble iterating through values.
For context: I have data on 60 people over time, and each person has his/her own dataset in a folder (I received the data with id #s 00:59). For each person, there are 2 values I need - time of response and picture response given (a number 1 - 16). I need to convert this data from wide to long format for each person, and then eventually append all of the datasets together.
My problem is that I'm having trouble writing a loop that will do this for each person (i.e. each dataset). Here's the code I have so far:
pam[x] <- fromJSON(file = "PAM_u[x].json")
pam[x]df <- as.data.frame(pam[x])
#Creating long dataframe for times
pam[x]_long_times <- gather(
select(pam[x]df, starts_with("resp")),
key = "time",
value = "resp_times"
)
#Creating long dataframe for pic_nums (affect response)
pam[x]_long_pics <- gather(
select(pam[x]df, starts_with("pic")),
key = "picture",
value = "pic_num"
)
#Combining the two long dataframes so that I have one df per person
pam[x]_long_fin <- bind_cols(pam[x]_long_times, pam[x]_long_pics) %>%
select(resp_times, pic_num) %>%
add_column(id = [x], .before = 1)
If you replace [x] in the above code with a person's id# (e.g. 00), the code will run and will give me the dataframe I want for that person. Any advice on how to do this so I can get all 60 people done?
Thanks!
EDIT
So, using library(jsonlite) rather than library(rjson) set up the files in the format I needed without having to do all of the manipulation. Thanks all for the responses, but the solution was apparently much easier than I'd thought.
I don't know the structure of your json files. If the json files are not in your working directory, try this:
library(jsonlite)
# setup - read files
json_folder <- "U:/test/" #adjust you folder here
files <- list.files(path = paste0(json_folder), pattern = "\\.json$")
# import data
pam <- NULL
pam_df <- NULL
for (i in seq_along(files)) {
pam[[i]] <- fromJSON(files[i]) # jsonlite::fromJSON takes the file path directly
pam_df[[i]] <- as.data.frame(pam[[i]])
}
Here you read all the json files in the folder, which gives you a vector of length 60.
Then you loop along that vector and read each file into a list.
I assume at the end you can do bind_rows, or add your own code inside the for loop. But remember to initialise the result objects to NULL before the loop starts, e.g. pam_long_pics <- NULL.
Hope that helped? Let me know.
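As a rough sketch of what "add your code in the for loop" could look like, here is the question's own wide-to-long snippet moved inside the loop (the id extraction from the file name and the column prefixes are assumptions based on the question):
library(jsonlite)
library(dplyr)
library(tidyr)
library(tibble)

json_folder <- "U:/test/" # adjust your folder here
files <- list.files(path = json_folder, pattern = "\\.json$", full.names = TRUE)

pam_long <- vector("list", length(files))

for (i in seq_along(files)) {
  df <- as.data.frame(fromJSON(files[i]))

  long_times <- gather(select(df, starts_with("resp")),
                       key = "time", value = "resp_times")
  long_pics  <- gather(select(df, starts_with("pic")),
                       key = "picture", value = "pic_num")

  # assumes, as in the question, that both long frames have the same number of rows
  pam_long[[i]] <- bind_cols(long_times, long_pics) %>%
    select(resp_times, pic_num) %>%
    add_column(id = gsub("\\D", "", basename(files[i])), .before = 1) # e.g. "PAM_u00.json" -> "00"
}

pam_long_all <- bind_rows(pam_long)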
Something along these lines could work:
#library("tidyverse")
#library("jsonlite")
file_list <- list.files(pattern = "*.json", full.names = TRUE)
Data_raw <- tibble(File_name = file_list) %>%
mutate(File_contents = map(File_name, fromJSON)) %>% # This should result in a nested tibble
mutate(File_contents = map(File_contents, as_tibble))
Data_raw %>%
mutate(Long_times = map(File_contents, ~ gather(.x, key = "time", value = "resp_times", starts_with("resp"))),
       Long_pics = map(File_contents, ~ gather(.x, key = "picture", value = "pic_num", starts_with("pic")))) %>%
unnest(c(Long_times, Long_pics)) %>%
select(File_name, resp_times, pic_num)
EDIT: you may or may not need to include as_tibble() after reading in the JSON files, depending on what your data looks like.