I am new to R. I created the function below to calculate the mean of a dataset contained in 332 CSV files. I am seeking advice on how I could improve this code. It takes about 38 seconds to run, which makes me think it is not very efficient.
pollutantmean <- function(directory, pollutant, id = 1:332) {
  files_list <- list.files(directory, full.names = TRUE) # creates list of files
  dat <- data.frame()                                    # creates empty data frame
  for (i in id) {
    dat <- rbind(dat, read.csv(files_list[i]))           # combine all the monitor data together
  }
  good <- complete.cases(dat)                            # flag rows that contain no NA values
  mean(dat[good, pollutant])                             # calculate mean
} # run time ~37 sec - NEED TO OPTIMISE THE CODE
Instead of creating an empty data.frame and calling rbind on each pass of a for loop, you can store all the data.frames in a list and combine them in one shot. You can also use the na.rm option of mean so that NA values are not taken into account.
pollutantmean <- function(directory, pollutant, id = 1:332) {
  files_list <- list.files(directory, full.names = TRUE)[id]
  df <- do.call(rbind, lapply(files_list, read.csv))
  mean(df[[pollutant]], na.rm = TRUE)
}
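Usage stays the same; for example, assuming the CSV files sit in a "specdata" directory as in the question:

pollutantmean("specdata", "sulfate", 1:10)
pollutantmean("specdata", "nitrate", 70:72)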
Optional - I would increase the readability with magrittr:
library(magrittr)
pollutantmean <- function(directory, pollutant, id = 1:332) {
  list.files(directory, full.names = TRUE)[id] %>%
    lapply(read.csv) %>%
    do.call(rbind, .) %>%
    extract2(pollutant) %>%
    mean(na.rm = TRUE)
}
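In case extract2 is unfamiliar: it is magrittr's alias for [[, so that pipeline step simply pulls the pollutant column out as a plain vector. A tiny illustration with made-up data:

library(magrittr)
df <- data.frame(sulfate = c(1, 2, NA), nitrate = c(3, NA, 5))
identical(df %>% extract2("sulfate"), df[["sulfate"]]) # TRUE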
You can improve it by using data.table's fread function (see Quickly reading very large tables as dataframes in R)
Also, binding the results using data.table::rbindlist is much faster.
require(data.table)

pollutantmean <- function(directory, pollutant, id = 1:332) {
  files_list <- list.files(directory, full.names = TRUE)[id]
  DT <- rbindlist(lapply(files_list, fread))
  mean(DT[[pollutant]], na.rm = TRUE)
}
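If you want to verify the speed-up on your own files, a quick comparison with system.time works. Here I assume the base-R and data.table versions above have been assigned to separate (hypothetical) names, pollutantmean_base and pollutantmean_dt, and that the data sit in a "specdata" directory:

# timings depend on your machine and your files; run each a few times
system.time(pollutantmean_base("specdata", "sulfate")) # do.call(rbind, ...) + read.csv
system.time(pollutantmean_dt("specdata", "sulfate"))   # rbindlist + fread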
Related
I have roughly 50000 .rda files. Each contains a dataframe named results with exactly one row. I would like to append them all into one dataframe.
I tried the following, which works, but is slow:
library(dplyr) # for bind_rows

root_dir <- paste(path, "models/", sep = "")
files <- paste(root_dir, list.files(root_dir), sep = "")

load(files[1])
results_table <- results
rm(results)

for (i in c(2:length(files))) {
  print(paste("We are at step ", i, sep = ""))
  load(files[i])
  results_table <- bind_rows(list(results_table, results))
  rm(results)
}
Is there a more efficient way to do this?
Using .rds is a little bit easier, but if we are limited to .rda the following might be useful. I'm not certain whether it is faster than what you have done:
library(purrr)
library(dplyr)
library(tidyr)

## make and write some sample data to .rda
x <- 1:10
fake_files <- function(x) {
  df <- tibble(x = x)
  save(df, file = here::here(paste0(as.character(x), ".rda")))
  return(NULL)
}
purrr::map(x, ~ fake_files(x = .x))

## map and load the .rda files into a single tibble
load_rda <- function(file) {
  foo <- load(file = file) # foo just provides the name of the objects loaded
  return(df)               # note: df is the name of the object restored from the .rda
}
rda_files <- tibble(files = list.files(path = here::here(""),
                                       pattern = "*.rda",
                                       full.names = TRUE)) %>%
  mutate(data = pmap(., ~ load_rda(file = .x))) %>%
  unnest(data)
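To illustrate the ".rds is a little bit easier" remark: readRDS returns the saved object directly, so there is no name juggling with load. A minimal sketch, assuming the same one-row data frames were written with saveRDS instead of save:

library(dplyr)
library(purrr)

## write side: saveRDS(df, here::here("0001.rds")) instead of save(df, file = ...)
rds_files <- list.files(here::here(""), pattern = "\\.rds$", full.names = TRUE)

results_table <- rds_files %>%
  map(readRDS) %>% # each readRDS() call returns the saved data frame itself
  bind_rows()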
This is untested code but should be pretty efficient:
root_dir <- paste(path, "models/", sep = "")
files <- paste(root_dir, list.files(root_dir), sep = "")

data_list <- lapply(files, function(f) {
  message("loading file: ", f)
  name <- load(f)                  # this captures the name of the loaded object
  return(eval(parse(text = name))) # returns the object with the name saved in `name`
})
results_table <- data.table::rbindlist(data_list)
data.table::rbindlist is very similar to dplyr::bind_rows but a little faster.
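As a side note, get(name) is a simpler way than eval(parse(text = name)) to fetch an object whose name is stored in a string, so the loader above could equally be written as:

data_list <- lapply(files, function(f) {
  message("loading file: ", f)
  name <- load(f) # load() returns the name(s) of the restored object(s)
  get(name)       # fetch the object by that name
})
results_table <- data.table::rbindlist(data_list)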
I am working in R and learning how to code. I have written a piece of code using a for loop, and I find it very slow. I was wondering if I could get some assistance converting it to use either the sapply or lapply function. Here is my working R code:
library(dplyr)
pollutantmean <- function(directory, pollutant, id = 1:332) {
  files_list <- list.files(directory, full.names = TRUE) # creates a list of files
  dat <- data.frame()                                    # creates an empty data frame
  for (i in seq_along(files_list)) {
    # loops through the files, rbinding them together
    dat <- rbind(dat, read.csv(files_list[i]))
  }
  dat_subset <- filter(dat, dat$ID %in% id)   # subsets the rows that match the 'ID' argument
  mean(dat_subset[, pollutant], na.rm = TRUE) # identifies the mean of a pollutant
}
pollutantmean("specdata", "sulfate", 1:10)
This code takes almost 20 seconds to return, which seems unacceptable for 332 files. Imagine if I had a dataset with 10K files and wanted to get the mean of those variables.
You can rbind all elements in a list using do.call, and you can read in all the files into that list using lapply:
mean(
  filter( # here's the filter that will be applied to the rbind-ed data
    do.call("rbind", # call "rbind" on all elements of a list
      lapply( # create a list by reading in the files from list.files()
        # add any necessary args to read.csv:
        list.files("[::DIR_PATH::]", full.names = TRUE),
        function(x) read.csv(file = x, ...)
      )
    ),
    ID %in% id # make sure id is replaced with what you want
  )[[pollutant]], # [[ ]] picks out the column named by the `pollutant` argument
  na.rm = TRUE
)
The reason your code is slow is that you are incrementally growing your data frame inside the loop. One way to do this using dplyr and map_df from purrr is:
library(dplyr)
pollutantmean <- function(directory, pollutant, id = 1:332) {
  files_list <- list.files(directory, full.names = TRUE)
  purrr::map_df(files_list, read.csv) %>%
    filter(ID %in% id) %>%
    summarise_at(pollutant, mean, na.rm = TRUE)
}
I'm trying to calculate the mean of a variable across multiple datasets in R. However, I keep running into this error and can't seem to get past it. Suggestions?
mymean <- function(directory, observation, id = 1:400) {
  all_files <- list.files(directory, pattern = "*.csv", full.names = TRUE)
  dat <- data.frame
  for (i in id) {
    dat <- rbind(dat, read.csv(all_files[i]))
  }
  mean(dat[observation], na.rm = TRUE)
}
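Two things stand out in the code above: data.frame is assigned without its parentheses, so dat starts out as the function object rather than an empty data frame, and dat[observation] keeps a one-column data frame where mean expects a vector. A minimal corrected sketch, assuming each CSV contains a column whose name is passed as observation:

mymean <- function(directory, observation, id = 1:400) {
  all_files <- list.files(directory, pattern = "*.csv", full.names = TRUE)
  dat <- data.frame()                    # note the (): create an empty data frame
  for (i in id) {
    dat <- rbind(dat, read.csv(all_files[i]))
  }
  mean(dat[[observation]], na.rm = TRUE) # [[ ]] returns the column as a vector
}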
I have a problem I'm attempting to solve and have run into a brick wall. I'm attempting to find the mean of a set of data given specific pollutant names and the ID number. I believe the code up to and including the for loop works fine: I create a function with 3 arguments, create an empty data.frame, and then bind all my files into one variable called "dat".
Now I'm trying to subset this newly bound data by "id" and by the specific pollutant name (there are two of them, named sulfate and nitrate). As you can see, the code under the for loop is a mess.
Specifically, I'm unsure how to subset on two parameters/arguments in one "which" call, so I tried to make a separate one for each. I was thinking I could use the median function to find the mean between both.
pollutantmean <- function(directory, pollutant, id = 1:332) {
  files_list <- list.files(directory, full.names = TRUE)
  dat <- data.frame()
  for (i in 1:332) {
    dat <- rbind(dat, read.csv(files.list[1]))
  }
  subset_id <- dat[which(dat[, "id"] == id), ]
  subset_poll <- dat[which(dat[, "pollutant"] == pollutant), ]
  median(subset_id)
}
Here is a photo of what the head/tail data looks like in R.
EDIT1: So I was able to get the function initialized (proper term?) but am getting numerous "undefined columns selected" errors when I try to run it with input.
pollutantmean <- function(directory, pollutant, ID = 1:332) {
  files_list <- list.files(directory, full.names = TRUE)
  dat <- data.frame()
  for (i in 1:332) {
    dat <- rbind(dat, read.csv(files_list[1]))
  }
  subset_id <- dat[which(dat[, "ID"] == ID & dat[, "pollutant"] == pollutant)]
  median(subset_id[, "pollutant"], na.rm = TRUE)
}
So that function gets placed into memory just fine, but when I try to input parameters "pollutantmean("specdata","sulfate", 1:10)" I get the following errors.
Error in `[.data.frame`(dat, , "pollutant") : undefined columns selected
In addition: Warning message:
In dat[, "ID"] == ID :
Error in `[.data.frame`(dat, , "pollutant") : undefined columns selected
I was able to solve this question with some outside help.
pollutantmean <- function(directory, pollutant, ID = 1:332) {
  files_list <- list.files(directory, full.names = TRUE)
  dat <- data.frame()
  for (i in ID) {
    dat <- rbind(dat, read.csv(files_list[i]))
  }
  mean(dat[!is.na(dat[, "ID"]), pollutant], na.rm = TRUE)
}
I am new to R and not sure why I have to rename the data frame's column names at the end of the program, even though I defined the data frame with column names at the beginning. The purpose of the data frame is to hold two columns: a sequence saved under the ID column and some sort of count under the NOBS column.
complete <- function(directory, id = 1:332) {
  collectCounts = data.frame(id = numeric(), nobs = numeric())
  for (i in id) {
    fileName = sprintf("%03d", i)
    fileLocation = paste(directory, "/", fileName, ".csv", sep = "")
    fileData = read.csv(fileLocation, header = TRUE)
    completeCount = sum(!is.na(fileData[, 2]), na.rm = TRUE)
    collectCounts <- rbind(collectCounts, c(id = i, completeCount))
    # print(completeCount)
  }
  colnames(collectCounts)[1] <- "id"
  colnames(collectCounts)[2] <- "nobs"
  print(collectCounts)
}
It's not quite clear what your specific problem is, as you did not provide a complete and verifiable example, but I can give a few pointers on improving the code nonetheless.
1) It is not recommended to 'grow' a data.frame within a loop. This is extremely inefficient in R, as it copies the entire structure each time. It is better to allocate the whole data.frame at the outset and then fill in the rows inside the loop.
2) R has a handy function, paste0, that does not require you to specify sep = "".
3) There's no need to specify na.rm = TRUE in your sum, because is.na will never return NAs.
Putting this together:
complete = function(directory, id = 1:332) {
  collectCounts = data.frame(id = id, nobs = numeric(length(id)))
  for (i in 1:length(id)) {
    fileName = sprintf("%03d", id[i])
    fileLocation = paste0(directory, "/", fileName, ".csv")
    fileData = read.csv(fileLocation, header = TRUE)
    completeCount = sum(!is.na(fileData[, 2]))
    collectCounts[i, 'nobs'] <- completeCount
  }
  collectCounts # return the filled-in data.frame
}
It's always hard to answer questions without example data.
You could start with
collectCounts = data.frame(id, nobs=NA)
And in your loop, do:
collectCounts[i, 2] <- completeCount
Here is another way to do this:
complete <- function(directory, id = 1:332) {
  nobs <- sapply(id, function(i) {
    fileName = paste0(sprintf("%03d", i), ".csv")
    fileLocation = file.path(directory, fileName)
    fileData = read.csv(fileLocation, header = TRUE)
    sum(!is.na(fileData[, 2]), na.rm = TRUE)
  })
  data.frame(id = id, nobs = nobs)
}
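Either version can be called the same way; for example, assuming the "specdata" directory layout from the earlier questions:

complete("specdata", 1:5) # returns a data.frame with one row per id and its nobs count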