R - plot overlapping time intervals

I have a list of people and their working start and end times during a day. I want to plot a curve showing the total number of people working at any given minute of the day. What I could do is just add 1440 additional conditional boolean variables, one for each minute of the day, and sum them up, but that seems very inelegant. I'm wondering if there is a better way to do it (integrals?).
Here's the code to generate a df with my sample data:
sample_wt <- function() {
  require(lubridate)
  set.seed(10)
  worktime <- data.frame(
    ID = 1:100,
    start = now() + abs(rnorm(100, 4800, 2400))
  )
  worktime$end <- worktime$start + abs(rnorm(100, 20000, 10000))
  worktime$length <- difftime(worktime$end, worktime$start, units = "mins")
  worktime
}
To create the sample data, you can do something like:
DF <- sample_wt()

Here is one option using the IRanges package from Bioconductor.
library(IRanges)
## generate sample
DF <- sample_wt()
## create the range from the sample data
rangesA <- IRanges(as.numeric(DF$start), as.numeric(DF$end))
## create one-minute ranges
xx <- seq(min(DF$start), max(DF$end), 60)
rangesB <- IRanges(as.numeric(xx), as.numeric(xx + 60))
## count the overlaps
ov <- countOverlaps(rangesB, rangesA, type = "within")
## plot the result
plot(xx, ov, type = 'l')

Surely it can be improved, but this seems to do it:
time_range <- seq(min(DF$start), max(DF$end), 60)
result <- integer(length(time_range))
for (t in seq_along(time_range)) {
  result[t] <- sum(DF$start <= time_range[t] & DF$end >= time_range[t])
}
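For a quick look at the curve, the counts can be plotted against the minute grid, just like in the IRanges answer (a minimal sketch, assuming DF from sample_wt() above):
plot(time_range, result, type = 'l', xlab = "time", ylab = "people working")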

I don't have lubridate installed, so I produced the data frame with Sys.time instead of now() (they should behave the same). This should do the trick:
minutes <- seq(as.POSIXct(paste0(Sys.Date(), " 00:00:00")), by = "min", length.out = 24 * 60)
rowSums(outer(minutes, worktime$start, ">") & outer(minutes, worktime$end, "<"))
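Each outer() call builds a 1440 × 100 logical matrix (minutes × people), and rowSums() counts how many people are at work in each minute. A minimal sketch of plotting those counts, assuming worktime was built as described:
counts <- rowSums(outer(minutes, worktime$start, ">") & outer(minutes, worktime$end, "<"))
plot(minutes, counts, type = 'l')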

Related

Time efficient alternative to for-loops for manual sampling from timelines in R

So I have sampled a set of lakes at x time points throughout the year. I have also deployed loggers etc. in the water, and I want to use daily averages from these loggers, from the time point of the visit back to x days/hours before. Sometimes I also just grab a sample at the time point of the visit.
This is my solution. It works just fine, but since I experiment a lot with model assumptions and perform sensitivity analyses, it runs unsatisfactorily slowly.
I seem to have solved most of my R problems with loops, and I often encounter more efficient scripts, so it would be very interesting to see some more effective alternatives to my code.
The code below just generates some dummy data.
library(dplyr)
library(lubridate)
#function for calculating the % oxygen saturation
do.pct.sat <- function(x, y, z) {
  t <- x
  do <- y
  p <- z
  atm <- (p * 100) / 101325
  do.sat <- atm * exp(-139.34411 + 157570.1 / (t + 273.15) - 66423080 / (t + 273.15)^2 + 12438000000 / (t + 273.15)^3 - 862194900000 / (t + 273.15)^4)
  do.pct.sat <- (do / do.sat) * 100
  return(do.pct.sat)
}
#here's some dummy data resembling real data
date.initial <- as.POSIXct("2022-06-01")#deployment date
date.end <- as.POSIXct("2022-10-01")#date of retrieval
id <- c("a","b","c")#lake id
lake <- list()#make dataset list for each lake
s <- list()#list of dataframes for the samples from the lake logger timelines
#loop below generates dummy data. this is not part of the real script that I want to improve.
for (i in 1:3) {
  datetime <- seq(from = date.initial, to = date.end, by = 10 * 60)#10-minute intervals from deploy to retrieve
  l <- length(datetime)#vector length of datetime
  #set dummy data
  do <- rnorm(l, mean = 10, sd = 3)#o2 conc.
  pressure <- rnorm(l, mean = 980, sd = 50)#baro pressure
  temp <- rnorm(l, mean = 15, sd = 5)#water temp
  k.z <- rnorm(l, mean = 0.35, sd = 0.1)#gas exchange coeff / mixed layer depth
  dosat.pct <- do.pct.sat(temp, do, pressure)#oxygen sat in %
  iso <- as.data.frame(cbind(datetime, do, dosat.pct, temp, pressure, k.z))#bind dummy data frame to resemble real data
  iso$datetime <- as.POSIXct(iso$datetime, origin = "1970-01-01")
  lake[[i]] <- iso#save the data frame to the lake logger list
  samples <- as.POSIXct(sample((date.initial + 5 * 24 * 60 * 60):date.end, 7, replace = FALSE), origin = "1970-01-01")#randomize 7 time points
  s[[i]] <- as.data.frame(samples)#save it in an empty data frame
  s[[i]]$lake <- id[i]
}
names(lake) <- id
samples <- bind_rows(s)
samples$samples <- round_date(samples$samples,unit="10 minutes")#rounds my random samples to closest 10 minute
Below is the function that I want to make more efficient (same libraries). I think it runs slowly because it handles one date at a time before moving to the next:
sample.lakes <- function(average = 3) {
  dts <- list()#empty list
  for (i in 1:length(lake)) {
    print(id[i])
    data = lake[[i]]
    y <- samples[grepl(id[i], samples$lake), ]
    dates <- y$samples
    #empty vectors to fill with values sampled in the loop
    avg.kz <- vector()
    sd.kz <- vector()
    do.mgl <- vector()
    dosat.pct <- vector()
    temp.c <- vector()
    for (k in 1:length(dates)) {
      print(k)
      #below I filter the logger data to the window from the sampling time point minus 'average' days up to the sampling time point
      prior.days = filter(data, datetime > as.POSIXct(dates[k]) - (24 * 60 * 60) * average & datetime < as.POSIXct(dates[k]))
      #fill the empty vectors with the values I want: mean and sd of k.z, and point samples of the other variables
      avg.kz[k] = mean(prior.days$k.z)
      sd.kz[k] = sd(prior.days$k.z)
      temp.c[k] <- data[grepl(dates[k], data$datetime), ]$temp
      do.mgl[k] <- data[grepl(dates[k], data$datetime), ]$do
      dosat.pct[k] <- data[grepl(dates[k], data$datetime), ]$dosat.pct
    }
    sd.kz[is.na(sd.kz)] <- 0
    #add them to data frame y
    y$dosat.pct <- dosat.pct
    y$do.mgl <- do.mgl
    y$temp.c <- temp.c
    y$avg.kz <- avg.kz
    y$sd.kz <- sd.kz
    dts[[i]] <- y#add the filled data frame to the list
  }
  iso <- bind_rows(dts)#make a complete data frame with the samples
  return(iso)
}
iso <- sample.lakes(average=4)#do not set average to > 5 in this example script
I would appreciate any suggestions a lot!
My guess is that this part using grepl:
data[grepl(dates[k],data$datetime),]
inside your inner for loop is slow.
Couldn't you instead just check whether the datetimes are equal with ==?
In addition, you only need to subset data once.
Try this as an alternative:
for (k in 1:length(dates)) {
  print(k)
  prior.days = filter(data, datetime > as.POSIXct(dates[k]) - (24 * 60 * 60) * average & datetime < as.POSIXct(dates[k]))
  avg.kz[k] = mean(prior.days$k.z)
  sd.kz[k] = sd(prior.days$k.z)
  sub_data <- data[data$datetime == dates[k], ]
  temp.c[k] <- sub_data$temp
  do.mgl[k] <- sub_data$do
  dosat.pct[k] <- sub_data$dosat.pct
}
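Going a step further, the three point-sample lookups could be vectorized out of the loop entirely with match(), assuming (as in the dummy data) that each rounded sample time occurs exactly once in the logger timeline; this is only a sketch:
## hypothetical: vectorized point samples, replacing the per-iteration lookups
idx <- match(dates, data$datetime)
temp.c <- data$temp[idx]
do.mgl <- data$do[idx]
dosat.pct <- data$dosat.pct[idx]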

Filter data using a loop to compare calculated metrics for different days of week in R

I have a data set of traffic by day and by hour. I have written a function that I would like to apply to different conditions of this data set.
For instance, I want to compare average traffic for different days of the week and different hours of the day.
How do I use a loop to filter on each possible day of the week and return the metric I have a function for, for each of them?
Would really appreciate some help here.
Thanks,
Zach
I'm not sure a loop is the best thing for what you're trying to do, but here's one way to do it.
# generate example data
set.seed(1234)
df <- data.frame(hour = sample(1:24, 100, T),
                 dow = sample(1:7, 100, T),
                 traffic = round(runif(100, 1, 50)))
# prep storage matrix for results
H <- sort(unique(df$hour))
D <- sort(unique(df$dow))
res_mat <- matrix(NA, nrow=length(H), ncol=length(D))
colnames(res_mat) <- D
rownames(res_mat) <- H
# function I want to apply to subsets of values
my_fun <- function(x) { mean(x) + 2 }
# loop
for (h in seq_along(H)) {
  for (d in seq_along(D)) {
    # get vector of traffic for a particular hour and day-of-week combo
    subset_of_traffic <- df[df$hour == H[h] & df$dow == D[d], "traffic"]
    # skip if no traffic data for this hour and day-of-week combo
    if (length(subset_of_traffic) == 0) next
    # run function on that subset and store result
    res_mat[h, d] <- my_fun(subset_of_traffic)
  }
}
A faster way to get the same results with data.table:
library(data.table)
dt <- data.table(df)
res_dt <- dt[ , .(results = my_fun(traffic)), by=.(hour, dow)]
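If you want the same matrix shape as res_mat rather than the long format, the data.table result can be reshaped with dcast (a sketch, assuming res_dt from above; combinations with no data come out as NA):
res_wide <- dcast(res_dt, hour ~ dow, value.var = "results")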

function to be run multiple times to generate the final dataset in R

I am new to R and have written a function that needs to be run multiple times to generate the final dataset.
The number of runs is determined by the vector of unique years, and on each run the function gives an output for one of those years.
Still, I am not getting the right output.
Desired output: if, for example, it takes 10 samples from each year, then after the 10th run I should have 100 rows of correct output.
create_strsample <- function(n1, n2) {
  yr <- c(2010, 2011, 2012, 2013)
  for (i in 1:length(yr)) {
    k1 <- subset(data, format(as.Date(data$account_opening_date), "%Y") == yr[i])
    r1 <- sample(which(!is.na(k1$account_closing_date)), n1, replace = FALSE)
    r2 <- sample(which(is.na(k1$account_closing_date)), n2, replace = FALSE)
    #final.data <- k1[c(r1,r2),]
    sample.data <- lapply(yr, function(x) {f.data <- create_strsample(200, 800)})
    k1 <- do.call(rbind, k1)
    return(k1)
  }
  final <- do.call(rbind, sample.data)
  return(final)
}
stratified.sample.data <- create_strsample(200,800)
An MWE would have been nice, but I'll give you a template for this kind of question. Note that this is not optimized for speed (or anything else), only for ease of understanding.
As noted in the comments, that recursive call to create_strsample inside the loop looks weird and probably isn't what you really want.
data <- data.frame() # we need an empty, but existing, variable for the first loop iteration
for (i in 1:10) {
  temp <- runif(1, max = i) # do something...
  data <- rbind(data, temp) # ... and add it to 'data'
} # repeat 10 times
rm(temp) # don't need this anymore
That return(k1) in the loop also looks wrong.
I tried this later, following your suggestion @herbaman, for the desired output minus the lapply.
create_strsample <- function(n1, n2) {
  final.data <- NULL
  yr <- c(2010, 2011, 2012, 2013)
  for (i in 1:length(yr)) {
    k1 <- subset(data, format(as.Date(data$account_opening_date), "%Y") == yr[i])
    r1 <- k1[sample(which(!is.na(k1$account_closing_date)), n1, replace = FALSE), ]
    r2 <- k1[sample(which(is.na(k1$account_closing_date)), n2, replace = FALSE), ]
    sample.data <- rbind(r1, r2)
    final.data <- rbind(final.data, sample.data)
  }
  return(final.data)
}
stratified.sample.data <- create_strsample(200, 800)
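For reference, the lapply variant mentioned above could look like the following sketch, under the same assumptions (a global data frame data with account_opening_date and account_closing_date columns); create_strsample2 is a made-up name:
## hypothetical lapply version of the same stratified sampling
create_strsample2 <- function(n1, n2) {
  yr <- c(2010, 2011, 2012, 2013)
  sample.data <- lapply(yr, function(y) {
    k1 <- subset(data, format(as.Date(data$account_opening_date), "%Y") == y)
    rbind(k1[sample(which(!is.na(k1$account_closing_date)), n1, replace = FALSE), ],
          k1[sample(which(is.na(k1$account_closing_date)), n2, replace = FALSE), ])
  })
  do.call(rbind, sample.data)
}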

average gridded climate data for duplicated times in r

I have a gridded climate dataset, such as:
# generate time vector
time1 <- seq(14847.5,14974.5, by = 1)
time2 <- seq(14947.5,14974.5, by = 1)
time <- c(time1,time2)
time <- as.POSIXct(time*86400,origin='1970-01-01 00:00')
# generate lat and lon coordinates
lat <- seq(80,90, by = 1)
lon <- seq(20,30, by = 1)
# generate 3dimensional array
dat <- array(runif(length(lat) * length(lon) * length(time)),
             dim = c(length(lon), length(lat), length(time)))
such that
> dim(dat)
[1] 11 11 156
the dimensions of the data describe the variable at different longitudes (dim = 1), latitudes (dim = 2), and times (dim = 3).
The issue I have at the moment is that some of the times are repeated, which has something to do with overlapping sensors measuring the data. Therefore, I was wondering if it is possible to keep only the unique times in dat, but average the data within the grid for the duplicated times, i.e. if a day appears twice we take the average value in each latitude/longitude cell for that time.
I can find the unique times as:
# only select unique times
new_time <- unique(time)
unique_time <- unique(time)
The following code then aims to loop through each grid (lat/lon) and average all of the duplicated days.
# loop through lat/lon coordinates to generate new data
new_dat <- array(dim = c(length(lon), length(lat), length(new_time)))
for (i in 1:length(lon)) {
  for (ii in 1:length(lat)) {
    dat2 <- dat[i, ii, ]
    dat2b <- NA
    for (k in 1:length(unique_time)) {
      idx <- time == unique_time[k]
      dat2b[k] <- mean(dat2[idx], na.rm = TRUE)
    }
    new_dat[i, ii, ] <- dat2b
  }
}
I'm convinced that this gives the correct answer, but I'm certain there is a much cleaner method to achieve it.
I should also note that my data is quite large (i.e. k = 7000), so this last loop is not very efficient, to say the least.
My original answer:
This is a bit more concise and efficient by use of aggregate:
for (i in 1:length(lon)) {
  for (ii in 1:length(lat)) {
    new_dat[i, ii, ] <- as.numeric(aggregate(dat[i, ii, ], by = list(time), mean)$x)
  }
}
It still has 2 of the 3 loops, but it manages to bypass creating dat2, dat2b, and unique_time.
My improved answer:
f <- function(i, ii) { as.numeric(aggregate(dat[i, ii, ], by = list(time), mean)$x) }
grid <- expand.grid(1:length(lon), 1:length(lat))
for (i in 1:nrow(grid)) {
  new_dat[grid[i, 1], grid[i, 2], ] <- f(grid[i, 1], grid[i, 2])
}
Got it down to just 1 loop. We could probably bypass that loop too with an apply.
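That apply version might look like the following sketch: apply() over the first two margins returns the aggregated series in the first dimension, so aperm() restores the (lon, lat, time) ordering.
## hypothetical zero-loop version, untested on the full-size data
tmp <- apply(dat, c(1, 2), function(v) aggregate(v, by = list(time), mean)$x)
new_dat <- aperm(tmp, c(2, 3, 1))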

Inserting outliers to a dataframe

I am trying to create a function to inject outliers into an existing data frame.
I started by creating a new data frame outs using the max and min values of the original data frame. This outs data frame will contain a certain amount of outlier data.
Later I want to inject the outlier values from the outs data frame into the original data frame.
What I want to get is a function that injects a certain amount of outliers into an original data frame.
I have several problems: for example, I don't know if I am using runif correctly to create a data frame of outliers, and second, I do not know how to inject the outliers into temp.
The code I've tried so far is:
addOutlier <- function(data, amount) {
  maxi <- apply(data, 2, function(x) (mean(x) + (3 * (sd(x)))))
  mini <- apply(data, 2, function(x) (mean(x) - (3 * (sd(x)))))
  temp <- data
  amount2 <- ifelse(amount < 1, (prod(dim(data)) * amount), amount)
  outs <- runif(amount2, 2, min = mini, max = maxi) # outliers
  if (amount2 >= prod(dim(data))) stop("exceeded data size")
  for (i in 1:length(outs))
    temp[sample.int(nrow(temp), 1), sample.int(ncol(temp), 1)] <- outs
  return(temp)
}
Any help to make this work will be deeply appreciated.
My understanding is that what you're trying to achieve is adding a set number of outliers to each column of your data frame. Alternatively, you seem to also be looking into adding a percentage of outliers to each column. I wrote down a solution only for the former case, but the latter should be pretty easy to implement if you really need it (see the sketch after the example below). Note how I broke things down into two functions, to (hopefully) help clarify what is going on. Hope this helps!
add.outlier.to.vector <- function(vector, amount) {
  cells.to.modify <- sample(1:length(vector), amount, replace = FALSE)
  mean.val <- mean(vector)
  sd.val <- sd(vector)
  min.val <- mean.val - 3 * sd.val
  max.val <- mean.val + 3 * sd.val
  vector[cells.to.modify] <- runif(amount, min = min.val, max = max.val)
  return(vector)
}
add.outlier.to.data.frame <- function(temp, amount) {
  for (i in 1:ncol(temp)) {
    temp[, i] <- add.outlier.to.vector(temp[, i], amount)
  }
  return(temp)
}
data <- data.frame(
  a = c(1, 2, 3, 4),
  b = c(7, 8, 9, 10)
)
add.outlier.to.data.frame(data, 2)
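For the percentage case mentioned at the top of this answer, a hypothetical wrapper (the name add.outlier.fraction is made up here) could translate a fraction into a per-column count:
## hypothetical: turn a fraction of each column's cells into outlier candidates
add.outlier.fraction <- function(temp, fraction) {
  add.outlier.to.data.frame(temp, max(1, round(fraction * nrow(temp))))
}
add.outlier.fraction(data, 0.5) # modifies half the cells in each column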
