I have data in the form: date, key, value, n,
where:
date is the first date and time at which a variable (key) took a specific value,
key is the variable name,
value is the recorded value, and
n is the number of subsequent occurrences of the same value.
For example, if key a has the value 20 from 8am to 11am on 2017-01-01 and there are four recordings during that time span, its n value for 2017-01-01 8am would be 4. The data is highly aggregated like this because there are billions of rows of data.
This is a small example:
r1 <- c("2017-01-01 08:00:00","a",20,5)
r2 <- c("2017-01-01 08:00:00","b",10,20)
r3 <- c("2017-01-01 14:00:00","a",35,4)
dat <- rbind(r1,r2,r3)
colnames(dat) <- c("Date","Key","Value","n")
My goal is to show the value distributions over time, using different plots including lines (for time series).
As the amount of data is huge, I'm looking for an efficient way of ungrouping this kind of data (i.e. replicating each row n times) or displaying the data as it is.
Here is how I would ungroup the data using a dplyr chain. As you can see, it is quite similar to Roman's comment.
r1 <- c("2017-01-01 08:00:00","a",20,5)
r2 <- c("2017-01-01 08:00:00","b",10,20)
r3 <- c("2017-01-01 14:00:00","a",35,4)
dat <- as.data.frame(rbind(r1,r2,r3),stringsAsFactors = F)
colnames(dat) <- c("Date","Key","Value","n")
library(dplyr)
# replicate each row n times
dat %>% mutate(n = as.numeric(n)) %>%
  do(.[rep(1:nrow(.), .$n), ])
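For what it's worth, a shorter equivalent would be tidyr's uncount() (a sketch, assuming tidyr >= 0.8.0 is available):
library(tidyr)
# uncount() replicates each row according to the weights column (here n)
dat %>%
  mutate(n = as.numeric(n)) %>%
  uncount(n)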
You could do this:
library(ggplot2)
dat <- as.data.frame(dat)
dat$Date <- as.character(dat$Date)
dat$n <- as.numeric(dat$n)
dat$Value <- as.numeric(dat$Value)
ggplot(dat) +
  geom_point(aes(x = Date, y = Value, color = Key, stroke = n)) +
  expand_limits(y = 0)
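If you also want lines over time (as the question mentions), a minimal sketch along the same lines, assuming Date can be parsed as POSIXct:
# sketch: one line per key over time (n is ignored here)
dat$Date <- as.POSIXct(dat$Date)
ggplot(dat) +
  geom_line(aes(x = Date, y = Value, color = Key)) +
  expand_limits(y = 0)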
I have a data frame which contains a 'year' variable with values between 1 and 100000, each repeating multiple times. I have another data frame with 1000 'loss amounts' and an associated probability for each loss. I'd like to merge loss amounts onto the year data frame by sampling from the loss amounts table. I want to sample without replacement within each level of the year variable, i.e. within each level of the year variable the loss amounts should be unique.
Reproducible example below, where I can only get it to sample without replacement across the full 'year' dataset and not just within the different levels of the year variable as required. Is there a way of doing this (ideally without using loops, as I need the code to run quickly)?
library(tibble)
library(purrr)
library(dplyr)

#mean frequency
freq <- 100
years <- 100000

#create data frame with number of losses in each year
num_losses <- rpois(years, freq)
year <- tibble(index = 1:length(num_losses), num = num_losses)
year <- map2(year$index, year$num, function(x, y) rep(x, y)) %>% unlist() %>% tibble(year = .)
#lookup table with loss amounts
lookup <- tibble(prob = runif(1000, 0, 1), amount = rgamma(1000, shape = 1.688, scale = 700000)) %>%
mutate(total_prob = cumsum(prob)/sum(prob),
pdf = total_prob - lag(total_prob),
pdf = ifelse(is.na(pdf), total_prob, pdf))
#add on amounts to year table by sampling from lookup table
sample_from_lookup <- function(number){
amount <- sample(lookup$amount, number, replace = FALSE, prob = lookup$pdf)
}
amounts <- sample_from_lookup(nrow(year))
year <- tibble(year = year$year, amount = amounts)
According to your description, maybe you can try replicate() within your sample_from_lookup(), i.e.,
sample_from_lookup <- function(number){
amount <- replicate(number,
sample(lookup$amount,
1,
replace = FALSE,
prob = lookup$pdf))
}
In this case, you need to set the size argument of your sample() call to 1.
I ended up using split to break the 'year' data into groups within a list, then running the (slightly amended) sample_from_lookup function on each element of the list using map. Amended code below.
#mean frequency
freq <- 5
years <- 100
#create data frame with number of losses in each year
num_losses <- rpois(years, freq)
year <- tibble(index=1:length(num_losses), num=num_losses)
year <- map2(year$index, year$num, function(x, y) rep(x, y)) %>% unlist() %>% tibble(year = .)
year_split <- split(year, year$year)
#lookup table
lookup <- tibble(prob = runif(1000, 0, 1), amount = rgamma(1000, shape = 1.688, scale = 700000)) %>%
mutate(total_prob = cumsum(prob)/sum(prob),
pdf = total_prob - lag(total_prob),
pdf = ifelse(is.na(pdf), total_prob, pdf))
#add on amounts to year table by sampling from lookup table
sample_from_lookup <- function(x){
number = NROW(x)
amount <- sample(lookup$amount, number, replace = FALSE, prob = lookup$pdf)
}
amounts <- map(year_split, sample_from_lookup) %>% unlist() %>% tibble(amount = .)
year <- tibble(year = year$year, amount = amounts$amount)
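As a quick sanity check (a sketch, not part of the original answer), you can confirm that the sampled amounts are unique within each year:
library(dplyr)
# TRUE means no year contains a duplicated amount
year %>%
  group_by(year) %>%
  summarise(has_duplicates = any(duplicated(amount))) %>%
  summarise(all_unique = !any(has_duplicates))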
I think this problem may be of interest to others who deal with data smoothing of long-term environmental variables.
I have a dataset structured as below:
Columns:
Date Hour_Min Y(response variable)
These data are hourly, and I need to create a moving average of the diel cycle, but categorized by the Hour_Min. In other words, if I were to use a 31 day window, for a given day the running average data point for Hour_Min 00:00 would take the average of the day in question with the data points from Hour_Min 00:00 for the previous and the following 15 days. This would then repeat for that day's hour 1:00, etc. through the dataframe.
Unfortunately the data also have many NAs, which is problematic for moving window averages, although I think that can be solved using rollapply from the zoo package.
One approach I tried was to use tidyr's spread function to switch from long to wide format, to create a dataframe like this:
Date Y_Hour_Min_0000 Y_Hour_Min_0100 Y_Hour_Min_0200 etc...
If I could change the format in this way, I could then create new columns of running averages of each Y_Hour_Min_.... column. I would then need to gather everything together back to long format (another task I'm not sure how to approach).
However, I wasn't able to get the spread function to work so that it kept Date as a grouping variable associated with each Y_Hour_Min_.... column.
Another, possibly more elegant solution would be if there is a way to create a single new column in one step, using some combination of rollapply and custom function.
Any thoughts on how to implement code for this task will be greatly appreciated. Below is some simple code to simulate my dataset:
Simulated data:
### Load the libraries used below:
library(dplyr)
library(tidyr)
### Create vector of hours/dates:
date <- seq(as.POSIXct("2016-01-01 00:00"), as.POSIXct("2016-12-30 23:00"), by = "hour")
### Create vector of noisy sine function:
d <- 365
n <- 24*d # number of data points
t <- seq(from = 0, to = 2*d*pi, length.out=24*d)
a <- 6
b <- 1
c.norm <- rnorm(n)
amp <- 3
y <- a*sin(b*t)+c.norm*amp+15
### Randomly insert NAs into data:
ind <- which(y %in% sample(y, 1000))
y[ind]<-NA
### Create test dataframe:
df <- data.frame(dt = date, y = y) %>%
separate(dt, c("date", "hour_min"), sep=" ") %>%
mutate(date = as.Date(date))
I think this could work:
EDIT: Simplified code by adding fill = NA parameter to rollapply() function as suggested in the comments.
library(zoo)
# add a complete date + time stamp
df$date_time <- paste(df$date, df$hour_min)
# make new column to store median data
df$median_y <- NA
# set rolling median width
width_roll <- 31
# do a rolling median for each hour, one at a time
# add NAs where no median can be calculated
for (i in levels(factor(df$hour_min))) {
  df[df$hour_min == i, "median_y"] <- rollapply(df[df$hour_min == i, "y"],
                                                width = width_roll,
                                                median,
                                                na.rm = TRUE,
                                                fill = NA)
}
The approach is just to use the rollapply() function as you suggested, but only on one particular hour at a time. Then each of these is placed back into a new column in turn.
Here's an example for just one hour over the whole year, which makes it easier to visualize the median smoothing.
# Examples:
# plot one hour plus rolling median over time
i <- "23:00:00"  # the hour to plot
plot(x = as.POSIXct(df[df$hour_min == i, "date_time"]),
y = df[df$hour_min == i, "y"],
type = "l",
col = "blue",
ylab = "y values",
xlab = i)
lines(x = as.POSIXct(df[df$hour_min == i, "date_time"]),
y = df[df$hour_min == i, "median_y"],
lwd = 3)
legend("topleft",
legend = c("raw", "median"),
col = c("blue", "black"),
lwd = 3)
Plot for a single hour
This is for everything (there is a lot of data, so it is not as easy to see, but it looks like it worked).
# plot all the data
plot(x = as.POSIXct(df$date_time),
y = df$y,
type = "l",
col = "blue",
ylab = "y values",
xlab = "Date")
lines(x = as.POSIXct(df$date_time),
y = df$median_y,
lwd = 3)
legend("topleft",
legend = c("raw", "median"),
col = c("blue", "black"),
lwd = 3)
Plot for all data
I'll take a crack at it, but it's not perfect. Hoping someone can come in and improve on it.
TL;DR:
library(dplyr)
library(tidyr)
library(zoo)

df2 <- df %>% slice(-7441) %>% spread(hour_min, y)
mov_avg <- function(x) {c(rep(NA, 15), rollapply(x, width = list(-15:15), FUN = mean, align = "center", na.rm = TRUE), rep(NA, 15))}
avgs <- as.data.frame(matrix(unlist(lapply(df2[, 2:ncol(df2)], mov_avg)), nrow = nrow(df2), byrow = FALSE))
colnames(avgs) <- paste0("avg_", colnames(df2[, 2:ncol(df2)]))
final_df <- cbind(df2, avgs) %>%
  gather(2:ncol(.), key = "hour_min", value = "value") %>%
  arrange(date, hour_min)
In Depth:
Starting from your starting point, I added set.seed(1) so we can all follow along in tandem.
Your Initial Starting Point:
### Create vector of hours/dates:
set.seed(1)
date <- seq(as.POSIXct("2016-01-01 00:00"), as.POSIXct("2016-12-30 23:00"), by = "hour")
### Create vector of noisy sine function:
d <- 365
n <- 24*d # number of data points
t <- seq(from = 0, to = 2*d*pi, length.out=24*d)
a <- 6
b <- 1
c.norm <- rnorm(n)
amp <- 3
y <- a*sin(b*t)+c.norm*amp+15
### Randomly insert NAs into data:
ind <- which(y %in% sample(y, 1000))
y[ind]<-NA
### Create test dataframe:
df <- data.frame(dt = date, y = y) %>%
separate(dt, c("date", "hour_min"), sep=" ") %>%
mutate(date = as.Date(date))
The first thing was to do what you said and try the wide format. Normally I think this problem would be best solved by using dplyr's group_by on the hour_min column and doing the rolling average there, but I'm not sure how to do that.
I then noticed that there is a duplicate value for one row on a given day: there are two observations for 1am (likely a daylight-saving artifact of the hourly POSIXct sequence, depending on your time zone), which breaks our spread, so I removed that observation using slice(-7441).
So let's spread your df.
df2 <- df %>% slice(-7441) %>% spread(hour_min, y)
As we can see, the dataframe is now 365 observations long (dates) and 25 columns wide (date + 24 hours):
dim(df2)
[1] 365 25
The next thing I did, which is where this isn't perfect, is use rollapply(). When using rollapply we can give it width = list(-15:15). This will look 15 days into the past and 15 days into the future and average all 31 days together. The problem is that the first 15 days don't have a past 15 and the last 15 days don't have a future 15, so I padded these with NAs. I'm hoping someone can fix this part of my answer.
I created a custom function to do this:
mov_avg <- function(x) {c(rep(NA, 15), rollapply(x, width = list(-15:15), FUN = mean, align="center", na.rm=T), rep(NA, 15))}
If we just do the rollapply we will get a vector of length 335. I padded 15 in front and back to get us to our needed 365.
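As an aside, the manual padding could probably be avoided with the same fill = NA trick used in the other answer; a sketch of that variant:
# possible simplification (untested against the padded version):
# fill = NA makes rollapply() return a vector the same length as x,
# so the rep(NA, 15) padding is no longer needed
mov_avg2 <- function(x) {
  rollapply(x, width = list(-15:15), FUN = mean, na.rm = TRUE, fill = NA)
}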
Next we want to lapply the mov_avg function across our entire dataframe. That will give us a list of 24 vectors of length 365. We then want to turn that into a dataframe and bind it to our current dataframe.
Lastly, we gather all of the columns back into the long format and arrange by date and hour.
avgs <- as.data.frame(matrix(unlist(lapply(df2[,2:ncol(df2)], mov_avg)), nrow = nrow(df2), byrow = FALSE))
colnames(avgs) <- paste0("avg_", colnames(df2[,2:ncol(df2)]))
final_df <- cbind(df2, avgs) %>%
gather(2:ncol(.), key = "hour_min", value = "value") %>%
arrange(date, hour_min)
I hope this helps.
I have a data frame which consists of a date column and the temperatures of 34 different systems, with each system in a different column. I need to calculate every system's average hourly temperature. I use this code to calculate the average for one system, but if I want to calculate the average for the other 33 systems, I have to repeat the code again and again. Is there a better way to find the hourly average in all columns at once?
dat$ut_ms <- dat$ut_ms/1000
dat[, 1] <- as.POSIXct(dat[, 1], origin = "1970-01-01")
dat$ut_ms <- strptime(dat$ut_ms, "%Y-%m-%d %H:%M")
dat$ut_ms <- cut(dat$ut_ms, breaks = 'hour')
meanNPWD2401 <- aggregate(NPWD2401 ~ ut_ms, dat, mean)
I added a picture of the data for a better understanding of what I want.
You can split your data per hour and iterate:
list1 <- split(dat, cut(strptime(dat$ut_ms, format = '%Y-%m-%d %H:%M'), 'hour'))
lapply(list1, colMeans)
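Note that colMeans() needs numeric columns, so a slightly more defensive sketch would drop the timestamp column first (assuming all remaining columns are numeric temperatures):
# keep only numeric columns before averaging, and ignore NAs
lapply(list1, function(x) colMeans(x[sapply(x, is.numeric)], na.rm = TRUE))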
When you rearrange the data into a long format, things get much easier:
n.system <- 34
n.time <- 100
temp <- rnorm(n.time * n.system)
temp <- matrix(temp, ncol = n.system)
seconds <- runif(n.time, max = 3 * 3600)
time <- as.POSIXct(seconds, origin = "1970-01-01")
dataset <- data.frame(time, temp)
library(dplyr)
library(tidyr)
dataset %>%
gather(key = "system", value = "temperature", -time) %>%
mutate(hour = cut(time, "hour")) %>%
group_by(system, hour) %>%
summarise(average = mean(temperature))
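If you then want the hourly averages back in the original wide layout (one column per system), a sketch continuing the same chain might be:
dataset %>%
  gather(key = "system", value = "temperature", -time) %>%
  mutate(hour = cut(time, "hour")) %>%
  group_by(system, hour) %>%
  summarise(average = mean(temperature)) %>%
  ungroup() %>%
  spread(system, average)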
My data look like this:
I want to identify which "downward trend" each observation is part of, so I can group them and do things like make this graph:
My logic for distinguishing "downward trends" is that they end when the next observation has a higher measurement.
I've written a loop to do this, but I'm wondering if there's a better way to do it with one of the apply functions or something like them.
##Create sample data
df <- data.frame(timestamp = seq(1:20),
measurement = seq(10, 1, by = -1))
## This is the for loop I'm hoping to improve
df$downward.trend.seq <- 0
seq <- 1
for(i in 1:nrow(df)){
df$downward.trend.seq[i] <- seq
if (i < nrow(df) & df$measurement[i] < df$measurement[i+1]) {
seq <- seq + 1
}
}
## Code for plots
library(ggplot2)
library(dplyr)
ggplot(df, aes(x = timestamp, y = measurement)) + geom_point()
ggplot(df, aes(x = timestamp, y = measurement, group = downward.trend.seq)) + geom_line(aes(color=downward.trend.seq %>% factor))
You can use which and diff to help identify where the downward-trend changes occur, and use cumsum to fill out the group membership.
# set up new column with all 0s
df$downward.trend.seq <- 0
# use diff to identify indices to change to 1
df$downward.trend.seq[which(c(NA, diff(df$measurement)) > 0)] <- 1
# use cumsum to fill in proper group membership
df$downward.trend.seq <- cumsum(df$downward.trend.seq)
Here is a dplyr solution
df %>% mutate(data_group = cumsum( c(0, diff(measurement)) > 0 ))
This performs a cumulative sum over a logical vector and assigns the result to data_group.
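A usage sketch tying this back to the plotting code from the question:
library(dplyr)
library(ggplot2)
# add the group column, then colour each downward run separately
df <- df %>% mutate(data_group = cumsum(c(0, diff(measurement)) > 0))
ggplot(df, aes(x = timestamp, y = measurement, group = data_group)) +
  geom_line(aes(color = factor(data_group)))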
I have the following data on student scores on several pretests before their actual exam.
a<-(c("2013-02-25","2013-03-13","2013-04-24","2013-05-12","2013-07-12","2013-08-11","actual_exam_date"))
b<-c(300,230,400,NA,NA,NA,"2013-04-30")
c<-c(NA,260,410,420,NA,NA,"2013-05-30")
d<-c(300,230,400,NA,370,390,"2013-08-30")
df<-as.data.frame(rbind(b,c,d))
colnames(df)<-a
rownames(df)<-(c("student 1","student 2","student 3"))
The actual datasheet is much larger. Since the dates vary so much, and the timing between the pretests and the exam is relatively similar, I would rather convert the true dates into the number of days before the exam, so that those numbers are the new column names rather than dates. I understand that this will merge some of the columns, which is OK. How would I be able to do that?
This is another good use case for reshape2, because you want to go to long form for plotting. For example:
# you are going to need the student id as a field
df$student_id <- row.names(df)
library('reshape2')
df2 <- melt(df, id.vars = c('student_id','actual_exam_date'),
variable.name = 'pretest_date',
value.name = 'pretest_score')
# drop empty observations
df2 <- df2[!is.na(df2$pretest_score),]
# these need to be dates
df2$actual_exam_date <- as.Date(df2$actual_exam_date)
df2$pretest_date <- as.Date(df2$pretest_date)
# date difference
df2$days_before_exam <- as.integer(df2$actual_exam_date - df2$pretest_date)
# scores need to be numeric
df2$pretest_score <- as.numeric(df2$pretest_score)
# now you can make some plots
library('ggplot2')
ggplot(df2, aes(x = days_before_exam, y = pretest_score, col=student_id) ) +
geom_line(lwd=1) + scale_x_reverse() +
geom_vline(xintercept = 0, linetype = 'dashed', lwd = 1) +
ggtitle('Pretest Performance') + xlab('Days Before Exam') + ylab('Pretest Score')
Here is one way to approach this one. I am sure there are many others. I commented the code to explain what is going on at each step:
# Load the libraries you need
library(tidyr)
library(dplyr)
library(ggplot2)
# Construct data frame you provided
a <- (c("2013-02-25","2013-03-13","2013-04-24","2013-05-12","2013-07-12","2013-08-11","actual_exam_date"))
b <- c(300,230,400,NA,NA,NA,"2013-04-30")
c <- c(NA,260,410,420,NA,NA,"2013-05-30")
d <- c(300,230,400,NA,370,390,"2013-08-30")
df <- as.data.frame(rbind(b,c,d))
colnames(df) <- a
# Add student IDs as a column instead of row names and move them to first position
df$StudentID <- row.names(df)
row.names(df) <- NULL
df <- select(df, StudentID, everything())
# Gather date columns as 'categories' with score as the new column value
newdf <- df %>% gather(Date, Score, -actual_exam_date, -StudentID) %>% arrange(StudentID)
# Convert dates coded as factor variables into actual dates so we can do days to exam computation
newdf$actual_exam_date <- as.Date(as.character(newdf$actual_exam_date))
newdf$Date <- as.Date(as.character(newdf$Date))
# Create a new column of days before exam per student ID (group), convert the
# scores to numeric, and filter out dates with missing scores for each student
newdf <- newdf %>% group_by(StudentID) %>% mutate(daysBeforeExam = as.integer(difftime(actual_exam_date, Date, units = 'days')), Score = as.numeric(as.character(Score))) %>% filter(!is.na(Score))
# Plot the trends using ggplot
ggplot(newdf, aes(x = daysBeforeExam, y = Score, col = StudentID, group = StudentID)) + geom_line(size = 1) + geom_point(size = 2)