R: Replacing NA values by mean of hour with dplyr

I'm learning the dplyr package in R and I really like it, but now I'm dealing with NA values in my data.
I would like to replace each NA with the average for the corresponding hour. Here is a very simple example:
#create an example
day = c(1, 1, 2, 2, 3, 3)
hour = c(8, 16, 8, 16, 8, 16)
profit = c(100, 200, 50, 60, NA, NA)
shop.data = data.frame(day, hour, profit)
#calculate the average for each hour
library(dplyr)
mean.profit <- shop.data %>%
  group_by(hour) %>%
  summarize(mean = mean(profit, na.rm = TRUE))
> mean.profit
Source: local data frame [2 x 2]
  hour mean
1    8   75
2   16  130
Can I use the dplyr transform command to replace the NAs of day 3 in profit with 75 (for 8:00) and 130 (for 16:00)?

Try
shop.data %>%
  group_by(hour) %>%
  mutate(profit = ifelse(is.na(profit), mean(profit, na.rm = TRUE), profit))
#  day hour profit
#1   1    8    100
#2   1   16    200
#3   2    8     50
#4   2   16     60
#5   3    8     75
#6   3   16    130
Or you could use replace
shop.data %>%
  group_by(hour) %>%
  mutate(profit = replace(profit, is.na(profit), mean(profit, na.rm = TRUE)))
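On recent dplyr versions you can also lean on coalesce(), which fills the NAs in its first argument from the second; a minimal sketch of the same grouped imputation (my addition, not from the original answers):
shop.data %>%
  group_by(hour) %>%
  mutate(profit = coalesce(profit, mean(profit, na.rm = TRUE)))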

A (less elegant) approach with base functions:
transform(shop.data,
          profit = ifelse(is.na(profit),
                          ave(profit, hour, FUN = function(x) mean(x, na.rm = TRUE)),
                          profit))
#   day hour profit
# 1   1    8    100
# 2   1   16    200
# 3   2    8     50
# 4   2   16     60
# 5   3    8     75
# 6   3   16    130
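The replacement can also be pushed inside ave() itself, which keeps it to a single call; a sketch along the same lines (my addition, not from the original answer):
shop.data$profit <- ave(shop.data$profit, shop.data$hour,
                        FUN = function(x) replace(x, is.na(x), mean(x, na.rm = TRUE)))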

Related

R: creating a longitudinal dataset using tidyr

I am looking to generate a longitudinal dataset. I have generated my patient numbers and treatment groups:
library(dplyr)
set.seed(420)
Pat_TNO <- 1001:1618
data.frame(Pat_TNO = Pat_TNO) %>%
  rowwise() %>%
  mutate(
    trt = rbinom(1, 1, 0.5)
  )
My timepoints (in days) are:
timepoint_weeks <- c(seq(2, 12, 2), 16, 20, 24, 52)
timepoint_days <- 7 * timepoint_weeks
How can I pivot this dataset using the vector timepoint_days, so that I have 10 rows per participant and the column names Pat_TNO, trt, and timepoint_days?
You can use the unnest function from tidyr to achieve what you want.
Here is the code
library(dplyr)
library(tidyr)
set.seed(420)
Pat_TNO <- 1001:1618
x <- data.frame(Pat_TNO = Pat_TNO) %>%
  rowwise() %>%
  mutate(
    trt = rbinom(1, 1, 0.5)
  )
timepoint_weeks <- c(seq(2, 12, 2), 16, 20, 24, 52)
timepoint_days <- 7 * timepoint_weeks
x %>%
  mutate(timepoint_days = list(timepoint_days)) %>%
  unnest(timepoint_days)   # name the column; bare unnest() is deprecated in tidyr >= 1.0
Output
# A tibble: 6,180 × 3
   Pat_TNO   trt timepoint_days
     <int> <int>          <dbl>
 1    1001     1             14
 2    1001     1             28
 3    1001     1             42
 4    1001     1             56
 5    1001     1             70
 6    1001     1             84
 7    1001     1            112
 8    1001     1            140
 9    1001     1            168
10    1001     1            364
# … with 6,170 more rows
Here I used the mutate function to add a column holding the list timepoint_days in every row, and unnest then expands each list element, giving 10 rows per participant.
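As an aside, the same long structure can be built without a list column by crossing the patients with the timepoints; a sketch using tidyr::crossing(), assuming the x and timepoint_days objects from above:
x %>%
  ungroup() %>%   # drop the rowwise grouping before the cross join
  crossing(timepoint_days = timepoint_days)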

Inventory Projection Calculation in R

I am trying to replace an obsolete Excel report, currently used by our supply chain team for sales forecasting and inventory projections, and I am using R for this.
The desired output is a data frame with one of the columns being the projected closing inventory positions for each week across a span of N weeks.
The part I am struggling with is the recursive calculation for the closing inventory positions. Below is a subset of the data frame with dummy data, where "stock_projection" is the desired result.
I've just started learning about recursion in R so I am not really sure on how to implement this here. Any help will be much appreciated!
week  forecast  opening_stock  stock_projection
   1        10            100               100
   2        11                               89
   3        12                               77
   4        10                               67
   5        11                               56
   6        10                               46
   7        12                               34
   8        11                               23
   9         9                               14
  10        12                                2
Update
I have managed to modify the solution explained here and have replicated the above outcome:
library(dplyr)
library(tidyr)
inventory <- tibble(week = 1, opening_stock = 100)
forecast <- tibble(week = 2:10, forecast = c(11, 12, 10, 11, 10, 12, 11, 9, 12))
dat <- full_join(inventory, forecast)
dat2 <- dat %>%
  mutate(forecast = -forecast) %>%
  gather(transaction, value, -week) %>%
  arrange(week) %>%
  mutate(value = replace_na(value, 0))
dat2 %>%
  mutate(value = cumsum(value)) %>%
  ungroup() %>%
  group_by(week) %>%
  summarise(stock_projection = last(value))
This works like a charm, but I am wondering whether there is another way to achieve it?
I think in the question above you don't have to worry about recursion, because the stock projection is just the opening stock minus the cumulative sum of the forecast. You could do that with:
library(dplyr)
dat <- tibble(
  week = 1:10,
  forecast = c(10, 11, 12, 10, 11, 10, 12, 11, 9, 12),
  opening_stock = c(100, rep(NA, 9))
)
dat <- dat %>%
  mutate(fcst = case_when(week == 1 ~ 0,
                          TRUE ~ forecast),
         stock_projection = case_when(
           week == 1 ~ opening_stock,
           TRUE ~ opening_stock[1] - cumsum(fcst))) %>%
  dplyr::select(-fcst)
dat
# # A tibble: 10 × 4
#     week forecast opening_stock stock_projection
#    <int>    <dbl>         <dbl>            <dbl>
#  1     1       10           100              100
#  2     2       11            NA               89
#  3     3       12            NA               77
#  4     4       10            NA               67
#  5     5       11            NA               56
#  6     6       10            NA               46
#  7     7       12            NA               34
#  8     8       11            NA               23
#  9     9        9            NA               14
# 10    10       12            NA                2
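If the update rule ever becomes genuinely recursive (for example, if projected stock were floored at zero, a plain cumulative sum would no longer work), purrr::accumulate can carry the running state forward; a sketch under that hypothetical rule, reusing dat from above:
library(purrr)
# subtract each week's forecast from the running stock, clamping at zero
# (hypothetical floor rule, not part of the original question)
dat$projection_floored <- accumulate(dat$forecast[-1],
                                     ~ max(.x - .y, 0),
                                     .init = dat$opening_stock[1])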

Identify the highest number of consecutive numbers in a dataframe and add rows with NA

Here is a reproducible example of the situation I need help with. I have a database (db1) in which weekly ratings of behavioral outcomes are recorded. The variable "Week" corresponds to the number of the week from the beginning of the year (e.g., Week = 1 indicates the week between January 1st and 7th, and so on) and the variable "Score" to the value obtained by the subject on the criterion measure. In the real data set I have several participants and a different number of ratings for each subject; however, in this example there is only one subject to make things easier.
library(magrittr)
x1 <- c(14, 18, 19, 20, 21, 23, 24, 25)
y1 <- c(34, 21, 45, 32, 56, 45, 23, 48)
db1 <- cbind(x1, y1) %>% as.data.frame() %>% setNames(c("Week", "Score"))
db1
#  Week Score
#1   14    34
#2   18    21
#3   19    45
#4   20    32
#5   21    56
#6   23    45
#7   24    23
#8   25    48
What I need to do is to identify the highest number of ratings that occurred in consecutive weeks in the database. In the example, the highest number is 4, because the ratings were consecutive from week 18 to 21. Here I added a column for demonstration, but it might not be necessary for the solution.
x2 <- c(14, 18, 19, 20, 21, 23, 24, 25)
y2 <- c(34, 21, 45, 32, 56, 45, 23, 48)
z2 <- c(1, 1, 2, 3, 4, 1, 2, 3)
db2 <- cbind(x2, y2, z2) %>% as.data.frame() %>% setNames(c("Week", "Score", "Consecutive"))
db2
#  Week Score Consecutive
#1   14    34           1
#2   18    21           1
#3   19    45           2
#4   20    32           3
#5   21    56           4
#6   23    45           1
#7   24    23           2
#8   25    48           3
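As an aside, this Consecutive helper column can be derived rather than typed by hand; a minimal base R sketch (my addition), where the counter restarts whenever the gap between weeks exceeds one:
consec <- sequence(rle(cumsum(c(1, diff(db1$Week) != 1)))$lengths)
consec
# [1] 1 1 2 3 4 1 2 3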
Finally, because every subject has to have a total of five consecutive ratings, I need to add a row with a missing datum where the highest number of consecutive weeks is below five (so that I can impute the missing data later on). However, there might be ratings before and after the sequence. If that is the case, I want to add the row based on the minimal distance between the first or last week of the longest series of consecutive weeks and the nearest other rating. In the example, that means the row with the missing datum will be added after week 21, because there are 4 missing weeks between weeks 14 and 18 but only 1 between weeks 21 and 23.
x3 <- c(14, 18, 19, 20, 21, 22, 23, 24, 25)
y3 <- c(34, 21, 45, 32, 56, NA, 45, 23, 48)
z3 <- c(1, 1, 2, 3, 4, 5, 1, 2, 3)
db3 <- cbind(x3, y3, z3) %>% as.data.frame() %>% setNames(c("Week", "Score", "Consecutive"))
db3
#  Week Score Consecutive
#1   14    34           1
#2   18    21           1
#3   19    45           2
#4   20    32           3
#5   21    56           4
#6   22    NA           5
#7   23    45           1
#8   24    23           2
#9   25    48           3
For your information, this is not going to be part of the main statistical analyses but rather one of several ways I want to use to test the sensitivity of my model. So do not worry about whether it makes sense from a methodological point of view. In addition, if possible, a tidyverse solution would be greatly appreciated.
Thanks so much to anyone who will take the time.
The code is relatively simple if you only want the longest run and, when there is more than one, just the first of them.
library(tidyverse)
db1 %>%
  mutate(consecutive = accumulate(diff(Week), .init = 1, ~ if (.y == 1) .x + 1 else 1),
         dummy = max(consecutive) == consecutive & max(consecutive) < 5) %>%
  group_by(grp = cumsum(consecutive == 1)) %>%
  filter(sum(dummy) > 0) %>%      # keep the group(s) with the max consecutive run
  ungroup() %>% select(-dummy) %>%
  filter(grp == min(grp)) %>%     # keep the first such group, if there are more than one
  complete(consecutive = 1:5) %>%
  select(-grp) %>%
  mutate(Week = first(Week) + consecutive - 1)
# A tibble: 5 x 3
  consecutive  Week Score
        <dbl> <dbl> <dbl>
1           1    18    21
2           2    19    45
3           3    20    32
4           4    21    56
5           5    22    NA
OLD ANSWER: Another tidyverse strategy (this can be modified to suit your additional column requirements, which you have not given in the sample).
library(tidyverse)
db1
#>   Week Score
#> 1   14    34
#> 2   18    21
#> 3   19    45
#> 4   20    32
#> 5   21    56
#> 6   23    45
#> 7   24    23
#> 8   25    48
library(data.table)
db1 %>%
  mutate(consecutive = accumulate(diff(Week), .init = 1, ~ if (.y == 1) .x + 1 else 1),
         dummy = max(consecutive) == consecutive & max(consecutive) < 5,
         dummy2 = rleid(dummy)) %>%
  group_split(dummy2, .keep = FALSE) %>%
  map_if(~ .x$dummy[[1]],
         ~ .x %>%
           complete(consecutive = seq(max(consecutive), 5, 1), fill = list(Week = 1)) %>%
           mutate(Week = cumsum(Week))) %>%
  map_dfr(~ .x %>% select(-dummy))
#> # A tibble: 9 x 3
#>    Week Score consecutive
#>   <dbl> <dbl>       <dbl>
#> 1    14    34           1
#> 2    18    21           1
#> 3    19    45           2
#> 4    20    32           3
#> 5    21    56           4
#> 6    22    NA           5
#> 7    23    45           1
#> 8    24    23           2
#> 9    25    48           3
Created on 2021-06-10 by the reprex package (v2.0.0)
If I understand correctly:
library(data.table)
library(tidyverse)
x1 <- c(14, 18, 19, 20, 21, 23, 24, 25)
y1 <- c(34, 21, 45, 32, 56, 45, 23, 48)
db1 <- cbind(x1, y1) %>% as.data.frame() %>% setNames(c("Week", "Score"))
db1 %>%
  mutate(grp = cumsum(c(0, diff(Week)) > 1)) %>%
  group_by(grp) %>%
  mutate(n_grp = n()) %>%
  ungroup() %>%
  filter(n_grp == max(n_grp, na.rm = TRUE)) %>%
  complete(grp,
           n_grp,
           nesting(Week = seq(from = first(Week), length = 5))) %>%
  select(-c(grp, n_grp)) %>%
  rows_upsert(db1, by = c("Week", "Score"))
#> # A tibble: 9 x 2
#>    Week Score
#>   <dbl> <dbl>
#> 1    18    21
#> 2    19    45
#> 3    20    32
#> 4    21    56
#> 5    22    NA
#> 6    14    34
#> 7    23    45
#> 8    24    23
#> 9    25    48
Created on 2021-06-10 by the reprex package (v2.0.0)
You can also use the following solution. Midway through, before we use add_row to add the additional rows, I filter the whole data set (split with group_split) to keep only the groups with the maximum number of observations, i.e. the longest runs of consecutive Weeks. After splitting by the grouping variable we may end up with two or more groups of equally long runs, in which case you can choose whichever you like based on your preference:
library(dplyr)
library(purrr)
library(tibble)
db1 %>%
  mutate(Consecutive = +(Week - lag(Week, default = first(Week)) == 1),
         grp = cumsum(Consecutive == 0)) %>%
  group_by(grp) %>%
  mutate(Consecutive = row_number()) %>%
  group_by(grp, .drop = TRUE) %>%
  add_count() %>%
  ungroup() -> db2   # we create our grouping variable `grp` here

db2 %>%
  filter(n == max(n)) %>%
  group_split(grp) %>%
  map_dfr(~ add_row(.x,
                    Week = .x$Week[.x$n[1]] + seq(1, 5 - .x$n[1], 1),
                    Consecutive = .x$Consecutive[.x$n[1]] + seq(1, 5 - .x$n[1], 1),
                    grp = .x$grp[1])) %>%
  bind_rows(db2 %>%
              filter(n != max(n))) %>%
  select(-c(grp, n)) %>%
  arrange(Week)
# A tibble: 9 x 3
   Week Score Consecutive
  <dbl> <dbl>       <dbl>
1    14    34           1
2    18    21           1
3    19    45           2
4    20    32           3
5    21    56           4
6    22    NA           5
7    23    45           1
8    24    23           2
9    25    48           3

How to populate a column using multiple conditionals across 2 dataframes?

I'm trying to populate a column with values based on two conditionals across two separate data frames. So:
if df1$day == df2$day & df1$hour == df2$hour, then fill df1$X with df2$depth
I struggle because I am not asking it to populate the column with a generic value (i.e., if x == y, then y2 = 1); I am trying to get it to select values across multiple rows. A mock example:
df1               df2
 day hour  X       day hour depth
   1   10 NA         1   10    50
   1   11 NA         1   11    10
   2    5 NA         1    3   100
   5    9 NA         5    9    50
   6   20 NA         7   17    80
   7   17 NA        10    4    65
Any help would be greatly appreciated.
An easier option is a join with data.table:
library(data.table)
setDT(df1)[df2, X := depth, on = .(day, hour)]
df1
#   day hour  X
#1:   1   10 50
#2:   1   11 10
#3:   2    5 NA
#4:   5    9 50
#5:   6   20 NA
#6:   7   17 80
In base R, we can use match
df1$X <- with(df1, df2$depth[match(paste(day, hour), paste(df2$day, df2$hour))])
data
df1 <- data.frame(day = c(1, 1, 2, 5:7), hour = c(10:11, 5, 9, 20, 17),
                  X = NA_integer_)
df2 <- data.frame(day = c(1, 1, 1, 5, 7, 10), hour = c(10, 11, 3, 9, 17, 4),
                  depth = c(50, 10, 100, 50, 80, 65))
Using dplyr, we can do a left_join and then rename the depth column to X:
library(dplyr)
left_join(df1, df2, by = c("day", "hour")) %>%
  select(-X) %>%
  rename(X = depth)
#  day hour  X
#1   1   10 50
#2   1   11 10
#3   2    5 NA
#4   5    9 50
#5   6   20 NA
#6   7   17 80
If the X column is not always NA you could use coalesce.
left_join(df1, df2, by = c("day", "hour")) %>%
  mutate(X = coalesce(depth, X)) %>%
  select(names(df1))
Or in base R:
merge(df1, df2, all.x = TRUE)[-3]
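For completeness: on newer dplyr (1.1.0 or later, for the unmatched argument) the same update-join can be written with rows_update(); a sketch, not part of the original answers:
library(dplyr)
rows_update(df1,
            df2 %>% rename(X = depth),
            by = c("day", "hour"),
            unmatched = "ignore")   # skip df2 rows that have no partner in df1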

Count number of times in a month and year that time series data is above a threshold

I have a large data frame in R with daily time series data of rainfall for a number of locations (each in its own column). I would like to know the number of times the rainfall is less than, or greater than, a threshold value for each location in each month and also by year.
My dataframe is large so I have provided example data here:
Date_ex <- seq.Date(as.Date('2000-01-01'),as.Date('2005-01-31'),by = 1)
A <- sample(x = c(1, 3, 5), size = 1858, replace = TRUE)
B <- sample(x = c(1, 2, 10), size = 1858, replace = TRUE)
C <- sample(x = c(1, 3, 5), size = 1858, replace = TRUE)
D <- sample(x = c(1, 3, 4), size = 1858, replace = TRUE)
df <- data.frame(Date_ex, A, B, C, D)
How would I find out the number of times the values in A, B, C and D are greater than 4 for each month and then also for each year?
I think I should then be able to summarise this into two new tables.
One like this (example, ignore numbers):
         A  B  C D
2000-01  1  0  5 0
2000-02  2 16 25 0
2000-03  1  5 26 0
And one like this (example, ignore numbers):
      A   B  C   D
2000 44 221 67   0
2001 67 231  4 132
2002 99 111 66   4
2003 33  45 45   4
I think I should be using dplyr for this? But I'm not sure how to get the dates to work.
A solution using the dplyr and lubridate packages. The key is to create Year and Month columns, group by those columns, and use summarise_all to summarize the data.
# Create the example data frame, set the seed for reproducibility
set.seed(199)
Date_ex <- seq.Date(as.Date('2000-01-01'),as.Date('2005-01-31'),by = 1)
A <- sample(x = c(1, 3, 5), size = 1858, replace = TRUE)
B <- sample(x = c(1, 2, 10), size = 1858, replace = TRUE)
C <- sample(x = c(1, 3, 5), size = 1858, replace = TRUE)
D <- sample(x = c(1, 3, 4), size = 1858, replace = TRUE)
df <- data.frame(Date_ex, A, B, C, D)
library(dplyr)
library(lubridate)
# Summarise for each month
df2 <- df %>%
  mutate(Year = year(Date_ex), Month = month(Date_ex)) %>%
  select(-Date_ex) %>%
  group_by(Year, Month) %>%
  summarise_all(funs(sum(. > 4))) %>%
  ungroup()
df2
# # A tibble: 61 x 6
#     Year Month     A     B     C     D
#    <dbl> <dbl> <int> <int> <int> <int>
#  1  2000     1    13     8    13     0
#  2  2000     2    12     7     8     0
#  3  2000     3     7     9     9     0
#  4  2000     4     9    12    10     0
#  5  2000     5    11    12     8     0
#  6  2000     6    12     9    16     0
#  7  2000     7    10    11    10     0
#  8  2000     8     8    12    14     0
#  9  2000     9    12    12    12     0
# 10  2000    10     9     9     7     0
# # ... with 51 more rows
# Summarise for each year
df3 <- df %>%
  mutate(Year = year(Date_ex)) %>%
  select(-Date_ex) %>%
  group_by(Year) %>%
  summarise_all(funs(sum(. > 4)))
df3
# # A tibble: 6 x 5
#    Year     A     B     C     D
#   <dbl> <int> <int> <int> <int>
# 1  2000   120   119   125     0
# 2  2001   119   123   113     0
# 3  2002   135   122   105     0
# 4  2003   114   112   104     0
# 5  2004   115   125   124     0
# 6  2005     9    14    11     0
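On current dplyr (1.0 or later), where funs() is deprecated and summarise_all() is superseded, the same counts would typically be written with across(); a sketch assuming the df from above:
library(dplyr)
library(lubridate)
df %>%
  group_by(Year = year(Date_ex), Month = month(Date_ex)) %>%
  summarise(across(A:D, ~ sum(.x > 4)), .groups = "drop")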
Here are a few solutions.
1) aggregate: This solution uses only base R. The new Date column is the first of the month or the first of the year.
aggregate(df[-1] > 4, list(Date = as.Date(cut(df[[1]], "month"))), sum)
aggregate(df[-1] > 4, list(Date = as.Date(cut(df[[1]], "year"))), sum)
1a) Using the yearmon class from zoo and toyear from (3), we can write:
library(zoo)
aggregate(df[-1] > 4, list(Date = as.yearmon(df[[1]])), sum)
aggregate(df[-1] > 4, list(Date = toyear(df[[1]])), sum)
2) rowsum: This is another base R solution. The year/month or year is given by the row names.
rowsum((df[-1] > 4) + 0, format(df[[1]], "%Y-%m"))
rowsum((df[-1] > 4) + 0, format(df[[1]], "%Y"))
2a) Using the yearmon class from zoo and toyear from (3), we can write:
library(zoo)
rowsum((df[-1] > 4) + 0, as.yearmon(df[[1]]))
rowsum((df[-1] > 4) + 0, toyear(df[[1]]))
3) aggregate.zoo: Convert to a zoo object and use aggregate.zoo. Note that the yearmon class internally represents a year and month as the year plus 0 for January, 1/12 for February, 2/12 for March, etc., so taking the integer part gives the year.
library(zoo)
z <- read.zoo(df)
aggregate(z > 4, as.yearmon, sum)
toyear <- function(x) as.integer(as.yearmon(x))
aggregate(z > 4, toyear, sum)
The result is a zoo time series with a yearmon index in the first case and an integer index in the second. If you want a data frame use fortify.zoo(ag) where ag is the result of aggregate.
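For instance, a minimal usage sketch:
ag <- aggregate(z > 4, as.yearmon, sum)
fortify.zoo(ag)   # the yearmon index becomes an Index column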
4) dplyr: toyear is from (3).
library(dplyr)
library(zoo)
df %>%
  group_by(YearMonth = as.yearmon(Date_ex)) %>%
  summarize_all(funs(sum(. > 4))) %>%
  ungroup %>%
  select(-Date_ex)   # drop the meaningless count of the date column itself
df %>%
  group_by(Year = toyear(Date_ex)) %>%
  summarize_all(funs(sum(. > 4))) %>%
  ungroup %>%
  select(-Date_ex)
A data.table solution is missing, so I'm adding one. Comments are in the code. I used set.seed(1) to generate the samples.
library(data.table)
setDT(df)
# add year and month to df
df[, `:=`(month = month(Date_ex),
          year = year(Date_ex))]
# monthly counts (values > 4), remove Date_ex
monthly_dt <- df[, lapply(.SD, function(x) sum(x > 4)), by = .(year, month), .SDcols = -("Date_ex")]
monthly_dt
   year month  A  B  C D
1: 2000     1 10 10 11 0
2: 2000     2 10 11  8 0
3: 2000     3 11 11 11 0
4: 2000     4 10 11  8 0
5: 2000     5  7 10  8 0
6: 2000     6  9  6  7 0
.....
# yearly counts (values > 4), remove Date_ex and month
yearly_dt <- df[, lapply(.SD, function(x) sum(x > 4)), by = .(year), .SDcols = -c("Date_ex", "month")]
yearly_dt
   year   A   B   C D
1: 2000 114 118 113 0
2: 2001 127 129 120 0
3: 2002 122 108 126 0
4: 2003 123 128 125 0
5: 2004 123 132 131 0
6: 2005  14  15  15 0
