R: calculating demand and supply dynamics

Some sample data first:
set.seed(123)
dat <- data.frame(day = 1:50,
                  demand = sample(0:17, size = 50, replace = TRUE),
                  supply = sample(2:9, size = 50, replace = TRUE))
reservoir <- 200
I have demand and supply data from day 1 to day 50,
and a fourth column which is the difference between supply and demand:
dat$balance <- dat$supply - dat$demand
I want to calculate another column called net deficit. Here's the logic:
If for a given day Demand > Supply, a deficit exists. However, this
deficit can be met by the reservoir, and hence the net deficit column gets zero.
If Supply > Demand, the excess supply is added to the reservoir (only if reservoir < 200). If the reservoir is at its full capacity (200), the excess supply is discarded.
If Demand > Supply and the reservoir is zero, then the net deficit column gets the difference between Demand and Supply.
For example, starting with day 1, there was a deficit (balance) of 3. This deficit is met by the reservoir (making it 197) and the net deficit is zero.
Day 2: the balance is -9, which is borrowed from the reservoir (making it 188), and the net deficit will be zero again.
Day 3: there is an excess of 1 which is used to fill the reservoir (since reservoir < 200); net deficit gets a value of 0 and the reservoir becomes 189.
Day 4: there is a deficit of 13 in balance, which is met by the reservoir. The reservoir further reduces to 176.
I hope this is clear.
If at some point the reservoir becomes 0, a deficit cannot be compensated and
therefore net deficit gets the value of dat$balance.

The solution is basically using a for loop to construct the reservoir vector from the balance each day. The provided sample did not actually manage to empty the reservoir in 50 days, so I made it longer (which means the numbers are not the same as in the 50-day example). You can then simply bind the vector as a column to your data, and set the net_deficit column to zero while the reservoir is positive.
library(tidyverse)
set.seed(123)
dat <- tibble(
  day    = 1:100,
  demand = sample(0:17, size = 100, replace = TRUE),
  supply = sample(2:9, size = 100, replace = TRUE)
)
balance <- dat$supply - dat$demand
reservoir <- rep(200, nrow(dat))
reservoir[1] <- reservoir[1] + balance[1]
for (day in 2:nrow(dat)) {
  reservoir[day] <- reservoir[day - 1] + balance[day]
}
out <- dat %>%
  bind_cols(balance = balance, reservoir = reservoir) %>%
  mutate(net_deficit = ifelse(reservoir >= 0, 0, reservoir))
out[61:70, ]
# A tibble: 10 x 6
     day demand supply balance reservoir net_deficit
   <int>  <int>  <int>   <int>     <dbl>       <dbl>
 1    61     11      6      -5      3.00        0
 2    62      1      4       3      6.00        0
 3    63      6      7       1      7.00        0
 4    64      4      4       0      7.00        0
 5    65     14      4     -10     -3.00       -3.00
 6    66      8      6      -2     -5.00       -5.00
 7    67     14      7      -7    -12.0       -12.0
 8    68     14      3     -11    -23.0       -23.0
 9    69     14      5      -9    -32.0       -32.0
10    70      7      4      -3    -35.0       -35.0
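Note that the loop in this answer never enforces the 200-unit cap or the zero floor described in the question; the reservoir is simply allowed to go negative. If you need those rules applied literally, here is a minimal sketch of a variant (reusing the balance vector from above):
reservoir <- numeric(nrow(dat))
net_deficit <- numeric(nrow(dat))
level <- 200                        # reservoir starts full (capacity 200)
for (i in seq_len(nrow(dat))) {
  level <- level + balance[i]       # surplus tops it up, a deficit draws it down
  net_deficit[i] <- min(level, 0)   # unmet demand once the reservoir runs dry
  level <- min(max(level, 0), 200)  # clamp: excess beyond capacity is discarded
  reservoir[i] <- level
}
out2 <- dat %>%
  mutate(balance = balance, reservoir = reservoir, net_deficit = net_deficit)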


R dplyr: How do I apply a less than / greater than mapping table across a large dataset efficiently?

I have a large dataset ~1M rows with, among others, a column that has a score for each customer record. The score is between 0 and 100.
What I'm trying to do is efficiently map the score to a rating using a rating table. Each customer receives a rating between 1 and 15 based on the customer's score.
# Generate Example Customer Data
library(tidyverse)  # for tibble(), %>%, map(), unnest()
set.seed(1)
n_customers <- 10
customer_df <-
  tibble(id = c(1:n_customers),
         score = sample(50:80, n_customers, replace = TRUE))
# Rating Map
rating_map <- tibble(
  max = c(47.0, 53.0, 57.0, 60.5, 63.0, 65.5, 67.3, 69.7,
          71.7, 74.0, 76.3, 79.0, 82.5, 85.5, 100.0),
  rating = c(15:1)
)
The best code that I've come up with to map the rating table onto the customer score data is as follows.
customer_df <-
  customer_df %>%
  mutate(rating = map(.x = score,
                      .f = ~ max(select(filter(rating_map, .x < max), rating)))) %>%
  unnest(rating)
The problem I'm having is that while it works, it is extremely inefficient. If you set n_customers to 100k in the above code, you can get a sense of how long it takes to run.
customer_df
# A tibble: 10 x 3
      id score rating
   <int> <int>  <int>
 1     1    74      5
 2     2    53     13
 3     3    56     13
 4     4    50     14
 5     5    51     14
 6     6    78      4
 7     7    72      6
 8     8    60     12
 9     9    63     10
10    10    67      9
I need to speed up the code because it's currently taking over an hour to run. I've identified the inefficiency in the code to be my use of the purrr::map() function. So my question is how I could replicate the above results without using the map() function?
Thanks!
customer_df$rating <- length(rating_map$max) -
  cut(customer_df$score, breaks = rating_map$max, labels = FALSE, right = FALSE)
This produces the same output and is much faster. It takes 1/20th of a second on 1M rows, which sounds like >72,000x speedup.
It seems like this is a good use case for the base R cut function, which assigns values to a set of intervals you provide.
cut divides the range of x into intervals and codes the values in x
according to which interval they fall. The leftmost interval
corresponds to level one, the next leftmost to level two and so on.
In this case you want the lowest rating for the highest score, hence the subtraction of the cut term from the length of the breaks.
EDIT -- added right = FALSE because you want the intervals to be closed on the left and open on the right. Now matches your output exactly; previously had different results when the value matched a break.
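As a quick sanity check of the cut() arithmetic on two of the scores above (using rating_map as defined in the question):
cut(c(56, 74), breaks = rating_map$max, labels = FALSE, right = FALSE)
# [1]  2 10
length(rating_map$max) - c(2, 10)
# [1] 13  5   # matches the ratings for scores 56 and 74 in the table above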
We could do a non-equi join
library(data.table)
setDT(rating_map)[customer_df, on = .(max > score), mult = "first"]
-output
       max rating    id
     <int>  <int> <int>
 1:     74      5     1
 2:     53     13     2
 3:     56     13     3
 4:     50     14     4
 5:     51     14     5
 6:     78      4     6
 7:     72      6     7
 8:     60     12     8
 9:     63     10     9
10:     67      9    10
Or another option in base R is with findInterval
customer_df$rating <- nrow(rating_map) -
findInterval(customer_df$score, rating_map$max)
-output
> customer_df
   id score rating
1   1    74      5
2   2    53     13
3   3    56     13
4   4    50     14
5   5    51     14
6   6    78      4
7   7    72      6
8   8    60     12
9   9    63     10
10 10    67      9
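If you want to time the suggested alternatives against each other on your own data, here is a rough benchmark sketch (assuming the microbenchmark package is installed):
library(microbenchmark)
microbenchmark(
  cut          = length(rating_map$max) -
                   cut(customer_df$score, breaks = rating_map$max,
                       labels = FALSE, right = FALSE),
  findInterval = nrow(rating_map) - findInterval(customer_df$score, rating_map$max),
  times = 20
)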

For loop that references prior rows

I'm interested in filtering out data based on a set of rules.
I have a dataset that contains play data for all games in which a team had a .8 win probability at some point. What I'd like to do is find that point in which the win probability reached .8 and remove every play thereafter until the next game data begins. The dataset contains numerous games so once a game ends data from a new one begins in which the win probability goes back to around .5.
Here are the relevant columns and each row is a play in the game:
game_id = unique num for each game
team = team that will eventually get an .8 win prob
play_id = num that is increased (but not necessarily in sequential order, for some reason) after each play
win_per = num showing what the teams win percentage chance at the start of that recorded play was
Example df
df = data.frame(game_id = c(122,122,122,122,122,144,144,144,144,144),
                team = c("a","a","a","a","a","b","b","b","b","b"),
                play_id = c(1,5,22,25,34,45,47,55,58,66),
                win_per = c(.5,.6,.86,.81,.85,.54,.43,.47,.81,.77))
So in this small example, I have recorded 5 plays of two teams (a and b) who both obtained a win_prob of at least .8 at some point in the game. In both example cases, I would want to have all the plays removed AFTER they attained this .8 mark regardless of whether the win_prob kept rising or fell back below .8.
So team a would have the final two rows of data removed (win_prob == .81 and .85) and team b would have the final row removed (win_prob = .77)
I'm imagining running a for loop that checks if the team in any row is the same team as the prior row, and if so, find a win_prob >= .8 with the lowest play-id (as this would be the first time the team reached .8) and then somehow remove the rest of the rows following that match UNTIL the team != prior row's team.
Of course, you might know a better way as well. Thank you so much for helping me out!
No need to use a loop, that whole selection can be performed in 1 line using the dplyr package:
df = data.frame(game_id = c(122,122,122,122,122,144,144,144,144,144),
                team = c("a","a","a","a","a","b","b","b","b","b"),
                play_id = c(1,5,22,25,34,45,47,55,58,66),
                win_per = c(.5,.6,.86,.81,.85,.54,.43,.47,.81,.77))
library(dplyr)
#group by team
#find the first row that exceeds .80 and add temp column
#save the row from 1 to the row that exceeds 0.80
#remove temp column
df %>%
  group_by(team, game_id) %>%
  mutate(g80 = min(which(win_per >= 0.80))) %>%
  slice(1:g80) %>%
  select(-g80)
# A tibble: 7 x 4
# Groups:   team [2]
  game_id team  play_id win_per
    <dbl> <fct>   <dbl>   <dbl>
1     122 a           1    0.5
2     122 a           5    0.6
3     122 a          22    0.86
4     144 b          45    0.54
5     144 b          47    0.43
6     144 b          55    0.47
7     144 b          58    0.81
Here is a base R way using cumsum in ave
subset(df, ave(win_per > 0.8, game_id, FUN = function(x) c(0, cumsum(x)[-length(x)])) == 0)
# game_id team play_id win_per
#1 122 a 1 0.50
#2 122 a 5 0.60
#3 122 a 22 0.86
#6 144 b 45 0.54
#7 144 b 47 0.43
#8 144 b 55 0.47
#9 144 b 58 0.81
and using the similar concept in dplyr
library(dplyr)
df %>% group_by(game_id) %>% filter(lag(cumsum(win_per > 0.8) == 0, default = TRUE))
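To see why the cumsum()/lag() trick keeps the row that first crosses 0.8 but drops everything after it, here is how it evaluates on team a's win_per values:
x <- c(.5, .6, .86, .81, .85)                     # team a's win_per
cumsum(x > 0.8)                                   # 0 0 1 2 3
cumsum(x > 0.8) == 0                              # TRUE TRUE FALSE FALSE FALSE
dplyr::lag(cumsum(x > 0.8) == 0, default = TRUE)  # TRUE TRUE TRUE FALSE FALSE
# rows 1-3 are kept, including the first play at or above 0.8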

group_by() summarise() and weights percentages - R

Let's suppose that a company has 3 Bosses and 20 Employees, where each Employee has done n_Projects with an overall Performance in percentage:
df <- data.frame(Boss = sample(1:3, 20, replace = TRUE),
                 Employee = sample(1:20, 20),
                 n_Projects = sample(50:100, 20, replace = TRUE),
                 Performance = round(sample(1:100, 20, replace = TRUE) / 100, 2),
                 stringsAsFactors = FALSE)
> df
   Boss Employee n_Projects Performance
1     3        8         79        0.57
2     1        3         59        0.18
3     1       11         76        0.43
4     2        5         85        0.12
5     2        2         75        0.10
6     2        9         66        0.60
7     2       19         85        0.36
8     1       20         79        0.65
9     2       17         79        0.90
10    3       14         77        0.41
11    1        1         78        0.97
12    1        7         72        0.52
13    2        6         62        0.69
14    2       10         53        0.97
15    3       16         91        0.94
16    3        4         98        0.63
17    1       18         63        0.95
18    2       15         90        0.33
19    1       12         80        0.48
20    1       13         97        0.07
The CEO asks me to compute the quality of the work for each boss. However, he asks for a specific calculation: each Performance value has to have a weight equal to the n_Projects value over the total n_Projects for that boss.
For example, Boss 1 has a total of 604 n_Projects, so Employee 1's row contributes a Performance weight of 0.13 (78/604 * 0.97 = 0.13), Employee 3's row contributes 0.02 (59/604 * 0.18 = 0.02), and so on. The sum of these Performance weights is the Boss performance, which for Boss 1 is 0.52. So, the final output should be like this:
  Boss total_Projects Performance
1    1            604        0.52
2    2            340        0.18  # the values for boss 2 are invented
3    3            230        0.43  # the values for boss 3 are invented
However, I'm still struggling with this:
df %>%
  group_by(Boss) %>%
  summarise(total_Projects = sum(n_Projects),
            Weight_Project = n_Projects / sum(total_Projects))
In addition to this problem, can you give me any feedback about it (my code, specifically) or any recommendation to improve my data-manipulation skills? (You can see in my profile that I have asked a lot of questions like this, but I'm still not able to solve them on my own.)
We can get the sum of the product of 'n_Projects' and 'Performance' and divide by 'total_projects':
library(dplyr)
df %>%
  group_by(Boss) %>%
  summarise(total_projects = sum(n_Projects),
            Weight_Project = sum(n_Projects * Performance) / total_projects)
# or
# Weight_Project = n_Projects %*% Performance/total_projects)
# A tibble: 3 x 3
#    Boss total_projects Weight_Project
#   <int>          <int>          <dbl>
# 1     1            604          0.518
# 2     2            595          0.475
# 3     3            345          0.649
Adding some more details about what you did and about #akrun's answer:
You must have received the following error message:
df %>%
  group_by(Boss) %>%
  summarise(total_Projects = sum(n_Projects),
            Weight_Project = n_Projects / sum(total_Projects))
## Error in summarise_impl(.data, dots) :
## Column `Weight_Project` must be length 1 (a summary value), not 7
This tells you that the calculation you made for Weight_Project does not yield a single value for each Boss, but 7 of them. summarise is there to summarise several values into one (by means, sums, etc.). Here you just divide each value of n_Projects by sum(total_Projects), but you don't summarise it into a single value.
Assuming that what you had in mind was first calculating the weight for each performance, then combining it with the performance mark to yield the weighted mean performance, you can proceed in two steps:
df %>%
  group_by(Boss) %>%
  mutate(Weight_Performance = n_Projects / sum(n_Projects)) %>%
  summarise(weighted_mean_performance = sum(Weight_Performance * Performance))
The mutate statement preserves the total number of rows in df, but sum(n_Projects) is calculated for each Boss value thanks to group_by.
Once you have a project weight for each row (which depends on the boss), you can calculate the weighted mean (a mean, hence a summary value) with summarise.
A more compact way that still makes the weighting explicit would be:
df %>%
  group_by(Boss) %>%
  summarise(weighted_mean_performance = sum((n_Projects / sum(n_Projects)) * Performance))
# Rearranging to minimise parentheses gives #akrun's answer:
df %>%
  group_by(Boss) %>%
  summarise(weighted_mean_performance = sum(n_Projects * Performance) / sum(n_Projects))
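For reference, base R can produce the same per-boss figure with weighted.mean(), since weighted.mean(x, w) computes sum(w * x) / sum(w); this is just a sketch, not part of the answers above:
sapply(split(df, df$Boss), function(d) weighted.mean(d$Performance, d$n_Projects))
# should reproduce the Weight_Project values above: 0.518, 0.475, 0.649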

rolling percentile for conditional selections in r

I have a data.frame with daily maximum and minimum temperatures for 40 years and need to select all days that have maximum temperature above 90th percentile of maximum temperature and minimum temperatures above the 85th percentile of minimum temperature.
I was able to do that:
> head(df)
  YEAR MONTH DAY     Date MEAN  MAX  MIN
1 1965     1   1 1/1/1965   NA 27.0 17.0
2 1965     1   2 1/2/1965 24.0 28.0 20.7
3 1965     1   3 1/3/1965 19.9 23.7 16.2
4 1965     1   4 1/4/1965 18.0 23.4 12.0
5 1965     1   5 1/5/1965 19.7 24.0 14.0
6 1965     1   6 1/6/1965 18.6 24.0 13.0
library(data.table)  # the := / rleid() syntax and chained [ ] below need data.table
setDT(df)
df[, hotday := +(MAX >= quantile(MAX, .90, na.rm = TRUE, type = 6) &
                 MIN >= quantile(MIN, .85, na.rm = TRUE, type = 6))
   ][, length := with(rle(hotday), rep(lengths, lengths))  # run length, so I can select consecutive days only
   ][hotday == 0, length := 0][!!hotday, Highest_Mean := max(MEAN), rleid(length)][]  # highest MEAN temp for each consecutive group
But I need to do the same thing using centered rolling percentiles over 15-day windows (i.e., for a given day, the 90th percentile of maximum temperature is the 90th percentile of the historical data for a 15-day window centered on that day).
I mean that the percentile should be calculated from the historical data of each calendar day using a 15-day calendar window. That is, there are 365 days, so for day 118 I will use the historical data for days 111, 112, ..., 125. In my case I have data for 40 years, so the 15-day window will yield a total sample size of 40 years × 15 days = 600 for each calendar day. The moving window is based on the calendar day, not the time series.
Any thoughts, please?
What about something like this to select the rows you want?
Since you want a sliding window of 15 days centered on the day of interest, you will always have windows of 7 preceding days + the day of interest + 7 following days. Because of this constraint, the first 7 and the last 7 days (rows) of the dataset are excluded and forced to FALSE (the rep(FALSE, 7) terms).
The code inside the sapply() call tests each day (starting from day 7 + 1 = 8) against the 15-day sliding window (as defined before) and checks whether the max temperature is higher than the 90th percentile of that window (test1). A similar test (test2) is executed on the MIN temperature. If either test is TRUE, TRUE is returned; otherwise FALSE is output. (You can easily adapt this to your needs.)
The resulting vector (stored in KEEP) contains TRUE/FALSE values that can be used to subset the initial dataframe.
set.seed(111)
df <- data.frame(MIN = sample(50:70, size = 50, replace = TRUE),
                 MAX = sample(70:90, size = 50, replace = TRUE))
head(df)
KEEP <- c(rep(FALSE, 7),
          sapply(8:(length(df$MAX) - 7), function(i) {
            test1 <- df$MAX[i] >= as.numeric(quantile(df$MAX[(i - 7):(i + 7)], 0.9, na.rm = TRUE))
            test2 <- df$MIN[i] <= as.numeric(quantile(df$MIN[(i - 7):(i + 7)], 0.15, na.rm = TRUE))
            test1 | test2
          }),
          rep(FALSE, 7))
head(KEEP)
df <- df[KEEP, ]
df
This should return
   MIN MAX
10  51  86
13  51  73
14  50  75
15  53  89
22  55  83
28  55  90
31  51  72
32  60  88
37  52  84
42  56  87
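Note that the snippet above slides over the time series itself. The calendar-day pooling described in the question (the same 15 calendar days drawn from all 40 years) needs the day of year instead; here is a minimal base R sketch, assuming the Date, MAX and MIN columns from the question's head(df) and ignoring leap-year alignment for simplicity:
df$Date <- as.Date(df$Date, format = "%m/%d/%Y")
df$doy  <- as.integer(format(df$Date, "%j"))   # day of year, 1-366

# percentile thresholds for each calendar day, pooling day - 7 ... day + 7
# across all years; the window wraps around the year boundary
thr <- t(sapply(1:366, function(d) {
  window <- (((d - 7):(d + 7)) - 1) %% 366 + 1
  idx <- df$doy %in% window
  c(max90 = as.numeric(quantile(df$MAX[idx], 0.90, na.rm = TRUE, type = 6)),
    min85 = as.numeric(quantile(df$MIN[idx], 0.85, na.rm = TRUE, type = 6)))
}))

df$hotday <- as.integer(df$MAX >= thr[df$doy, "max90"] &
                        df$MIN >= thr[df$doy, "min85"])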

How to subset rows group-wise in R?

Probably my question title is not appropriate, sorry for that. I have a CSV file named "table_parameter". Please download it from here. The data look like this:
  X       time  Avg.PM10       sill      range      nugget
1 1 2012030101 52.269231 0.11054330  45574.072 0.037261216
2 2 2012030102 55.314286 0.20250974  87306.391 0.048315377
3 3 2012030103 56.038095 0.17711558  56806.827 0.034956709
4 4 2012030104 55.904762 0.16466350 104767.669 0.030752835
5 5 2012030105 57.123810 0.23638953  87306.391 0.037308364
6 6 2012030106 58.542857 0.24130317  87306.391 0.042108754
7 7 2012030107 60.066667 0.20362439  87306.391 0.037353980
8 8 2012030108 63.790476 0.19417801  87306.391 0.034144464
.
.
.
In my dataframe there is a variable named time containing hourly values from 01 March 2012 to 07 March 2012 in numeric form. For example, 01 March 2012, 1:00 a.m. is written as 2012030101, and so on.
I want to subset this dataframe time-wise: I want a dataframe containing only the morning hours of each of the 7 days. Morning time is 1:00 a.m. to 5:00 a.m. That means I want a dataframe which contains all the values belonging to 2012030101 to 2012030105, 2012030201 to 2012030205, ..., 2012030701 to 2012030705. In other words, I want a dataframe like below:
time Avg.PM10 sill range nugget
1 49 49 2012030301 17.371429 0.7154449 48239.54 0.17163448
2 50 50 2012030302 17.811321 1.1201199 117603.55 0.12425337
3 51 51 2012030303 17.094340 0.5799705 55103.16 0.12061258
4 52 52 2012030304 16.679245 0.8486774 86725.77 0.15210005
5 53 53 2012030305 16.885714 1.2408621 154677.61 0.09743375
6 73 73 2012030401 21.619048 0.4417369 104767.67 0.08567888
7 74 74 2012030402 20.485714 2.0271124 215474.54 0.06340464
8 75 75 2012030403 20.552381 0.4509354 104767.67 0.06319812
9 76 76 2012030404 20.104762 0.4438798 104767.67 0.05639840
10 77 77 2012030405 20.133333 0.5050201 104767.67 0.09037341
.
.
.
To do this I wrote the following code:
table <- read.csv("table_parameter.csv")
table
table_morning <- subset(table, time %in% c(2012030101:2012030105,
                                           2012030201:2012030205,
                                           2012030301:2012030305,
                                           2012030401:2012030405,
                                           2012030501:2012030505,
                                           2012030601:2012030605,
                                           2012030701:2012030705) & Avg.PM10 <= 30)
table_morning
But this code is not efficient. As you can see, I wrote out all the hour values to subset! If I wanted to do the same for 90 days, it would be very inefficient. So, how can I do this subsetting efficiently? If you have any further query, please let me know.
You could use substring like below:
table_morning <- subset(table, substring(time, 9, 10) %in% c("01", "02","03","04", "05") & Avg.PM10 <=30)
I would extract the hour from the time and then filter accordingly.
For example:
library(dplyr)
data_orpheus = read.csv('table_parameter.csv')
data_orpheus$hour = as.numeric(substr(as.character(data_orpheus$time),9,10))
data_morning = data_orpheus %>% filter(hour >= 1 & hour <= 5)
The dplyr operator %>% is not necessary; you could filter with data_morning = data_orpheus[with(data_orpheus, hour >= 1 & hour <= 5), ]
Update
I am still learning dplyr, so here is a beautiful one-liner that does it all:
data_morning = read.csv('table_parameter.csv') %>%      # Read CSV
  mutate(hours = as.numeric(substr(time, 9, 10))) %>%   # Extract hours
  filter(hours >= 1 & hours <= 5) %>%                   # Keep only mornings
  select(-hours)                                        # Drop hours, if not needed
head(data_morning)
   X       time Avg.PM10      sill     range     nugget
1  1 2012030101 52.26923 0.1105433  45574.07 0.03726122
2  2 2012030102 55.31429 0.2025097  87306.39 0.04831538
3  3 2012030103 56.03810 0.1771156  56806.83 0.03495671
4  4 2012030104 55.90476 0.1646635 104767.67 0.03075283
5  5 2012030105 57.12381 0.2363895  87306.39 0.03730836
6 25 2012030201 67.10476 0.1434977  72755.33 0.03003781
Thanks a lot for the other answers. My improvised answer, kept here for my own future reference:
table<-read.csv("table_parameter.csv")
times<- as.numeric(substr(table$time,9,10))
table_morning<- subset(table, times>=1 & times<=5 & Avg.PM10<=30)
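Since time is a plain number like 2012030101, you can also skip the string conversion entirely and take the hour with modulo arithmetic, a sketch equivalent to the substring approaches above:
table <- read.csv("table_parameter.csv")
hour <- table$time %% 100   # the last two digits are the hour
table_morning <- subset(table, hour >= 1 & hour <= 5 & Avg.PM10 <= 30)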
