How to change all values within group following lag(value) > 1 - r

Someone named Tarqon on Reddit's /r/Rlanguage solved the problem: use 1 + cumsum(days_between >= 45) in place of the if_else(). Because mutate() evaluates the whole column at once, lag(eoc_45dco) only ever sees the original column of 1s, so just the gap row itself gets bumped to 2; the cumulative sum instead carries the increment forward to every later row.
group_by(DMHID) %>%
arrange(DMHID, DateOfService) %>%
mutate(days_between = as.numeric(DateOfService - lag(DateOfService, default = DateOfService[1]))) %>%
mutate(eoc_45dco = 1 + cumsum(days_between >= 45)) %>%
mutate(id_eoc = as.integer(paste0(DMHID, eoc_45dco))) %>%
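To see why the cumsum() version propagates while the if_else() version does not, here is a minimal sketch on a made-up gap vector (not the real data):
days_between <- c(0, 3, 10, 59, 7, 2, 80, 1)
1 + cumsum(days_between >= 45)
#> [1] 1 1 1 2 2 2 3 3
The logical comparison is coerced to 0/1, so the counter ticks up at every gap of 45 days or more and every subsequent row inherits the new group number.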
ORIGINAL QUESTION
So I am trying to split cases based on the number of days (> 45) between one visit and the next. It works for the individual instance where there are more than 45 days between one visit and the next, but I need every visit after that point to be part of the second group. For example, Participant 1234 has 362 visits, but between visit 105 (2016-12-26) and visit 106 (2017-02-23) there was a 59-day gap, so I want all cases after that to be labeled 2. In other words, all cases up to and including visit 105 are 12341 and everything after that is 12342, so I can group by this variable for later analyses. The problem is that I can only seem to get the 106th visit labeled 12342; everything before and after it stays 12341. I created a stripped-down dataset and script that reproduces the problem.
https://www.dropbox.com/s/k6gvo8igvbhpgti/reprex.zip?dl=0
EDIT: I just thought of another way to say it. I basically need to figure out how to group/subset the data for each person, with the dividing line being the first time there is a gap of 45 days or more. I might be going down the wrong road with my current implementation, so if you can suggest alternative ways to split the data the way I want, let me know. In the example I only have one person's visits; the full dataset has a few thousand people with similar issues.
barometer <- df_pdencs_orig %>%
select(-EncID, -SiteName, -EOCKey, -ProgramLevel, -ProgramLevelCode, -ProcedureDesc, -MedicationValue, -CheckDate, -PdAmount, -PayerType) %>%
mutate_at(vars(contains("Date")), funs(ymd)) %>%
filter(DMHID %in% valid_diag$DMHID & DateOfService >= ymd(open_date)) %>%
group_by(DMHID) %>%
arrange(DMHID, DateOfService) %>%
mutate(days_between = DateOfService - lag(DateOfService, n = 1, default = DateOfService[1])) %>%
mutate(eoc_45dco = 1) %>%
mutate(eoc_45dco = if_else(days_between >= 45, lag(eoc_45dco) + 1, eoc_45dco)) %>%
mutate(eoc_45dco2 = if_else(lag(eoc_45dco) > 1, eoc_45dco + 1, eoc_45dco)) %>%
mutate(id_eoc = as.integer(paste0(DMHID, eoc_45dco))) %>%
...
The reprex below works just fine so I don't think that helps.
library(tidyverse)
library(lubridate)
#>
#> Attaching package: 'lubridate'
#> The following object is masked from 'package:base':
#>
#> date
df <- data.frame(
date = sample(seq(as.Date('2016/06/01'), as.Date('2017/03/01'), by="day"), 11),
days = as.difftime(c(40:50), units = "days")
)
df %>%
mutate(id = 1234) %>%
arrange(days) %>%
mutate(Z = 1) %>%
mutate(Z = if_else(days >= 45, lag(Z) + 1, Z)) %>%
mutate(id_eoc = as.integer(paste0(id, Z)))
#> date days id Z id_eoc
#> 1 2016-06-30 40 days 1234 1 12341
#> 2 2016-11-25 41 days 1234 1 12341
#> 3 2016-09-09 42 days 1234 1 12341
#> 4 2017-01-16 43 days 1234 1 12341
#> 5 2016-08-16 44 days 1234 1 12341
#> 6 2016-09-23 45 days 1234 2 12342
#> 7 2016-09-05 46 days 1234 2 12342
#> 8 2016-08-29 47 days 1234 2 12342
#> 9 2016-07-08 48 days 1234 2 12342
#> 10 2017-01-11 49 days 1234 2 12342
#> 11 2017-02-22 50 days 1234 2 12342
Created on 2018-04-17 by the reprex package (v0.2.0).
As such, I think the issue may be with the dates, since subtracting dates gives a difftime rather than an integer.
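That hunch is only part of it. Subtracting two Dates does return a difftime, and wrapping it in as.numeric() (as the accepted fix above does) keeps the comparison on plain numbers; a quick check using the two dates from the question:
d1 <- as.Date("2017-02-23")
d2 <- as.Date("2016-12-26")
d1 - d2
#> Time difference of 59 days
as.numeric(d1 - d2)
#> [1] 59
Note also that the reprex above only appears to work because every row from 45 days upward satisfies the condition on its own, not because the label of 2 propagates forward.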

Related

Create date of "X" column, when I have age in days at "X" column and birth date column in R

I'm having some trouble finding out how to do a specific thing in R.
In my dataset, I have a column with the date of birth of participants. I also have a column giving me the age in days at which a disease was diagnosed.
What I want to do is to create a new column showing the date of diagnosis. I'm guessing it's a pretty easy thing to do since I have all the information needed, basically it's birth date + X number of days = Date of diagnosis, but I'm unable to figure out how to do it.
All of my searches give me information on the opposite, going from date to age. So if you're able to help me, it would be much appreciated!
library(tidyverse)
library(lubridate)
df <- tibble(
  birth = sample(seq(as.Date("1950-01-01"), today(), by = "day"), 10, replace = TRUE),
  age = sample(3650:15000, 10, replace = TRUE)
)
df %>%
mutate(diagnosis_date = birth %m+% days(age))
#> # A tibble: 10 x 3
#> birth age diagnosis_date
#> <date> <int> <date>
#> 1 1955-01-16 6684 1973-05-05
#> 2 1958-11-03 6322 1976-02-24
#> 3 2007-02-23 4312 2018-12-14
#> 4 2002-07-11 8681 2026-04-17
#> 5 2021-12-28 11892 2054-07-20
#> 6 2017-07-31 3872 2028-03-07
#> 7 1995-06-30 14549 2035-04-30
#> 8 1955-09-02 12633 1990-04-04
#> 9 1958-10-10 4534 1971-03-10
#> 10 1980-12-05 6893 1999-10-20
Created on 2022-06-30 by the reprex package (v2.0.1)
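Since Date arithmetic in base R already works in whole days, a plain addition gives the same result without lubridate's period machinery; this is just an equivalent alternative, not a correction:
df %>%
  mutate(diagnosis_date = birth + age)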

Sum unique occurrences per night and create a new data frame in R

I have studied prey deliveries in a breeding owl and want to score the number of prey items delivered to the nestlings during the night. I define night as from 21:00 to 05:00. How could I make a new data frame with the number of prey each night per location ID, based on this 24/7 observation dataset? In the new data frame I wish to have the following columns: ID (A & B), No_prey_during_night (the sum of prey items), and Time (the dates, e.g. 4/6 to 5/6); there will be one row per night per ID.
https://drive.google.com/file/d/1y5VCoNWZCmYbyWCktKfMSBqjOIaLeumQ/view?usp=sharing. I have done it in Excel so far, but it is very time-consuming. I would be happy to get help with a simple script I could use in R.
To take into account the fact that a night begins and ends on different dates, you could first assign all the morning hours to the prior day. The final label (the Time column in your question) then includes the next day. If the year of the data collection has a Feb 29, make sure the year is correct (I used 2022).
library(dplyr)
library(lubridate)
read.csv("Tot_prey_example.csv") %>%
mutate(time = make_datetime(year = 2022, month = Month, day = Day, hour = Hour),
night_time = if_else(between(Hour, 0, 5), time - days(1), time),
night_date = floor_date(night_time, unit = "day"),
night = Hour <= 5 | Hour >= 21) %>%
filter(night) %>%
group_by(ID, night_date) %>%
summarise(No_prey_during_night = sum(n), .groups = "drop") %>%
mutate(next_day = night_date + days(1),
Time = glue::glue("{day(night_date)}/{month(night_date)} to {day(next_day)}/{month(next_day)}")) %>%
select(ID, No_prey_during_night, Time)
#> # A tibble: 88 × 3
#> ID No_prey_during_night Time
#> <chr> <int> <glue>
#> 1 A 12 4/6 to 5/6
#> 2 A 22 5/6 to 6/6
#> 3 A 20 6/6 to 7/6
#> 4 A 14 7/6 to 8/6
#> 5 A 14 8/6 to 9/6
#> 6 A 27 9/6 to 10/6
#> 7 A 22 10/6 to 11/6
#> 8 A 18 11/6 to 12/6
#> 9 A 22 12/6 to 13/6
#> 10 A 25 13/6 to 14/6
#> # … with 78 more rows
Created on 2022-05-18 by the reprex package (v2.0.1)
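To see the morning-shift idea in isolation (a toy vector, not the posted data): an observation at 23:00 on June 5 and observations at 02:00 and 04:00 on June 6 all land on the same night date.
t <- make_datetime(2022, 6, c(5, 6, 6), c(23, 2, 4))
floor_date(if_else(hour(t) <= 5, t - days(1), t), unit = "day")
#> [1] "2022-06-05 UTC" "2022-06-05 UTC" "2022-06-05 UTC"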
You can do something like this:
library(dplyr)
library(lubridate)
read.csv("Tot_prey_example.csv") %>%
# create initial datetime variable, `night`
mutate(night = lubridate::make_datetime(2021, Month,Day,Hour)) %>%
# filter to nighttime hours
filter(Hour>=21 | Hour<=5) %>%
# flip datetime variable to the next day if hour is >=21
mutate(night = if_else(Hour>=21,night + 60*60*24, night)) %>%
# now group by the date part of `night`
group_by(ID,Night_No = as.Date(night)) %>%
# summarize the sum of prey
summarize(
No_prey_during_night = sum(n),
No_deliveries_during_night = sum(PreyDelivery)
) %>%
# replace the Night_No with a character variable showing both dates
mutate(Night_No = paste0(Night_No-1, "-", Night_No))
Output:
# A tibble: 88 × 4
# Groups: ID [2]
ID Night_No No_prey_during_night No_deliveries_during_night
<chr> <chr> <int> <int>
1 A 2021-06-04-2021-06-05 12 5
2 A 2021-06-05-2021-06-06 22 6
3 A 2021-06-06-2021-06-07 20 5
4 A 2021-06-07-2021-06-08 14 6
5 A 2021-06-08-2021-06-09 14 5
6 A 2021-06-09-2021-06-10 27 5
7 A 2021-06-10-2021-06-11 22 4
8 A 2021-06-11-2021-06-12 18 6
9 A 2021-06-12-2021-06-13 22 6
10 A 2021-06-13-2021-06-14 25 5
# … with 78 more rows

rolling 30-day geometric mean with variable width

The solution to this question by @ShirinYavari was almost what I needed, except for its use of a static averaging window width of 2. I have a dataset of random samples from multiple stations for which I want to calculate a rolling 30-day geometric mean. I want all samples within a 30-day window of a given sample to be averaged, and the width may change depending on whether preceding samples are farther apart or closer together in time; for instance, you would need to average 2, 3, or more samples if 1, 2, or more preceding samples fall within 30 days of a given sample.
Here is some example data, plus my code attempt:
RESULT = c(50, 900, 25, 25, 125, 50, 25, 25, 2000, 25, 25,
           25, 25, 25, 25, 25, 25, 325, 25, 300, 475, 25)
DATE = as.Date(c("2018-05-23", "2018-06-05", "2018-06-17",
                 "2018-08-20", "2018-10-05", "2016-05-22",
                 "2016-06-20", "2016-07-25", "2016-08-11",
                 "2017-07-21", "2017-08-08", "2017-09-18",
                 "2017-10-12", "2011-04-19", "2011-06-29",
                 "2011-08-24", "2011-10-23", "2012-06-28",
                 "2012-07-16", "2012-08-14", "2012-09-29",
                 "2012-10-24"))
FINAL_SITEID = c(rep("A", 5), rep("B", 8), rep("C", 9))
df = data.frame(FINAL_SITEID, DATE, RESULT)
data_roll <- df %>%
  group_by(FINAL_SITEID) %>%
  arrange(DATE) %>%
  mutate(day = DATE - dplyr::lag(DATE, n = 1),
         day = replace_na(day, 1),
         rnk = cumsum(c(TRUE, day > 30))) %>%
  group_by(FINAL_SITEID, rnk) %>%
  mutate(count = rowid(rnk)) %>%
  mutate(GM30 = rollapply(RESULT, width = count, geometric.mean, fill = RESULT, align = "right"))
I get this error message, which seems like it should be an easy fix, but I can't figure it out:
Error: Column `rnk` must be length 5 (the group size) or one, not 6
The easiest way to compute rolling statistics over datetime windows is the runner package; you don't have to hack around to get exact 30-day windows. The runner() function allows you to apply any R function in a rolling window. Below is an example of a 30-day geometric.mean within each FINAL_SITEID group:
library(psych)
library(runner)
df %>%
group_by(FINAL_SITEID) %>%
arrange(DATE) %>%
mutate(GM30 = runner(RESULT, k = 30, idx = DATE, f = geometric.mean))
# FINAL_SITEID DATE RESULT GM30
# <fct> <date> <dbl> <dbl>
# 1 C 2011-04-19 25 25.0
# 2 C 2011-06-29 25 25.0
# 3 C 2011-08-24 25 25.0
# 4 C 2011-10-23 25 25.0
# 5 C 2012-06-28 325 325.
# 6 C 2012-07-16 25 90.1
# 7 C 2012-08-14 300 86.6
# 8 C 2012-09-29 475 475.
# 9 C 2012-10-24 25 109.
# 10 B 2016-05-22 50 50.0
The width argument of rollapply can be a vector of widths which can be set using findInterval. An example of this is shown in the Examples section of the rollapply help file and we use that below.
library(dplyr)
library(psych)
library(zoo)
data_roll <- df %>%
  arrange(FINAL_SITEID, DATE) %>%
  group_by(FINAL_SITEID) %>%
  mutate(GM30 = rollapplyr(RESULT, 1:n() - findInterval(DATE - 30, DATE),
                           geometric.mean, fill = NA)) %>%
  ungroup
giving:
# A tibble: 22 x 4
FINAL_SITEID DATE RESULT GM30
<fct> <date> <dbl> <dbl>
1 A 2018-05-23 50 50.0
2 A 2018-06-05 900 212.
3 A 2018-06-17 25 104.
4 A 2018-08-20 25 25.0
5 A 2018-10-05 125 125.
6 B 2016-05-22 50 50.0
7 B 2016-06-20 25 35.4
8 B 2016-07-25 25 25.0
9 B 2016-08-11 2000 224.
10 B 2017-07-21 25 25.0
# ... with 12 more rows
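To make the variable widths concrete, here is the width expression evaluated by hand on site A's five dates (taken from the example data above): each width is the row number minus the count of observations dated at least 30 days earlier, i.e. the number of samples inside the trailing 30-day window.
d <- as.Date(c("2018-05-23", "2018-06-05", "2018-06-17", "2018-08-20", "2018-10-05"))
seq_along(d) - findInterval(d - 30, d)
#> [1] 1 2 3 1 1
So the first three samples accumulate into windows of 1, 2 and 3 observations, while the 2018-08-20 and 2018-10-05 samples stand alone, matching the GM30 values for site A above.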

Struggling to Create a Pivot Table in R

I am very, very new to any type of coding language. I am used to pivot tables in Excel and am trying to replicate in R a pivot I have done in Excel. I have spent a long time searching the internet/YouTube, but I just can't get it to work.
I am looking to produce a table in which the left-hand column shows a number of locations, and across the top of the table are the different pages that have been viewed. I want the table to show the number of views per location for each of these pages.
The data frame 'specificreports' shows all views over the past year for different pages on an online platform. I want to filter for the month of October, and then pivot the different Employee Teams against the number of views for different pages.
specificreports <- readxl::read_excel("Multi-Tab File - Dashboard Usage.xlsx", sheet = "Specific Reports")
specificreportsLocal <- tbl_df(specificreports)
specificreportsLocal %>% filter(Month == "October") %>%
group_by("Employee Team") %>%
This bit works, in that it groups the different team names and filters entries for the month of October. After this I have tried using the summarise function to summarise the number of hits, but I can't get it to work at all. I keep getting errors regarding data types, and I keep getting confused because the solutions I look up keep using different packages.
I would appreciate any help, using the simplest way of doing this as I am a total newbie!
Thanks in advance,
Holly
Let's see if I can help a bit. It's hard to know what your data looks like from the info you gave us, so I'm going to guess and make some fake data for us to play with. It's worth noting that having field names with spaces in them is going to make your life really hard; you should start by renaming your fields to something more manageable. Since I'm just making data up, I'll give my fields names without spaces:
library(tidyverse)
## this makes some fake data
## a data frame with 3 fields: month, team, value
n <- 100
specificreportsLocal <-
data.frame(
month = sample(1:12, size = n, replace = TRUE),
team = letters[1:5],
value = sample(1:100, size = n, replace = TRUE)
)
That's just a data frame called specificreportsLocal with three fields: month, team, value
Let's do some things with it:
# This will give us total values by team when month = 10
specificreportsLocal %>%
filter(month == 10) %>%
group_by(team) %>%
summarize(total_value = sum(value))
#> # A tibble: 4 x 2
#> team total_value
#> <fct> <int>
#> 1 a 119
#> 2 b 172
#> 3 c 67
#> 4 d 229
I think that's sort of like what you already did, except I added the summarize to show how it works.
Now let's use all months and reshape it from 'long' to 'wide'
# if I want to see all months I leave out the filter and
# add a group_by month
specificreportsLocal %>%
group_by(team, month) %>%
summarize(total_value = sum(value)) %>%
head(5) # this just shows the first 5 values
#> # A tibble: 5 x 3
#> # Groups: team [1]
#> team month total_value
#> <fct> <int> <int>
#> 1 a 1 17
#> 2 a 2 46
#> 3 a 3 91
#> 4 a 4 69
#> 5 a 5 83
# to make this 'long' data 'wide', we can use the `spread` function
specificreportsLocal %>%
group_by(team, month) %>%
summarize(total_value = sum(value)) %>%
spread(team, total_value)
#> # A tibble: 12 x 6
#> month a b c d e
#> <int> <int> <int> <int> <int> <int>
#> 1 1 17 122 136 NA 167
#> 2 2 46 104 158 94 197
#> 3 3 91 NA NA NA 11
#> 4 4 69 120 159 76 98
#> 5 5 83 186 158 19 208
#> 6 6 103 NA 118 105 84
#> 7 7 NA NA 73 127 107
#> 8 8 NA 130 NA 166 99
#> 9 9 125 72 118 135 71
#> 10 10 119 172 67 229 NA
#> 11 11 107 81 NA 131 49
#> 12 12 174 87 39 NA 41
Created on 2018-12-01 by the reprex package (v0.2.1)
Now I'm not really sure if that's what you want. So feel free to make a comment on this answer if you need any of this clarified.
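A side note for newer installations (tidyr 1.0.0+): spread() has since been superseded by pivot_wider(), which does the same reshape; this is an optional modern equivalent, not part of the original answer.
specificreportsLocal %>%
  group_by(team, month) %>%
  summarize(total_value = sum(value)) %>%
  ungroup() %>%
  pivot_wider(names_from = team, values_from = total_value)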
Welcome to Stack Overflow!
I'm not sure I correctly understand your need without a data sample, but this may work for you:
library(rpivotTable)
specificreportsLocal %>%
  filter(Month == "October") %>%
  rpivotTable(rows = "Employee Team", cols = "page", vals = "views", aggregatorName = "Sum")
Otherwise, if you do not need it interactive (as the Pivot Tables in Excel), this may work as well:
specificreportsLocal %>% filter(Month == "October") %>%
group_by_at(c("Employee Team", "page")) %>%
summarise(nr_views = sum(views, na.rm=TRUE))
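To get the Excel-style layout the question describes (Employee Teams down the side, pages across the top), that summarised table can then be spread to wide form; page and views are the column names assumed in the answer above, and spread() comes from tidyr (loaded with the tidyverse):
specificreportsLocal %>%
  filter(Month == "October") %>%
  group_by_at(c("Employee Team", "page")) %>%
  summarise(nr_views = sum(views, na.rm = TRUE)) %>%
  spread(page, nr_views)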

R: Group by 2 columns but only when 2nd column doesn't match

I'm new to R (and dplyr) and I'm currently working with some seasonal football data and need some help. Currently, if a player transfers to another club in the same league, the row of data and all metrics are simply duplicated but with a new team_id. However, if the player transfers to another league, the metrics are split.
For consistency's sake I need to resolve this, which means that I have to:
group by player_id where comp_id does not match
(football regulations dictate that you can only play for a maximum of 2 clubs in a season, so this negates further complications and this simple rule resolves everything)
so in other words, if there are duplicates, sum the rows, but only if the comp_id differs
I was trying to do this in dplyr and was hoping that there would be some way of writing this such as:
football_data %>%
group_by(player_id, !comp_id)
but this doesn't work and quite rightly so.
My other option is to create an additional column based on a composite key (player_id, comp_id) and group_by player_id only where count of composite key = 1, but even then I'm struggling to write this with dplyr.
Any suggestions would be very welcome.
Edit:
There's close to a hundred metrics so I'll simplify as per below:
player_id player_name comp team metric1 metric2 metric3 .....
1 Lacazette Bund Dort 20 30 20
1 Lacazette EPL Ars 10 15 15
2 Arnautovic EPL Stoke 30 40 30
2 Arnautovic EPL W. Ham 30 40 30
So in this example the metrics are split when a player moves leagues (Lacazette) but not when the transfer is within the same league (Arnautovic). For consistency I need to sum the metrics for the first player but not for the second player,
resulting in this output:
player_id player_name comp team metric1 metric2 metric3 .....
1 Lacazette Bund Dort 30 45 35
1 Lacazette EPL Ars 30 45 35
2 Arnautovic EPL Stoke 30 40 30
2 Arnautovic EPL W. Ham 30 40 30
Essentially, you want to
get the player-level sum of the metric variables, counting each competition only once, and
join those sums back into the full data frame.
library(tidyverse)
#> -- Attaching packages ----------------------------------------------------------------------------------------------------- tidyverse 1.2.1 --
#> v ggplot2 2.2.1 v purrr 0.2.4
#> v tibble 1.4.1 v dplyr 0.7.4
#> v tidyr 0.7.2 v stringr 1.2.0
#> v readr 1.1.1 v forcats 0.2.0
#> -- Conflicts -------------------------------------------------------------------------------------------------------- tidyverse_conflicts() --
#> x dplyr::filter() masks stats::filter()
#> x dplyr::lag() masks stats::lag()
df <- data.frame(player_id = c(1, 1, 2, 2),
player_name = c("Lacazette", "Lacazette", "Arnuatovic", "Arnuatovic"),
comp = c("Bund", "EPL", "EPL", "EPL"),
team = c("Dort", "Ars", "Stoke", "W. Ham"),
metric1 = c(20, 10, 30, 30),
metric2 = c(30, 15, 40, 40),
metric3 = c(20, 15, 30, 30))
df %>%
distinct(player_id, player_name, comp, .keep_all = T) %>%
group_by(player_id) %>%
summarize_at(vars(metric1:metric3), funs(sum)) %>%
left_join(df %>% select(player_id:team), ., by = "player_id")
#> player_id player_name comp team metric1 metric2 metric3
#> 1 1 Lacazette Bund Dort 30 45 35
#> 2 1 Lacazette EPL Ars 30 45 35
#> 3 2 Arnuatovic EPL Stoke 30 40 30
#> 4 2 Arnuatovic EPL W. Ham 30 40 30
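An alternative sketch with a newer dplyr (across() needs dplyr 1.0+), overwriting the metrics in place and summing only when a player's rows span more than one competition; it assumes the same df as above:
df %>%
  group_by(player_id) %>%
  mutate(across(metric1:metric3, ~ if (n_distinct(comp) > 1) sum(.x) else .x)) %>%
  ungroup()
This leaves Arnautovic's within-league duplicates untouched and replaces both of Lacazette's rows with the cross-league totals.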
