I have a long-term sightings data set of identified individuals (~16,000 records from 1979–2019), and I would like to subset the same date range (YYYY-09-01 to YYYY(+1)-08-31) across years in R. I have successfully done so for each "year" (and obtained the unique IDs) using:
library(dplyr)
library(lubridate)
year79 <- data %>%
  select(ID, Sex, AgeClass, Age, Date, Month, Year) %>%
  filter(Date >= as.Date("1978-09-01") & Date <= as.Date("1979-08-31")) %>%
  filter(!duplicated(ID))
year80 <- data %>%
  select(ID, Sex, AgeClass, Age, Date, Month, Year) %>%
  filter(Date >= as.Date("1979-09-01") & Date <= as.Date("1980-08-31")) %>%
  filter(!duplicated(ID))
I would like to clean up the code and ideally not have to spell out each date range by hand (just have it iterate through the years). I am new to R and stuck on how to do this. Any suggestions?
FYI "Month" and "Year" are included for producing a table via melt and cast later on.
example data:
ID Year Month Day Date AgeClass Age Sex
1 1034 1979 4 17 1979-04-17 U 3 F
2 1127 1979 5 3 1979-05-03 A 13 F
3 1222 1979 5 3 1979-05-03 U 0 F
4 1303 1979 6 16 1979-06-16 U 0 F
5 1153 1980 4 16 1980-04-16 C 0 F
6 1014 1980 4 16 1980-04-16 U 6 F
ID Year Month Day Date AgeClass Age Sex
16428 2503 2019 5 8 2019-05-08 U NA F
16429 3760 2019 5 8 2019-05-08 A 12 F
16430 4080 2019 5 9 2019-05-09 A 9 F
16431 4095 2019 5 9 2019-05-09 A 9 U
16432 1204 2019 5 11 2019-05-11 A 37 F
16433 1204 2019 5 11 2019-05-11 A NA F
#> sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Every year has 122 days from Sept 1 to Dec 31 inclusive, so you could add a variable marking the "fiscal year" for each row:
set.seed(42)
library(dplyr)
library(lubridate)  # for year()
my_data <- tibble(ID = 1:6,
                  Date = as.Date("1978-09-01") + c(-1, 0, 1, 364, 365, 366))
my_data
# There are 122 days from each Aug 31 (last of the FY) to the end of the CY.
# lubridate::ymd(19781231) - lubridate::ymd(19780831)
my_data %>%
mutate(FY = year(Date + 122))
## A tibble: 6 x 3
# ID Date FY
# <int> <date> <dbl>
#1 1 1978-08-31 1978
#2 2 1978-09-01 1979
#3 3 1978-09-02 1979
#4 4 1979-08-31 1979
#5 5 1979-09-01 1980
#6 6 1979-09-02 1980
You could keep the data in one table and do subsequent analysis using group_by(FY), or use %>% split(.$FY) to put each FY into its own element of a list. From my limited experience, I think it's generally an anti-pattern to create separate data frames for annual subsets of your data, as that makes your code harder to maintain, troubleshoot, and modify.
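For example, a minimal sketch of that idea applied to the sightings data, assuming your full data frame is called data and has the ID and Date columns shown in the question:
library(dplyr)
library(lubridate)
# Tag every sighting with its Sep 1 - Aug 31 "fiscal year", then keep one
# row per individual within each FY (the equivalent of !duplicated(ID))
unique_by_fy <- data %>%
  mutate(FY = year(Date + 122)) %>%
  group_by(FY) %>%
  distinct(ID, .keep_all = TRUE) %>%
  ungroup()
# If separate objects per FY are really needed, split into a named list
# instead of creating year79, year80, ... by hand
fy_list <- split(unique_by_fy, unique_by_fy$FY)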
First time asking a question here, sorry if I'm not clear enough.
Here's my data:
df <- data.frame(Year = c("2018","2018","2019","2019","2018","2018","2019","2019"),
                 Area = c("CF","CF","CF","CF","NY","NY","NY","NY"),
                 Birth = c(1000,1100,1100,1000,2000,2100,2100,2000),
                 Gender = c("F","M","F","M","F","M","F","M"))
df
# Year Area Birth Gender
# 1 2018 CF 1000 F
# 2 2018 CF 1100 M
# 3 2019 CF 1100 F
# 4 2019 CF 1000 M
# 5 2018 NY 2000 F
# 6 2018 NY 2100 M
# 7 2019 NY 2100 F
# 8 2019 NY 2000 M
where Birth is the number of new babies born.
What I want to do is create a classification model that predicts how likely a newborn baby is to be male/female, with Area and Year as predictors.
Yes, I know this would normally be a regression with Birth as Y and the other columns as X; however, I have somehow ended up in this situation.
With the given data I already know the result at the row level: 50% of the observations are male and 50% are female. What I want to know is the probability of a baby being male/female, not which observation (row) is male/female, which I already know.
Is there a way to turn each birth into an observation, i.e. 1000+1100+1100+1000+2000+2100+2100+2000 = 12,400 rows of data? Something like: the 1st observation is a female baby born in 2018 in CF, the 2nd observation is a male baby born in 2018 in CF, and so on, for all 12,400 rows.
Or is there any other suggestion for dealing with this?
We may use uncount
library(dplyr)
library(tidyr)
df %>%
uncount(Birth) %>%
as_tibble
-output
# A tibble: 12,400 x 3
Year Area Gender
<chr> <chr> <chr>
1 2018 CF F
2 2018 CF F
3 2018 CF F
4 2018 CF F
5 2018 CF F
6 2018 CF F
7 2018 CF F
8 2018 CF F
9 2018 CF F
10 2018 CF F
# … with 12,390 more rows
Or using base R
# rep() repeats each row Birth times; sequence() then replaces Birth with a
# running 1..Birth index within each original row
transform(df[rep(seq_len(nrow(df)), df$Birth), ], Birth = sequence(df$Birth))
You could use dplyr and summarize:
library(tidyverse)
df_expanded <- df %>%
group_by(Year, Area, Gender) %>%
summarize(expanded = 1:Birth)
# A tibble: 12,400 x 4
# Groups: Year, Area, Gender [8]
Year Area Gender expanded
<chr> <chr> <chr> <int>
1 2018 CF F 1
2 2018 CF F 2
3 2018 CF F 3
4 2018 CF F 4
5 2018 CF F 5
6 2018 CF F 6
7 2018 CF F 7
8 2018 CF F 8
9 2018 CF F 9
10 2018 CF F 10
# … with 12,390 more rows
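A hedged side note: on dplyr >= 1.1.0, summarize() warns when a group returns more than one row and points you to reframe(); the same expansion in that style would be roughly:
library(dplyr)
# Same idea with reframe(), the recommended verb when a grouped
# computation returns multiple rows per group (dplyr >= 1.1.0)
df_expanded <- df %>%
  group_by(Year, Area, Gender) %>%
  reframe(expanded = seq_len(Birth))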
Uncount is without a doubt the best solution for this problem. One alternative to the solutions shown could be
library(dplyr)
library(tidyr)
df %>%
mutate(Birth = lapply(Birth, function(n) 1:n)) %>%
unnest(Birth)
This returns
# A tibble: 12,400 x 4
Year Area Birth Gender
<chr> <chr> <int> <chr>
1 2018 CF 1 F
2 2018 CF 2 F
3 2018 CF 3 F
4 2018 CF 4 F
5 2018 CF 5 F
6 2018 CF 6 F
7 2018 CF 7 F
8 2018 CF 8 F
9 2018 CF 9 F
10 2018 CF 10 F
# ... with 12,390 more rows
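A small addition (hedged, since it relies on a tidyr argument rather than anything shown above): uncount() can produce that running index itself via its .id argument.
library(dplyr)
library(tidyr)
# .id adds a column with the within-row counter, giving the same
# "expanded" index as the other answers in a single call
df %>%
  uncount(Birth, .id = "expanded") %>%
  as_tibble()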
I am working on creating conditional averages for a large data set of weekly flu case counts spanning several years. The data has one row per week, with Flu.Year, Week.Number, and count columns (a reproducible example is given below).
What I want to do is create a new column that holds the average number of cases for the same week in previous years. For instance, for the row where Week.Number is 1 and Flu.Year is 2017, I would like the new column to give the average count over all rows with Week.Number == 1 & Flu.Year < 2017. Normally I would use case_when() to tabulate something like this conditionally. For instance, when calculating the average weekly volume, I used this code:
chcc %>%
  mutate(average = case_when(
    Flu.Year == 2016 ~ mean(chcc$count[chcc$Flu.Year == 2016]),
    Flu.Year == 2017 ~ mean(chcc$count[chcc$Flu.Year == 2017]),
    Flu.Year == 2018 ~ mean(chcc$count[chcc$Flu.Year == 2018]),
    Flu.Year == 2019 ~ mean(chcc$count[chcc$Flu.Year == 2019])
  ))
However, there are four years of data × 52 weeks, which is far too many conditions to spell out. Is there a way to code this elegantly in dplyr? The problem I keep running into is that I want to use count values from other rows, selected by their Week.Number and Flu.Year, conditioned on the current row's Week.Number and Flu.Year, and I am not sure how to accomplish that. Please let me know if there is further information / detail I can provide.
Thanks,
Steven
dat <- tibble(
  Flu.Year = rep(2016:2019, each = 52),
  Week.Number = rep(1:52, 4),
  count = sample(1000, size = 52 * 4, replace = TRUE)
)
It's bad form and, in some cases, an error to use $-indexing within dplyr verbs.
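To illustrate the pitfall with a toy example (hypothetical chcc values, not your real data): $-indexing reaches back to the original object, so it silently ignores any filtering or grouping done earlier in the pipe.
library(dplyr)
chcc <- tibble(Flu.Year = c(2016, 2016, 2017), count = c(10, 20, 30))
chcc %>%
  filter(Flu.Year == 2016) %>%
  mutate(bad  = mean(chcc$count),  # 20: mean over ALL rows, the filter is ignored
         good = mean(count))       # 15: mean over the filtered rows only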
I think a better way to get that average field is to group_by(Flu.Year) and calculate it straight-up.
library(dplyr)
set.seed(42)
dat <- tibble(
Flu.Year = sample(2016:2020, size=100, replace=TRUE),
count = sample(1000, size=100, replace=TRUE)
)
dat %>%
group_by(Flu.Year) %>%
mutate(average = mean(count)) %>%
# just to show a quick summary
slice(1:3) %>%
ungroup()
# # A tibble: 15 x 3
# Flu.Year count average
# <int> <int> <dbl>
# 1 2016 734 578.
# 2 2016 356 578.
# 3 2016 411 578.
# 4 2017 217 436.
# 5 2017 453 436.
# 6 2017 920 436.
# 7 2018 963 558
# 8 2018 609 558
# 9 2018 536 558
# 10 2019 943 543.
# 11 2019 740 543.
# 12 2019 536 543.
# 13 2020 627 494.
# 14 2020 218 494.
# 15 2020 389 494.
An alternative approach is to generate a summary table (just one row per year) and join it back in to the original data.
dat %>%
group_by(Flu.Year) %>%
summarize(average = mean(count))
# # A tibble: 5 x 2
# Flu.Year average
# <int> <dbl>
# 1 2016 578.
# 2 2017 436.
# 3 2018 558
# 4 2019 543.
# 5 2020 494.
dat %>%
group_by(Flu.Year) %>%
summarize(average = mean(count)) %>%
full_join(dat, by = "Flu.Year")
# # A tibble: 100 x 3
# Flu.Year average count
# <int> <dbl> <int>
# 1 2016 578. 734
# 2 2016 578. 356
# 3 2016 578. 411
# 4 2016 578. 720
# 5 2016 578. 851
# 6 2016 578. 822
# 7 2016 578. 465
# 8 2016 578. 679
# 9 2016 578. 30
# 10 2016 578. 180
# # ... with 90 more rows
The result, after chat:
tibble(Flu.Year = rep(2016:2018, each = 3),
       Week.Number = rep(1:3, 3),
       count = 1:9) %>%
arrange(Flu.Year, Week.Number) %>%
group_by(Week.Number) %>%
mutate(year_week.average = lag(cumsum(count) / seq_along(count)))
# # A tibble: 9 x 4
# # Groups: Week.Number [3]
# Flu.Year Week.Number count year_week.average
# <int> <int> <int> <dbl>
# 1 2016 1 1 NA
# 2 2016 2 2 NA
# 3 2016 3 3 NA
# 4 2017 1 4 1
# 5 2017 2 5 2
# 6 2017 3 6 3
# 7 2018 1 7 2.5
# 8 2018 2 8 3.5
# 9 2018 3 9 4.5
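A hedged aside: dplyr also ships cummean(), so the running "average over earlier years for this week" can be written slightly more directly (same toy data as above):
library(dplyr)
tibble(Flu.Year = rep(2016:2018, each = 3),
       Week.Number = rep(1:3, 3),
       count = 1:9) %>%
  arrange(Flu.Year, Week.Number) %>%
  group_by(Week.Number) %>%
  # cummean() is the running mean; lag() shifts it so each row only
  # "sees" earlier Flu.Years for its Week.Number
  mutate(year_week.average = lag(cummean(count))) %>%
  ungroup()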
We can use aggregate from base R
aggregate(count ~ Flu.Year, dat, FUN = mean)
I have data similar to this sample data:
Cities Country Date Cases
1 BE A 2/12/20 12
2 BD A 2/12/20 244
3 BF A 2/12/20 1
4 V 2/12/20 13
5 Q 2/13/20 2
6 D 2/14/20 4
7 GH N 2/15/20 6
8 DA N 2/15/20 624
9 AG J 2/15/20 204
10 FS U 2/16/20 433
11 FR U 2/16/20 38
I want to organize the data by date and country and then sum each country's daily cases. However, when I try something like the following, it returns the overall total instead:
my_data %>%
group_by(Country, Date)%>%
summarize(Cases=sum(Cases))
Your summarize function is likely being called from another package (plyr?). Try calling dplyr::summarize like this:
my_data %>%
group_by(Country, Date)%>%
dplyr::summarize(Cases=sum(Cases))
# A tibble: 7 x 3
# Groups: Country [7]
Country Date Cases
<fct> <fct> <int>
1 A 2/12/20 257
2 D 2/14/20 4
3 J 2/15/20 204
4 N 2/15/20 630
5 Q 2/13/20 2
6 U 2/16/20 471
7 V 2/12/20 13
I sympathize with you; this can be very frustrating. I have gotten into the habit of always writing dplyr::select, dplyr::filter, and dplyr::summarize. Otherwise you spend needless time frustrated about why your code isn't working.
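A hedged extra, beyond what the answers here mention: the conflicted package turns these silent maskings into explicit errors and lets you declare a winner once per session.
library(conflicted)
library(dplyr)
# After this, an ambiguous call such as summarize() errors instead of silently
# using another package's version; conflict_prefer() picks dplyr's for the session
conflict_prefer("summarize", "dplyr")
conflict_prefer("filter", "dplyr")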
We can also use aggregate
aggregate(Cases ~ Country + Date, my_data, sum)
I have data organized by two ID variables, Year and Country, like so:
Year Country VarA VarB
2015 USA 1 3
2016 USA 2 2
2014 Canada 0 10
2015 Canada 6 5
2016 Canada 7 8
I'd like to keep Year as an ID variable, but create multiple columns for VarA and VarB, one for each value of Country (I'm not picky about column order), to make the following table:
Year VarA.Canada VarA.USA VarB.Canada VarB.USA
2014 0 NA 10 NA
2015 6 1 5 3
2016 7 2 8 2
I managed to do this with the following code:
require(data.table)
require(reshape2)
data <- as.data.table(read.table(header=TRUE, text='Year Country VarA VarB
2015 USA 1 3
2016 USA 2 2
2014 Canada 0 10
2015 Canada 6 5
2016 Canada 7 8'))
molten <- melt(data, id.vars=c('Year', 'Country'))
molten[,variable:=paste(variable, Country, sep='.')]
recast <- dcast(molten, Year ~ variable)
But this seems a bit hacky (especially editing the default-named variable field). Can I do it with fewer function calls? Ideally I could just call one function, specifying the columns to drop as IDs and the formula for creating new variable names.
Using dcast you can cast multiple value.vars at once (from data.table v1.9.6 on). Try:
dcast(data, Year ~ Country, value.var = c("VarA","VarB"), sep = ".")
# Year VarA.Canada VarA.USA VarB.Canada VarB.USA
#1: 2014 0 NA 10 NA
#2: 2015 6 1 5 3
#3: 2016 7 2 8 2
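For completeness, a hedged tidyverse counterpart (assuming tidyr >= 1.0.0; the column order comes out slightly differently, which the question says is fine, and rows follow order of appearance):
library(tidyr)
# Spread both VarA and VarB by Country in one call; names_sep = "."
# reproduces the VarA.Canada / VarB.USA style of column names
pivot_wider(data,
            names_from = Country,
            values_from = c(VarA, VarB),
            names_sep = ".")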
So I have this:
Staff Result Date Days
1 50 2007 4
1 75 2006 5
1 60 2007 3
2 20 2009 3
2 11 2009 2
And I want to get to this:
Staff Result Date Days
1 55 2007 7
1 75 2006 5
2 15 2009 5
I want the Staff ID and Date to be unique in each row, but I want to sum Days and take the mean of Result.
I can't work out how to do this in R; I'm sure I need to do lots of aggregations, but I keep getting results different from what I am aiming for.
Many thanks
The simplest way to do this is to group_by Staff and Date and summarise the results with the dplyr package:
require(dplyr)
df <- data.frame(Staff = c(1,1,1,2,2),
Result = c(50, 75, 60, 20, 11),
Date = c(2007, 2006, 2007, 2009, 2009),
Days = c(4, 5, 3, 3, 2))
df %>%
group_by(Staff, Date) %>%
summarise(Result = floor(mean(Result)),
Days = sum(Days)) %>%
data.frame
Staff Date Result Days
1 1 2006 75 5
2 1 2007 55 7
3 2 2009 15 5
You can aggregate on two variables by using a formula and then merge the two aggregates:
merge(aggregate(Result ~ Staff + Date, data=df, mean),
aggregate(Days ~ Staff + Date, data=df, sum))
Staff Date Result Days
1 1 2006 75.0 5
2 1 2007 55.0 7
3 2 2009 15.5 5
Here is another option with data.table
library(data.table)
setDT(df1)[, .(Result = floor(mean(Result)), Days = sum(Days)), .(Staff, Date)]
# Staff Date Result Days
#1: 1 2007 55 7
#2: 1 2006 75 5
#3: 2 2009 15 5