How can I combine rows based on a specific parameter in R

I have a dataframe which looks like this:
   ID Smoker Asthma Age Sex COPD Event_Date
1   1      0      0  65   M    0    12-2009
2   1      0      1  65   M    0    21-2009
3   1      0      1  65   M    0    23-2009
4   2      1      0  67   M    0    19-2010
5   2      1      0  67   M    0    21-2010
6   2      1      1  67   M    1    01-2011
7   2      1      1  67   M    1    02-2011
8   3      2      1  77   F    0    09-2015
9   3      2      1  77   F    1    10-2015
10  3      2      1  77   F    1    10-2015
I would like to know whether it would be possible to combine my rows in order to achieve a dataset like this:
ID Smoker Asthma Age Sex COPD Event_Date
 1      0      1  65   M    0    12-2009
 2      1      1  67   M    1    19-2010
 3      2      1  77   F    1    09-2015
I have tried using the unique function; however, this doesn't give me my desired output and repeats the ID over multiple rows.
This is an example of the code I've tried:
Data2 <- unique(Data)
I do not just want the first row because I want to include each column status. For example, just getting the first row would not include the COPD status which occurs in the later rows for each ID.

Alternative Solution:
library(dplyr)
d %>%
  group_by(ID, Age, Sex, Smoker) %>%
  summarise(Asthma = !is.na(match(1, Asthma)),
            COPD = !is.na(match(1, COPD)),
            Event_Date = first(Event_Date)) %>%
  ungroup %>%
  mutate_if(is.logical, as.numeric)
# A tibble: 3 x 7
ID Age Sex Smoker Asthma COPD Event_Date
<int> <int> <fct> <int> <dbl> <dbl> <fct>
1 1 65 M 0 1 0 12-2009
2 2 67 M 1 1 1 19-2010
3 3 77 F 2 1 1 09-2015

If you want to get the (first) row for each ID you can try something like this:
d <- structure(list(ID = c(1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L),
Smoker = c(0L, 0L, 0L, 1L, 1L, 1L, 1L, 2L, 2L, 2L),
Asthma = c(0L, 1L, 1L, 0L, 0L, 1L, 1L, 1L, 1L, 1L),
Age = c(65L, 65L, 65L, 67L, 67L, 67L, 67L, 77L, 77L, 77L),
Sex = structure(c(2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L),
.Label = c("F", "M"), class = "factor"),
COPD = c(0L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 1L, 1L),
Event_Date = structure(c(5L, 7L, 9L, 6L, 8L, 1L, 2L, 3L, 4L, 4L),
.Label = c("01-2011", "02-2011", "09-2015",
"10-2015", "12-2009", "19-2010",
"21-2009", "21-2010", "23-2009"),
class = "factor")),
class = "data.frame",
row.names = c("1", "2", "3", "4", "5", "6", "7", "8", "9", "10"))
d[!duplicated(d$ID), ]
# ID Smoker Asthma Age Sex COPD Event_Date
# 1 1 0 0 65 M 0 12-2009
# 4 2 1 0 67 M 0 19-2010
# 8 3 2 1 77 F 0 09-2015

Use max when you need a value that appears further down the group, and dplyr::first for the others; here is an example:
library(dplyr)
df %>%
  group_by(ID) %>%
  summarise(Smoker = first(Smoker), Asthma = max(Asthma, na.rm = TRUE))
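If you prefer to stay in base R, the same per-ID collapse can be sketched with aggregate() plus merge(). This is a minimal sketch that rebuilds d as a plain data frame (an assumption: characters instead of the factors in the dput above); the 0/1 flag columns are collapsed with max and the first Event_Date per ID is carried along:

```r
# rebuild the example data as a plain data frame
d <- data.frame(
  ID = c(1, 1, 1, 2, 2, 2, 2, 3, 3, 3),
  Smoker = c(0, 0, 0, 1, 1, 1, 1, 2, 2, 2),
  Asthma = c(0, 1, 1, 0, 0, 1, 1, 1, 1, 1),
  Age = c(65, 65, 65, 67, 67, 67, 67, 77, 77, 77),
  Sex = c("M", "M", "M", "M", "M", "M", "M", "F", "F", "F"),
  COPD = c(0, 0, 0, 0, 0, 1, 1, 0, 1, 1),
  Event_Date = c("12-2009", "21-2009", "23-2009", "19-2010", "21-2010",
                 "01-2011", "02-2011", "09-2015", "10-2015", "10-2015")
)

# collapse the 0/1 flag columns with max: a 1 anywhere in the group wins
flags <- aggregate(cbind(Asthma, COPD) ~ ID + Smoker + Age + Sex, data = d, FUN = max)

# keep the first Event_Date per ID
first_date <- aggregate(Event_Date ~ ID, data = d, FUN = function(x) x[1])

merge(flags, first_date, by = "ID")
```

merge() sorts the result by ID, so the three combined rows come out in ID order.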


Looping through a column in R as variable changes

I am a novice trying to analyze trap catch data in R and am looking for an efficient way to loop through by trap line. The first column is trap ID. The second column is the trap line that each trap is associated with. The remaining columns are values related to target catch and bycatch for each visit to the traps. I want to write code that will evaluate the data during each visit for each trap line. Here is an example of data I am working with:
Sample Data:
Data <- structure(list(Trap_ID = c(1L, 2L, 1L, 1L, 2L, 3L), Trapline = c("Cemetery",
"Cemetery", "Golf", "Church", "Church", "Church"), Target_Visit_1 = c(0L,
1L, 5L, 0L, 1L, 1L), Bycatch_Visit_1 = c(3L, 2L, 0L, 2L, 1L,
4L), Target_Visit_2 = c(1L, 1L, 2L, 0L, 1L, 0L), Bycatch_Visit_2 = c(4L,
2L, 1L, 0L, 1L, 0L)), class = "data.frame", row.names = c(NA,
-6L))
The number of traps per trapline varies. I have code written out for each trapline (there are 14 different traplines), but I was hoping there would be a way to consolidate it into one block of code that would calculate values while the trapline was constant and then start a new calculation when it changed to the next trapline. Here is an example of how I was finding the sum of bycatch found at the Cemetery trapline for visit 1.
CemeteryBycatch1 <- Data %>% filter(Trapline == "Cemetery") %>% select(Bycatch_Visit_1)
sum(CemeteryBycatch1)
As of right now I have code like this written out for each trapline for each visit, but with 14 traplines and 8 total visits I would like to avoid writing so many lines of code; I was hoping there was a way to loop through it with one block of code that would calculate values (sum, mean, etc.) for each trapline.
Thanks
Does something like this help you?
You can add a filter for Trapline in between group_by and summarise_all.
Code:
library(dplyr)
Data <- structure(list(Trap_ID = c(1L, 2L, 1L, 1L, 2L, 3L), Trapline = c("Cemetery",
"Cemetery", "Golf", "Church", "Church", "Church"), Target_Visit_1 = c(0L,
1L, 5L, 0L, 1L, 1L), Bycatch_Visit_1 = c(3L, 2L, 0L, 2L, 1L,
4L), Target_Visit_2 = c(1L, 1L, 2L, 0L, 1L, 0L), Bycatch_Visit_2 = c(4L,
2L, 1L, 0L, 1L, 0L)), class = "data.frame", row.names = c(NA,
-6L))
Data
Data %>%
  group_by(Trap_ID, Trapline) %>%
  summarise_all(list(sum))
Output:
#> # A tibble: 6 x 6
#> # Groups: Trap_ID [3]
#> Trap_ID Trapline Target_Visit_1 Bycatch_Visit_1 Target_Visit_2 Bycatch_Visit_2
#> <int> <chr> <int> <int> <int> <int>
#> 1 1 Cemetery 0 3 1 4
#> 2 1 Church 0 2 0 0
#> 3 1 Golf 5 0 2 1
#> 4 2 Cemetery 1 2 1 2
#> 5 2 Church 1 1 1 1
#> 6 3 Church 1 4 0 0
Created on 2020-10-16 by the reprex package (v0.3.0)
Adding another row to Data:
Trap_ID Trapline Target_Visit_1 Bycatch_Visit_1 Target_Visit_2 Bycatch_Visit_2
1 Cemetery 100 200 1 4
Will give you:
#> # A tibble: 6 x 6
#> # Groups: Trap_ID [3]
#> Trap_ID Trapline Target_Visit_1 Bycatch_Visit_1 Target_Visit_2 Bycatch_Visit_2
#> <int> <chr> <int> <int> <int> <int>
#> 1 1 Cemetery 100 203 2 8
#> 2 1 Church 0 2 0 0
#> 3 1 Golf 5 0 2 1
#> 4 2 Cemetery 1 2 1 2
#> 5 2 Church 1 1 1 1
#> 6 3 Church 1 4 0 0
Created on 2020-10-16 by the reprex package (v0.3.0)
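For the per-trapline totals the question actually asks about (summing over all traps on a line), one base-R sketch is to aggregate over Trapline alone. The four visit columns are listed explicitly here, which assumes the column set from the dput above:

```r
# rebuild the sample data from the question
Data <- structure(list(Trap_ID = c(1L, 2L, 1L, 1L, 2L, 3L),
                       Trapline = c("Cemetery", "Cemetery", "Golf",
                                    "Church", "Church", "Church"),
                       Target_Visit_1 = c(0L, 1L, 5L, 0L, 1L, 1L),
                       Bycatch_Visit_1 = c(3L, 2L, 0L, 2L, 1L, 4L),
                       Target_Visit_2 = c(1L, 1L, 2L, 0L, 1L, 0L),
                       Bycatch_Visit_2 = c(4L, 2L, 1L, 0L, 1L, 0L)),
                  class = "data.frame", row.names = c(NA, -6L))

# one row per trapline, with every visit column summed over its traps
trapline_sums <- aggregate(cbind(Target_Visit_1, Bycatch_Visit_1,
                                 Target_Visit_2, Bycatch_Visit_2) ~ Trapline,
                           data = Data, FUN = sum)
trapline_sums
```

Swapping FUN = sum for mean (or any other summary function) gives the per-trapline means instead, so the 14 x 8 copies of hand-written code collapse into this one call.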

How to change the values within the group?

I created a column of nesting success with a value of "1" if the nest's fate was "rearing" or "fledged", and 0 if the nest's fate was "nest failed". In some cases, the nest's fate was "rearing" on the first visit and "failed" on a later visit. In such cases, the success of a single nest turned out to be both 1 and 0 (see, for example, nest "D063").
How to remove "1"s or assign "NA", and only keep "0"s in the cases with both 1 and 0 in the success of the same nest?
In other words, I'd like to have only one success outcome per nest (single 1 or 0), not multiple. And, I want to keep all the rows.
My data looks like this:
Example data:
structure(list(date = structure(c(4L, 2L, 1L, 5L, 3L, 1L, 5L,
2L, 1L, 5L, 3L, 1L, 5L, 2L, 1L), .Label = c("14/06/2018", "17/05/2018",
"21/05/2018", "5/05/2018", "6/05/2018"), class = "factor"), nest.code = structure(c(1L,
1L, 1L, 2L, 2L, 2L, 3L, 3L, 3L, 4L, 4L, 4L, 5L, 5L, 5L), .Label = c("D046",
"D047", "D062", "D063", "W18003"), class = "factor"), year = c(2018L,
2018L, 2018L, 2018L, 2018L, 2018L, 2018L, 2018L, 2018L, 2018L,
2018L, 2018L, 2018L, 2018L, 2018L), species = structure(c(1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L), .Label = c("AA",
"BB"), class = "factor"), visit = c(1L, 2L, 3L, 1L, 2L, 3L, 1L,
2L, 3L, 1L, 2L, 3L, 1L, 2L, 3L), eggs = c(1L, 0L, 0L, 1L, 0L,
0L, 2L, 0L, 0L, 1L, 0L, 0L, 1L, 0L, 0L), chicks = c(0L, NA, NA,
0L, 1L, 0L, 0L, 2L, 0L, 0L, 1L, 0L, 0L, NA, 1L), fate = structure(c(2L,
4L, 5L, 2L, 4L, 3L, 2L, 4L, 3L, 2L, 4L, 3L, 2L, 5L, 1L), .Label = c("fledged",
"incubating", "nest failed", "rearing", "unknown"), class = "factor"),
success = c(NA, 1L, NA, NA, 1L, 0L, NA, 1L, 0L, NA, 1L, 0L,
NA, NA, 1L)), class = "data.frame", row.names = c(NA, -15L
))
This is the code I tried:
datanew <- data %>%
  group_by(year, species, nest.code) %>%
  mutate(Real_success = ifelse(success == 1 & 0, 0, success))
I'm not sure how you imagine the result to look in the end. Do you want all rows preserved? Do you want it ordered in some way? Anyway, this is what I came up with:
UPDATE: Sorry, I missed "fledged" in the first answer
dat %>%
  group_by(year, species, nest.code) %>%
  arrange(year, species, nest.code, success) %>%
  mutate(success = ifelse(row_number() > 1, NA, success))
# A tibble: 15 x 9
# Groups: year, species, nest.code [5]
date nest.code year species visit eggs chicks fate success
<fct> <fct> <int> <fct> <int> <int> <int> <fct> <int>
1 17/05/2018 D046 2018 AA 2 0 NA rearing 1
2 5/05/2018 D046 2018 AA 1 1 0 incubating NA
3 14/06/2018 D046 2018 AA 3 0 NA unknown NA
4 14/06/2018 D047 2018 AA 3 0 0 nest failed 0
5 21/05/2018 D047 2018 AA 2 0 1 rearing NA
6 6/05/2018 D047 2018 AA 1 1 0 incubating NA
7 14/06/2018 D062 2018 AA 3 0 0 nest failed 0
8 17/05/2018 D062 2018 AA 2 0 2 rearing NA
9 6/05/2018 D062 2018 AA 1 2 0 incubating NA
10 14/06/2018 D063 2018 AA 3 0 0 nest failed 0
11 21/05/2018 D063 2018 AA 2 0 1 rearing NA
12 6/05/2018 D063 2018 AA 1 1 0 incubating NA
13 14/06/2018 W18003 2018 BB 3 0 1 fledged 1
14 6/05/2018 W18003 2018 BB 1 1 0 incubating NA
15 17/05/2018 W18003 2018 BB 2 0 NA unknown NA
There will definitely be an easier way to do this; I'm no dplyr pro myself. If it works, I'm happy.
Here's an approach that puts a zero in all rows for nests with at least one fail, a 1 if there is at least one success and no fail, and NA otherwise:
library(dplyr)
mydata %>%
  group_by(year, species, nest.code) %>%
  mutate(real_success = case_when(
    sum(1 - success, na.rm = TRUE) > 0 ~ 0,  # there was a fail
    sum(success, na.rm = TRUE) > 0 ~ 1,      # at least one success and no fail
    TRUE ~ NA_real_)) %>%
  ungroup()
# A tibble: 15 x 10
date nest.code year species visit eggs chicks fate success real_success
<fct> <fct> <int> <fct> <int> <int> <int> <fct> <int> <dbl>
1 5/05/2018 D046 2018 AA 1 1 0 incubating NA 1
2 17/05/2018 D046 2018 AA 2 0 NA rearing 1 1
3 14/06/2018 D046 2018 AA 3 0 NA unknown NA 1
4 6/05/2018 D047 2018 AA 1 1 0 incubating NA 0
5 21/05/2018 D047 2018 AA 2 0 1 rearing 1 0
6 14/06/2018 D047 2018 AA 3 0 0 nest fail… 0 0
7 6/05/2018 D062 2018 AA 1 2 0 incubating NA 0
8 17/05/2018 D062 2018 AA 2 0 2 rearing 1 0
9 14/06/2018 D062 2018 AA 3 0 0 nest fail… 0 0
10 6/05/2018 D063 2018 AA 1 1 0 incubating NA 0
11 21/05/2018 D063 2018 AA 2 0 1 rearing 1 0
12 14/06/2018 D063 2018 AA 3 0 0 nest fail… 0 0
13 6/05/2018 W18003 2018 BB 1 1 0 incubating NA 1
14 17/05/2018 W18003 2018 BB 2 0 NA unknown NA 1
15 14/06/2018 W18003 2018 BB 3 0 1 fledged 1 1
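A more compact variant of the same idea is to take the group minimum of success: it is 0 if any visit failed, 1 if every recorded visit succeeded, and NA if success was never recorded. A minimal base-R sketch, assuming only the nest.code and success columns from the data above (rebuilt here so it runs on its own):

```r
# the success column per nest, in visit order, as in the dput above
nests <- data.frame(
  nest.code = rep(c("D046", "D047", "D062", "D063", "W18003"), each = 3),
  success   = c(NA, 1, NA,  NA, 1, 0,  NA, 1, 0,  NA, 1, 0,  NA, NA, 1)
)

# group minimum per nest; all-NA groups stay NA instead of raising a warning
nests$real_success <- ave(nests$success, nests$nest.code,
                          FUN = function(x) if (all(is.na(x))) NA_real_
                                            else min(x, na.rm = TRUE))
```

This keeps all rows and assigns a single 0/1 outcome per nest, matching the case_when answer above for this data.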

Subsetting a dataframe based on summation of rows of a given column

I am dealing with data with three variables (i.e. id, time, gender). It looks like
df <-
structure(
list(
id = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L),
time = c(21L, 3L, 4L, 9L, 5L, 9L, 10L, 6L, 27L, 3L, 4L, 10L),
gender = c(1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L)
),
.Names = c("id", "time", "gender"),
class = "data.frame",
row.names = c(NA,-12L)
)
That is, each id has four observations for time and gender. I want to subset this data in R based on the sums of the rows of variable time which first gives a value which is greater than or equal to 25 for each id. Notice that for id 2 all observations will be included and for id 3 only the first observation is involved. The expected results would look like:
df <-
structure(
list(
id = c(1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L ),
time = c(21L, 3L, 4L, 5L, 9L, 10L, 6L, 27L ),
gender = c(1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L)
),
.Names = c("id", "time", "gender"),
class = "data.frame",
row.names = c(NA,-8L)
)
Any help on this is highly appreciated.
One option is using lag of cumsum:
library(dplyr)
df %>%
  group_by(id, gender) %>%
  filter(lag(cumsum(time), default = 0) < 25)
# # A tibble: 8 x 3
# # Groups: id, gender [3]
# id time gender
# <int> <int> <int>
# 1 1 21 1
# 2 1 3 1
# 3 1 4 1
# 4 2 5 0
# 5 2 9 0
# 6 2 10 0
# 7 2 6 0
# 8 3 27 1
Using data.table (updated based on feedback from @Renu):
library(data.table)
setDT(df)
df[, .SD[shift(cumsum(time), fill = 0) < 25], by = .(id, gender)]
Another option would be to create a logical vector for each 'id', cumsum(time) >= 25, that is TRUE when the cumsum of 'time' is equal to or greater than 25.
Then you can filter for rows where the cumsum of this vector is less or equal then 1, i.e. filter for entries until the first TRUE for each 'id'.
df %>%
group_by(id) %>%
filter(cumsum( cumsum(time) >= 25 ) <= 1)
# A tibble: 8 x 3
# Groups: id [3]
# id time gender
# <int> <int> <int>
# 1 1 21 1
# 2 1 3 1
# 3 1 4 1
# 4 2 5 0
# 5 2 9 0
# 6 2 10 0
# 7 2 6 0
# 8 3 27 1
Can also try a dplyr construction (note: filtering on sum_time < 25 directly would drop the row that first crosses 25, so the lagged cumulative sum is compared instead):
dt <- df %>%
  group_by(id) %>%
  # sum time within groups
  mutate(sum_time = cumsum(time)) %>%
  # keep rows up to and including the one that first reaches 25
  filter(lag(sum_time, default = 0) < 25) %>%
  # exclude the sum_time column from the result
  select(-sum_time)
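The lag-of-cumsum idea also works in base R without any packages. This sketch assumes the rows are already ordered by id, as in the df above: for each id, a row is kept while the cumulative time before that row is still below 25.

```r
# rebuild the sample data from the question
df <- structure(list(id = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L),
                     time = c(21L, 3L, 4L, 9L, 5L, 9L, 10L, 6L, 27L, 3L, 4L, 10L),
                     gender = c(1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L)),
                class = "data.frame", row.names = c(NA, -12L))

# cumulative time *before* each row, computed per id
before <- unlist(tapply(df$time, df$id, function(x) c(0, head(cumsum(x), -1))))

res <- df[before < 25, ]
res
```

This keeps three rows for id 1, all four for id 2, and only the first (27) for id 3, matching the expected output.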

Apply function across multiple columns

Please find here a very small subset of a long data.table I am working with
dput(dt)
structure(list(id = 1:15, pnum = c(4298390L, 4298390L, 4298390L,
4298558L, 4298558L, 4298559L, 4298559L, 4299026L, 4299026L, 4299026L,
4299026L, 4300436L, 4300436L, 4303566L, 4303566L), invid = c(15L,
101L, 102L, 103L, 104L, 103L, 104L, 106L, 107L, 108L, 109L, 87L,
111L, 2L, 60L), fid = structure(c(1L, 1L, 1L, 2L, 2L, 2L, 2L,
4L, 4L, 4L, 4L, 3L, 3L, 2L, 2L), .Label = c("CORN", "DowCor",
"KIM", "Texas"), class = "factor"), dom_kn = c(1L, 0L, 0L, 0L,
1L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 1L), prim_kn = c(1L,
0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L), pat_kn = c(1L,
0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L), net_kn = c(1L,
0L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 1L), age_kn = c(1L,
0L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L), legclaims = c(5L,
0L, 0L, 2L, 5L, 2L, 5L, 0L, 0L, 0L, 0L, 5L, 0L, 5L, 2L), n_inv = c(3L,
3L, 3L, 2L, 2L, 2L, 2L, 4L, 4L, 4L, 4L, 2L, 2L, 2L, 2L)), .Names = c("id",
"pnum", "invid", "fid", "dom_kn", "prim_kn", "pat_kn", "net_kn",
"age_kn", "legclaims", "n_inv"), class = "data.frame", row.names = c(NA,
-15L))
I am looking to apply a tweaked greater-than comparison across 5 different columns.
Within each pnum (patent), there are multiple invid (inventors). I want to compare the values of the columns dom_kn, prim_kn, pat_kn, net_kn, and age_kn per row, to the values in the other rows with the same pnum. The comparison is simply > and if the value is indeed bigger than the other, one "point" should be attributed.
So for the first row pnum == 4298390 and invid == 15, you can see the values in the five columns are all 1, while the values for invid == 101 | 102 are all zero. This means that if we individually compare (is greater than?) each value in the first row to each cell in the second and third row, the total sum would be 10 points. In every single comparison, the value in the first row is bigger and there are 10 comparisons.
The number of comparisons is by design 5 * (n_inv -1).
The result I am looking for for row 1 should then be 10 / 10 = 1.
For pnum == 4298558 the columns net_kn and age_kn both have values 1 in the two rows (for invid 103 and 104), so each should get 0.5 points (if there were three inventors with value 1, everyone would get 0.33 points). The same goes for pnum == 4298559.
For the next pnum == 4299026 all values are zero so every comparison should result in 0 points.
Thus note the difference: There are three different dyadic comparisons
1 > 0 --> assign 1
1 = 1 --> assign 1 / number of positive values in column subset
0 = 0 --> assign 0
Desired result
An extra column result in the data.table with values 1 0 0 0.2 0.8 0.2 0.8 0 0 0 0 1 0 0.8 0.2
Any suggestions on how to compute this efficiently?
Thanks!
vars = grep('_kn', names(dt), value = T)
# all you need to do is simply assign the correct weight and sum the numbers up
dt[, res := 0]
for (var in vars)
  dt[, res := res + get(var) / .N, by = c('pnum', var)]
# normalize
dt[, res := res / sum(res), by = pnum]
# id pnum invid fid dom_kn prim_kn pat_kn net_kn age_kn legclaims n_inv res
# 1: 1 4298390 15 CORN 1 1 1 1 1 5 3 1.0
# 2: 2 4298390 101 CORN 0 0 0 0 0 0 3 0.0
# 3: 3 4298390 102 CORN 0 0 0 0 0 0 3 0.0
# 4: 4 4298558 103 DowCor 0 0 0 1 1 2 2 0.2
# 5: 5 4298558 104 DowCor 1 1 1 1 1 5 2 0.8
# 6: 6 4298559 103 DowCor 0 0 0 1 1 2 2 0.2
# 7: 7 4298559 104 DowCor 1 1 1 1 1 5 2 0.8
# 8: 8 4299026 106 Texas 0 0 0 0 0 0 4 NaN
# 9: 9 4299026 107 Texas 0 0 0 0 0 0 4 NaN
#10: 10 4299026 108 Texas 0 0 0 0 0 0 4 NaN
#11: 11 4299026 109 Texas 0 0 0 0 0 0 4 NaN
#12: 12 4300436 87 KIM 1 1 1 1 1 5 2 1.0
#13: 13 4300436 111 KIM 0 0 0 0 0 0 2 0.0
#14: 14 4303566 2 DowCor 1 1 1 1 1 5 2 0.8
#15: 15 4303566 60 DowCor 1 0 0 1 0 2 2 0.2
Dealing with the above NaN case (arguably the correct answer), is left to the reader.
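The same weighting scheme can be sketched in base R with ave(): for each _kn column, every row scores value / (number of rows in its pnum with that same value), and the row scores are then normalized within pnum. A minimal sketch assuming only pnum and the five _kn columns from the dput above:

```r
# just the columns the weighting needs, copied from the dput above
dt <- data.frame(
  pnum = c(4298390, 4298390, 4298390, 4298558, 4298558, 4298559, 4298559,
           4299026, 4299026, 4299026, 4299026, 4300436, 4300436, 4303566, 4303566),
  dom_kn  = c(1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1),
  prim_kn = c(1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0),
  pat_kn  = c(1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0),
  net_kn  = c(1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1),
  age_kn  = c(1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0)
)

kn <- grep("_kn$", names(dt), value = TRUE)
# per column: value / (rows in the same pnum sharing that value); zeros score 0
score <- rowSums(sapply(kn, function(v)
  ave(dt[[v]], dt$pnum, dt[[v]], FUN = function(x) x / length(x))))
# normalize within pnum (all-zero groups give NaN, as in the data.table answer)
dt$res <- ave(score, dt$pnum, FUN = function(s) s / sum(s))
```

This reproduces the res column shown above, including the NaN rows for pnum 4299026.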
Here's a fastish solution using dplyr:
library(dplyr)
dt %>%
  group_by(pnum) %>%
  # give a 1 if the value is the max of its group and not 0, only for the _kn columns
  mutate_each(funs(. == max(.) & max(.) != 0), ends_with('kn')) %>%
  # correct for multiple maximums
  mutate_each(funs(. / sum(.)), ends_with('kn')) %>%
  # remove all non-kn columns
  select(ends_with('kn')) %>%
  # make a new data frame with x = the row sums for each individual and y = the group total
  do(data.frame(x = rowSums(.[-1]), y = sum(.[-1]))) %>%
  # divide by y (we could just use /5 if we always have five columns)
  mutate(out = x / y)
giving your desired output in the column out:
Source: local data frame [15 x 4]
Groups: pnum [6]
pnum x y out
(int) (dbl) (dbl) (dbl)
1 4298390 5 5 1.0
2 4298390 0 5 0.0
3 4298390 0 5 0.0
4 4298558 1 5 0.2
5 4298558 4 5 0.8
6 4298559 1 5 0.2
7 4298559 4 5 0.8
8 4299026 NaN NaN NaN
9 4299026 NaN NaN NaN
10 4299026 NaN NaN NaN
11 4299026 NaN NaN NaN
12 4300436 5 5 1.0
13 4300436 0 5 0.0
14 4303566 4 5 0.8
15 4303566 1 5 0.2
The NaNs come from the groups with no winners; convert them back using, e.g.:
x[is.na(x)] <- 0

Calculate the rate under the same id using R

I have a question about calculating a rate within the same id numbers.
Here is the sample dataset d:
id answer
1 1
1 0
1 0
1 1
1 1
1 1
1 0
2 0
2 0
2 0
3 1
3 0
The ideal output is
id rate freq
1 4/7 (=0.5714) 7
2 0 3
3 1/2 (=0.5) 2
Thanks.
Just for fun, you can use aggregate
> aggregate(answer ~ id, data = df1, FUN = function(x) c(rate = mean(x), freq = length(x)))
id answer.rate answer.freq
1 1 0.5714286 7.0000000
2 2 0.0000000 3.0000000
3 3 0.5000000 2.0000000
Try
library(data.table)
setDT(df1)[,list(rate= mean(answer), freq=.N) ,id]
# id rate freq
#1: 1 0.5714286 7
#2: 2 0.0000000 3
#3: 3 0.5000000 2
Or
library(dplyr)
df1 %>%
group_by(id) %>%
summarise(rate=mean(answer), freq=n())
data
df1 <- structure(list(id = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
3L, 3L), answer = c(1L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L,
0L)), .Names = c("id", "answer"), class = "data.frame",
row.names = c(NA, -12L))
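Just for completeness, a base-R sketch with tapply computes the same two summaries without any packages (df1 is rebuilt here, identical to the data block above, so the sketch runs on its own):

```r
df1 <- structure(list(id = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L),
                      answer = c(1L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L, 0L)),
                 class = "data.frame", row.names = c(NA, -12L))

# per-id mean (the rate) and count (the frequency)
res <- data.frame(id = as.integer(names(tapply(df1$answer, df1$id, mean))),
                  rate = as.numeric(tapply(df1$answer, df1$id, mean)),
                  freq = as.integer(tapply(df1$answer, df1$id, length)))
res
```

Because answer is 0/1, the mean of each id's answers is exactly the success rate (4/7, 0, and 1/2 here).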
