Recoding missing data in longitudinal data frames with R

I have a data frame with a longitudinal structure similar to data:
data = data.frame(
  ID = c("a", "a", "a", "b", "b", "b", "c", "c", "c"),
  period = c(1, 2, 3, 1, 2, 3, 1, 2, 3),
  size = c(3, 3, NA, NA, NA, 1, 14, 14, 14))
The variable size is fixed within ID, so each period should carry the same value of size. Yet some observations have missing values. My aim is to replace these missing values with the value of size observed in the periods where it is not missing (e.g. 3 for ID "a" and 1 for ID "b").
The desired data frame should look something like this:
data.1
ID period value
a 1 3
a 2 3
a 3 3
b 1 1
b 2 1
b 3 1
c 1 14
c 2 14
c 3 14
I have tried different combinations of the formula below, but I don't get the result I am looking for.
library(dplyr)
data.1 = data %>% group_by(ID) %>%
mutate(new.size = ifelse(is.na(size), !is.na(size),
ifelse(!is.na(size), size, 0)))
That yields the following:
data.1
Source: local data frame [9 x 4]
Groups: ID [3]
ID period size new.size
(fctr) (dbl) (dbl) (dbl)
1 a 1 3 3
2 a 2 3 3
3 a 3 NA 0
4 b 1 NA 0
5 b 2 NA 0
6 b 3 1 1
7 c 1 14 14
8 c 2 14 14
9 c 3 14 14
I would be grateful if someone could give me a hint on how to get the right solution.

Here is another solution using dplyr with na.omit:
group_by(data, ID) %>%
mutate(value=na.omit(size)[1])
Source: local data frame [9 x 4]
Groups: ID [3]
ID period size value
<fctr> <dbl> <dbl> <dbl>
1 a 1 3 3
2 a 2 3 3
3 a 3 NA 3
4 b 1 NA 1
5 b 2 NA 1
6 b 3 1 1
7 c 1 14 14
8 c 2 14 14
9 c 3 14 14
Note that you can replace na.omit(size)[1] with, for example, max(size, na.rm = TRUE) if you are looking for the maximum instead.
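For instance, a minimal sketch of that max variant (same grouping, just a different summary of the non-missing values):
library(dplyr)
group_by(data, ID) %>%
  mutate(value = max(size, na.rm = TRUE))  # na.rm = TRUE ignores the NAs within each ID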

How about this with base R:
vals <- unique(na.omit(data[, c("ID", "size")]))  # one non-missing size per ID
data$size <- vals$size[match(data$ID, vals$ID)]   # look up each row's ID in that table
ID period size
1 a 1 3
2 a 2 3
3 a 3 3
4 b 1 1
5 b 2 1
6 b 3 1
7 c 1 14
8 c 2 14
9 c 3 14

To correct your code, you can try the following with dplyr
library(dplyr)
data %>% group_by(ID) %>%
mutate(new.size = ifelse(is.na(size), size[!is.na(size)],size))
# ID period size new.size
# (fctr) (dbl) (dbl) (dbl)
#1 a 1 3 3
#2 a 2 3 3
#3 a 3 NA 3
#4 b 1 NA 1
#5 b 2 NA 1
#6 b 3 1 1
#7 c 1 14 14
#8 c 2 14 14
#9 c 3 14 14
Or a base R alternative with ave
data$new.size <- ave(data$size,data$ID, FUN=function(x)unique(x[!is.na(x)]))
data$new.size
#[1] 3 3 3 1 1 1 14 14 14
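For completeness, a sketch of yet another common idiom, assuming tidyr (>= 1.0) is available: tidyr::fill() with .direction = "downup" carries the last (or next) observed value through each group, which also handles the case where the non-missing value sits in the middle of a group.
library(dplyr)
library(tidyr)
data %>%
  group_by(ID) %>%
  fill(size, .direction = "downup") %>%  # fill NAs downwards, then upwards, within each ID
  ungroup()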

Related

Expanding Data Frame with cumsum in R

I've got a data frame with historic F1 data that looks like this:
Driver    Race number   Position   Number of Career Podiums
Farina    1             1          1
Fagioli   1             2          1
Parnell   1             3          1
Fangio    2             1          1
Ascari    2             2          1
Chiron    2             3          1
...       ...           ...        ...
Moss      47            1          4
Fangio    47            2          23
Kling     47            3          2
Now I want to extend it so that every race contains not only the top 3 of that specific race but also everyone who has had a top-3 finish before, so that I can create a racing bar chart. The final data frame should look like this:
Driver    Race number   Position   Number of Career Podiums
Farina    1             1          1
Fagioli   1             2          1
Parnell   1             3          1
Fangio    2             1          1
Ascari    2             2          1
Chiron    2             3          1
Farina    2             NA         1
Fagioli   2             NA         1
Parnell   2             NA         1
Parsons   3             1          1
Holland   3             2          1
Rose      3             3          1
Farina    3             NA         1
Fagioli   3             NA         1
Parnell   3             NA         1
Fangio    3             NA         1
Ascari    3             NA         1
Chiron    3             NA         1
Is there any easy way to do this? I couldn't find anyone with a similar problem on Google.
If I understand your problem correctly, you only have observations for the top-3 drivers of each race, but you want observations for every driver who has ever achieved a top-3 position anywhere in your dataset.
For example, in the following dataset, driver D only has an observation for the second race, where they achieved first place, but not for the other races:
dat <- data.frame(driver = c("A", "B", "C", "D", "A", "B", "B", "A", "C"),
                  race_number = rep(1:3, each = 3),
                  position = rep(1:3, 3))
print(dat)
driver race_number position
1 A 1 1
2 B 1 2
3 C 1 3
4 D 2 1
5 A 2 2
6 B 2 3
7 B 3 1
8 A 3 2
9 C 3 3
To add entries for driver D for races 1 and 3 you could use tidyr's expand() function, or, if you want to stay in base R, you could achieve the same with expand.grid() and unique(). Either way you end up with a data frame containing all possible combinations of driver and race number; afterwards you simply left- or right-join the result with the initial data frame.
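A rough sketch of the base R route (using the dat defined above; the column names are just the ones from this example):
# build every driver x race_number combination, then merge the known positions back in
full <- expand.grid(driver = unique(dat$driver),
                    race_number = unique(dat$race_number),
                    stringsAsFactors = FALSE)
merge(full, dat, by = c("driver", "race_number"), all.x = TRUE)  # NAs where no podium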
A solution using standard tidyverse packages tidyr and dplyr could look like this:
library(dplyr)
library(tidyr)
dat %>%
expand(driver, race_number) %>%
left_join(dat)
# A tibble: 12 x 3
driver race_number position
<chr> <int> <int>
1 A 1 1
2 A 2 2
3 A 3 2
4 B 1 2
5 B 2 3
6 B 3 1
7 C 1 3
8 C 2 NA
9 C 3 3
10 D 1 NA
11 D 2 1
12 D 3 NA
Note that the "new" observations will naturally have NA for position. The number of previous podium positions can be added easily via the following approach, which flags each actual podium appearance and takes the cumulative sum per driver:
dat %>%
expand(driver, race_number) %>%
left_join(dat) %>%
arrange(race_number) %>%
mutate(previous_podium_positions = ifelse(is.na(position), 0, 1)) %>%
group_by(driver) %>%
mutate(previous_podium_positions = cumsum(previous_podium_positions))
Joining, by = c("driver", "race_number")
# A tibble: 12 x 4
# Groups: driver [4]
driver race_number position previous_podium_positions
<chr> <int> <int> <dbl>
1 A 1 1 1
2 B 1 2 1
3 C 1 3 1
4 D 1 NA 0
5 A 2 2 2
6 B 2 3 2
7 C 2 NA 1
8 D 2 1 1
9 A 3 2 3
10 B 3 1 3
11 C 3 3 2
12 D 3 NA 1
I hope this helps. A brief disclaimer: these may well not be the most resource- or time-efficient solutions, but they are the fastest/easiest way to solve the issue.

Split information from two columns, R, tidyverse

I've got some data in two columns:
# A tibble: 16 x 2
code niveau
<chr> <dbl>
1 A 1
2 1 2
3 2 2
4 3 2
5 4 2
6 5 2
7 B 1
8 6 2
9 7 2
My desired output is:
A tibble: 16 x 3
code niveau cat
<chr> <dbl> <chr>
1 A 1 A
2 1 2 A
3 2 2 A
4 3 2 A
5 4 2 A
6 5 2 A
7 B 1 B
8 6 2 B
Is there a tidy way to convert these data without looping through them?
Here some dummy data:
data <- tibble(code = c('A', 1, 2, 3, 4, 5, 'B', 6, 7, 8, 9, 'C', 10, 11, 12, 13),
               niveau = c(1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2))
desired_output <- tibble(code = c('A', 1, 2, 3, 4, 5, 'B', 6, 7, 8, 9, 'C', 10, 11, 12, 13),
                         niveau = c(1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2),
                         cat = c(rep('A', 6), rep('B', 5), rep('C', 5)))
You can create a new column cat by replacing code values with NA wherever code is a number, and then use tidyr::fill() to replace each missing value with the previous non-NA value.
library(dplyr)
data %>% mutate(cat = replace(code, grepl('\\d', code), NA)) %>% tidyr::fill(cat)
# A tibble: 16 x 3
# code niveau cat
# <chr> <dbl> <chr>
# 1 A 1 A
# 2 1 2 A
# 3 2 2 A
# 4 3 2 A
# 5 4 2 A
# 6 5 2 A
# 7 B 1 B
# 8 6 2 B
# 9 7 2 B
#10 8 2 B
#11 9 2 B
#12 C 1 C
#13 10 2 C
#14 11 2 C
#15 12 2 C
#16 13 2 C
We can use str_detect from stringr
library(dplyr)
library(stringr)
library(tidyr)
data %>%
mutate(cat = replace(code, str_detect(code, '\\d'), NA)) %>%
fill(cat)

Count distinct values that are not the same as the current row's values

Suppose I have a data frame:
df <- data.frame(SID=sample(1:4,15,replace=T), Var1=c(rep("A",5),rep("B",5),rep("C",5)), Var2=sample(2:4,15,replace=T))
which comes out to something like this:
SID Var1 Var2
1 4 A 2
2 3 A 2
3 4 A 3
4 3 A 3
5 1 A 4
6 1 B 2
7 3 B 2
8 4 B 4
9 4 B 4
10 3 B 2
11 2 C 2
12 2 C 2
13 4 C 4
14 2 C 4
15 3 C 3
What I hope to accomplish is to find the count of unique SIDs (see below under update, this should have said count of unique (SID, Var1) combinations) where the given row's Var1 is excluded from this count and the count is grouped on Var2. So for the example above, I would like to output:
SID Var1 Var2 Count.Excluding.Var1
1 4 A 2 3
2 3 A 2 3
3 4 A 3 1
4 3 A 3 1
5 1 A 4 3
6 1 B 2 3
7 3 B 2 3
8 4 B 4 3
9 4 B 4 3
10 3 B 2 3
11 2 C 2 4
12 2 C 2 4
13 4 C 4 2
14 2 C 4 2
15 3 C 3 2
For the 1st observation, we have a count of 3 because there are 3 unique combinations of (SID, Var1) for the given Var2 value (2, in this case) where Var1 != A (Var1 value of 1st observation) -- specifically, the count includes observation 6, 7 and 11, but not 12 because we already accounted for a (SID, Var1)=(2,C) and not row 2 because we do not want Var1 to be "A". All of these rows have the same Var2 value.
I'd preferably like to use dplyr functions and the %>% operator.
UPDATE
I apologize for the confusion and my incorrect explanation above. I have corrected what I intended to ask in the parentheses, but I am leaving my original phrasing as well because the majority of answers interpreted it that way.
As for the example, I apologize for not setting the seed. There seems to have been some confusion regarding Count.Excluding.Var1 for rows 11 and 12. With unique (SID, Var1) combinations, rows 11 and 12 should make sense, as these count rows 1, 2, 6, and 7 xor 8.
A simple mapply can do the trick, but as the OP requested a %>%-based solution, an option could be:
df %>% mutate(Count.Excluding.Var1 =
         mapply(function(x, y) nrow(unique(df[df$Var1 != x & df$Var2 == y, 1:2])),
                .$Var1, .$Var2))
# SID Var1 Var2 Count.Excluding.Var1
# 1 4 A 2 3
# 2 2 A 3 3
# 3 4 A 4 3
# 4 4 A 4 3
# 5 3 A 4 3
# 6 4 B 3 1
# 7 3 B 3 1
# 8 3 B 3 1
# 9 4 B 2 3
# 10 2 B 3 1
# 11 2 C 2 2
# 12 4 C 4 2
# 13 1 C 4 2
# 14 1 C 2 2
# 15 3 C 4 2
Data:
The above results are based on the original data provided by the OP.
df <- data.frame(SID=sample(1:4,15,replace=T), Var1=c(rep("A",5),rep("B",5),rep("C",5)), Var2=sample(2:4,15,replace=T))
Could not think of a dplyr solution, but here's one with apply:
df$Count <- apply(df, 1, function(x) length(unique(df$SID[(df$Var1 != x['Var1']) & (df$Var2 == x['Var2'])])))
# SID Var1 Var2 Count
# 1 4 A 2 3
# 2 3 A 2 3
# 3 4 A 3 1
# 4 3 A 3 1
# 5 1 A 4 2
# 6 1 B 2 3
# 7 3 B 2 3
# 8 4 B 4 3
# 9 4 B 4 3
# 10 3 B 2 3
# 11 2 C 2 3
# 12 2 C 2 3
# 13 4 C 4 2
# 14 2 C 4 2
# 15 3 C 3 2
Here is a dplyr solution, as requested. For future reference, please use set.seed so we can reproduce your desired output with sample, else I have to enter data by hand...
I think this is your logic? You want the n_distinct(SID) for each Var2, but for each row, you want to exclude rows which have the same Var1 as the current row. So a key observation here is row 3, where a simple grouped summarise would yield a count of 2. Of the rows with Var2 = 3, row 3 has SID = 4, row 4 has SID = 3, row 15 has SID = 3, but we don't count row 3 or row 4, so final count is one unique SID.
Here we get first the count of unique SID for each Var2, then the count of unique SID for each Var1, Var2 combo. First count is too large by the amount of additional unique SID for each combo, so we subtract it and add one. There is an edge case where for a Var1, there is only one corresponding Var2. This should return 0 since you exclude all the possible values of SID. I added two rows to illustrate this.
library(tidyverse)
df <- read_table2(
"SID Var1 Var2
4 A 2
3 A 2
4 A 3
3 A 3
1 A 4
1 B 2
3 B 2
4 B 4
4 B 4
3 B 2
2 C 2
2 C 2
4 C 4
2 C 4
3 C 3
1 D 5
2 D 5"
)
df %>%
group_by(Var2) %>%
mutate(SID_per_Var2 = n_distinct(SID)) %>%
group_by(Var1, Var2) %>%
mutate(SID_per_Var1Var2 = n_distinct(SID)) %>%
ungroup() %>%
add_count(Var1) %>%
add_count(Var1, Var2) %>%
mutate(
Count.Excluding.Var1 = if_else(
n > nn,
SID_per_Var2 - SID_per_Var1Var2 + 1,
0
)
) %>%
select(SID, Var1, Var2, Count.Excluding.Var1)
#> # A tibble: 17 x 4
#> SID Var1 Var2 Count.Excluding.Var1
#> <int> <chr> <int> <dbl>
#> 1 4 A 2 3.
#> 2 3 A 2 3.
#> 3 4 A 3 1.
#> 4 3 A 3 1.
#> 5 1 A 4 3.
#> 6 1 B 2 3.
#> 7 3 B 2 3.
#> 8 4 B 4 3.
#> 9 4 B 4 3.
#> 10 3 B 2 3.
#> 11 2 C 2 4.
#> 12 2 C 2 4.
#> 13 4 C 4 2.
#> 14 2 C 4 2.
#> 15 3 C 3 2.
#> 16 1 D 5 0.
#> 17 2 D 5 0.
Created on 2018-04-12 by the reprex package (v0.2.0).
Here's a solution using purrr - you can wrap this in a mutate statement if you want, but I don't know that it adds much in this particular case.
library(purrr)
df$Count.Excluding.Var1 = map_int(1:nrow(df), function(n) {
df %>% filter(Var2 == Var2[n], Var1 != Var1[n]) %>% distinct() %>% nrow()
})
(Updated with input from comments by Calum You. Thanks!)
A 100% tidyverse solution:
library(tidyverse) # dplyr + purrr
df %>%
group_by(Var2) %>%
mutate(count = map_int(Var1,~n_distinct(SID[.x!=Var1],Var1[.x!=Var1])))
# # A tibble: 15 x 4
# # Groups: Var2 [3]
# SID Var1 Var2 count
# <int> <chr> <int> <int>
# 1 4 A 2 3
# 2 3 A 2 3
# 3 4 A 3 1
# 4 3 A 3 1
# 5 1 A 4 3
# 6 1 B 2 3
# 7 3 B 2 3
# 8 4 B 4 3
# 9 4 B 4 3
# 10 3 B 2 3
# 11 2 C 2 4
# 12 2 C 2 4
# 13 4 C 4 2
# 14 2 C 4 2
# 15 3 C 3 2

Shifting rows up in columns and flushing the remaining ones

I have a problem with moving rows up by one row. When rows become completely NA I would like to flush those rows (see the example below). My current approach, however, still keeps the second rows.
Here is my approach
data <- data.frame(gr=c(rep(1:3,each=2)),A=c(1,NA,2,NA,4,NA), B=c(NA,1,NA,3,NA,7),C=c(1,NA,4,NA,5,NA))
> data
gr A B C
1 1 1 NA 1
2 1 NA 1 NA
3 2 2 NA 4
4 2 NA 3 NA
5 3 4 NA 5
6 3 NA 7 NA
So, using this approach:
data.frame(apply(data,2,function(x){x[complete.cases(x)]}))
gr A B C
1 1 1 1 1
2 1 2 3 4
3 2 4 7 5
4 2 1 1 1
5 3 2 3 4
6 3 4 7 5
As we can see, I still end up with the extra rows in each group!
The expected output
> data
gr A B C
1 1 1 1 1
2 2 2 3 4
3 3 4 7 5
thanks!
If there's at most one valid value per gr, you can use na.omit then take the first value from it:
data %>% group_by(gr) %>% summarise_all(~ na.omit(.)[1])
# [1] is optional depending on your actual data
# A tibble: 3 x 4
# gr A B C
# <int> <dbl> <dbl> <dbl>
#1 1 1 1 1
#2 2 2 3 4
#3 3 4 7 5
You can do it with dplyr and tidyr like this:
library(dplyr)
library(tidyr)
data$ind <- rep(c(1, 2), times = nrow(data) / 2)  # tag the first and second row of each gr
data %>% fill(A, B, C) %>% filter(ind == 2) %>% mutate(ind = NULL)
gr A B C
1 1 1 1 1
2 2 2 3 4
3 3 4 7 5
Depending on how consistent your full data is, this may need to be adjusted.
One more solution, using data.table:
data <- data.frame(gr=c(rep(1:3,each=2)),A=c(1,NA,2,NA,4,NA), B=c(NA,1,NA,3,NA,7),C=c(1,NA,4,NA,5,NA))
library(data.table)
library(zoo)
setDT(data)
data[, A := na.locf(A), by = gr]  # carry the last observation forward within each gr
data[, B := na.locf(B), by = gr]
data[, C := na.locf(C), by = gr]
data <- unique(data)
data
gr A B C
1: 1 1 1 1
2: 2 2 3 4
3: 3 4 7 5
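As a side note, the three per-column calls could presumably be collapsed into a single lapply(.SD, ...) step; a minimal sketch, assuming the same data and columns:
library(data.table)
library(zoo)
cols <- c("A", "B", "C")
# run na.locf over every listed column in one pass, still by group
data[, (cols) := lapply(.SD, na.locf), by = gr, .SDcols = cols]
unique(data)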

Calculate each chunk by group using dplyr?

How can I get the expected calculation using dplyr package?
row value group expected
1 2 1 =NA
2 4 1 =4-2
3 5 1 =5-4
4 6 2 =NA
5 11 2 =11-6
6 12 1 =NA
7 15 1 =15-12
I tried
df=read.table(header=1, text=' row value group
1 2 1
2 4 1
3 5 1
4 6 2
5 11 2
6 12 1
7 15 1')
df %>% group_by(group) %>% mutate(expected=value-lag(value))
How can I calculate this for each chunk (rows 1-3, 4-5, 6-7), although rows 1-3 and 6-7 are labelled with the same group number?
Here is a similar approach. I created a new group variable using cumsum. Whenever the difference between two numbers in group is not 0, R assigns a new group number. If you have more data, this approach may be helpful.
library(dplyr)
mutate(df, foo = cumsum(c(T, diff(group) != 0))) %>%
group_by(foo) %>%
mutate(out = value - lag(value))
# row value group foo out
#1 1 2 1 1 NA
#2 2 4 1 1 2
#3 3 5 1 1 1
#4 4 6 2 2 NA
#5 5 11 2 2 5
#6 6 12 1 3 NA
#7 7 15 1 3 3
As your group variable is not useful for this, create a new variable aux and use it as the grouping variable:
library(dplyr)
df$aux <- rep(seq_along(rle(df$group)$values), times = rle(df$group)$lengths)
df %>% group_by(aux) %>% mutate(expected = value - lag(value))
Source: local data frame [7 x 5]
Groups: aux
row value group aux expected
1 1 2 1 1 NA
2 2 4 1 1 2
3 3 5 1 1 1
4 4 6 2 2 NA
5 5 11 2 2 5
6 6 12 1 3 NA
7 7 15 1 3 3
Here is an option using data.table_1.9.5. The devel version introduced new functions rleid and shift (default type is "lag" and fill is "NA") that can be useful for this.
library(data.table)
setDT(df)[, expected:=value-shift(value) ,by = rleid(group)][]
# row value group expected
#1: 1 2 1 NA
#2: 2 4 1 2
#3: 3 5 1 1
#4: 4 6 2 NA
#5: 5 11 2 5
#6: 6 12 1 NA
#7: 7 15 1 3
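On recent dplyr (1.1.0 or later, if I remember the version correctly), consecutive_id() does this run-length grouping directly, so no helper column is needed; a minimal sketch:
library(dplyr)
df %>%
  group_by(aux = consecutive_id(group)) %>%  # new id each time group changes value
  mutate(expected = value - lag(value)) %>%
  ungroup()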
