I have a dataframe that looks like this:
   value id
1      2  A
2      5  A
3     NA  A
4      7  A
5      9  A
6      1  B
7     NA  B
8     NA  B
9      5  B
10     6  B
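For reference, a reproducible version of this data frame (values taken from the table above):
df <- data.frame(
  value = c(2, 5, NA, 7, 9, 1, NA, NA, 5, 6),
  id = rep(c("A", "B"), each = 5)
)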
I would like to calculate growth rates of value, grouping by the id variable. Usually, I would do something like this:
df <- df %>% group_by(id) %>% mutate(growth = log(value) - as.numeric(lag(value)))
To get this dataframe:
value id growth
(dbl) (chr) (dbl)
1 2 A NA
2 5 A -0.3905621
3 NA A NA
4 7 A NA
5 9 A -4.8027754
6 1 B NA
7 NA B NA
8 NA B NA
9 5 B NA
10 6 B -3.2082405
Now what I want is to also use the last non-NA value when computing the growth rates, i.e. to calculate the growth rates across the "NA gaps". For example: row 4 should contain the growth rate from 5 to 7, and row 9 the growth rate from 1 to 5.
Thanks!
zoo::na.locf will replace NAs with the last non-NA value, so this may work for you:
df <- df %>%
  group_by(id) %>%
  mutate(
    valuenoNA = zoo::na.locf(value),
    growth = log(valuenoNA) - as.numeric(lag(valuenoNA)))
   value id     growth valuenoNA
1      2  A         NA         2
2      5  A -0.3905621         5
3     NA  A -3.3905621         5
4      7  A -3.0540899         7
5      9  A -4.8027754         9
6      1  B         NA         1
7     NA  B -1.0000000         1
8     NA  B -1.0000000         1
9      5  B  0.6094379         5
10     6  B -3.2082405         6
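One caveat, not needed for this example but worth noting: zoo::na.locf() drops leading NAs by default, so if a group happened to start with NA the result would be shorter than the group and mutate() would fail. Passing na.rm = FALSE keeps the leading NAs in place (a sketch):
df <- df %>%
  group_by(id) %>%
  mutate(
    valuenoNA = zoo::na.locf(value, na.rm = FALSE),  # keep leading NAs instead of dropping them
    growth = log(valuenoNA) - as.numeric(lag(valuenoNA)))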
We can use fill from tidyr (loaded with the tidyverse):
library(tidyverse)
df %>%
  group_by(id) %>%
  fill(value) %>%
  mutate(growth = log(value) - lag(value))
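Note that fill() overwrites the NAs in value itself. If the original column needs to stay intact, one option (a sketch) is to fill a copy instead:
df %>%
  group_by(id) %>%
  mutate(value_filled = value) %>%  # work on a copy so value keeps its NAs
  fill(value_filled) %>%
  mutate(growth = log(value_filled) - lag(value_filled))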
I want to group my data into separate chunks wherever the data is continuous (runs of complete rows). I am trying to get the group column from dummy data like this:
a b group
<dbl> <dbl> <dbl>
1 1 1 1
2 2 2 1
3 3 3 1
4 4 NA NA
5 5 NA NA
6 6 NA NA
7 7 12 2
8 8 15 2
9 9 NA NA
10 10 25 3
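For reference, the dummy data can be rebuilt from the table above (group is the desired output, so it is not part of the input):
test <- data.frame(
  a = 1:10,
  b = c(1, 2, 3, NA, NA, NA, 12, 15, NA, 25)
)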
I tried using
test %>%
  mutate(test = complete.cases(.)) %>%
  group_by(group = cumsum(test == TRUE)) %>%
  select(group, everything())
But it doesn't work as expected:
group a b test
<int> <dbl> <dbl> <lgl>
1 1 1 1 TRUE
2 2 2 2 TRUE
3 3 3 3 TRUE
4 3 4 NA FALSE
5 3 5 NA FALSE
6 3 6 NA FALSE
7 4 7 12 TRUE
8 5 8 15 TRUE
9 5 9 NA FALSE
10 6 10 25 TRUE
Any advice?
Using rle in base R -
transform(df, group1 = with(rle(!is.na(b)), rep(cumsum(values), lengths))) |>
transform(group1 = replace(group1, is.na(b), NA))
# a b group group1
#1 1 1 1 1
#2 2 2 1 1
#3 3 3 1 1
#4 4 NA NA NA
#5 5 NA NA NA
#6 6 NA NA NA
#7 7 12 2 2
#8 8 15 2 2
#9 9 NA NA NA
#10 10 25 3 3
A couple of approaches to consider if you wish to use dplyr for this.
First, you could look at the transition from non-complete cases to complete cases (using lag).
library(dplyr)
test %>%
  mutate(test = complete.cases(.)) %>%
  group_by(group = cumsum(test & !lag(test, default = F))) %>%
  mutate(group = replace(group, !test, NA))
Alternatively, you could add row numbers to your data.frame, filter to complete cases only, build the groups with group_by by applying cumsum to the gaps in the row numbers, and then join back to the original data.
test$rn <- seq.int(nrow(test))
test %>%
  filter(complete.cases(.)) %>%
  group_by(group = c(0, cumsum(diff(rn) > 1)) + 1) %>%
  right_join(test) %>%
  arrange(rn) %>%
  dplyr::select(-rn)
Output
a b group
<int> <int> <dbl>
1 1 1 1
2 2 2 1
3 3 3 1
4 4 NA NA
5 5 NA NA
6 6 NA NA
7 7 12 2
8 8 15 2
9 9 NA NA
10 10 25 3
Using data.table: get run-length IDs with rleid, set the group IDs of NA rows to NA, then renumber the remaining groups consecutively via a factor-to-integer conversion:
library(data.table)
setDT(test)[, group1 := {
  x <- complete.cases(test)
  grp <- rleid(x)
  grp[!x] <- NA
  as.integer(factor(grp))
}]
# a b group group1
# 1: 1 1 1 1
# 2: 2 2 1 1
# 3: 3 3 1 1
# 4: 4 NA NA NA
# 5: 5 NA NA NA
# 6: 6 NA NA NA
# 7: 7 12 2 2
# 8: 8 15 2 2
# 9: 9 NA NA NA
# 10: 10 25 3 3
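To see why the factor-to-integer conversion is needed, these are the intermediate vectors for this data:
# complete.cases(test):     T  T  T  F  F  F  T  T  F  T
# rleid(x):                 1  1  1  2  2  2  3  3  4  5
# after grp[!x] <- NA:      1  1  1 NA NA NA  3  3 NA  5
# as.integer(factor(grp)):  1  1  1 NA NA NA  2  2 NA  3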
I would like to summarize a dataframe using a different grouping variable for each summary. As an example, I have three variables (x1, x2, x3). I want to group the dataframe by x1 and get the number of observations in each group, and then do the same for x2 and x3.
I would like to accomplish this within a single block of piping, but so far the only solution I have come up with is to save a separate output for each grouping variable.
To reproduce my dataframe:
x1 <- c(0,1,1,2,2,3,3,3,4,4,5,6,6,7,8,9,9,10)
x2 <- c(0,0,1,1,0,1,2,0,0,2,1,0,3,4,2,3,0,3)
x3 <- c(0,1,0,1,2,2,1,3,4,2,4,6,3,3,6,6,9,7)
df <- data.frame(x1,x2,x3)
My expected output would look something like this, where x covers the range of values observed across the variables and n_x1 to n_x3 give the number of observations of each value when grouping by that variable:
x n_x1 n_x2 n_x3
1 0 1 7 2
2 1 2 4 3
3 2 2 3 3
4 3 3 3 3
5 4 2 1 2
6 5 1 NA NA
7 6 2 NA 3
8 7 1 NA 1
9 8 1 NA NA
10 9 2 NA 1
11 10 1 NA NA
So far I have come up with summarizing and grouping by each variable individually and then joining them all together as a last step.
x1_count <- df %>%
  group_by(x1) %>%
  summarise(n_x1 = n())

x2_count <- df %>%
  group_by(x2) %>%
  summarise(n_x2 = n())

x3_count <- df %>%
  group_by(x3) %>%
  summarise(n_x3 = n())

all_count <- full_join(x1_count, x2_count, by = c("x1" = "x2")) %>%
  full_join(., x3_count, by = c("x1" = "x3")) %>%
  rename("x" = "x1")
Is there some type of workaround where I wouldn't have to save multiple dataframes and later join them together? I would prefer a cleaner, more elegant solution.
A simple tidyr solution:
library(tidyr)
library(dplyr)  # for group_by(), summarize(), ungroup()

df %>%
  pivot_longer(everything(), names_to = "variables", values_to = "values") %>%
  group_by(variables, values) %>%
  summarize(n_x = n()) %>%
  ungroup() %>%
  pivot_wider(names_from = variables, values_from = n_x)
# A tibble: 11 x 4
values x1 x2 x3
<dbl> <int> <int> <int>
1 0 1 7 2
2 1 2 4 3
3 2 2 3 3
4 3 3 3 3
5 4 2 1 2
6 5 1 NA NA
7 6 2 NA 3
8 7 1 NA 1
9 8 1 NA NA
10 9 2 NA 1
11 10 1 NA NA
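If the columns should exactly match the desired names (x, n_x1, n_x2, n_x3), pivot_wider() can add the prefix itself; a small tweak of the pipeline above:
df %>%
  pivot_longer(everything(), names_to = "variables", values_to = "values") %>%
  group_by(variables, values) %>%
  summarize(n_x = n()) %>%
  ungroup() %>%
  pivot_wider(names_from = variables, values_from = n_x, names_prefix = "n_") %>%
  rename(x = values)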
We can use a simple map with full_join
library(dplyr)
library(purrr)
library(stringr)  # str_c() comes from stringr

map(names(df), ~ df %>%
      count(!!rlang::sym(.x)) %>%
      rename_at(1, ~ 'x')) %>%
  reduce(full_join, by = 'x') %>%
  rename_at(-1, ~ str_c('n_x', seq_along(.)))
# x n_x1 n_x2 n_x3
#1 0 1 7 2
#2 1 2 4 3
#3 2 2 3 3
#4 3 3 3 3
#5 4 2 1 2
#6 5 1 NA NA
#7 6 2 NA 3
#8 7 1 NA 1
#9 8 1 NA NA
#10 9 2 NA 1
#11 10 1 NA NA
Or using a simple base R option
t(table(c(col(df)), unlist(df)))
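Note that table() reports a count of 0 (not NA) for values that never occur for a variable, and labels the columns by column index. A small cleanup sketch to match the desired layout:
tab <- t(table(c(col(df)), unlist(df)))
tab[tab == 0] <- NA                       # the desired output uses NA instead of 0
colnames(tab) <- paste0("n_", names(df))  # columns 1:3 -> n_x1, n_x2, n_x3
tab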
I have the following data frame:
group <- c(2,2,2,2,4,4,4,4,5,5,5,5)
D <- c(NA,2,NA,NA,NA,2,3,NA,NA,NA,1,1)
df <- data.frame(group, D)
df
group D
1 2 NA
2 2 2
3 2 NA
4 2 NA
5 4 NA
6 4 2
7 4 3
8 4 NA
9 5 NA
10 5 NA
11 5 1
12 5 1
I would like to keep only the groups that contain non-consecutive NA values at least once. In this case, group 5 would be removed because it contains only consecutive NA values, while groups 2 and 4 remain because they do contain non-consecutive NA values (NA values separated by at least one row with a non-NA value).
The resulting data frame would therefore look like this:
df2
group D
1 2 NA
2 2 2
3 2 NA
4 2 NA
5 4 NA
6 4 2
7 4 3
8 4 NA
Any ideas? :)
How about using the difference between the indices of the NA values per group?
library(dplyr)
df %>% group_by(group) %>% filter(any(diff(which(is.na(D))) > 1))
## A tibble: 8 x 2
## Groups: group [2]
# group D
# <dbl> <dbl>
#1 2. NA
#2 2. 2.
#3 2. NA
#4 2. NA
#5 4. NA
#6 4. 2.
#7 4. 3.
#8 4. NA
I'm not sure this would catch all potential edge cases but it seems to work for the given example.
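A more explicit check along the same lines (a sketch, not from the answer above): treat a group as having non-consecutive NAs when is.na(D) contains more than one run of TRUE values:
df %>%
  group_by(group) %>%
  filter(sum(rle(is.na(D))$values) > 1)  # counts the number of NA runs per group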
The sample data is as follows:
x <- read.table(header=T, text="
ID CostType1 Cost1 CostType2 Cost2
1 a 10 c 1
2 b 2 c 20
3 a 1 b 50
4 a 40 c 1
5 c 2 b 30
6 a 60 c 3
7 c 10 d 1
8 a 20 d 2")
I want the values in the CostType columns (CostType1 and CostType2) to become the names of new columns, with each new column filled with the corresponding cost for that cost type. Where there is no match, fill with NA. The ideal format would be the following:
a b c d
1 10 NA 1 NA
2 NA 2 20 NA
3 1 50 NA NA
4 40 NA 1 NA
5 NA 30 2 NA
6 60 NA 3 NA
7 NA NA 10 1
8 20 NA NA 2
A solution using the tidyverse. We can first determine how many CostType/Cost pairs there are (two in this example), reshape the data one pair at a time, bind the results together, and then summarize the data frame by taking the first non-NA value in each column.
library(tidyverse)
# Get the group numbers
g <- (ncol(x) - 1)/2
x2 <- map_dfr(1:g, function(i){
  # Transform the data frame one group at a time
  x <- x %>%
    select(ID, ends_with(as.character(i))) %>%
    spread(paste0("CostType", i), paste0("Cost", i))
  return(x)
}) %>%
  group_by(ID) %>%
  # Select the first non-NA value if there are multiple values
  summarise_all(funs(first(.[!is.na(.)])))
x2
# # A tibble: 8 x 5
# ID a b c d
# <int> <int> <int> <int> <int>
# 1 1 10 NA 1 NA
# 2 2 NA 2 20 NA
# 3 3 1 50 NA NA
# 4 4 40 NA 1 NA
# 5 5 NA 30 2 NA
# 6 6 60 NA 3 NA
# 7 7 NA NA 10 1
# 8 8 20 NA NA 2
A base solution using reshape
x1 <- setNames(x[,c("ID", "CostType1", "Cost1")], c("ID", "CostType", "Cost"))
x2 <- setNames(x[,c("ID", "CostType2", "Cost2")], c("ID", "CostType", "Cost"))
reshape(data=rbind(x1, x2), idvar="ID", timevar="CostType", v.names="Cost", direction="wide")
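For completeness, a sketch using the newer tidyr verbs (not part of the original answers): pivot_longer() can reshape both CostType*/Cost* pairs to long form in one step, after which pivot_wider() spreads the cost types back out to columns.
library(dplyr)
library(tidyr)

x %>%
  pivot_longer(-ID,
               names_to = c(".value", "set"),
               names_pattern = "(CostType|Cost)(\\d)") %>%  # pairs CostType1/Cost1 and CostType2/Cost2
  select(-set) %>%
  pivot_wider(names_from = CostType, values_from = Cost)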
I have data like the following:
> data <- data.frame(unique=1:9, grouping=rep(c('a', 'b', 'c'), each=3), value=sample(1:30, 9))
> data
unique grouping value
1 1 a 15
2 2 a 21
3 3 a 26
4 4 b 8
5 5 b 6
6 6 b 4
7 7 c 17
8 8 c 1
9 9 c 3
I would like to create a table that looks like this:
a b c
1 15 8 17
2 21 6 1
3 26 4 3
I am using tidyr::spread and not getting the correct result:
> data %>% spread(grouping, value)
unique a b c
1 1 15 NA NA
2 2 21 NA NA
3 3 26 NA NA
4 4 NA 8 NA
5 5 NA 6 NA
6 6 NA 4 NA
7 7 NA NA 17
8 8 NA NA 1
9 9 NA NA 3
Or
> data %>% select(grouping, value) %>% spread(grouping, value)
Error: Duplicate identifiers for rows (1, 2, 3), (4, 5, 6), (7, 8, 9)
Is there a way to do this also when one group (c) has a different length than the others?
We need to create a sequence column to avoid the duplicate identifiers error.
library(tidyr)
library(dplyr)
data %>%
  group_by(grouping) %>%
  mutate(id = row_number()) %>%
  select(-unique) %>%
  spread(grouping, value) %>%
  select(-id)
# a b c
# (int) (int) (int)
#1 15 8 17
#2 21 6 1
#3 26 4 3
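The same idea works with the newer pivot_wider() (a sketch), and it also covers the follow-up about groups of different lengths: a shorter group simply gets NA in the rows it does not have.
data %>%
  group_by(grouping) %>%
  mutate(id = row_number()) %>%
  ungroup() %>%
  select(-unique) %>%
  pivot_wider(names_from = grouping, values_from = value) %>%
  select(-id)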