Aggregate including all levels? - R

I have an R data frame with factor and numeric columns, and I would like to calculate certain summaries of the numeric columns by various groupings of the factors. In particular, I would like to be able to summarise by multiple combinations of the factors at once, and return the results in a single object.
For example, say I'm using the warpbreaks data frame, which has columns breaks (integer), wool (factor with levels "A", "B") and tension (factor with levels "L", "M" and "H"). If I want to get the average number of breaks for each combination of wool and tension, I know I can use aggregate(breaks ~ wool + tension, data = warpbreaks, mean) and it will give me something like:
wool tension breaks
1 A L 44.55556
2 B L 28.22222
...
6 B H 18.77778
But I'd like to also calculate the means across just wool, and just tension, and for the dataset as a whole, and return something like:
wool tension breaks
1 NA NA 24.14815
2 NA L 36.38889
...
5 A NA 31.03704
...
7 A L 44.55556
...
12 B H 18.77778
I tried a few variations of formulas in the aggregate function and couldn't find anything suitable. Is this something that can be done simply?

If you have an arbitrary number of inputs that you want to treat as a whole + individually, you can set up a loop via Map:
## set formula inputs
rhs <- c("wool","tension")
lhs <- "breaks"
## map it
Map(
  \(d, l, r) aggregate(d[l], d[r], FUN = mean),
  list(warpbreaks),
  list(lhs),
  c(list(rhs), rhs)
)
#[[1]]
# wool tension breaks
#1 A L 44.55556
#2 B L 28.22222
#3 A M 24.00000
#4 B M 28.77778
#5 A H 24.55556
#6 B H 18.77778
#
#[[2]]
# wool breaks
#1 A 31.03704
#2 B 25.25926
#
#[[3]]
# tension breaks
#1 L 36.38889
#2 M 26.38889
#3 H 21.66667
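Since the question asks for everything in a single object, the list that Map() returns can be collapsed into one data frame; a minimal sketch, assuming dplyr is available (bind_rows() fills the grouping columns missing from a piece with NA):
res <- Map(
  \(d, l, r) aggregate(d[l], d[r], FUN = mean),
  list(warpbreaks),
  list(lhs),
  c(list(rhs), rhs)
)
## rows from the wool-only and tension-only pieces get NA in the other column
dplyr::bind_rows(res)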
You can extend the rhs inputs however you see fit for many combinations, e.g.:
rhs <- 1:3
c(list(rhs), combn(rhs, 2, simplify=FALSE), rhs)
#[[1]]
#[1] 1 2 3
#
#[[2]]
#[1] 1 2
#
#[[3]]
#[1] 1 3
#
#[[4]]
#[1] 2 3
#
#[[5]]
#[1] 1
#
#[[6]]
#[1] 2
#
#[[7]]
#[1] 3

You can calculate each aggregate separately and combine them with bind_rows.
dplyr::bind_rows(
  aggregate(breaks ~ wool + tension, data = warpbreaks, mean),
  aggregate(breaks ~ wool, data = warpbreaks, mean),
  aggregate(breaks ~ tension, data = warpbreaks, mean)
)
# wool tension breaks
# <fct> <fct> <dbl>
# 1 A L 44.556
# 2 A M 24
# 3 A H 24.556
# 4 B L 28.222
# 5 B M 28.778
# 6 B H 18.778
# 7 A NA 31.037
# 8 B NA 25.259
# 9 NA L 36.389
#10 NA M 26.389
#11 NA H 21.667
Or in dplyr:
library(dplyr)
bind_rows(
  warpbreaks %>% group_by(wool, tension) %>% summarise(breaks = mean(breaks), .groups = 'drop'),
  warpbreaks %>% group_by(wool) %>% summarise(breaks = mean(breaks)),
  warpbreaks %>% group_by(tension) %>% summarise(breaks = mean(breaks))
)
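The question also asks for the mean of the dataset as a whole; a sketch of one way, appending an ungrouped summarise() to the same call:
bind_rows(
  warpbreaks %>% group_by(wool, tension) %>% summarise(breaks = mean(breaks), .groups = 'drop'),
  warpbreaks %>% group_by(wool) %>% summarise(breaks = mean(breaks)),
  warpbreaks %>% group_by(tension) %>% summarise(breaks = mean(breaks)),
  warpbreaks %>% summarise(breaks = mean(breaks))  # grand mean, wool and tension become NA
)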
For any number of inputs:
library(dplyr)
library(purrr)
cols <- c('wool', 'tension')
map_df(seq_along(cols), function(x) {
  combn(cols, x, function(y) {
    warpbreaks %>% group_by(across(all_of(y))) %>% summarise(breaks = mean(breaks))
  }, simplify = FALSE) %>% bind_rows()
})

Related

How to show values as percentage of column total in R like in excel pivot table? [duplicate]

I am trying to divide each cell in a data frame by the sum of the column. For example, I have a data frame df:
sample a b c
a2 1 4 6
a3 5 5 4
I would like to create a new data frame that takes each cell in and divides by the sum of the column, like so:
sample a b c
a2 .167 .444 .6
a3 .833 .556 .4
I have seen answers using sweep(), but that looks like it's for matrices, and I have data frames. I understand how to use colSums(), but I'm not sure how to write a function that loops through every cell in a column and then divides by the column sum. Thanks for the help!
Solution 1
Here are two dplyr solutions. We can use mutate_at or mutate_if to specify which columns to apply an operation to, or the condition under which to apply it.
library(dplyr)
# Apply the operation to all column except sample
dat2 <- dat %>%
  mutate_at(vars(-sample), funs(./sum(.)))
dat2
# sample a b c
# 1 a2 0.1666667 0.4444444 0.6
# 2 a3 0.8333333 0.5555556 0.4
# Apply the operation if the column is numeric
dat2 <- dat %>%
  mutate_if(is.numeric, funs(./sum(.)))
dat2
# sample a b c
# 1 a2 0.1666667 0.4444444 0.6
# 2 a3 0.8333333 0.5555556 0.4
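Note that mutate_at(), mutate_if(), and funs() are superseded in current dplyr; a sketch of the across() equivalent:
dat2 <- dat %>%
  mutate(across(where(is.numeric), ~ .x / sum(.x)))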
Solution 2
We can also use the map_at and map_if functions from the purrr package. However, since the output is a list, we need as.data.frame from base R or as_data_frame from dplyr to convert the list to a data frame.
library(dplyr)
library(purrr)
# Apply the operation to column a, b, and c
dat2 <- dat %>%
  map_at(c("a", "b", "c"), ~./sum(.)) %>%
  as_data_frame()
dat2
# # A tibble: 2 x 4
# sample a b c
# <chr> <dbl> <dbl> <dbl>
# 1 a2 0.167 0.444 0.600
# 2 a3 0.833 0.556 0.400
# Apply the operation if the column is numeric
dat2 <- dat %>%
  map_if(is.numeric, ~./sum(.)) %>%
  as_data_frame()
dat2
# # A tibble: 2 x 4
# sample a b c
# <chr> <dbl> <dbl> <dbl>
# 1 a2 0.167 0.444 0.600
# 2 a3 0.833 0.556 0.400
Solution 3
We can also use .SD and .SDcols from the data.table package.
library(data.table)
# Convert to data.table
setDT(dat)
dat2 <- copy(dat)
dat2[, (c("a", "b", "c")) := lapply(.SD, function(x) x/sum(x)), .SDcols = c("a", "b", "c")]
dat2[]
# sample a b c
# 1: a2 0.1666667 0.4444444 0.6
# 2: a3 0.8333333 0.5555556 0.4
Solution 4
We can also use the lapply function to loop through all columns except the first and perform the operation.
dat2 <- dat
dat2[, -1] <- lapply(dat2[, -1], function(x) x/sum(x))
dat2
# sample a b c
# 1 a2 0.1666667 0.4444444 0.6
# 2 a3 0.8333333 0.5555556 0.4
We can also use lapply to loop through all columns, adding an if-else statement in the function to make sure the operation is only performed on the numeric columns.
dat2 <- dat
dat2[] <- lapply(dat2[], function(x) {
  # Check if the column is numeric
  if (is.numeric(x)) {
    return(x/sum(x))
  } else {
    return(x)
  }
})
dat2
# sample a b c
# 1 a2 0.1666667 0.4444444 0.6
# 2 a3 0.8333333 0.5555556 0.4
Solution 5
A dplyr and tidyr solution based on gather and spread.
library(dplyr)
library(tidyr)
dat2 <- dat %>%
  gather(Column, Value, -sample) %>%
  group_by(Column) %>%
  mutate(Value = Value/sum(Value)) %>%
  spread(Column, Value)
dat2
# # A tibble: 2 x 4
# sample a b c
# * <chr> <dbl> <dbl> <dbl>
# 1 a2 0.167 0.444 0.600
# 2 a3 0.833 0.556 0.400
Performance Evaluation
I was curious which method has the best performance, so I conducted the following performance evaluation using the microbenchmark package, on a data frame with the same column names as the OP's example but 1,000,000 rows.
library(dplyr)
library(tidyr)
library(purrr)
library(data.table)
library(microbenchmark)
set.seed(100)
dat <- data_frame(sample = paste0("a", 1:1000000),
                  a = rpois(1000000, lambda = 3),
                  b = rpois(1000000, lambda = 3),
                  c = rpois(1000000, lambda = 3))
# Convert the data frame to a data.table for the later performance evaluation
dat_dt <- as.data.table(dat)
head(dat)
# # A tibble: 6 x 4
# sample a b c
# <chr> <int> <int> <int>
# 1 a1 2 5 2
# 2 a2 2 5 5
# 3 a3 3 2 4
# 4 a4 1 2 2
# 5 a5 3 3 1
# 6 a6 3 6 1
In addition to all the methods I proposed, I was also interested in two other methods proposed by others: the prop.table method suggested by Henrik in the comments, and the apply method by Spacedman. I labeled my solutions m1_1, m1_2, m2_1, ..., m5, using _ to separate two methods within one solution, and labeled the prop.table method m6 and the apply method m7. Notice that I modified m6 to output a data frame, so that all the methods return a data frame, tibble, or data.table.
Here is the code I used to assess the performance.
per <- microbenchmark(
  m1_1 = {dat2 <- dat %>% mutate_at(vars(-sample), funs(./sum(.)))},
  m1_2 = {dat2 <- dat %>% mutate_if(is.numeric, funs(./sum(.)))},
  m2_1 = {dat2 <- dat %>%
    map_at(c("a", "b", "c"), ~./sum(.)) %>%
    as_data_frame()},
  m2_2 = {dat2 <- dat %>%
    map_if(is.numeric, ~./sum(.)) %>%
    as_data_frame()},
  m3 = {dat_dt2 <- copy(dat_dt)
    dat_dt2[, c("a", "b", "c") := lapply(.SD, function(x) x/sum(x)),
            .SDcols = c("a", "b", "c")]},
  m4_1 = {dat2 <- dat
    dat2[, -1] <- lapply(dat2[, -1], function(x) x/sum(x))},
  m4_2 = {dat2 <- dat
    dat2[] <- lapply(dat2[], function(x) {
      if (is.numeric(x)) {
        return(x/sum(x))
      } else {
        return(x)
      }
    })},
  m5 = {dat2 <- dat %>%
    gather(Column, Value, -sample) %>%
    group_by(Column) %>%
    mutate(Value = Value/sum(Value)) %>%
    spread(Column, Value)},
  m6 = {dat2 <- dat
    dat2[-1] <- prop.table(as.matrix(dat2[-1]), margin = 2)},
  m7 = {dat2 <- dat
    dat2[, -1] <- apply(dat2[, -1], 2, function(x) {x/sum(x)})}
)
print(per)
# Unit: milliseconds
# expr min lq mean median uq max neval
# m1_1 23.335600 24.326445 28.71934 25.134798 27.465017 75.06974 100
# m1_2 20.373093 21.202780 29.73477 21.967439 24.897305 216.27853 100
# m2_1 9.452987 9.817967 17.83030 10.052634 11.056073 175.00184 100
# m2_2 10.009197 10.342819 16.43832 10.679270 11.846692 163.62731 100
# m3 16.195868 17.154327 34.40433 18.975886 46.521868 190.50681 100
# m4_1 8.100504 8.342882 12.66035 8.778545 9.348634 181.45273 100
# m4_2 8.130833 8.499926 15.84080 8.766979 9.732891 172.79242 100
# m5 5373.395308 5652.938528 5791.73180 5737.383894 5825.141584 6660.35354 100
# m6 117.038355 150.688502 191.43501 166.665125 218.837502 325.58701 100
# m7 119.680606 155.743991 199.59313 174.007653 215.295395 357.02775 100
library(ggplot2)
autoplot(per)
The results show that the lapply-based methods (m4_1 and m4_2) are the fastest, while the tidyr approach (m5) is the slowest, indicating that the gather-and-spread method is not a good idea when the number of rows is large.
DATA
dat <- read.table(text = "sample a b c
a2 1 4 6
a3 5 5 4",
header = TRUE, stringsAsFactors = FALSE)
Given this:
> d = data.frame(sample=c("a2","a3"),a=c(1,5),b=c(4,5),c=c(6,4))
> d
sample a b c
1 a2 1 4 6
2 a3 5 5 4
You can replace every column other than the first by applying over the rest:
> d[,-1] = apply(d[,-1],2,function(x){x/sum(x)})
> d
sample a b c
1 a2 0.1666667 0.4444444 0.6
2 a3 0.8333333 0.5555556 0.4
If you don't want d to be stomped on, make a copy beforehand.
You could do this in dplyr as well.
sample <- c("a2", "a3")
a <- c(1, 5)
b <- c(4, 5)
c <- c(6, 4)
dat <- data.frame(sample, a, b, c)
dat
library(dplyr)
dat %>%
  mutate(
    a.PCT = round(a/sum(a), 3),
    b.PCT = round(b/sum(b), 3),
    c.PCT = round(c/sum(c), 3))
sample a b c a.PCT b.PCT c.PCT
1 a2 1 4 6 0.167 0.444 0.6
2 a3 5 5 4 0.833 0.556 0.4
You can take the transpose of the matrix, divide by the column sums, and then transpose back (this assumes all columns are numeric):
t(t(as.matrix(df))/colSums(df))
try apply:
mat <- matrix(1:6, ncol=3)
apply(mat,2, function(x) x / sum(x))
If you have non-numeric values in your columns, you can coerce them to numeric:
df <- data.frame( a=c('a', 'b'), b=c(3,4), d=c(1,6))
apply(df, 2, function(x) {
  x <- as.numeric(x)
  x / sum(x)
})
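For what it's worth, the sweep() the asker mentions also works on data frames, not just matrices; a minimal sketch:
dat2 <- dat
# divide each column (margin 2) of the numeric columns by its column sum
dat2[-1] <- sweep(dat[-1], 2, colSums(dat[-1]), "/")
dat2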

How can I replace factor levels with the top n levels (by number of occurrences)

This question is related to How can I replace a factor levels with the top n levels (by some metric), plus [other]?. As a metric I want to use the number of occurrences of the factor. I know I can do it by making a list of the occurrences, but I was wondering if there is a prettier way.
Example:
library(data.table)
library(plyr)
fac <- data.table(score = as.factor(c(3, 4, 5, 3, 3, 3, 5)))
ocCnt <- data.table(lapply(fac, count)$score)
fac$occurrence <- 0
for (i in 1:length(fac$score)) {
  fac$occurrence[i] <- ocCnt[x == fac$score[i]]$freq
}
Then I could use the function described in the referenced question/answer:
hotfactor= function(fac,by,n=10,o="other") {
levels(fac)[rank(-xtabs(by~fac))[levels(fac)]>n] <- o
fac
}
To continue the example, if we want to see only the most popular level, we do:
hotfactor(fac$score, fac$occurrence, 1)
To get the answer:
[1] 3 other other 3 3 3 other
Levels: 3 other
So my question is, can I do this without having to add a list which counts the occurrences?
Note that I want to do this for the n most popular factors (not just for the most popular factor).
Use table and which.max:
score <- factor(c(3,4,5,3,3,3,5))
levels(score)[- which.max(table(score))] <- "other"
#[1] 3 other other 3 3 3 other
#Levels: 3 other
Obviously this breaks ties by taking the first maximum value.
If you want to keep the top two levels:
score <- factor(c(3, 4,5,3,3,3,5), levels =c(4,3,5))
levels(score)[!levels(score) %in% names(sort(table(score), decreasing = TRUE)[1:2])] <- "other"
#[1] 3 other 5 3 3 3 5
#Levels: other 3 5
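If you're open to a package, forcats provides fct_lump_n(), which lumps all but the n most frequent levels into a catch-all level; a sketch, assuming forcats >= 0.5:
library(forcats)
score <- factor(c(3, 4, 5, 3, 3, 3, 5))
# keep the single most frequent level, lump the rest into "other"
fct_lump_n(score, n = 1, other_level = "other")
#[1] 3     other other 3     3     3     other
#Levels: 3 other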
If you don't know how many levels you need to group, say, 90% of your data, and are willing to use dplyr, you could do something along the following lines:
library(dplyr)
df <- data.frame(
  f = factor(mapply(rep, letters[1:5], 2^(1:5)) %>% unlist(use.names = FALSE))
)
df %>%
  count(f, sort = TRUE) %>%
  mutate(p = cumsum(n) / nrow(df))
# A tibble: 5 x 3
# f n p
# <fctr> <int> <dbl>
# 1 e 32 0.5161290
# 2 d 16 0.7741935
# 3 c 8 0.9032258
# 4 b 4 0.9677419
# 5 a 2 1.0000000
(top <- df %>%
  count(f, sort = TRUE) %>%
  mutate(p = cumsum(n) / nrow(df)) %>%
  filter(cumall(p < .91)) %>%
  select(f) %>%
  unlist(use.names = FALSE))
# [1] e d c
# Levels: a b c d e
levels(df$f) <- factor(c(levels(df$f), 'z'))
df$f[!df$f %in% top] <- 'z'
df %>%
  count(f, sort = TRUE) %>%
  mutate(p = cumsum(n) / nrow(df))
# A tibble: 4 x 3
# f n p
# <fctr> <int> <dbl>
# 1 e 32 0.5161290
# 2 d 16 0.7741935
# 3 c 8 0.9032258
# 4 z 6 1.0000000

Linear model and dplyr - a better solution?

I got a lot of good feedback on a question I recently asked and was guided to use dplyr to transform some data. I'm having an issue with lm() and trying to find a slope from this transformed data and thought I'd open up a new question.
First I have data that looks like this:
Var1 Var2 Var3 Time Temp
a w j 9/9/2014 20
a w j 9/9/2014 15
a w k 9/20/2014 10
a w j 9/10/2014 0
b x L 9/12/2014 30
b x L 9/12/2014 10
b y k 9/13/2014 20
b y k 9/13/2014 15
c z j 9/14/2014 20
c z j 9/14/2014 10
c z k 9/14/2014 11
c w l 9/10/2014 45
a d j 9/22/2014 20
a d k 9/15/2014 4
a d l 9/15/2014 23
a d k 9/15/2014 11
And I want it in the form of this (values for Slope and Pearson simulated for illustration):
V1 V2 V3 Slope Pearson
a w j -3 -0.9
a w k 2 0
a d j 1.5 0.6
a d k 0 0.5
a d l -0.5 -0.6
b x L 12 0.7
b y k 4 0.6
c z j -1 -0.5
c z k -3 -0.4
c w l -10 -0.9
The slope being a linear-least-squares slope. In theory, the script would look like so:
library(dplyr)
data <- read.table("clipboard",sep="\t",quote="",header=T)
newdata <- summarise(group_by(data, Var1, Var2, Var3),
                     Slope = lm(Temp ~ Time)$coeff[2],
                     Pearson = cor(Time, Temp, method = "pearson"))
But R throws an error saying it can't find Time or Temp. It can run lm(data$Temp ~ data$Time)$coeff[2], but that returns the slope for the entire data set, not the subsets I'm looking for. cor() runs just fine in the group_by section, so is there specific syntax I need to pass to lm() to have it run in a similar manner, or should I use a different function entirely to get a slope for each subset?
You have several issues here.
If you group your data by 3 variables (or even 2), you don't have enough distinct values to run a linear regression model.
Pearson correlation requires two numeric variables, while Time is a factor, and converting it to numeric won't make much sense.
The third issue is that you will need to use do in order to run your linear model.
Here's an illustration for grouping only on V1
data %>%
  group_by(Var1) %>% # You can add additional grouping variables here if your real data set enables it
  do(mod = lm(Temp ~ Time, data = .)) %>%
  mutate(Slope = summary(mod)$coeff[2]) %>%
  select(-mod)
# Source: local data frame [3 x 2]
# Groups: <by row>
#
# Var1 Slope
# 1 a 12.66667
# 2 b -2.50000
# 3 c -31.33333
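As an aside, do() is superseded in current dplyr; a sketch of the same per-group slope using a plain summarise(), assuming dplyr >= 1.0:
library(dplyr)
data %>%
  group_by(Var1) %>%
  summarise(Slope = coef(lm(Temp ~ Time))[2], .groups = 'drop')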
If you do have two numeric variables, you can use do in order to calculate correlation too, for example (I will create some dummy numeric variables for illustration):
data %>%
  mutate(test1 = sample(1:3, n(), replace = TRUE), # Creating some numeric variables
         test2 = sample(1:3, n(), replace = TRUE)) %>%
  group_by(Var1) %>%
  do(mod = lm(Temp ~ Time, data = .),
     mod2 = cor(.$test1, .$test2, method = "pearson")) %>%
  mutate(Slope = summary(mod)$coeff[2],
         Pearson = mod2[1]) %>%
  select(-mod, -mod2)
# Source: local data frame [3 x 3]
# Groups: <by row>
#
# Var1 Slope Pearson
# 1 a 12.66667 0.25264558
# 2 b -2.50000 -0.09090909
# 3 c -31.33333 0.30151134
Bonus solution: you can do this quite efficiently and easily with the data.table package too
library(data.table)
setDT(data)[, list(Slope = summary(lm(Temp ~ Time))$coeff[2]), Var1]
# Var1 Slope
# 1: a 12.66667
# 2: b -2.50000
# 3: c -31.33333
Or if we want to create some dummy variables too
library(data.table)
setDT(data)[, `:=`(test1 = sample(1:3, .N, replace = TRUE),
                   test2 = sample(1:3, .N, replace = TRUE))][,
  list(Slope = summary(lm(Temp ~ Time))$coeff[2],
       Pearson = cor(test1, test2, method = "pearson")), Var1]
# Var1 Slope Pearson
# 1: a 12.66667 -0.02159168
# 2: b -2.50000 -0.81649658
# 3: c -31.33333 -1.00000000

Bind data frames on longer identifiers R

I've got two data frames that share a set of unique identifiers, but the number of observations per identifier differs between the frames. I would like to create a data frame that, for each common identifier, takes the observations from whichever frame has more of them. For example:
f1 <- data.frame(x = c("a", "a", "b", "c", "c", "c"), y = c(1,1,2,3,3,3))
f2 <- data.frame(x = c("a","b", "b", "c", "c"), y = c(4,5,5,6,6))
I would like this to generate a merge based on whichever frame has more rows for each x, such that it produces:
x y
a 1
a 1
b 5
b 5
c 3
c 3
c 3
Any and all thoughts would be great.
Here's a solution using split
dd<-rbind(cbind(f1, s="f1"), cbind(f2, s="f2"))
keep<-unsplit(lapply(split(dd$s, dd$x), FUN=function(x) {
y<-table(x)
x == names(y[which.max(y)])
}), dd$x)
dd <- dd[keep,]
Normally I'd prefer to use the ave function here, but because I'm changing data types from a factor to a logical it wasn't as appropriate, so I basically copied the idea that ave uses and used split.
dplyr solution
library(dplyr)
First we combine the data with rbind() and introduce a new variable called ref to record where each observation came from:
both <- rbind(f1, f2)
both$ref <- rep(c("f1", "f2"), c(nrow(f1), nrow(f2)))
Then we count the observations, adding another new variable that holds the number of observations for each ref and x combination:
both_with_counts <- both %>%
  group_by(ref, x) %>%
  mutate(counts = n())
Then we filter for the largest count:
both_with_counts %>% group_by(x) %>% filter(counts == max(counts))
Note: you could also keep only the x and y columns with select(x, y).
this gives:
## Source: local data frame [7 x 4]
## Groups: x
##
## x y ref counts
## 1 a 1 f1 2
## 2 a 1 f1 2
## 3 c 3 f1 3
## 4 c 3 f1 3
## 5 c 3 f1 3
## 6 b 5 f2 2
## 7 b 5 f2 2
Altogether now...
what_I_want <-
  rbind(cbind(f1, ref = "f1"), cbind(f2, ref = "f2")) %>%
  group_by(ref, x) %>%
  mutate(counts = n()) %>%
  group_by(x) %>%
  filter(counts == max(counts)) %>%
  select(x, y)
and thus:
> what_I_want
# Source: local data frame [7 x 2]
# Groups: x
#
# x y
# 1 a 1
# 2 a 1
# 3 c 3
# 4 c 3
# 5 c 3
# 6 b 5
# 7 b 5
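In current dplyr the same idea can be written more compactly with bind_rows(), add_count(), and filter(); a sketch, assuming dplyr >= 1.0:
library(dplyr)
bind_rows(f1 = f1, f2 = f2, .id = "ref") %>%  # .id records which frame a row came from
  add_count(ref, x, name = "counts") %>%      # observations per frame/identifier pair
  group_by(x) %>%
  filter(counts == max(counts)) %>%
  ungroup() %>%
  select(x, y)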
Not an elegant answer, but it still gives the desired result. Hope this helps.
f1table <- data.frame(table(f1$x))
colnames(f1table) <- c("x", "freq")
f1new <- merge(f1, f1table)
f2table <- data.frame(table(f2$x))
colnames(f2table) <- c("x", "freq")
f2new <- merge(f2, f2table)
table <- rbind(f1table, f2table)
table <- table[with(table, order(x, -freq)), ]
table <- table[!duplicated(table$x), ]
data <- rbind(f1new, f2new)
merge(data, table, by = c("x", "freq"))[, c(1, 3)]
x y
1 a 1
2 a 1
3 b 5
4 b 5
5 c 3
6 c 3
7 c 3
