There are hundreds of levels in a column and not all of them really add value: about 60% of the levels occur only rarely in the dataframe (together accounting for well under 20% of the rows) and are not expected to influence the outcome. The objective is to eliminate the levels that fall outside the first 80% of the rows.
Could someone help? Thanks in advance
Here is a simple process that spots the values falling outside the first 80% of the rows (by cumulative share) and groups them together under a new value. This process uses a character column and not a factor column.
library(dplyr)
# example dataset
dt = data.frame(type = c("A","A","A","B","B","B","c","D"),
                value = 1:8, stringsAsFactors = FALSE)
dt
# type value
# 1 A 1
# 2 A 2
# 3 A 3
# 4 B 4
# 5 B 5
# 6 B 6
# 7 c 7
# 8 D 8
# count number of rows for each type
dt %>% count(type)
# # A tibble: 4 x 2
# type n
# <chr> <int>
# 1 A 3
# 2 B 3
# 3 c 1
# 4 D 1
# add cumulative percentages
dt %>%
  count(type) %>%
  mutate(Prc = n/sum(n),
         CumPrc = cumsum(Prc))
# # A tibble: 4 x 4
# type n Prc CumPrc
# <chr> <int> <dbl> <dbl>
# 1 A 3 0.375 0.375
# 2 B 3 0.375 0.750
# 3 c 1 0.125 0.875
# 4 D 1 0.125 1.000
# pick the types you want to group together
dt %>%
  count(type) %>%
  mutate(Prc = n/sum(n),
         CumPrc = cumsum(Prc)) %>%
  filter(CumPrc > 0.80) %>%
  pull(type) -> types_to_group
# group them
dt %>% mutate(type_upd = ifelse(type %in% types_to_group, "Rest", type))
# type value type_upd
# 1 A 1 A
# 2 A 2 A
# 3 A 3 A
# 4 B 4 B
# 5 B 5 B
# 6 B 6 B
# 7 c 7 Rest
# 8 D 8 Rest
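As an aside, if the column is (or can be converted to) a factor, the forcats package has ready-made lumping helpers. Note the criterion differs slightly from the above: fct_lump_prop lumps each level whose own share falls below a threshold, rather than cutting at a cumulative 80%. A minimal sketch:
library(forcats)
# lump every level that individually accounts for less than 20% of rows
# (here "c" and "D", at 12.5% each) into a new "Rest" level
fct_lump_prop(factor(dt$type), prop = 0.2, other_level = "Rest")
# [1] A    A    A    B    B    B    Rest Rest
# Levels: A B Rest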
I need to summarize a data.frame across multiple columns in a generic way:
the first summarize operation is straightforward, e.g. a simple median;
the second summarize then includes a condition on another column, e.g. taking the value where there is a minimum (by group) in another column:
set.seed(4)
myDF = data.frame(i = rep(1:3, each=3),
                  j = rnorm(9),
                  a = sample.int(9),
                  b = sample.int(9),
                  c = sample.int(9),
                  d = 'foo')
# i j a b c d
# 1 1 0.2167549 4 5 5 foo
# 2 1 -0.5424926 7 7 4 foo
# 3 1 0.8911446 3 9 1 foo
# 4 2 0.5959806 8 6 8 foo
# 5 2 1.6356180 6 8 3 foo
# 6 2 0.6892754 1 4 6 foo
# 7 3 -1.2812466 9 1 7 foo
# 8 3 -0.2131445 5 2 2 foo
# 9 3 1.8965399 2 3 9 foo
myDF %>%
  group_by(i) %>%
  summarize(across(where(is.numeric), median, .names="med_{col}"),
            best_a = a[[which.min(j)]],
            best_b = b[[which.min(j)]],
            best_c = c[[which.min(j)]])
# # A tibble: 3 x 8
# i med_j med_a med_b med_c best_a best_b best_c
# * <int> <dbl> <int> <int> <int> <int> <int> <int>
# 1 1 0.217 4 7 4 7 7 4
# 2 2 0.689 6 6 6 8 6 8
# 3 3 -0.213 5 2 7 9 1 7
How can I define this second summarize operation in a generic way (i.e., not manually as done above)?
Hence I would need something like this (which obviously does not work as j is not recognized):
myfns = list(med = ~median(.),
             best = ~.[[which.min(j)]])
myDF %>%
  group_by(i) %>%
  summarize(across(where(is.numeric), myfns, .names="{fn}_{col}"))
# Error: Problem with `summarise()` input `..1`.
# x object 'j' not found
# ℹ Input `..1` is `across(where(is.numeric), myfns, .names = "{fn}_{col}")`.
# ℹ The error occurred in group 1: i = 1.
Use another across to get the corresponding values in columns a:c where j is at its minimum.
library(dplyr)
myDF %>%
  group_by(i) %>%
  summarize(across(where(is.numeric), median, .names = "med_{col}"),
            across(a:c, ~ .[which.min(j)], .names = "best_{col}"))
# i med_j med_a med_b med_c best_a best_b best_c
#* <int> <dbl> <int> <int> <int> <int> <int> <int>
#1 1 0.217 4 7 4 7 7 4
#2 2 0.689 6 6 6 8 6 8
#3 3 -0.213 5 2 7 9 1 7
To do it in the same across statement:
myDF %>%
  group_by(i) %>%
  summarize(across(where(is.numeric),
                   list(med = median,
                        best = ~ .[which.min(j)]),
                   .names = "{fn}_{col}"))
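Note that this single-across version also yields a best_j column, which is just the per-group minimum of j (since j[which.min(j)] equals min(j)); drop it afterwards with select(-best_j) if it is not wanted.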
Consider the following two tibbles:
library(tidyverse)
a <- tibble(time = c(-1, 0), value = c(100, 200))
b <- tibble(id = rep(letters[1:2], each = 3), time = rep(1:3, 2), value = 1:6)
So a and b have the same columns and b has an additional column called id.
I want to do the following: group b by id and then add tibble a on top of each group.
So the output should look like this:
# A tibble: 10 x 3
id time value
<chr> <int> <int>
1 a -1 100
2 a 0 200
3 a 1 1
4 a 2 2
5 a 3 3
6 b -1 100
7 b 0 200
8 b 1 4
9 b 2 5
10 b 3 6
Of course there are multiple workarounds to achieve this (like loops for example). But in my case I have a large number of IDs and a very large number of columns.
I would be thankful if anyone could point me in the direction of a solution within the tidyverse.
Thank you
We can expand the data frame a with id from b and then bind_rows them together.
library(tidyverse)
a2 <- expand(a, id = b$id, nesting(time, value))
b2 <- bind_rows(a2, b) %>% arrange(id, time)
b2
# # A tibble: 10 x 3
# id time value
# <chr> <dbl> <dbl>
# 1 a -1 100
# 2 a 0 200
# 3 a 1 1
# 4 a 2 2
# 5 a 3 3
# 6 b -1 100
# 7 b 0 200
# 8 b 1 4
# 9 b 2 5
# 10 b 3 6
split from base R will divide a data frame into a list of subsets based on an index.
b %>%
  split(b[["id"]]) %>%
  lapply(bind_rows, a) %>%
  lapply(select, -"id") %>%
  bind_rows(.id = "id")
# # A tibble: 10 x 3
# id time value
# <chr> <dbl> <dbl>
# 1 a 1 1
# 2 a 2 2
# 3 a 3 3
# 4 a -1 100
# 5 a 0 200
# 6 b 1 4
# 7 b 2 5
# 8 b 3 6
# 9 b -1 100
# 10 b 0 200
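Note that within each id the rows of a come last here, because bind_rows(subset, a) appends a to each subset; pipe the result into arrange(id, time) to reproduce the ordering requested in the question.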
An idea (via base R) is to split your data frame by id, prepend a (carrying the id along) to each subset, and rbind everything back together, i.e.
df = do.call(rbind, lapply(split(b, b$id),
                           function(i) rbind(data.frame(id = i$id[1], a), i)))
which gives
id time value
a.1 a -1 100
a.2 a 0 200
a.3 a 1 1
a.4 a 2 2
a.5 a 3 3
b.1 b -1 100
b.2 b 0 200
b.3 b 1 4
b.4 b 2 5
b.5 b 3 6
NOTE: You can remove the rownames by simply calling rownames(df) <- NULL
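A side note on scaling: with a very large number of IDs, do.call(rbind, ...) over many small data frames can get slow; feeding the same list to dplyr::bind_rows() is usually faster for long lists.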
We can nest and add the relevant rows to each nested item:
library(tidyverse)
b %>%
  nest(data = -id) %>%
  mutate(data = map(data, ~ bind_rows(a, .x))) %>%
  unnest(data)
# # A tibble: 10 x 3
# id time value
# <chr> <dbl> <dbl>
# 1 a -1 100
# 2 a 0 200
# 3 a 1 1
# 4 a 2 2
# 5 a 3 3
# 6 b -1 100
# 7 b 0 200
# 8 b 1 4
# 9 b 2 5
# 10 b 3 6
Maybe not the most efficient way, but easy to follow:
library(tidyverse)
a <- tibble(time = c(-1, 0), value = c(100, 200))
b <- tibble(id = rep(letters[1:2], each = 3), time = rep(1:3, 2), value = 1:6)
a.a <- a %>% add_column(id = rep("a", nrow(a)))
a.b <- a %>% add_column(id = rep("b", nrow(a)))
joint <- bind_rows(b,a.a,a.b)
(joint <- arrange(joint,id))
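Since the question mentions a large number of IDs, hardcoding one add_column() call per id will not scale. A sketch of the same idea made generic with tidyr::crossing, which replicates a once per distinct id before stacking:
library(tidyverse)
# cross every distinct id in b with the rows of a, then stack with b
crossing(id = unique(b$id), a) %>%
  bind_rows(b) %>%
  arrange(id, time)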
Is there a way to replicate the rows of a Spark's dataframe using the functions of sparklyr/dplyr?
sc <- spark_connect(master = "spark://####:7077")
df_tbl <- copy_to(sc, data.frame(row1 = 1:3, row2 = LETTERS[1:3]), "df")
This is the desired output, saved into a new spark tbl:
> df2_tbl
row1 row2
<int> <chr>
1 1 A
2 1 A
3 1 A
4 2 B
5 2 B
6 2 B
7 3 C
8 3 C
9 3 C
With sparklyr you can use array and explode as suggested by @Oli:
df_tbl %>%
  mutate(arr = explode(array(1, 1, 1))) %>%
  select(-arr)
# # Source: lazy query [?? x 2]
# # Database: spark_connection
# row1 row2
# <int> <chr>
# 1 1 A
# 2 1 A
# 3 1 A
# 4 2 B
# 5 2 B
# 6 2 B
# 7 3 C
# 8 3 C
# 9 3 C
and generalized
library(rlang)
df_tbl %>%
  mutate(arr = !!rlang::parse_quo(
    paste("explode(array(", paste(rep(1, 3), collapse = ","), "))")
  )) %>%
  select(-arr)
# # Source: lazy query [?? x 2]
# # Database: spark_connection
# row1 row2
# <int> <chr>
# 1 1 A
# 2 1 A
# 3 1 A
# 4 2 B
# 5 2 B
# 6 2 B
# 7 3 C
# 8 3 C
# 9 3 C
where you can easily adjust the number of rows.
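To make the number of replications a proper parameter, the parse step can be wrapped in a small helper. A sketch (the function name replicate_rows is my own; note that recent rlang versions expect an explicit env argument for parse_quo):
library(rlang)
# hypothetical helper: repeat every row of a Spark tbl n times
replicate_rows <- function(tbl, n) {
  sql <- paste("explode(array(", paste(rep(1, n), collapse = ","), "))")
  tbl %>%
    mutate(arr = !!parse_quo(sql, env = caller_env())) %>%
    select(-arr)
}
df_tbl %>% replicate_rows(3)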
The idea that comes to mind first is to use the explode function (it is exactly what it is meant for in Spark). Yet arrays do not seem to be supported in SparkR (to the best of my knowledge).
> structField("a", "array")
Error in checkType(type) : Unsupported type for SparkDataframe: array
I can however propose two other methods:
A straightforward but not very elegant one:
head(rbind(df, df, df), n=30)
# row1 row2
# 1 1 A
# 2 2 B
# 3 3 C
# 4 1 A
# 5 2 B
# 6 3 C
# 7 1 A
# 8 2 B
# 9 3 C
Or with a for loop for more generality:
df2 = df
for(i in 1:2) df2=rbind(df, df2)
Note that this would also work with union.
The second, more elegant method (because it involves only one Spark operation) is based on a cross join (Cartesian product) with a data frame of size 3 (or any other number):
j <- as.DataFrame(data.frame(s=1:3))
head(drop(crossJoin(df, j), "s"), n=100)
# row1 row2
# 1 1 A
# 2 1 A
# 3 1 A
# 4 2 B
# 5 2 B
# 6 2 B
# 7 3 C
# 8 3 C
# 9 3 C
I am not aware of a cluster-side version of R's rep function. We can however use a join to emulate it cluster-side.
df_tbl <- copy_to(sc, data.frame(row1 = 1:3, row2 = LETTERS[1:3]), "df")
replyr <- function(data, n, sc){
  joiner_frame <- copy_to(sc, data.frame(joiner_index = rep(1, n)),
                          "tmp_joining_frame", overwrite = TRUE)
  data %>%
    mutate(joiner_index = 1) %>%
    left_join(joiner_frame) %>%
    select(-joiner_index)
}
df_tbl2 <- replyr(df_tbl, 3, sc)
# row1 row2
# <int> <chr>
# 1 1 A
# 2 1 A
# 3 1 A
# 4 2 B
# 5 2 B
# 6 2 B
# 7 3 C
# 8 3 C
# 9 3 C
It gets the job done, but it is a bit dirty since the tmp_joining_frame will persist. I'm not sure how well this will work given lazy evaluation on multiple calls to the function.
I'd like to calculate relative changes of measured variables in a data.frame by group with dplyr.
The changes are with respect to a first baseline value at time==0.
I can easily do this in the following example:
# with this easy example it works
df.easy <- data.frame( id  =c(1,1,1,2,2,2)
                      ,time=c(0,1,2,0,1,2)
                      ,meas=c(5,6,9,4,5,6))
df.easy %>%
  dplyr::group_by(id) %>%
  dplyr::mutate(meas.relative = meas/meas[time==0])
# Source: local data frame [6 x 4]
# Groups: id [2]
#
# id time meas meas.relative
# <dbl> <dbl> <dbl> <dbl>
# 1 1 0 5 1.00
# 2 1 1 6 1.20
# 3 1 2 9 1.80
# 4 2 0 4 1.00
# 5 2 1 5 1.25
# 6 2 2 6 1.50
However, when there are IDs with no measurement at time==0, this doesn't work.
A similar question is this, but I'd like to get an NA as a result instead of simply taking the first occurence as baseline.
# how to output NA in case there are IDs with no measurement at time==0?
df <- data.frame( id  =c(1,1,1,2,2,2,3,3)
                 ,time=c(0,1,2,0,1,2,1,2)
                 ,meas=c(5,6,9,4,5,6,5,6))
# same approach now gives an error:
df %>% dplyr::group_by(id) %>% dplyr::mutate(meas.relative = meas/meas[time==0])
# Error in mutate_impl(.data, dots) :
# incompatible size (0), expecting 2 (the group size) or 1
Let's try to return NA in case no measurement at time==0 was taken, using ifelse:
df %>%
  dplyr::group_by(id) %>%
  dplyr::mutate(meas.relative = ifelse(any(time==0), meas/meas[time==0], NA))
# Source: local data frame [8 x 4]
# Groups: id [3]
#
# id time meas meas.relative
# <dbl> <dbl> <dbl> <dbl>
# 1 1 0 5 1
# 2 1 1 6 1
# 3 1 2 9 1
# 4 2 0 4 1
# 5 2 1 5 1
# 6 2 2 6 1
# 7 3 1 5 NA
# 8 3 2 6 NA
Wait, why is the relative measurement above always 1?
identical(
df %>% dplyr::group_by(id) %>% dplyr::mutate(meas.relative = ifelse(any(time==0), meas, NA) ),
df %>% dplyr::group_by(id) %>% dplyr::mutate(meas.relative = ifelse(any(time==0), meas[time==0], NA) )
)
# TRUE
It seems that the ifelse prevents meas from picking the current line and instead always selects the subset where time==0.
How can I calculate relative changes when there are IDs with no baseline measurement?
Your issue was in the ifelse(). According to the ifelse documentation it returns "A vector of the same length...as test". Since any(time==0) is of length 1 for each group (TRUE or FALSE) only the first observation of the meas / meas[time==0] was being selected. This was then repeated to fill each group.
To fix this, all I did was rep() the any() result to the length of the group. I believe this should work:
df %>%
  dplyr::group_by(id) %>%
  dplyr::mutate(meas.relative = ifelse(rep(any(time==0), times = n()),
                                       meas/meas[time==0], NA))
# id time meas meas.relative
# <dbl> <dbl> <dbl> <dbl>
# 1 1 0 5 1.00
# 2 1 1 6 1.20
# 3 1 2 9 1.80
# 4 2 0 4 1.00
# 5 2 1 5 1.25
# 6 2 2 6 1.50
# 7 3 1 5 NA
# 8 3 2 6 NA
To see how this was working incorrectly in your case try:
ifelse(TRUE,c(1,2,3),NA)
#[1] 1
Edit: A data.table solution with the same concept:
as.data.table(df)[, meas.rel := ifelse(rep(any(time==0), .N),
                                       meas/meas[time==0], NA_real_),
                  by = id]
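An alternative that sidesteps ifelse() entirely: since any(time==0) is a single TRUE/FALSE per group, a plain if/else works inside a grouped mutate, the length-1 NA_real_ being recycled to the group size (a sketch, assuming a reasonably recent dplyr):
df %>%
  dplyr::group_by(id) %>%
  dplyr::mutate(meas.relative = if (any(time == 0)) meas/meas[time == 0] else NA_real_)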
I need to add a new row to each id group, where key = "n" and value is total - (a + b).
x <- data_frame(id    = c(1,1,1,2,2,2,2),
                key   = c("a","b","total","a","x","b","total"),
                value = c(1,2,10,4,1,3,12))
# A tibble: 7 × 3
id key value
<dbl> <chr> <dbl>
1 1 a 1
2 1 b 2
3 1 total 10
4 2 a 4
5 2 x 1
6 2 b 3
7 2 total 12
In this example, the new rows should be
1 n 7
2 n 5
I tried getting the a+b subtotal and joining that to the total count to get the difference, but after using nine dplyr verbs I seem to be going in the wrong direction. Thanks.
This isn't a join, it's just binding new rows on:
x %>%
  group_by(id) %>%
  summarize(
    value = sum(value[key == 'total']) - sum(value[key %in% c('a', 'b')]),
    key = 'n'
  ) %>%
  bind_rows(x) %>%
  select(id, key, value) %>% # back to original column order
  arrange(id, key)           # and a start at a row order
# # A tibble: 9 × 3
# id key value
# <dbl> <chr> <dbl>
# 1 1 a 1
# 2 1 b 2
# 3 1 n 7
# 4 1 total 10
# 5 2 a 4
# 6 2 b 3
# 7 2 n 5
# 8 2 total 12
# 9 2 x 1
Here's a way using data.table, binding rows as in Gregor's answer:
library(data.table)
setDT(x)
dcast(x, id ~ key)[, .(id, key = "n", value = total - a - b)][, rbind(.SD, x)][order(id)]
id key value
1: 1 n 7
2: 1 a 1
3: 1 b 2
4: 1 total 10
5: 2 n 5
6: 2 a 4
7: 2 x 1
8: 2 b 3
9: 2 total 12
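For comparison, the same result can be sketched with a tidyr reshape (assuming tidyr >= 1.0 for pivot_wider/pivot_longer; values_drop_na discards the key x cell that id 1 never had):
library(tidyr)
x %>%
  pivot_wider(names_from = key, values_from = value) %>%
  mutate(n = total - a - b) %>%
  pivot_longer(-id, names_to = "key", values_to = "value",
               values_drop_na = TRUE) %>%
  arrange(id, key)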