I have a data.table:
library(data.table)
DT <- data.table(a = c(1,2,3,4,5), b = c(4,5,6,7,8), c = c("X","X","X","Y","Y") )
I want to add a column d, computed within each group of column c:
the first row's value of d should equal b[1],
and for the second through last rows within each group, d[i] = d[i-1] + 2*b[i].
Intended results:
a b c d
1: 1 4 X 4
2: 2 5 X 14
3: 3 6 X 26
4: 4 7 Y 7
5: 5 8 Y 23
I tried functions such as shift, but I struggle to update rows dynamically (so to speak) here.
Is there an elegant data.table-style solution?
We can use cumsum and subtract b[1]: since d[1] = b[1] = 2*b[1] - b[1], and every later row adds 2*b[i], the whole column is cumsum(2 * b) - b[1] within each group:
DT[, d := cumsum(2 * b) - b[1], .(c)][]
#> a b c d
#> 1: 1 4 X 4
#> 2: 2 5 X 14
#> 3: 3 6 X 26
#> 4: 4 7 Y 7
#> 5: 5 8 Y 23
Here we can use accumulate from purrr, carrying the running value .x forward and adding 2 * .y at each step:
library(purrr)
library(data.table)
DT[, d := accumulate(b, ~ .x + 2 * .y), by = c]
Or with Reduce and accumulate = TRUE from base R
DT[, d := Reduce(function(x, y) x + 2 * y, b, accumulate = TRUE), by = c]
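To see the recurrence at work, here is what Reduce computes on the b values of group X alone (a small illustration, not part of the original answer):
# start at b[1] = 4, then add 2 * b[i] at each step: 4, 4 + 10, 14 + 12
Reduce(function(x, y) x + 2 * y, c(4, 5, 6), accumulate = TRUE)
#> [1]  4 14 26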
I want to recursively filter a dataframe d by an arbitrary number of conditions (represented as rows in another dataframe z).
I begin with a dataframe d:
d <- data.frame(x = 1:10, y = letters[1:10])
The second dataframe, z, has columns x1 and x2, which are the lower and upper limits used to filter d$x. This dataframe z may grow to an arbitrary number of rows.
z <- data.frame(x1 = c(1,3,8), x2 = c(1,4,10))
I want to return all rows of d whose x falls outside every interval [z$x1[i], z$x2[i]], for i = 1, ..., nrow(z); that is, drop any row where z$x1[i] <= d$x <= z$x2[i] for some i.
So for this toy example, exclude x values in 1:1, 3:4, and 8:10, inclusive.
x y
2 2 b
5 5 e
6 6 f
7 7 g
We can create a sequence between the x1 and x2 values and use anti_join to keep the rows of d whose x is not present in any of the expanded ranges.
library(tidyverse)
remove <- z %>%
  mutate(x = map2(x1, x2, seq)) %>%
  unnest(x) %>%
  select(x)
anti_join(d, remove)
# x y
#1 2 b
#2 5 e
#3 6 f
#4 7 g
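To see what gets excluded, the expanded helper table contains one row per covered x value (shown for illustration):
remove$x
# [1]  1  3  4  8  9 10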
We can use a non-equi join to get the indices of the rows of d that fall inside any interval:
library(data.table)
i1 <- setDT(d)[z, .I, on = .(x >= x1, x <= x2), by = .EACHI]$I
i1
#[1] 1 3 4 8 9 10
d[i1]
# x y
#1: 1 a
#2: 3 c
#3: 4 d
#4: 8 h
#5: 9 i
#6: 10 j
d[!i1]
# x y
#1: 2 b
#2: 5 e
#3: 6 f
#4: 7 g
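Equivalently, the unmatched rows can be pulled in one step with a non-equi anti-join, using ! on the i argument (a compact variant of the same idea):
# rows of d whose x is not inside any [x1, x2] interval of z
d[!z, on = .(x >= x1, x <= x2)]
#    x y
# 1: 2 b
# 2: 5 e
# 3: 6 f
# 4: 7 g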
Or using fuzzyjoin
library(fuzzyjoin)
library(dplyr)
fuzzy_inner_join(d, z, by = c('x' = 'x1', 'x' = 'x2'),
match_fun = list(`>=`, `<=`)) %>%
select(names(d))
# A tibble: 6 x 2
# x y
# <int> <fct>
#1 1 a
#2 3 c
#3 4 d
#4 8 h
#5 9 i
#6 10 j
Or, to get the rows of d that do not fall in any interval:
fuzzy_anti_join(d, z, by = c('x' = 'x1', 'x' = 'x2'),
match_fun = list(`>=`, `<=`)) %>%
select(names(d))
# A tibble: 4 x 2
# x y
# <int> <fct>
#1 2 b
#2 5 e
#3 6 f
#4 7 g
So I am trying to learn data.table and came across the .SD notation in a cheat sheet online (link). The example uses square brackets with .SD to subset rows. But why not just subset rows with i? .SD[c(1, .N)] subsets rows, right? And why should I subset rows like this?
library(data.table)
DT <- data.table(A = letters[c(1, 1, 1, 2, 2)],
B = 1:5,
C = 6:10)
DT
#> A B C
#> 1: a 1 6
#> 2: a 2 7
#> 3: a 3 8
#> 4: b 4 9
#> 5: b 5 10
# Method 1
DT[, .SD[c(1, .N)], by = A]
#> A B C
#> 1: a 1 6
#> 2: a 3 8
#> 3: b 4 9
#> 4: b 5 10
# method 2
DT[c(1, .N), .SD, by = A]
#> A B C
#> 1: a 1 6
#> 2: b 5 10
In the second case, we pass the index through i, where .N is the last row of the whole table, so only rows 1 and nrow(DT) are selected before any grouping happens. In the first case, .SD[c(1, .N)] runs inside j, where .N is the size of the current group, so it returns the first and last row of each group.
DT[c(1, .N)]
is similar to
DT[c(1, .N), .SD, by = A]
The only difference is that in the latter the rows selected in i are then processed with the grouping info by 'A'.
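A quick check, using the same DT as above, confirms that i is evaluated on the whole table:
DT[c(1, .N)]
#>    A B  C
#> 1: a 1  6
#> 2: b 5 10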
I have a very large data.table that I want to trim down in this fashion:
only one row per unique id;
if there is any art value other than "X" in the same log, that other value should stay;
if there is only "X", then the first "X" should stay;
if there is more than one value other than "X", then all of those should stay, separated by commas, but without the "X".
Sample dataset:
library(data.table)
dt <- data.table(
id=c(1,1,2,3,3,4,4,4,5,5),
log=c(11,11,11,12,12,12,12,12,13,13),
art=c("X", "Y", "X", "X", "X", "Z", "X", "Y","X", "X")
)
dt
id log art
1: 1 11 X
2: 1 11 Y
3: 2 11 X
4: 3 12 X
5: 3 12 X
6: 4 12 Z
7: 4 12 X
8: 4 12 Y
9: 5 13 X
10: 5 13 X
Required output:
id log art
1 11 Y
2 11 Y
3 12 Z,Y
4 12 Z,Y
5 13 X
Here is one method, though there may be a more efficient approach.
unique(dt[, .(id, log)])[
  dt[, .(art = if (.N == 1 | all(art == "X")) art[1]
              else toString(unique(art[art != "X"]))),
     by = log],
  on = "log"]
which returns
id log art
1: 1 11 Y
2: 2 11 Y
3: 3 12 Z, Y
4: 4 12 Z, Y
5: 5 13 X
This performs a left join of the desired values of art, computed per log, onto the unique pairs of id and log. It assumes that no id spans two logs, which is the case in the example.
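To make the two pieces visible, here is the inner per-log aggregation on its own (an illustrative decomposition of the answer above):
dt[, .(art = if (.N == 1 | all(art == "X")) art[1]
            else toString(unique(art[art != "X"]))),
   by = log]
   log  art
1:  11    Y
2:  12 Z, Y
3:  13    X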
We can try grouping by both id and log:
dt[, .(art = if (all(art == "X")) "X" else
         toString(unique(art[art != "X"]))),
   .(id, logbld = log)]
# id logbld art
#1: 1 11 Y
#2: 2 11 X
#3: 3 12 X
#4: 4 12 Z, Y
#5: 5 13 X
Just wanted to try this with dplyr:
library(data.table)
library(dplyr)
dat <- setDT(dt %>%
  group_by(id) %>%
  unique() %>%
  summarise(bldlog = mean(log),
            art = gsub("X,|,X", "", paste(art, collapse = ","))))
dat
# id bldlog art
# 1: 1 11 Y
# 2: 2 11 X
# 3: 3 12 X
# 4: 4 12 Z,Y
# 5: 5 13 X
There are many answers for how to split a dataframe, for example How to split a data frame?
However, I'd like to split a dataframe so that the smaller dataframes contain the last row of the previous dataframe and the first row of the following dataframe.
Here's an example
n <- 1:9
group <- rep(c("a","b","c"), each = 3)
df <- data.frame(n = n, group)
df
n group
1 1 a
2 2 a
3 3 a
4 4 b
5 5 b
6 6 b
7 7 c
8 8 c
9 9 c
I'd like the output to look like:
d1 <- data.frame(n = 1:4, group = c(rep("a",3),"b"))
d2 <- data.frame(n = 3:7, group = c("a",rep("b",3),"c"))
d3 <- data.frame(n = 6:9, group = c("b",rep("c",3)))
d <- list(d1, d2, d3)
d
[[1]]
n group
1 1 a
2 2 a
3 3 a
4 4 b
[[2]]
n group
1 3 a
2 4 b
3 5 b
4 6 b
5 7 c
[[3]]
n group
1 6 b
2 7 c
3 8 c
4 9 c
What is an efficient way to accomplish this task?
Suppose DF is the original data.frame, the one with columns n and group, and let n be the number of rows in DF. Define a function extract which, given a sequence of indexes ix, enlarges it to include the index just before the first and just after the last, and then returns those rows of DF. With extract defined, split the vector 1, ..., n by group and apply extract to each component of the split.
n <- nrow(DF)
extract <- function(ix) DF[seq(max(1, min(ix) - 1), min(n, max(ix) + 1)), ]
lapply(split(seq_len(n), DF$group), extract)
$a
n group
1 1 a
2 2 a
3 3 a
4 4 b
$b
n group
3 3 a
4 4 b
5 5 b
6 6 b
7 7 c
$c
n group
6 6 b
7 7 c
8 8 c
9 9 c
Or why not try good ol' by, which "[a]ppl[ies] a Function to a Data Frame Split by Factors [INDICES]":
by(data = df, INDICES = df$group, function(x){
id <- c(min(x$n) - 1, x$n, max(x$n) + 1)
na.omit(df[id, ])
})
# df$group: a
# n group
# 1 1 a
# 2 2 a
# 3 3 a
# 4 4 b
# --------------------------------------------------------------------------------
# df$group: b
# n group
# 3 3 a
# 4 4 b
# 5 5 b
# 6 6 b
# 7 7 c
# --------------------------------------------------------------------------------
# df$group: c
# n group
# 6 6 b
# 7 7 c
# 8 8 c
# 9 9 c
Although the print method of by creates a 'fancy' output, the (default) result is a list, with elements named by the levels of the grouping variable (just try str and names on the resulting object).
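For instance, storing the result first (res is just a name introduced here for illustration):
res <- by(data = df, INDICES = df$group, function(x){
  id <- c(min(x$n) - 1, x$n, max(x$n) + 1)
  na.omit(df[id, ])
})
class(res)
# [1] "by"
names(res)
# [1] "a" "b" "c"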
I was going to comment under @cdeterman's answer but it's too late now.
You can generalize his approach using data.table::shift (or dplyr::lag) to find the indices where the group changes, and then run a simple lapply on the ranges, something like
library(data.table) # v1.9.6+
indx <- setDT(df)[, which(group != shift(group, fill = group[1]))]
lapply(Map(`:`, c(1L, indx - 1L), c(indx, nrow(df))), function(x) df[x,])
# [[1]]
# n group
# 1: 1 a
# 2: 2 a
# 3: 3 a
# 4: 4 b
#
# [[2]]
# n group
# 1: 3 a
# 2: 4 b
# 3: 5 b
# 4: 6 b
# 5: 7 c
#
# [[3]]
# n group
# 1: 6 b
# 2: 7 c
# 3: 8 c
# 4: 9 c
This could be done with a data.frame as well, but is there ever a reason not to use data.table? The approach also has the option of being executed in parallel; see the sketch after the output below.
library(data.table)
n <- 1:9
group <- rep(c("a","b","c"), each = 3)
df <- data.table(n = n, group)
df[, group := factor(group)]
df[, `:=`(group_i = seq_len(.N), group_N = .N), by = "group"]
library(doParallel)
groups <- unique(df$group)
foreach(i = seq(groups)) %do% {
  df[group == groups[i] |
       (as.integer(group) == i + 1 & group_i == 1) |
       (as.integer(group) == i - 1 & group_i == group_N),
     c("n", "group"), with = FALSE]
}
[[1]]
n group
1: 1 a
2: 2 a
3: 3 a
4: 4 b
[[2]]
n group
1: 3 a
2: 4 b
3: 5 b
4: 6 b
5: 7 c
[[3]]
n group
1: 6 b
2: 7 c
3: 8 c
4: 9 c
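To actually run this in parallel, register a backend and swap %do% for %dopar% (a sketch using the same df and groups as above):
registerDoParallel(cores = 2)  # register a parallel backend for foreach
foreach(i = seq(groups)) %dopar% {
  df[group == groups[i] |
       (as.integer(group) == i + 1 & group_i == 1) |
       (as.integer(group) == i - 1 & group_i == group_N),
     c("n", "group"), with = FALSE]
}
stopImplicitCluster()  # clean up the workers when done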
Here is another dplyr way:
library(dplyr)
data =
  data_frame(n = n, group) %>%
  group_by(group)

firsts =
  data %>%
  slice(1) %>%
  ungroup %>%
  mutate(new_group = lag(group)) %>%
  slice(-1)

lasts =
  data %>%
  slice(n()) %>%
  ungroup %>%
  mutate(new_group = lead(group)) %>%
  slice(-n())

bind_rows(firsts, data, lasts) %>%
  mutate(final_group = ifelse(is.na(new_group), group, new_group)) %>%
  arrange(final_group, n) %>%
  group_by(final_group)
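Note that this yields a single data frame grouped by final_group rather than a list; if a list like the other answers is wanted, one could append dplyr's group_split() as a final step (a suggestion, not part of the original answer):
bind_rows(firsts, data, lasts) %>%
  mutate(final_group = ifelse(is.na(new_group), group, new_group)) %>%
  arrange(final_group, n) %>%
  group_by(final_group) %>%
  group_split()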
Suppose I have a dataframe:
x y
a 1
b 2
a 3
a 4
b 5
c 6
a 7
d 8
a 9
b 10
e 12
b 13
c 15
I want to create another dataframe that includes only the x values that occur at least 3 times (a and b, in this case), and their highest corresponding y values.
So I want the output as:
x y
a 9
b 13
Here 9 and 13 are the highest y values for a and b, respectively.
I tried using:
sort-(table(x,y))
but it did not work.
The data.table package is great for this. If df is the original data, you can do
library(data.table)
setDT(df)[, .(y = max(y)[.N >= 3]), by=x]
# x y
# 1: a 9
# 2: b 13
.N is an integer that tells us how many rows are in each group (here the groups are set by x). So we subset max(y) with the logical condition .N >= 3: when it is TRUE we get max(y), and when it is FALSE the result has length zero, so the group is dropped from the output.
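An equivalent formulation returns NULL for the small groups, which data.table also drops (an alternative phrasing of the same idea, not from the original answer):
setDT(df)[, if (.N >= 3) .(y = max(y)), by = x]
#    x  y
# 1: a  9
# 2: b 13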
Here's one way, using subset to omit any x values that occur fewer than 3 times, and then aggregate to find the maximum y value by group:
d <- read.table(text='x y
a 1
b 2
a 3
a 4
b 5
c 6
a 7
d 8
a 9
b 10
e 12
b 13
c 15', header=TRUE)
with(subset(d, x %in% names(which(table(d$x) >= 3))),
     aggregate(list(y = y), list(x = x), max))
# x y
# 1 a 9
# 2 b 13
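Breaking that down, table counts the occurrences of each x, and names(which(...)) keeps the values appearing at least three times (shown for reference):
table(d$x)
#
# a b c d e
# 5 4 2 1 1
names(which(table(d$x) >= 3))
# [1] "a" "b"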
And for good measure, a dplyr approach:
library(dplyr)
d %>%
  group_by(x) %>%
  filter(n() >= 3) %>%
  summarise(max(y))
# Source: local data frame [2 x 2]
#
# x max(y)
# 1 a 9
# 2 b 13