dplyr: add rows within group_by groups

Is there a better way to add rows within group_by() groups than using bind_rows()? Here's an example that's a little clunky:
df <- data.frame(a = c(1, 1, 1, 2, 2), b = 1:5)
df %>%
  group_by(a) %>%
  do(bind_rows(data.frame(a = .$a[1], b = 0), ., data.frame(a = .$a[1], b = 10)))
The idea is that values for the columns we're already grouping on could be inferred from the groups.
I was wondering whether something like this could work instead:
df %>%
  group_by(a) %>%
  insert(b = 0, .at = 0) %>%
  insert(b = 10)
Like append(), it could default to inserting after all existing rows, and it could be smart enough to fill in the group values for any unspecified grouping columns, perhaps using NA for unspecified non-grouping columns.
Is there an existing convenient syntax I've missed, or would this be helpful?
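For what it's worth, here is a minimal sketch of what such a verb could look like today, built on group_modify(); the insert() name and its .at argument are hypothetical, not existing dplyr API, and this version only distinguishes prepending (.at = 0) from the default appending:
library(dplyr)
insert <- function(.data, ..., .at = Inf) {
  new_row <- tibble::tibble(...)  # unspecified non-grouping columns become NA on bind
  group_modify(.data, function(g, key) {
    # group_modify() strips the grouping columns here and adds them back to the
    # result, so the new row inherits the group's values automatically
    if (.at <= 0) bind_rows(new_row, g) else bind_rows(g, new_row)
  })
}
df %>%
  group_by(a) %>%
  insert(b = 0, .at = 0) %>%
  insert(b = 10) %>%
  ungroup()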

Here's an approach using data.table:
library(data.table)
setDT(df)
rbind(df, expand.grid(b = c(0, 10), a = df[ , unique(a)]))[order(a, b)]
Depending on your actual context, this much simpler alternative would work too:
df[ , .(b = c(0, b, 10)), by = a]
(and we can simply use c(0, b, 10) in j if we don't care about keeping the name b)
The former has the advantage that it will work even if df has more columns; you just have to set fill = TRUE for rbind.data.table, as sketched below.
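A quick sketch of that fill = TRUE case, using a hypothetical extra column:
dfx <- copy(df)[, extra := letters[1:5]]  # dfx is illustrative only
rbind(dfx, expand.grid(b = c(0, 10), a = dfx[, unique(a)]), fill = TRUE)[order(a, b)]
# the rows added per group get NA in extra (and in any other unspecified column)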


convert tidyverse rowwise operation to data.table solution

I have a dataframe with millions of rows and tens of columns to which I need to apply a rowwise operation. My solution below works using dplyr, but I hope a switch to data.table will speed things up. Any help converting the code below to a data.table version would be appreciated.
library(tidyverse)
library(trend)
df = structure(list(id = 1:2, var = c(3L, 9L),
                    col1_x = c("[(1,2,3)]", "[(100,90,80,70,60,50,40,30,20)]"),
                    col2_x = c("[(2,4,6)]", "[(100,50,25,12,6,3,1,1,1)]")),
               class = "data.frame", row.names = c(NA, -2L))
df = df %>%
  mutate(across(ends_with("x"), ~ gsub("[][()]", "", .)))
x_cols = df %>%
  select(ends_with("x")) %>%
  names()
df = df %>%
  rowwise() %>%
  mutate(across(all_of(x_cols),
                ~ ifelse(var <= 4, 0,
                         sens.slope(as.numeric(unlist(strsplit(., ','))))$estimates[[1]]))) %>%
  ungroup()
While what @Ritchie Sacramento wrote is absolutely true, here's the information you asked for.
First, I want to start with set and :=. When you see the keyword set (which can just be part of a function name) or the := operator, you've told data.table not to make a copy of the data. Because there is no assignment (that pesky = or <-), you've changed the data.table in place. This is one of the key ways this package avoids wasting memory.
Keep in mind that the environment pane in RStudio only updates when it registers an assignment operator (= or <-) creating something new. Since you modified in place, the environment pane may show stale information. You can use the refresh icon (top right of the pane), or print the object to the console to check.
As soon as you make an assignment the pane recognizes, everything in the pane updates.
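A tiny illustration of that point (dt here is just a throwaway example object):
library(data.table)
dt <- data.table(a = 1:3)
dt[, b := a * 2]  # dt now has a new column b; nothing was reassigned or copied
dt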
Change a data frame to a data.table. (Notice that keyword: set!) Both of these do the same thing; however, one copies everything in memory and makes it again. (Reusing the same name does not prevent the copy.)
setDT(df)
df <- data.table(df)
I'm not going to start with your first code blurb. I'm starting with the name extraction.
You wrote:
x_cols = df %>%
  select(ends_with("x")) %>%
  names()
# [1] "col1_x" "col2_x"
There are many ways to get this information. This is what I did. Note that this doesn't really have anything to do with data.table. I just used base R here. You could use a data frame the same way.
xcols <- names(df)[endsWith(names(df), 'x')]
# [1] "col1_x" "col2_x"
I'm going to use this object, xcols, in the remaining examples. (Why keep reiterating the same declaration?)
You wrote the following to remove the brackets and parentheses.
df = df %>%
  mutate(across(ends_with("x"), ~ gsub("[][()]", "", .)))
# id var col1_x col2_x
# 1 1 3 1,2,3 2,4,6
# 2 2 9 100,90,80,70,60,50,40,30,20 100,50,25,12,6,3,1,1,1
There are several ways you could do this, whether in a data frame or a data.table. Here are a couple of methods you can use with data.table. These do the exact same thing as each other and your code.
Note the :=, which means the table changed.
In the first example, I used .SD and .SDcols. These are data.table's column-selection tools: you use .SD in place of a column name when you want to work with more than one column, and .SDcols to tell data.table which columns .SD should contain. Wrapping the left-hand side in parentheses, (xcols), where xcols is my variable holding the column names, tells data.table to assign the results back into those columns rather than into a single column literally named xcols.
The difference between these two is how I used lapply, which doesn't have anything to do with data.table. If you need more info on this function, you can ask me, or you can look through the many Q & As out there already.
df[ ,
    (xcols) := lapply(.SD, function(k) gsub("[][()]", "", k)),
    .SDcols = xcols]
df[ ,
    (xcols) := lapply(.SD, gsub, pattern = "[][()]", replacement = ""),
    .SDcols = xcols]
Your last request was based on this code.
df %>%
  rowwise() %>%
  mutate(across(all_of(x_cols),
                ~ ifelse(var <= 4, 0,
                         sens.slope(as.numeric(unlist(strsplit(., ','))))$estimates[[1]]))) %>%
  ungroup()
Since you used var to delineate when to apply this, I've used the by argument (as in dplyr's group_by). In terms of the other requirements, you'll see .SD and lapply again.
df[ ,
    (xcols) := lapply(.SD, function(k) {
      ifelse(var <= 4, 0,
             sens.slope(as.numeric(strsplit(k, ",")[[1]]))$estimates[[1]])
    }),
    by = var, .SDcols = xcols]
If you think about how these differ, you may find that, in a lot of ways, they aren't all that different. For example, in this last translation, you may see a similar approach in dplyr that I used.
df %>%
  group_by(var) %>%
  mutate(across(all_of(x_cols),
                ~ ifelse(var <= 4, 0,
                         sens.slope(as.numeric(unlist(strsplit(., ','))))$estimates[[1]])))

Using a key to clean data in two corresponding columns

I have a large data frame (6 million rows, 20 columns) where data in one column corresponds to data in another column. I created a key that I now want to use to fix rows that have the wrong value. As a small example:
key = data.frame(animal = c('dog', 'cat', 'bird'),
                 sound = c('bark', 'meow', 'chirp'))
The data frame looks like this (minus the other columns of data):
df = data.frame(id = c(1, 2, 3, 4),
                animal = c('dog', 'cat', 'bird', 'cat'),
                sound = c('meow', 'bark', 'chirp', 'chirp'))
I swear I have done this before but can't remember my solution. Any ideas?
Using dplyr: if you want to fix sound according to animal,
library(dplyr)
df <- df %>%
  mutate(sound = sapply(animal, function(x) {key %>% filter(animal == x) %>% pull(sound)}))
should do the trick. If you want to fix animal according to sound:
df <- df %>%
  mutate(animal = sapply(sound, function(x) {key %>% filter(sound == x) %>% pull(animal)}))
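With 6 million rows in mind, a vectorized base R lookup does the same job without a per-element filter (a sketch, fixing sound from animal):
# match() locates each animal's row in the key once, for the whole column
df$sound <- key$sound[match(df$animal, key$animal)]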
I'm not sure about relative efficiency, but it's simpler to replace the partially incorrect column completely. It may not even cost you very much time (since you have to look up values anyway to determine that an animal/sound pair is mismatched).
library(tidyverse)
df %>% select(-sound) %>% full_join(key, by = "animal")
For 6 million rows, you may be better off using data.table. Converting df and key to data.tables (as.data.table()) takes some up-front computational time but may speed up subsequent operations; you can use tidyverse operations on data.table objects without any further modification, but native data.table operations might be faster:
library(data.table)
dft <- as.data.table(df)
k <- as.data.table(key)
merge(dft[,-"sound"], k, by = "animal")
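Another data.table idiom worth knowing here is the update join, which repairs the column by reference instead of building a new table (a sketch reusing the dft and k objects from above):
# i.sound refers to the sound column of k, the table being joined in
dft[k, sound := i.sound, on = "animal"]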
I haven't bothered to do any benchmarking (would need much larger examples to be able to measure any differences).

Use a function of the data in dplyr::summarise

Assume I have a function of a data.frame that returns a single number. I would now like to use summarise in dplyr, with the new variable being this function applied to the data.frame within each group defined by another variable.
This is a stupid example
df <- data.frame(id = rep(c("A", "B"), each = 5), diff = rnorm(10))
func <- function(data) {
  mean(data$diff)
}
I know this example is easily done using summarise(Mean = mean(diff)), but the point is not to solve this particular example; it is, in general, to use summarise with a function of a data.frame.
My try so far has been
df %>% group_by(id) %>% summarise(New = func(.))
but it gives the same value for every group: the function applied to the whole data.frame, since . here refers to the entire input rather than the current group.
Hope everything is clear.
I'm not sure I understand what you are trying to do, and I'm not familiar with the differences between the plyr and dplyr packages. The most straightforward way to do what I think you're trying to do is with plyr's daply:
library(plyr)
daply(df, .(id), func)
#          A          B
# -0.0301488  0.2088815
As akrun pointed out in the comments, you can do this using do in dplyr:
df %>% group_by(id) %>% do(data.frame(New=func(.)))
You can also add other variables, though you have to use .$:
df %>% group_by(id) %>% do(data.frame(New=func(.), SmthElse = sd(.$diff)))
# id New SmthElse
#1 A 0.1934552 1.0932424
#2 B -0.4161216 0.4841031
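In current dplyr (1.1 or later), pick() offers a more direct spelling than do(); a sketch, assuming that version is available:
df %>%
  group_by(id) %>%
  summarise(New = func(pick(everything())),  # pick() hands the group's columns to func() as a data frame
            SmthElse = sd(diff))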
That said, the simpler and faster performance solution is using data.table:
library(data.table)
dt = as.data.table(df) # or convert in place using setDT
dt[, .(New = func(.SD), SmthElse = sd(diff)), by = id]
# id New SmthElse
#1: A 0.1934552 1.0932424
#2: B -0.4161216 0.4841031

Using gtools::mixedsort or alternatives with dplyr::arrange

I am trying to order a dataframe using dplyr::arrange. The issue is that the column I am trying to sort on contains a fixed string followed by a number, as for instance generated by the dummy code below.
dummydf <- data.frame(values = rnorm(100),
                      sortcol = paste0("ABC", sample(1:100, 100, replace = FALSE)))
By default, dummydf %>% arrange(sortcol) generates a df sorted lexicographically (character by character), which is of course not the desired result:
values sortcol
0.708081720 ABC1
0.041348322 ABC10
1.730962886 ABC100
0.423480861 ABC11
-1.545837266 ABC12
-1.345539947 ABC13
-0.078998792 ABC14
0.088712174 ABC15
0.670583024 ABC16
1.238837680 ABC17
-1.459044293 ABC18
-2.028535223 ABC19
0.779514385 ABC2
1.360509910 ABC20
In this example, I would like to sort the column as gtools::mixedsort would, making sure ABC2 follows ABC1 rather than being preceded by ABC10-ABC19 and ABC100; mixedsort(as.character(dummydf$sortcol)) would do the trick.
Now, I am aware I could do this by using sub in my arrange call: dummydf %>% arrange(as.numeric(sub("ABC", "", sortcol))), but that works mainly because my prefix is fixed (although a regex could be used to capture the trailing digits after any prefix, I suppose).
I am just wondering: is there a more "elegant" and generic way to get this done with dplyr::arrange, in the same fashion as gtools::mixedsort?
Kind regards,
FM
Here's a functional solution making use of the mysterious identity order(order(x)) == rank(x): since arrange sorts by the values you hand it, what we need is each element's rank under the mixed ordering, and order(gtools::mixedorder(x)) computes exactly that.
mixedrank = function(x) order(gtools::mixedorder(x))
dummydf %>% dplyr::arrange(mixedrank(sortcol))
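A quick sanity check of that identity on a toy vector:
x <- c("ABC2", "ABC10", "ABC1")
gtools::mixedorder(x)         # 3 1 2: the permutation that mixed-sorts x
order(gtools::mixedorder(x))  # 2 3 1: each element's rank under that ordering
rank(x)                       # 3 2 1: plain lexicographic ranks, for contrast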
I don't see this answer posted so I'll throw it out. You can use mixedorder with slice to arrange it.
dummydf %>%
  slice(gtools::mixedorder(sortcol))
Using data.table
library(data.table)
dummydf = data.table(dummydf)
dummydf[gtools::mixedorder(as.character(sortcol))]
Honestly, I just copied your example and stuck it in as the row (i) argument in the data.table syntax. You already did all the hard work :).
Credit to Akhil Nair for his data.table answer, from which the first code snippet derives. If you like the data.table answer but still want magrittr piping, you can consider calculating a new column and piping through data.table to get your output:
dummydf %>%
  dplyr::mutate(row_lookup = gtools::mixedorder(as.character(sortcol))) %>%
  data.table::data.table() %>%
  .[.$row_lookup]
I think it's debatable whether that helps or detracts from the readability.
If you don't want to call data.table, you can go through some extra contortions to calculate a column you can use dplyr::arrange on. Here's one example:
library(dplyr)
bind_cols(dummydf,
          dummydf %>%
            tibble::rowid_to_column("order") %>%
            mutate(rowname = gtools::mixedorder(as.character(sortcol))) %>%
            arrange(rowname) %>%
            select(order)) %>%
  arrange(order)
I think this code is more confusing to read and isn't worth those extra contortions to avoid data.table.
Here is a solution that allows sorting when there are repeats and multiple sort columns. Most of the previous answers are not generic: they handle only a single sort column.
df <- data.frame(values = rnorm(100),
                 sortcol1 = paste0("ASORT", sample(1:100, 100, replace = TRUE)),
                 sortcol2 = paste0("BSORT", sample(1:100, 100, replace = TRUE)),
                 stringsAsFactors = FALSE)
df %>%
  mutate(
    sortcol1 = factor(sortcol1, ordered = TRUE, levels = unique(gtools::mixedsort(sortcol1))),
    sortcol2 = factor(sortcol2, ordered = TRUE, levels = unique(gtools::mixedsort(sortcol2)))
  ) %>%
  arrange(sortcol1, sortcol2)
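The same idea can be written generically for any number of sort columns; a sketch, assuming dplyr >= 1.0 for across() inside arrange():
df %>%
  arrange(across(starts_with("sortcol"),
                 # rank each column by its mixed order via match() against
                 # the mixedsorted unique values, then sort on those ranks
                 ~ match(.x, gtools::mixedsort(unique(.x)))))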

Working with temporary columns (created on-the-fly) more efficiently in a dataframe

Consider the following dataframe:
df <- data.frame(replicate(5, sample(1:10, 10, replace = TRUE)))
If I want to divide each row by its sum (to make a probability distribution), I need to do something like this:
df %>%
  mutate(rs = rowSums(.)) %>%
  mutate_each(funs(. / rs), -rs) %>%
  select(-rs)
This really feels inefficient:
- Create an rs column
- Divide each value by its corresponding rowSums() entry
- Remove the temporary column to clean up the original dataframe
When working with existing columns, it feels much more natural:
df %>% summarise_each(funs(weighted.mean(., X1)), -X1)
Using dplyr, would there be a better way to work with temporary columns (created on the fly) than adding them and removing them after processing?
I'm also interested in how data.table would handle such a task.
As I mentioned in a comment above, I don't think it makes sense to keep that data in either a data.frame or a data.table, but if you must, the following will do it without converting to a matrix and illustrates how to create a temporary variable in the data.table j-expression:
dt = as.data.table(df)
dt[, names(dt) := {sums = Reduce(`+`, .SD); lapply(.SD, '/', sums)}]
Why not consider base R as well:
as.data.frame(as.matrix(df)/rowSums(df))
Or just with your data.frame:
df/rowSums(df)
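For what it's worth, mutate_each() and funs() have since been superseded; here is a sketch of the same add/divide/drop pattern in current dplyr (>= 1.0) using across():
df %>%
  mutate(rs = rowSums(across(everything()))) %>%  # temporary column, as before
  mutate(across(-rs, ~ .x / rs)) %>%
  select(-rs)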
