Is there a way to specify a column name as an argument? (R)

Let's say I want to create a function that replicates a column of choice, for the sake of an example.
library(dplyr)

testdata <- data.frame(
  "diff1" = seq(1:10),
  "diff2" = seq(21:30)  # beware: seq(21:30) is seq_along(21:30), i.e. 1:10
)
goal <- testdata %>%
  mutate(newdiff1 = diff1)
So I create a function:
funtest <- function(dat, var, newvar){
  dat %>%
    mutate(newvar = var)
}
However,
test2 <- funtest(testdata, diff1, newdiff1)
returns an error:
Error: object 'diff1' not found
This format works:
nondesiredformat <- funtest(testdata, testdata$diff1, newdiff1)
but the new variable will always be called "newvar" instead of taking the name in our third argument.
Is there a way to change the function so that the arguments in test2 work?
Thank you

In the function, we can use {{}} (curly-curly) for the evaluation, i.e. !! + enquo() in one step, for unquoted variable names passed into the function; for the assignment, use := instead of =:
library(dplyr)

funtest <- function(dat, var, newvar){
  dat %>%
    mutate({{newvar}} := {{var}})
}
funtest(testdata, diff1, newdiff1)
# diff1 diff2 newdiff1
#1 1 1 1
#2 2 2 2
#3 3 3 3
#4 4 4 4
#5 5 5 5
#6 6 6 6
#7 7 7 7
#8 8 8 8
#9 9 9 9
#10 10 10 10
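If the column names arrive as strings instead of bare names, a small variation works; a minimal sketch assuming dplyr >= 1.0 for the .data pronoun (funtest_chr is a hypothetical name):
funtest_chr <- function(dat, var, newvar){
  dat %>%
    mutate(!!newvar := .data[[var]])  # !! injects the string as the new column's name
}
funtest_chr(testdata, "diff1", "newdiff1")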

You can use bquote() for this, capturing the bare column names with substitute() so the same call works:
funtest <- function(dat, var, newvar){
  eval(bquote(
    dat %>%
      mutate(.(substitute(newvar)) := .(substitute(var)))
  ))
}
funtest(testdata, diff1, newdiff1)
You could also update old school, which in your particular case (with the names passed as strings) is just:
dat[[newvar]] <- dat[[var]]
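Wrapped into a complete function, a minimal base-R sketch (funtest_base is a hypothetical name; the column names are passed as strings):
funtest_base <- function(dat, var, newvar){
  dat[[newvar]] <- dat[[var]]
  dat
}
funtest_base(testdata, "diff1", "newdiff1")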

If you start writing functions that take variable names as arguments, you might find data.table more convenient than dplyr. I recently wrote a post on the subject. Standard evaluation is easier to handle with data.table than with dplyr, in my opinion.
With data.table, you have several ways to use column names as arguments.
Using get
You can use get, which maps a name to a value in a certain scope. Here the scope is your data.table:
library(data.table)
funtest <- function(dat, var, newvar){
  dat[, (newvar) := get(var)]
}
:= is an update-by-reference operator. If you want to know more about it, data.table vignettes are a good place to start. Calling the function:
dt = data.table(iris)
funtest(dt, "Species","x")[]
Sepal.Length Sepal.Width Petal.Length Petal.Width Species x
1: 5.1 3.5 1.4 0.2 setosa setosa
2: 4.9 3.0 1.4 0.2 setosa setosa
3: 4.7 3.2 1.3 0.2 setosa setosa
4: 4.6 3.1 1.5 0.2 setosa setosa
5: 5.0 3.6 1.4 0.2 setosa setosa
---
146: 6.7 3.0 5.2 2.3 virginica virginica
147: 6.3 2.5 5.0 1.9 virginica virginica
148: 6.5 3.0 5.2 2.0 virginica virginica
149: 6.2 3.4 5.4 2.3 virginica virginica
150: 5.9 3.0 5.1 1.8 virginica virginica
Using .SD
You can also use .SD, which stands for Subset of Data. This is more convenient when you have several quoted variables. It avoids the !!!rlang::syms needed with dplyr.
You can find yourself making complicated computations with a very concise syntax:
df[, (newcolnames) := lapply(.SD, mean), by = grouping_var, .SDcols = xvars]
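For instance, a concrete sketch of that pattern on iris (the selected columns and new names are just illustrative choices):
library(data.table)
dt <- data.table(iris)
xvars <- c("Sepal.Length", "Petal.Length")
newcols <- paste0("mean_", xvars)
dt[, (newcols) := lapply(.SD, mean), by = Species, .SDcols = xvars]  # group means added by reference
head(dt, 3)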

Related

How to tidily create multiple columns from sets of columns?

I'm looking to use a non-across function from mutate to create multiple columns. My problem is that the variable in the function will change along with the crossed variables. Here's an example:
needs=c('Sepal.Length','Petal.Length')
iris %>% mutate_at(needs, ~./'{col}.Width')
This obviously doesn't work, but I'm looking to divide Sepal.Length by Sepal.Width and Petal.Length by Petal.Width.
I think your needs should be something common to both columns.
You can select the columns based on the pattern in needs and divide the data by position. !! and := are used to assign the name of the new columns.
library(dplyr)
library(rlang)
needs = c('Sepal','Petal')
purrr::map_dfc(needs, ~ iris %>%
  select(matches(.x)) %>%
  transmute(!!paste0(.x, '_divide') := .[[1]]/.[[2]]))
# Sepal_divide Petal_divide
#1 1.457142857 7.000000000
#2 1.633333333 7.000000000
#3 1.468750000 6.500000000
#4 1.483870968 7.500000000
#...
#...
If you want to add these as new columns, you can bind_cols the above with iris, for example:
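A sketch of that step (iris_new is a hypothetical name):
iris_new <- bind_cols(iris,
  purrr::map_dfc(needs, ~ iris %>%
    select(matches(.x)) %>%
    transmute(!!paste0(.x, '_divide') := .[[1]]/.[[2]])))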
Here is a base R approach, based on the assumption that the columns you want to divide share a similar name pattern:
pats <- sub('\\..*', '', names(iris[-ncol(iris)]))
res <- sapply(split.default(iris[-ncol(iris)], pats), function(i) i[1] / i[2])
iris[names(res)] <- res
head(iris)
# Sepal.Length Sepal.Width Petal.Length Petal.Width Species Petal.Petal.Length Sepal.Sepal.Length
#1 5.1 3.5 1.4 0.2 setosa 7.00 1.457143
#2 4.9 3.0 1.4 0.2 setosa 7.00 1.633333
#3 4.7 3.2 1.3 0.2 setosa 6.50 1.468750
#4 4.6 3.1 1.5 0.2 setosa 7.50 1.483871
#5 5.0 3.6 1.4 0.2 setosa 7.00 1.388889
#6 5.4 3.9 1.7 0.4 setosa 4.25 1.384615

Constructing lists using tidyeval tools (like `!!` and `:=`)

I am looking for a way to easily construct lists based on R's tidyeval framework as defined in the rlang package.
Below is what I want to achieve:
a <- "item_name"
b <- "item_value"
identical(
list(!!a := !!b), # list(!!a := b) is of course also fine
list(item_name = "item_value")
)
What I can obtain at the moment is:
list(!!a := !!b)
# output
[[1]]
`:=`(!(!a), !(!b))
Alternatively, it gets perhaps a little better when adding a quosure:
quo(list(!!a := !!b))
# output
<quosure: global>
~list(`:=`("item_name", "item_value"))
Unfortunately I have no idea how to proceed further from here.
In other words, I would like a similar effect to what we can get in the dplyr package:
transmute(iris, !!a := b)
# first few rows
Sepal.Length Sepal.Width Petal.Length Petal.Width Species item_name
1 5.1 3.5 1.4 0.2 setosa item_value
2 4.9 3.0 1.4 0.2 setosa item_value
3 4.7 3.2 1.3 0.2 setosa item_value
4 4.6 3.1 1.5 0.2 setosa item_value
5 5.0 3.6 1.4 0.2 setosa item_value
6 5.4 3.9 1.7 0.4 setosa item_value
You can use rlang::list2(), which supports name-unquoting with := and splicing with !!!.
Note that you shouldn't unquote the value argument itself, since list2() is not a quoting function; it is just like list() with a few more syntactic features:
library(rlang)

a <- "item_name"
b <- "item_value"
list2(!!a := b)
# $item_name
# [1] "item_value"
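Splicing with !!! works the same way; a quick sketch (parts is a hypothetical name):
parts <- list(item_name = "item_value")
list2(!!!parts, other_item = 1)
# a two-element named list: item_name = "item_value", other_item = 1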

Smart spreadsheet parsing (managing group sub-header and sum rows, etc)

Say you have a set of spreadsheets with group sub-header rows and per-group sum rows, as in the demo CSV below.
Is there an established method/library to parse this into R without having to individually edit the source spreadsheets? The aim is to parse the header rows and dispense with the sum rows, so the output is the raw data, like so:
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 5.1 3.5 1.4 0.2 setosa
2 4.9 3.0 1.4 0.2 setosa
3 4.7 3.2 1.3 0.2 setosa
4 7.0 3.2 4.7 1.4 versicolor
5 6.4 3.2 4.5 1.5 versicolor
6 6.9 3.1 4.9 1.5 versicolor
7 5.7 2.8 4.1 1.3 versicolor
8 6.3 3.3 6.0 2.5 virginica
9 5.8 2.7 5.1 1.9 virginica
10 7.1 3.0 5.9 2.1 virginica
I can certainly hack a tailored solution to this, but I'm wondering if there is something a bit more developed/elegant than read.csv and a load of logic.
Here's a reproducible demo csv dataset (you can't assume an equal number of lines per group..), although I'm hoping the solution can transpose to *.xlsx:
,Sepal.Length,Sepal.Width,Petal.Length,Petal.Width
,,,,
Setosa,,,,
1,5.1,3.5,1.4,0.2
2,4.9,3,1.4,0.2
3,4.7,3.2,1.3,0.2
Mean,4.9,3.23,1.37,0.2
,,,,
Versicolor,,,,
1,7,3.2,4.7,1.4
2,6.4,3.2,4.5,1.5
3,6.9,3.1,4.9,1.5
Mean,6.77,3.17,4.7,1.47
,,,,
Virginica,,,,
1,6.3,3.3,6,2.5
2,5.8,2.7,5.1,1.9
3,7.1,3,5.9,2.1
Mean,6.4,3,5.67,2.17
There is a variety of ways to present spreadsheets, so it would be hard to have a consistent methodology for all presentations. However, it is possible to transform the data once it is loaded in R. Here's an example with your data. It uses the function na.locf from the package zoo.
x <- read.csv(text=",Sepal.Length,Sepal.Width,Petal.Length,Petal.Width
,,,,
Setosa,,,,
1,5.1,3.5,1.4,0.2
2,4.9,3,1.4,0.2
3,4.7,3.2,1.3,0.2
Mean,4.9,3.23,1.37,0.2
,,,,
Versicolor,,,,
1,7,3.2,4.7,1.4
2,6.4,3.2,4.5,1.5
3,6.9,3.1,4.9,1.5
Mean,6.77,3.17,4.7,1.47
,,,,
Virginica,,,,
1,6.3,3.3,6,2.5
2,5.8,2.7,5.1,1.9
3,7.1,3,5.9,2.1
Mean,6.4,3,5.67,2.17", header=TRUE, stringsAsFactors=FALSE)
library(zoo)
x <- x[x$X!="Mean",] #remove Mean line
x$Species <- x$X #create species column
x$Species[grepl("[0-9]",x$Species)] <- NA #put NA if Species contains numbers
x$Species <- na.locf(x$Species) #carry last observation if NA
x <- x[!rowSums(is.na(x))>0,] #remove lines with NA
X Sepal.Length Sepal.Width Petal.Length Petal.Width Species
3 1 5.1 3.5 1.4 0.2 Setosa
4 2 4.9 3.0 1.4 0.2 Setosa
5 3 4.7 3.2 1.3 0.2 Setosa
9 1 7.0 3.2 4.7 1.4 Versicolor
10 2 6.4 3.2 4.5 1.5 Versicolor
11 3 6.9 3.1 4.9 1.5 Versicolor
15 1 6.3 3.3 6.0 2.5 Virginica
16 2 5.8 2.7 5.1 1.9 Virginica
17 3 7.1 3.0 5.9 2.1 Virginica
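Since the question hopes the solution transposes to *.xlsx, note that the same cleanup applies unchanged once the sheet is loaded as a data frame, e.g. with readxl; a sketch, where the file name is hypothetical:
library(readxl)
x <- as.data.frame(read_excel("groups.xlsx"))
names(x)[1] <- "X"  # the blank header cell; match the column name used above
# ...then run the same zoo/na.locf steps as for the csv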
I just recently did something similar. Here was my solution:
iris <- read.csv(text=",Sepal.Length,Sepal.Width,Petal.Length,Petal.Width
,,,,
Setosa,,,,
1,5.1,3.5,1.4,0.2
2,4.9,3,1.4,0.2
3,4.7,3.2,1.3,0.2
Mean,4.9,3.23,1.37,0.2
,,,,
Versicolor,,,,
1,7,3.2,4.7,1.4
2,6.4,3.2,4.5,1.5
3,6.9,3.1,4.9,1.5
Mean,6.77,3.17,4.7,1.47
,,,,
Virginica,,,,
1,6.3,3.3,6,2.5
2,5.8,2.7,5.1,1.9
3,7.1,3,5.9,2.1
Mean,6.4,3,5.67,2.17", header=TRUE, stringsAsFactors=FALSE)
First I used a function which splits a data.frame at an index:
split_at <- function(x, index) {
  N <- NROW(x)
  s <- cumsum(seq_len(N) %in% index)
  unname(split(x, s))
}
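For intuition, a quick illustration on a plain vector:
split_at(letters[1:6], c(3, 5))
# returns list(c("a", "b"), c("c", "d"), c("e", "f"))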
Then you define that index using:
iris[,1] <- stringr::str_trim(iris[,1])
index <- which(iris[,1] %in% c("Virginica", "Versicolor", "Setosa"))
The rest is just using purrr::map_df to perform actions on each data.frame in the list that's returned. You can add some additional flexibility for removing unwanted rows if needed.
split_at(iris, index) %>%
  .[2:length(.)] %>%
  purrr::map_df(function(x) {
    Species <- x[1, 1]
    x <- x[-1, ]                          # drop the group-name row
    x <- x[!x[, 1] %in% c("Mean", ""), ]  # drop the sum row and any blank row
    data.frame(x, Species = Species)
  })

Sum every n columns in a data frame in R

I have a df (A) with 10 columns and 300 rows. I need to sum every two adjacent columns, like this:
A[,1] + A[,2]  # first result
A[,3] + A[,4]  # second result
A[,5] + A[,6]  # third result
....
A[,9] + A[,10] # last result
The expected final result is a new dataframe with 5 columns and 300 rows.
Is there any way to do this, with tapply or a for loop?
I know I can do it as above, but I'm looking for a fast method.
Thank you
We could use sapply:
df <- data.frame(replicate(n = 10, expr = rnorm(100)))
sapply(seq(1, 9, by = 2), function(i) rowSums(df[, i:(i+1)]))
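Note that sapply() simplifies the result to a matrix here; wrap it in as.data.frame() if a data.frame is required:
res <- as.data.frame(sapply(seq(1, 9, by = 2), function(i) rowSums(df[, i:(i+1)])))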
You can do it without *apply loops.
Sample data:
df <- head(iris[-5])
df
# Sepal.Length Sepal.Width Petal.Length Petal.Width
#1 5.1 3.5 1.4 0.2
#2 4.9 3.0 1.4 0.2
#3 4.7 3.2 1.3 0.2
#4 4.6 3.1 1.5 0.2
#5 5.0 3.6 1.4 0.2
#6 5.4 3.9 1.7 0.4
Now you can use recycling of logical vectors:
df[c(TRUE,FALSE)] + df[c(FALSE,TRUE)]
# Sepal.Length Petal.Length
#1 8.6 1.6
#2 7.9 1.6
#3 7.9 1.5
#4 7.7 1.7
#5 8.6 1.6
#6 9.3 2.1
It's a bit cryptic, but it should be fast. We add each column to the adjacent column, then keep every other result with c(T, F), which recycles to select the odd-numbered sums:
(A[1:(ncol(A)-1)] + A[2:ncol(A)])[c(T,F)]
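Broken into steps with the sample df from above (sums_all is a hypothetical name):
sums_all <- df[1:(ncol(df)-1)] + df[2:ncol(df)]  # col1+col2, col2+col3, col3+col4
sums_all[c(TRUE, FALSE)]                         # keep results 1 and 3: the wanted pairs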

Splitting a data.table, then modifying by reference

I have a use-case where I need to split a data.table, then apply different modify-by-reference operations to each partition. However, splitting forces copying of each table.
Here's a toy example on the iris dataset:
#split the data
DT <- data.table(iris)
out <- split(DT, DT$Species)
#assign partitions to global environment
NAMES <- as.character(unique(DT$Species))
lapply(seq_along(out), function(x) {
  assign(NAMES[x], out[[x]], envir = .GlobalEnv)
})
#modify by reference, same function applied to different columns for different partitions
#would do this programmatically in real use case
virginica[ ,summ:=sum(Petal.Length)]
setosa[ ,summ:=sum(Petal.Width)]
#rbind all (again, programmatic)
do.call(rbind, list(virginica, setosa))
Then I get the following warning:
Warning message:
In `[.data.table`(out$virginica, , `:=`(cumPedal, cumsum(Petal.Width))) :
Invalid .internal.selfref detected and fixed by taking a copy of the whole table so that := can add this new column by reference.
I know this is related to putting data.tables in lists. Is there any workaround for this use case, or a way to avoid using split? Note that in the real case, I want to modify by reference programmatically, so hardcoding a solution won't work.
Here's an example of using .EACHI to achieve what it sounds like you're trying to do:
## Create a data.table that indicates the pairs of keys to columns
New <- data.table(
Species = c("virginica", "setosa", "versicolor"),
FunCol = c("Petal.Length", "Petal.Width", "Sepal.Length"))
## Set the key of your original data.table
setkey(DT, Species)
## Now use .EACHI
DT[New, temp := cumsum(get(FunCol)), by = .EACHI][]
# Sepal.Length Sepal.Width Petal.Length Petal.Width Species temp
# 1: 5.1 3.5 1.4 0.2 setosa 0.2
# 2: 4.9 3.0 1.4 0.2 setosa 0.4
# 3: 4.7 3.2 1.3 0.2 setosa 0.6
# 4: 4.6 3.1 1.5 0.2 setosa 0.8
# 5: 5.0 3.6 1.4 0.2 setosa 1.0
# ---
# 146: 6.7 3.0 5.2 2.3 virginica 256.9
# 147: 6.3 2.5 5.0 1.9 virginica 261.9
# 148: 6.5 3.0 5.2 2.0 virginica 267.1
# 149: 6.2 3.4 5.4 2.3 virginica 272.5
# 150: 5.9 3.0 5.1 1.8 virginica 277.6
## Basic verification
head(cumsum(DT["setosa", ]$Petal.Width), 5)
# [1] 0.2 0.4 0.6 0.8 1.0
tail(cumsum(DT["virginica", ]$Petal.Length), 5)
# [1] 256.9 261.9 267.1 272.5 277.6
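Since the real use case builds the column choices programmatically, the New lookup table can itself be generated from a named list; a sketch (specs is a hypothetical name):
specs <- list(virginica = "Petal.Length", setosa = "Petal.Width", versicolor = "Sepal.Length")
New <- data.table(Species = names(specs), FunCol = unlist(specs))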
