Failure using the defuse-and-inject pattern in dplyr's group_map function - R

Following is an example of what I am trying to achieve.
library(dplyr)
tbl.data <- tidyquant::tq_get(c("GS", "C", "BAC"))
to.xts <- function(group, group_key, date_col, price_col){
a <- group %>% dplyr::pull({{ price_col }})
b <- group %>% dplyr::pull({{ date_col }})
x <- xts::xts(a, order.by=b)
colnames(x) <- group_key$symbol
x
}
make.xts <- function(data, date_col, price_col){
data %>%
group_by(symbol) %>%
group_map(~to.xts(.x, .y, date_col, price_col))
}
# Failed example one:
tbl.data %>% group_by(symbol) %>% group_map(to.xts, date, close)
# Failed example two:
make.xts(tbl.data, date, close)
# Error in `dplyr::pull()`:
# ! Can't extract column with `!!enquo(var)`.
# ✖ `!!enquo(var)` must be numeric or character, not a function.
# Run `rlang::last_error()` to see where the error occurred.
However, if I single out a group myself and apply `to.xts` to that group, it works. The only thing that changed, which I doubt should affect the function itself, is that the `group_key` is now a string (it was a data-variable in the context of `group_map`'s `.f`).
gs.grp <- tbl.data %>% dplyr::filter(symbol=="GS")
gs.grp %>% to.xts("GS", date, close)
A simple pull operation also works.
gs.grp %>% dplyr::pull(close)
I don't quite understand what changed internally, why this happens, or what is incorrect here.
Given the error message, it seems dplyr::pull already does the defusing (enquo) and injecting (!!) itself internally, so I should not use the embracing operator; however, without it the call fails with the same error.

I haven't used the group_map function much; here is an alternative version that you can try:
library(dplyr)
library(purrr)
tbl.data <- tidyquant::tq_get(c("GS", "C", "BAC"))
to.xts <- function(group, symbol, date, price){
a <- group %>% dplyr::pull({{ price }})
b <- group %>% dplyr::pull({{ date }})
x <- xts::xts(a, order.by=b)
colnames(x) <- symbol
x
}
tbl.data %>% split(.$symbol) %>% imap(~to.xts(.x, .y, date, close))
If you want them in one xts object as separate columns:
tbl.data %>%
split(.$symbol) %>%
imap(~to.xts(.x, .y, date, close)) %>%
{do.call(merge, .)}
#                BAC     C     GS
# 2013-01-02   12.03 41.25 131.66
# 2013-01-03   11.96 41.39 130.94
# 2013-01-04   12.11 42.43 134.51
# 2013-01-07   12.09 42.47 134.26
# 2013-01-08   11.98 42.46 133.05
# 2013-01-09   11.43 42.04 134.32
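As a side note on why the original group_map() call fails: the extra arguments forwarded through ... are likely no longer defused against the data, so close gets looked up in the calling environment, where it resolves to base R's close() function, which matches the "not a function" part of the error. A minimal sketch that sidesteps tidy evaluation entirely by passing plain strings (the name to_xts_chr is mine):
library(dplyr)
# Strings need no defusing; base [[ subsetting involves no tidy evaluation.
to_xts_chr <- function(group, group_key, date_col, price_col) {
  x <- xts::xts(group[[price_col]], order.by = group[[date_col]])
  colnames(x) <- group_key$symbol
  x
}
tbl.data %>%
  group_by(symbol) %>%
  group_map(~ to_xts_chr(.x, .y, "date", "close"))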

Using group_map() with a formula applied to each group:
library(dplyr)
#library(xts)
to.xts <- function(group, group_key, date, price) {
  a <- group %>% dplyr::pull({{ price }})
  b <- group %>% dplyr::pull({{ date }})
  x <- xts::xts(a, order.by = b)
  colnames(x) <- group_key
  x
}
tbl.data %>%
  group_by(symbol) %>%
  group_map(~ to.xts(.x, .y, date, close))
The notation is very similar to purrr, on which it is based.
In the formula, you can use
. or .x to refer to the subset of rows of .tbl for the given group
.y to refer to the key, a one row tibble with one column per grouping variable that identifies the group
(See the documentation)
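A tiny illustration of what .x and .y contain for each group (a hedged sketch on mtcars rather than the stock data):
library(dplyr)
mtcars %>%
  group_by(cyl) %>%
  group_map(~ paste0("cyl = ", .y$cyl, ": ", nrow(.x), " rows"))
# returns a list with one string per group, e.g. "cyl = 4: 11 rows"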
Alternatively, we could use pivoting to avoid your function altogether and put everything into one xts object.
library(dplyr)
library(tidyr)
#library(xts)
tbl.data %>%
pivot_wider(id_cols = date, names_from = symbol, values_from = low) %>%
xts::xts(order.by = .$date) %>%
.[,-1]
Output:
GS C BAC
2013-01-02 "130" "40.7" "11.9"
2013-01-03 "130" "41.0" "11.9"
2013-01-04 "130" "41.6" "11.9"
2013-01-07 "133" "42.0" "12.0"
2013-01-08 "133" "42.0" "11.9"
2013-01-09 "133" "41.8" "11.3"
2013-01-10 "134" "42.0" "11.5"

This doesn't answer your question, but I wanted to point out that this is much less complicated if you avoid the tidyverse patterns completely.
library(quantmod)
symbols <- c("GS", "C", "BAC")
# Environment to hold data
my_data <- new.env()
# Tell getSymbols() to load the data into 'my_data'
getSymbols(symbols, env = my_data)
# Combine all the close prices into one xts object
price_data <- Reduce(merge, lapply(my_data, Cl))
# Remove ".Close" from column names
colnames(price_data) <- sub(".Close", "", colnames(price_data), fixed = TRUE)

Related

Pass a variable name to a user written function that uses dplyr

I am trying to write a function that indexes variable names.
In particular, in my function I use mutate to encode a variable I have without changing its name. Does anyone know how I can index a variable on the left-hand side of mutate?
Here is an example
library(tidyverse)
# first create relevant dataset
iris <- iris %>% group_by(Species) %>% mutate(mean_Length = mean(Sepal.Length))
# second create my function
userfunction <- function(var){
newdata <- iris %>%
select(mean_Length,{var}) %>% distinct() %>%
mutate(get(var)= # this is what causes my function to fail. How can i refer to the `var` here?
factor(get(var),get(var))) %>%
arrange(get(var)) #
return(newdata)
}
# this function produces the following error # Error: unexpected '}' in "}"
#note that if I change the reference to its original string the function works
userfunction2 <- function(var){
newdata <- iris %>%
select(mean_Length,{var}) %>% distinct() %>%
mutate(Species= # without reference it works, but I am unable to use the function for multiple variables.
factor(get(var),get(var))) %>%
arrange(get(var)) #
return(newdata)
}
encodedata<- userfunction2("Species")
Here is a working example that goes in a similar direction to Limey's answer:
iris <- datasets::iris %>%
group_by(Species) %>%
mutate(mean_Length=mean(Sepal.Length)) %>%
ungroup()
userfunction <- function(var){
iris %>%
transmute(mean_Length, "temp" = iris[[var]]) %>%
distinct() %>%
mutate("{var}" := factor(temp)) %>%
arrange(temp) %>%
select(-temp)
}
userfunction("Petal.Length")
I don't think var is your problem. I think it's the =. If you have an enquoted variable on the left-hand side of the assignment (which is effectively what you do have with get()), you need :=, not =.
See here for more details.
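A minimal illustration of the difference (the column choice is arbitrary; assumes a recent dplyr and rlang):
library(dplyr)
library(rlang)
var <- "Species"
# `!!sym(var) = ...` is a parse error; `:=` accepts an injected
# expression on the left-hand side of the assignment.
iris %>%
  mutate(!!sym(var) := factor(.data[[var]])) %>%
  head(2)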
I would have written your function slightly differently:
userfunction <- function(data, var){
qVar <- enquo(var)
newdata <- data %>%
select(mean_Length, !! qVar) %>% distinct() %>%
mutate(!! qVar := factor(!! qVar, !! qVar)) %>%
arrange(!! qVar)
return(newdata)
}
The inclusion of the data parameter means you can include it in a pipe:
encodedata <- iris %>% userfunction(Species)
encodedata
# A tibble: 3 x 2
# Groups: Species [3]
mean_Length Species
<dbl> <fct>
1 5.01 setosa
2 5.94 versicolor
3 6.59 virginica

Problems passing column name as variable in R [duplicate]

I want to parameterise the following computation using dplyr that finds which values of Sepal.Length are associated with more than one value of Sepal.Width:
library(dplyr)
iris %>%
group_by(Sepal.Length) %>%
summarise(n.uniq=n_distinct(Sepal.Width)) %>%
filter(n.uniq > 1)
Normally I would write something like this:
not.uniq.per.group <- function(data, group.var, uniq.var) {
iris %>%
group_by(group.var) %>%
summarise(n.uniq=n_distinct(uniq.var)) %>%
filter(n.uniq > 1)
}
However, this approach throws errors because dplyr uses non-standard evaluation. How should this function be written?
You need to use the standard evaluation versions of the dplyr functions (just append '_' to the function names, i.e. group_by_ & summarise_) and pass strings to your function, which you then need to turn into symbols. To parameterise the argument of summarise_, you will need to use interp(), which is defined in the lazyeval package. Concretely:
library(dplyr)
library(lazyeval)
not.uniq.per.group <- function(df, grp.var, uniq.var) {
df %>%
group_by_(grp.var) %>%
summarise_( n_uniq=interp(~n_distinct(v), v=as.name(uniq.var)) ) %>%
filter(n_uniq > 1)
}
not.uniq.per.group(iris, "Sepal.Length", "Sepal.Width")
Note that in recent versions of dplyr the standard evaluation versions of the dplyr functions have been "soft deprecated" in favor of non-standard evaluation.
See the Programming with dplyr vignette for more information on working with non-standard evaluation.
Like the old dplyr versions up to 0.5, the new dplyr has facilities for both standard evaluation (SE) and nonstandard evaluation (NSE). But they are expressed differently than before.
If you want an NSE function, you pass bare expressions and use enquo to capture them as quosures. If you want an SE function, just pass quosures (or symbols) directly, then unquote them in the dplyr calls. Here is the SE solution to the question:
library(tidyverse)
library(rlang)
f1 <- function(df, grp.var, uniq.var) {
df %>%
group_by(!!grp.var) %>%
summarise(n_uniq = n_distinct(!!uniq.var)) %>%
filter(n_uniq > 1)
}
a <- f1(iris, quo(Sepal.Length), quo(Sepal.Width))
b <- f1(iris, sym("Sepal.Length"), sym("Sepal.Width"))
identical(a, b)
#> [1] TRUE
Note how the SE version enables you to work with string arguments - just turn them into symbols first using sym(). For more information, see the programming with dplyr vignette.
In the devel version of dplyr (soon to be released 0.6.0), we can also make use of slightly different syntax for passing the variables.
f1 <- function(df, grp.var, uniq.var) {
grp.var <- enquo(grp.var)
uniq.var <- enquo(uniq.var)
df %>%
group_by(!!grp.var) %>%
summarise(n_uniq = n_distinct(!!uniq.var)) %>%
filter(n_uniq >1)
}
res2 <- f1(iris, Sepal.Length, Sepal.Width)
res1 <- not.uniq.per.group(iris, "Sepal.Length", "Sepal.Width")
identical(res1, res2)
#[1] TRUE
Here enquo captures the argument and returns it as a quosure (similar to substitute in base R), deferring evaluation of the function argument; inside summarise we unquote it (!! or UQ) so that it gets evaluated in the data context.
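To see the two halves in isolation, here is a hedged sketch of defuse-and-inject outside any grouping (the helper name capture_var is mine):
library(dplyr)
library(rlang)
capture_var <- function(var) enquo(var)    # defuse: capture expression + environment
q <- capture_var(Sepal.Width)
summarise(iris, n_uniq = n_distinct(!!q))  # inject back into the data mask
#>   n_uniq
#> 1     23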
Here's the way to do it since rlang 0.4, using the curly-curly {{ }} pseudo-operator:
library(dplyr)
not.uniq.per.group <- function(data, group.var, uniq.var) {
data %>%
group_by({{ group.var }}) %>%
summarise(n.uniq = n_distinct({{ uniq.var }})) %>%
filter(n.uniq > 1)
}
iris %>% not.uniq.per.group(Sepal.Length, Sepal.Width)
#> # A tibble: 25 x 2
#> Sepal.Length n.uniq
#> <dbl> <int>
#> 1 4.4 3
#> 2 4.6 4
#> 3 4.8 3
#> 4 4.9 5
#> 5 5 8
#> 6 5.1 6
#> 7 5.2 4
#> 8 5.4 4
#> 9 5.5 6
#> 10 5.6 5
#> # ... with 15 more rows
In the current version of dplyr (0.7.4) the use of the standard evaluation function versions (appended '_' to the function name, e.g. group_by_) is deprecated.
Instead you should rely on tidyeval when writing functions.
Here's an example of how your function would look then:
# definition of your function
not.uniq.per.group <- function(data, group.var, uniq.var) {
# enquotes variables to be used with dplyr-functions
group.var <- enquo(group.var)
uniq.var <- enquo(uniq.var)
# use '!!' before parameter names in dplyr-functions
data %>%
group_by(!!group.var) %>%
summarise(n.uniq=n_distinct(!!uniq.var)) %>%
filter(n.uniq > 1)
}
# call of your function
not.uniq.per.group(iris, Sepal.Length, Sepal.Width)
If you want to learn all about the details, there's an excellent vignette by the dplyr-team on how this works.
I've written a function in the past that does something similar to what you're doing, except that it explores all the columns outside the primary key and looks for multiple unique values per group.
find_dups = function(.table, ...) {
require(dplyr)
require(tidyr)
# get column names of primary key
pk <- .table %>% select(...) %>% names
other <- names(.table)[!(names(.table) %in% pk)]
# group by primary key,
# get number of rows per unique combo,
# filter for duplicates,
# get number of distinct values in each column,
# gather to get df of 1 row per primary key, other column,
# filter for where a columns have more than 1 unique value,
# order table by primary key
.table %>%
group_by(...) %>%
mutate(cnt = n()) %>%
filter(cnt > 1) %>%
select(-cnt) %>%
summarise_each(funs(n_distinct)) %>%
gather_('column', 'unique_vals', other) %>%
filter(unique_vals > 1) %>%
arrange(...) %>%
return
# Final dataframe:
## One row per primary key and column that creates duplicates.
## Last column indicates how many unique values of
## the given column exist for each primary key.
}
This function also works with the piping operator:
dat %>% find_dups(key1, key2)
You can avoid lazyeval by using do to call an anonymous function and then using get. This solution can be used more generally to employ multiple aggregations. I usually write the function separately.
library(dplyr)
not.uniq.per.group <- function(df, grp.var, uniq.var) {
df %>%
group_by_(grp.var) %>%
do((function(., uniq.var) {
with(., data.frame(n_uniq = n_distinct(get(uniq.var))))
}
)(., uniq.var)) %>%
filter(n_uniq > 1)
}
not.uniq.per.group(iris, "Sepal.Length", "Sepal.Width")
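For completeness, a hedged modern rewrite of the same string-based idea that needs neither lazyeval nor the deprecated underscore verbs (assumes dplyr >= 1.0; the function name is mine):
library(dplyr)
not_uniq_per_group_chr <- function(df, grp.var, uniq.var) {
  df %>%
    group_by(across(all_of(grp.var))) %>%
    summarise(n_uniq = n_distinct(.data[[uniq.var]]), .groups = "drop") %>%
    filter(n_uniq > 1)
}
not_uniq_per_group_chr(iris, "Sepal.Length", "Sepal.Width")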

Faster way to make new variables containing data frames to be rbind-ed

I want to make a bunch of new variables a, b, c, d, ..., z to store tibble data frames. I will then rbind these data frames and export them as a csv. How do I do this faster without having to specify the new variables each time?
a <- subset(data.frame, variable1 == "condition1",....,) %>% group_by() %>% summarize(a = mean())
b <- subset(data.frame, variable1 == "condition2",....,) %>% group_by() %>% summarize(a = mean())
....
z <- subset(data.frame, variable1 == "condition2",....,) %>% group_by() %>% summarize(a = mean())
rbind(a,b,....,z)
There's got to be a faster way to do this. My data set is large so having it stored in memory as partitions of a,b,c,....z is causing the computer to crash. Typing the subset conditions to form the partitions repeatedly is tedious.
You could do something like this using the purrr package.
You may need NSE depending on what your condition is; see Programming with dplyr for reference.
purrr::map_df(
c("condition1","condition2",..., "conditionn"),
# .x for each condition
~ subset(your_data_frame, variable1 == .x,....,) %>% group_by(some_columns) %>% summarise(a = mean(some_columns))
)
Example using iris:
library(purrr)
library(rlang)
conditions <- c("Petal.Length>1.5","Species == 'setosa'","Sepal.Length > 5")
map(conditions, function(x){
iris %>%
dplyr::filter(!!rlang::parse_expr(x)) %>%
head()
})
Another example using iris, this time counting matching rows:
conditions <- c("Petal.Length>1.5","Species == 'setosa'","Sepal.Length > 5")
map(conditions, ~ iris %>% dplyr::filter(!!rlang::parse_expr(.x)) %>% nrow())
# or (!! is almost equivalent to eval or rlang::eval_tidy())
map(conditions, ~ iris %>% dplyr::filter(eval(rlang::parse_expr(.x))) %>% nrow())
[[1]]
[1] 113
[[2]]
[1] 50
[[3]]
[1] 118
Instead of creating multiple objects in the global environment, read them into a list and bind them:
library(data.table)
files <- list.files(pattern = "\\.csv", full.names = TRUE)
rbindlist(lapply(files, fread))
It would be much faster with fread than with any other option.
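If you want to verify that claim on your own files, a quick hedged sketch (microbenchmark is an extra dependency; files is the vector defined above):
library(microbenchmark)
microbenchmark(
  fread    = data.table::rbindlist(lapply(files, data.table::fread)),
  read.csv = do.call(rbind, lapply(files, read.csv)),
  times = 5
)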
If we are passing strings to group_by, convert the string to a symbol with sym from rlang and evaluate it (!!):
library(purrr)
map2_df(c("condition1", "condition2"), c("a", "b") ~ df1 %>%
group_by(!! rlang::sym(.x)) %>%
summarise(!! .y := mean(colname)))
If 'condition1', 'condition2', etc. are expressions, store them as quosures and evaluate them:
map2_df(quos(condition1, condition2), c("a", "b"), ~ df1 %>%
filter(!! .x) %>%
summarise(!! .y := mean(colname)))
Using a reproducible example
conditions <- quos(Petal.Length>1.5,Species == 'setosa',Sepal.Length > 5)
map2(conditions, c('a', 'b', 'c'), ~
iris %>%
filter(!! .x) %>%
summarise(!! .y := mean(Sepal.Length)))
#[[1]]
# a
#1 6.124779
#[[2]]
# b
#1 5.006
#[[3]]
# c
#1 6.129661
It would be a 3-column dataset if we used map2_dfc.
NOTE: It is not clear whether the OP meant 'condition1', 'condition2' as expressions to be passed on for filtering the rows or not.

How to make wide data long via a function? [duplicate]


dplyr programming: how to access columns of .x in map

Nesting a dataframe and transforming each tibble into an xts happens often enough to deserve its own function.
The input dataframe should be nested by nest_var, then each nested dataframe should be converted to an xts object ordered by t_var.
This is my attempt:
library(tidyverse)
library(purrr)
library(magrittr)
library(xts)
data("sample_matrix")
df <- sample_matrix %>%
as.data.frame() %>%
rownames_to_column(var='dt') %>%
gather(key=ohlc, value=val, -dt)
nest_xts <- function(df_in, nest_var, t_var) {
require(rlang)
nest_var <- enquo(nest_var)
t_var <- enquo(t_var)
df_in %>% group_by(!!nest_var) %>%
nest() %>%
mutate(data := map(data, ~xts(.x, order.by=.x[quo_name(t_var)])))
}
nest_xts(df, ohlc, dt)
but this does not access the columns of .x in the mutate/map combination on the last line.
Error in mutate_impl(.data, dots) :
Evaluation error: order.by requires an appropriate time-based object.
Also tried changing the last line to
mutate(data := map(data, ~xts(.x, order.by=.x$!!t_var)))
but then the function does not even parse:
Error: unexpected '!' in:
" nest() %>%
mutate(data := map(data, ~xts(.x, order.by=.x$!"
> }
Error: unexpected '}' in "}"
You did access the column, but tbl_df[colname] is not a vector; it is still a tbl_df.
order.by = .x[quo_name(t_var)][[1]]
# or
order.by = pull(.x, quo_name(t_var))
# and, since dt was left as character, convert it to Date first
df <- df %>% mutate(dt = as.Date(dt))
gives what you want.
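The distinction in a standalone sketch:
library(tibble)
tb <- tibble(dt = Sys.Date() + 0:2, val = 1:3)
class(tb["dt"])             # "tbl_df" "tbl" "data.frame": still a tibble
class(tb[["dt"]])           # "Date": a plain vector, which xts::xts() expects
class(dplyr::pull(tb, dt))  # "Date" as well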
Below is just a simplified nesting approach. Not sure if it directly addresses your needs.
library(tidyverse)
library(purrr)
library(magrittr)
library(xts)
data("sample_matrix")
df <- sample_matrix %>%
as.data.frame() %>%
rownames_to_column(var='dt') %>%
gather(key=ohlc, value=val, -dt)
nest_xts <- function(df_in, nest_var, t_var) {
require(rlang)
nest_var <- enquo(nest_var)
t_var <- enquo(t_var)
df_in %>%
group_by(!!nest_var) %>%
summarize(data = list(xts(val, order.by = as.Date(!!t_var))))
}
result <- nest_xts(df, ohlc, dt)
result
# A tibble: 4 x 2
# ohlc data
# <chr> <list>
# 1 Close <S3: xts>
# 2 High <S3: xts>
# 3 Low <S3: xts>
# 4 Open <S3: xts>
