R: write a data.table instruction in functional form/call

So if I want to get the sum of customer_num in a data.table DT, grouped by the category mall_name, I can use:
DT[,sum(customer_num),by = mall_name]
But in some cases, like when I want to lapply this summary over a list of data.tables, I need the functional form of this compact syntax. I guess it would be:
`[.data.table`(DT,,sum(customer_num),by = mall_name)
However, this gives me an error:
Error: could not find function "[.data.table"
Does anyone know how to write the functional form of it?

As MichaelChirico noted, if you just use
`[`(dt, ...)
this will automatically dispatch to the correct method.
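For example, applied to the original grouped sum over a list of data.tables (a minimal sketch with made-up sample data):

library(data.table)
DT <- data.table(mall_name = c("A", "A", "B"), customer_num = c(10, 20, 5))
dt_list <- list(DT, copy(DT))
# `[` dispatches to the data.table method; the empty second argument is the missing i
lapply(dt_list, function(d) `[`(d, , sum(customer_num), by = mall_name))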
As an alternative, the actual function can be found using:
data.table:::`[.data.table`
and used as follows:
library(data.table)
dt <- data.table(x = 1:5)
listOfDts <- lapply(1:5, function(x)copy(dt))
lapply(listOfDts, function(y) data.table:::`[.data.table`(y, , x := rep(1, 5)))
listOfDts
However, be aware of the risks involved in using ::: to access a package's unexported objects. These may change without notice, since they are not exported and thus not bound to any promised functionality.
An alternative approach to using the functional version of [ would be:
dt <- data.table(x = 1:5)
listOfDts <- lapply(1:5, function(x)copy(dt))
lapply(listOfDts, function(y) y[, x := 1])

Related

How to extract multiple function outputs automatically in R

I have built my own function, which returns many values. I really need to extract several values at once. For example, suppose the following is my function:
myfunc <- function(x, y) {
  res <- x + y
  res2 <- x^2
  res3 <- x * 2
  out <- list()
  out$add <- res
  out$squ <- res2
  out$or <- res3
  out$ADD <- res + res2 + res3
  out$fi <- res^2 + res2 + res3
  return(out)
}
Then, after running myres <- myfunc(3, 4):
> myres
$add
[1] 7
$squ
[1] 9
$or
[1] 6
$ADD
[1] 22
$fi
[1] 64
Suppose I want to extract two values at a time, for example
myres$add and myres$ADD
Is there a way to pick them out automatically in R instead of repeating the extraction? My original function is very complicated and this will help a lot.
Perhaps you can try something like this -
res <- myfunc(6, 4)
extract_values <- c('add', 'ADD')
res[extract_values]
#$add
#[1] 10
#$ADD
#[1] 58
You could concatenate them or join them in a list:
c(myres$add, myres$squ)
list(myres$add, myres$squ)
If you only want one call to myres you could also index like this:
myres[c(1, 2)]
What you want is known as destructuring, and unfortunately R does not natively support it. There are multiple packages which support this. The one with the (IMHO) nicest syntax is my own package ‘unpack’, which allows you to write positional unpacking as follows:
c[add, ., ., ADD, .] = myfunc(3, 4)
After this, the variables add and ADD are directly available to the caller.
A similar solution (more powerful but with a less nice syntax) is provided by the ‘zeallot’ package.
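With 'zeallot', the equivalent unpacking looks roughly like this (a sketch; %<-% assigns the list elements positionally, so every element is named here):

library(zeallot)
c(add, squ, or, ADD, fi) %<-% myfunc(3, 4)
add
# [1] 7
ADD
# [1] 22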

Calling a character string into object names within a function

I've currently got a very lengthy and repetitive bit of code for data normalisation and inversion, ((x - min)/(max - min) * -1) + 1, that I want to clean up a bit.
This is a small sample of what it currently looks like:
W3_E1_Norm_New <- W3_E1_Average %>%
  mutate(W3_E1_Norm_New = ((W3_E1_zoo - W3_E1_Min)/(W3_E1_Max - W3_E1_Min) * -1) + 1)
W3_E2_Norm_New <- W3_E2_Average %>%
  mutate(W3_E2_Norm_New = ((W3_E2_zoo - W3_E2_Min)/(W3_E2_Max - W3_E2_Min) * -1) + 1)
W3_E3_Norm_New <- W3_E3_Average %>%
  mutate(W3_E3_Norm_New = ((W3_E3_zoo - W3_E3_Min)/(W3_E3_Max - W3_E3_Min) * -1) + 1)
Each 'W3_E1' refers to a sample ID, and at present each sample ID requires the two lines of code to be written out each time.
Ideally I'd like to write a function which can call a character string (Sample_IDs) into the names of each data frame, so something like
a_Norm_New
would return
W3_E1_Norm_New
then
W3_E2_Norm_New
etc.
Is there a way to write a function that could accomplish this?
Many thanks
I don't have your data but this should work. Define a function:
my_fun <- function(x) {
  norm_new <- paste0(x, "_Norm_New")
  average <- paste0(x, "_Average")
  zoo <- paste0(x, "_zoo")
  min <- paste0(x, "_Min")
  max <- paste0(x, "_Max")
  # get() looks the columns up by name; mutate() searches the data frame first
  df <- get(average) %>%
    mutate(!!norm_new := ((get(zoo) - get(min)) / (get(max) - get(min)) * -1) + 1)
  # assign() takes the name first and the value second
  assign(norm_new, df, envir = .GlobalEnv)
}
Then run a for loop:
Sample_IDs <- c("W3_E1", "W3_E2", "W3_E3")
for (i in Sample_IDs) {
my_fun(i)
}
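To try this without the original data, one can mock up minimal inputs first (hypothetical values; only the <ID>_Average / <ID>_zoo / <ID>_Min / <ID>_Max naming pattern matters):

library(dplyr)
W3_E1_Average <- data.frame(W3_E1_zoo = 1:5, W3_E1_Min = 1, W3_E1_Max = 5)
W3_E2_Average <- data.frame(W3_E2_zoo = 2:6, W3_E2_Min = 2, W3_E2_Max = 6)
W3_E3_Average <- data.frame(W3_E3_zoo = 3:7, W3_E3_Min = 3, W3_E3_Max = 7)
# define these before running the loop; my_fun() then creates
# W3_E1_Norm_New, W3_E2_Norm_New and W3_E3_Norm_New in the global environment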
With data.table, it is very easy to write functions that use quoted variable names (see a blog post I wrote on the subject).
Here, we build each column name by pasting the sufx variable onto the name pattern:
library(data.table)
normalize <- function(dt, sufx = "W3_E1"){
  df <- as.data.table(dt)
  df[, (paste0(sufx, "_Norm_New")) := (
    (get(paste0(sufx, "_zoo")) - get(paste0(sufx, "_Min"))) / (
      get(paste0(sufx, "_Max")) - get(paste0(sufx, "_Min"))
    ) * -1) + 1]
  df[]
}
The code here is not easy to read because I wanted to show that this can be done in one statement, but you can easily make it more readable.
In this solution, you use get to turn the quoted column name back into the column's values.
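A quick usage sketch with made-up data (only the column-name pattern matters here):

dt <- data.table(W3_E1_zoo = 1:5, W3_E1_Min = 1, W3_E1_Max = 5)
normalize(dt, sufx = "W3_E1")
#    W3_E1_zoo W3_E1_Min W3_E1_Max W3_E1_Norm_New
# 1:         1         1         5           1.00
# 2:         2         1         5           0.75
# 3:         3         1         5           0.50
# 4:         4         1         5           0.25
# 5:         5         1         5           0.00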

How to use data.table as super class in S4

In the R-Package data.table the manual entry for ?data.table-class says that 'data.table' can be used for inheritance in a class definition, i.e. in the contains argument in a call to setClass:
library("data.table")
setClass("Data.Table", contains = "data.table")
However, if I create an instance of a Data.Table, I would have expected to be able to treat it like a data.table. This is not so. The following snippet results in an error which, as far as I understand, occurs because the [.data.table function cannot handle the mix of S3 and S4 dispatch:
dat <- new("Data.Table", data.table(x = 1))
dat[TRUE]
I solved this by defining a new method for [ and coercing any Data.Table to a data.table before evaluating the call therein.
setMethod(
"[",
"Data.Table",
function(x, i, j, ..., drop = TRUE) {
mc <- match.call()
mc$x <- substitute(S3Part(x, strictS3 = TRUE))
Data.Table(
eval(mc, envir = parent.frame())
)
})
And a constructor function to feel more comfortable with it:
Data.Table <- function(...) new("Data.Table", data.table(...))
dat <- Data.Table(x = 1, key = "x")
dat[1]
This is acceptable for some scenarios, but I lose all the get and set functions from the data.table package, and I suspect I have broken some other features too. So the question is how to implement a working S4 data.table class? I would appreciate:
Pointers to similar attempts/projects
Better/alternative solutions/ideas for an implementation
Any advice on what I loose with respect to performance with the above solution
There is one related question on SO that I found, which presents a similar approach. However, I think it would involve too much coding to be feasible.
I think the short answer (the problem is still as valid as it was when raised) is that using data.table as a super class in S4 is not advisable, and not possible without a considerable amount of effort and certain risks of instability.
It is also not quite clear what the goal was in the case at hand, but let's assume there is no alternative like forking and modifying the existing data.table package.
Then, to illustrate the case mentioned above with the [, let's first initialize the example:
# replicating some code from above
library("data.table")
Data.Table <- setClass("Data.Table", contains = "data.table")
dat <- Data.Table(data.table(x = 1))
dat[1]
> Error in if (n > 0) c(NA_integer_, -n) else integer() :
argument is of length zero
dat2 <- data.table(x = 1)
Now to check [.data.table, which is a lot of code, as you can see in data.table.R in the GitHub repo, so here is just the relevant part, reproduced in the simplest dummy way:
# initializing output
ans = vector("list", 1)
# data (just one line of code as we have just one value in our example).
# desired subscript is row 1, but we have just one column as well.
ans[[1]] <- dat[[1]][1]
# add 'names' attribute
setattr(ans, "names", "x")
# set 'class' attribute
setattr(ans, "class", class(dat))
# set 'row.names'
setattr(ans, "row.names", .set_row_names(nrow(ans)))
And there we have the error: trying to set the row.names doesn't work, because dim(ans), and therefore nrow(ans), is NULL.
So the real problem here is the usage of setattr(ans, "class", class(dat)), which doesn't work well (try isS4(ans) or print(ans) just afterwards). In fact, in ?class we can read about S4:
The replacement version of the function sets the class to the value provided. For classes that have a formal definition, directly replacing the class this way is strongly deprecated. The expression as(object, value) is the way to coerce an object to a particular class.
data.table's setattr, which through C uses R's setAttrib function, is similar to calling attr(ans, "class") <- "Data.Table" or class(ans) <- "Data.Table", which would screw up as well.
If you do setattr(ans, "class", class(dat2)) instead, you will see that everything is fine here, as should be with S3.
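To illustrate the advice quoted from ?class with the objects above: formally extracting the S3 part (essentially what the question's [ method does via S3Part) restores normal dispatch — a minimal sketch:

dat3 <- S3Part(dat, strictS3 = TRUE)  # or: as(dat, "data.table")
class(dat3)  # "data.table" "data.frame"
dat3[1]      # plain data.table dispatch works again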
One more word of caution though:
setattr(ans, "class", "data.frame")
and then print(ans) or dim(ans) may not look very nice to you... (although ans$x is ok).
Overriding setattr() in a good way isn't trivial either, and such an approach will probably not get you any farther than the approach you outlined above. The result could look something like this:
setattr_new <- function(x, name, value) {
  if (name == "class" && "Data.Table" %in% value) {
    value <- c("data.table", "data.frame")
  }
  if (name == "names" && is.data.table(x) && length(attr(x, "names")) && !is.null(value))
    setnames(x, value)
  else {
    ans = .Call(Csetattrib, x, name, value)
    if (!is.null(ans)) {
      warning("Input is a length=1 logical that points to the same address as R's global TRUE value. Therefore the attribute has not been set by reference, rather on a copy. You will need to assign the result back to a variable. See https://github.com/Rdatatable/data.table/issues/1281 for more.")
      x = ans
    }
  }
  if (name == "levels" && is.factor(x) && anyDuplicated(value))
    .Call(Csetlevels, x, (value <- as.character(value)), unique(value))
  invisible(x)
}
godmode:::assignAnywhere("setattr", setattr_new)
identical(dat[1], dat2[1])
[1] TRUE
# then possibly convert back to S4 class if desired for further processing at the end
as(dat[1], "Data.Table")

How to use an unknown number of key columns in a data.table

I want to do the same as explained here, i.e. adding missing rows to a data.table. The only additional difficulty I'm facing is that I want the number of key columns, i.e. the columns used for the self-join, to be flexible.
Here is a small example that basically repeats what is done in the link mentioned above:
df <- data.frame(fundID = rep(letters[1:4], each = 6),
                 cfType = rep(c("D", "D", "T", "T", "R", "R"), times = 4),
                 variable = rep(c(1, 3), times = 12),
                 value = 1:24)
DT <- as.data.table(df)
idCols <- c("fundID", "cfType")
setkeyv(DT, c(idCols, "variable"))
DT[CJ(unique(df$fundID), unique(df$cfType), seq(from=min(variable), to=max(variable))), nomatch=NA]
What bothers me is the last line. I want idCols to be flexible (for instance if I use it within a function), so I don't want to type unique(df$fundID), unique(df$cfType) manually. However, I just can't find a workaround. All my attempts to automatically split the relevant subset of df into the vectors CJ needs fail with the error message Error in setkeyv(x, cols, verbose = verbose) : Column 'V1' is type 'list' which is not (currently) allowed as a key column type.
CJ(sapply(df[, idCols], unique))
CJ(unique(df[, idCols]))
CJ(as.vector(unique(df[, idCols])))
CJ(unique(DT[, idCols, with=FALSE]))
I also tried building the expression myself:
str <- ""
for (i in idCols) {
str <- paste0(str, "unique(df$", i, "), ")
}
str <- paste0(str, "seq(from=min(variable), to=max(variable))")
str
[1] "unique(df$fundID), unique(df$cfType), seq(from=min(variable), to=max(variable))"
But then I don't know how to use str. These all fail:
CJ(eval(str))
CJ(substitute(str))
CJ(call(str))
Does anyone know a good workaround?
Michael's answer is great. do.call is indeed needed to call CJ flexibly in that way, afaik.
To follow up on the expression-building approach, start with your code but remove the df$ parts (they are not needed, and not used in the linked answer, since i is evaluated within the scope of DT):
str <- ""
for (i in idCols) {
str <- paste0(str, "unique(", i, "), ")
}
str <- paste0(str, "seq(from=min(variable), to=max(variable))")
str
[1] "unique(fundID), unique(cfType), seq(from=min(variable), to=max(variable))"
then it's :
expr <- parse(text=paste0("CJ(",str,")"))
DT[eval(expr),nomatch=NA]
or alternatively build and eval the whole query dynamically :
eval(parse(text=paste0("DT[CJ(",str,"),nomatch=NA]")))
And if this is done a lot then it may be worth creating yourself a helper function :
E = function(...) eval(parse(text=paste0(...)))
to reduce it to :
E("DT[CJ(",str,"),nomatch=NA")
I've never used the data.table package, so forgive me if I miss the mark here, but I think I've got it. There's a lot going on here. Start by reading up on do.call, which allows you to evaluate any function in a sort of non-traditional manner where the arguments are specified by a supplied list (each element of the list is positionally matched to the function's arguments unless explicitly named). Also notice that I had to specify min(df$variable) instead of just min(variable); read Hadley's page on scoping to get an idea of the issue here.
CJargs <- lapply(df[, idCols], unique)
names(CJargs) <- NULL
CJargs[[length(CJargs) +1]] <- seq(from=min(df$variable), to=max(df$variable))
DT[do.call("CJ", CJargs),nomatch=NA]
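To make the set of key columns fully flexible, this can be wrapped into a function (a sketch; add_missing is a hypothetical name, following the do.call("CJ", ...) idea above):

add_missing <- function(DT, idCols, seqCol = "variable") {
  setkeyv(DT, c(idCols, seqCol))
  CJargs <- lapply(DT[, idCols, with = FALSE], unique)  # one vector of unique values per id column
  rng <- range(DT[[seqCol]])
  CJargs <- c(unname(CJargs), list(seq(from = rng[1], to = rng[2])))
  DT[do.call(CJ, CJargs), nomatch = NA]
}
add_missing(DT, idCols = c("fundID", "cfType"))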

combination of expand.grid and mapply?

I am trying to come up with a variant of mapply (call it xapply for now) that combines the functionality (sort of) of expand.grid and mapply. That is, for a function FUN and an unknown number of argument lists L1, L2, L3, ..., it should produce a list of length n1*n2*n3*... (where ni is the length of list i) which is the result of applying FUN to every combination of the lists' elements.
If expand.grid worked to generate lists of lists rather than data frames, one might be able to use it, but I have in mind that the lists may be lists of things that won't necessarily fit into a data frame nicely.
This function works OK if there are exactly three lists to expand, but I am curious about a more generic solution. (FLATTEN is unused here, but I can imagine that FLATTEN=FALSE would generate nested lists rather than a single flat list ...)
xapply3 <- function(FUN, L1, L2, L3, FLATTEN = TRUE, MoreArgs = NULL) {
  retlist <- list()
  count <- 1
  for (i in seq_along(L1)) {
    for (j in seq_along(L2)) {
      for (k in seq_along(L3)) {
        retlist[[count]] <- do.call(FUN, c(list(L1[[i]], L2[[j]], L3[[k]]), MoreArgs))
        count <- count + 1
      }
    }
  }
  retlist
}
edit: forgot to return the result. One might be able to solve this by making a list of the indices with combn and going from there ...
I think I have a solution to my own question, but perhaps someone can do better (and I haven't implemented FLATTEN=FALSE ...)
xapply <- function(FUN, ..., FLATTEN = TRUE, MoreArgs = NULL) {
  L <- list(...)
  inds <- do.call(expand.grid, lapply(L, seq_along)) ## Marek's suggestion
  retlist <- list()
  for (i in 1:nrow(inds)) {
    arglist <- mapply(function(x, j) x[[j]], L, as.list(inds[i, ]), SIMPLIFY = FALSE)
    if (FLATTEN) {
      retlist[[i]] <- do.call(FUN, c(arglist, MoreArgs))
    }
  }
  retlist
}
edit: I tried #baptiste's suggestion, but it's not easy (or wasn't for me). The closest I got was
xapply2 <- function(FUN, ..., FLATTEN = TRUE, MoreArgs = NULL) {
  L <- list(...)
  xx <- do.call(expand.grid, L)
  f <- function(...) {
    do.call(FUN, lapply(list(...), "[[", 1))
  }
  mlply(xx, f)  # mlply comes from the plyr package
}
which still doesn't work. expand.grid is indeed more flexible than I thought (although it creates a weird data frame that can't be printed), but enough magic happens inside mlply that I can't quite make it work.
Here is a test case:
L1 <- list(data.frame(x = 1:10, y = 1:10),
           data.frame(x = runif(10), y = runif(10)),
           data.frame(x = rnorm(10), y = rnorm(10)))
L2 <- list(y ~ 1, y ~ x, y ~ poly(x, 2))
z <- xapply(lm, L2, L1)
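As a quick sanity check of the combinatorics (assuming the definitions above), z should hold one fitted model per formula/data combination:

length(z)      # 9: 3 formulas x 3 data frames
class(z[[1]])  # "lm"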
#ben-bolker, I had a similar desire and think I have a preliminary solution worked out, which I've also tested to work in parallel. The function, which I somewhat confusingly called gmcmapply (g for grid), takes an arbitrarily large named list mvars (which gets expand.grid-ed within the function) and a FUN that uses the list's names as if they were arguments to the function itself. gmcmapply updates the formals of FUN so that, by the time FUN is passed to mcmapply, its arguments reflect the variables the user would like to iterate over (which would be the layers of a nested for loop). mcmapply then dynamically updates the values of these formals as it cycles over the expanded set of variables in mvars.
I've posted the preliminary code as a gist (reprinted with an example below) and would be curious to get your feedback on it. I'm a grad student and a self-described, intermediately skilled R enthusiast, so this is pushing my R skills for sure. You or other folks in the community may have suggestions that would improve on what I have. I do think that, even as it stands, I'll be coming back to this function quite a bit in the future.
gmcmapply <- function(mvars, FUN, SIMPLIFY = FALSE, mc.cores = 1, ...){
  require(parallel)
  FUN <- match.fun(FUN)
  funArgs <- formals(FUN)[which(names(formals(FUN)) != "...")] # allow default args to carry over from FUN
  expand.dots <- list(...) # allows expanded dot args to be passed as formal args to the user-specified function
  # implement non-default arg substitutions passed through the dots
  if(any(names(funArgs) %in% names(expand.dots))){
    dot_overwrite <- names(funArgs[which(names(funArgs) %in% names(expand.dots))])
    funArgs[dot_overwrite] <- expand.dots[dot_overwrite]
    # drop them from expand.dots, for arg naming and matching below
    expand.dots[dot_overwrite] <- NULL
  }
  ## build the grid of mvars to loop over; this ensures that each combination of
  ## inputs is evaluated (equivalent to a structure of nested for loops)
  grid <- expand.grid(mvars, KEEP.OUT.ATTRS = FALSE, stringsAsFactors = FALSE)
  # specify the formals of the function to be evaluated by merging the grid
  # columns with the expanded dot args
  argdefs <- rep(list(bquote()), ncol(grid) + length(expand.dots) + length(funArgs) + 1)
  names(argdefs) <- c(colnames(grid), names(funArgs), names(expand.dots), "...")
  argdefs[which(names(argdefs) %in% names(funArgs))] <- funArgs         # fill in the (possibly overwritten) defaults of FUN
  argdefs[which(names(argdefs) %in% names(expand.dots))] <- expand.dots # fill in the remaining dot args
  formals(FUN) <- argdefs
  if(SIMPLIFY) {
    # simplified output, like mapply
    do.call(mcmapply, c(FUN, c(unname(grid), mc.cores = mc.cores))) # mc.cores = 1 == mapply
  } else {
    # list output, like Map
    do.call(mcmapply, c(FUN, c(unname(grid), SIMPLIFY = FALSE, mc.cores = mc.cores)))
  }
}
example code below:
# Example 1:
# just make sure variables used in your function appear as the names of mvars
myfunc <- function(...){
  return_me <- paste(l3, l1^2 + l2, sep = "_")
  return(return_me)
}
mvars <- list(l1 = 1:10,
              l2 = 1:5,
              l3 = letters[1:3])
### list output (SIMPLIFY = FALSE, i.e. Map-like)
lreturns <- gmcmapply(mvars, myfunc)
### concatenated output (SIMPLIFY = TRUE, i.e. mapply-like)
lreturns <- gmcmapply(mvars, myfunc, SIMPLIFY = TRUE)
## N.B. This is equivalent to running:
lreturns <- c()
for (l1 in 1:10) {
  for (l2 in 1:5) {
    for (l3 in letters[1:3]) {
      lreturns <- c(lreturns, myfunc(l1, l2, l3))
    }
  }
}
### concatenated output, run on 2 cores
lreturns <- gmcmapply(mvars, myfunc, SIMPLIFY = TRUE, mc.cores = 2)
Example 2: pass non-default args to FUN.
## Since the apply functions don't accept full calls as inputs (the calls are made internally), the user can pass arguments to FUN through the dots, which can overwrite a default option of FUN.
# e.g. apply(x, 1, FUN) works, but apply(x, 1, FUN(arg_to_change = not_default)) does not; the correct way to specify non-default/additional args to FUN is:
# gmcmapply(mvars, FUN, arg_to_change = not_default)
## update myfunc to have a default argument
myfunc <- function(rep_letters = 3, ...){
  return_me <- paste(rep(l3, rep_letters), l1^2 + l2, sep = "_")
  return(return_me)
}
lreturns <- gmcmapply(mvars, myfunc, rep_letters = 1)
A bit of additional functionality I would like to add, but am still trying to work out:
Cleaning up the output to be a pretty nested list with the names of mvars (normally I'd create multiple lists within a nested for loop, tagging lower-level lists onto higher-level lists all the way up until all layers of the giant nested loop were done). I think some abstracted variant of the solution provided here will work, but I haven't figured out how to make it flexible in the number of columns of the expand.grid-ed data.frame.
An option to log the outputs of the child processes that get called in mcmapply to a user-specified directory, so you could look at .txt outputs from every combination of variables generated by expand.grid (i.e. if the user prints model summaries or status messages as part of FUN, as I often do). I think a feasible solution is to use the substitute() and body() functions, described here, to edit FUN so that it opens a sink() at the beginning and closes it at the end whenever the user specifies a directory to write to. Right now I just program that into FUN itself, but later it would be nice to pass gmcmapply an argument named something like log_children = "path_to_log_dir" and edit the body of the function to (pseudocode) sink(file = file.path(log_children, paste0(paste(names(mvars), sep = "_"), ".txt"))); a rough sketch of this idea follows below.
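A rough, standalone sketch of that body()-editing idea (add_logging and log_file are hypothetical names; for simplicity a single shared log file is used, whereas per-combination file names would have to be built inside the new body):

add_logging <- function(FUN, log_file) {
  body(FUN) <- bquote({
    sink(.(log_file), append = TRUE)  # open the log when FUN starts
    on.exit(sink(), add = TRUE)       # always restore the console, even on error
    .(body(FUN))                      # then run the original body
  })
  FUN  # formals are untouched, so gmcmapply's formals surgery still applies
}
# usage sketch: gmcmapply(mvars, add_logging(myfunc, "children.txt"))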
Let me know what you think!
-Nate
