Short Version
Suppose an expression involves two variables, x and y, where x lives in one environment and y in a second environment. How does the programmer evaluate the expression?
Detailed Version
I have a function that takes a formula and a data.frame as arguments. On the right hand side of the formula is a call to splines::bs to generate a B-spline basis. The workhorse function does a few things, one of which requires extracting the bs call from the formula and evaluating it. The problem I am trying to solve is evaluating that bs call when the argument values live in different environments.
Here are the functions needed to recreate the issue I am working on:
library(splines)
extract_bmat <- function(form) {
  B <- NULL
  rr <- function(x) {
    if (is.call(x) && grepl("bs", deparse(x[[1]]))) {
      B <<- x
    } else if (is.recursive(x)) {
      as.call(lapply(as.list(x), rr))
    } else {
      x
    }
  }
  z <- lapply(as.list(form), rr)
  B
}
some_workhorse <- function(formula, data) {
  # ... lots of cool stuff ...
  # fit <- lm(formula, data)
  bmat <- eval(extract_bmat(formula), data)
  bmat
}
# The following works when evaluated in the .GlobalEnv
# The eval(extract_bmat(formula), data) call within the some_workhorse
# function works without errors
xi <- c(3, 4.5)
eg_data <- data.frame(x = 1:10, y = sin(1:10))
some_workhorse(y ~ bs(x, knots = xi), data = eg_data)
Now, if the xi vector and the eg_data data.frame are created within a function environment and some_workhorse is called from there, an error results.
foo <- function() {
  xi_in_foo <- c(2, 3)
  eg_data_in_foo <- data.frame(x = 1:10, y = sin(1:10))
  some_workhorse(y ~ bs(x, knots = xi_in_foo), data = eg_data_in_foo)
}
foo()
# Error in sort(c(rep(Boundary.knots, ord), knots)) :
# object 'xi_in_foo' not found
The location of the error is within the splines::bs call, but that is not the important part; the important issue to address is that xi_in_foo is not found.
I know the issue is related to my poor handling of environments in R. My primary question is:
How should the call eval(extract_bmat(formula), data) within the
some_workhorse function be written so that it works correctly when called in
the .GlobalEnv or when called within a function environment?
Secondary question:
Within the extract_bmat function, I would prefer to define an environment
for B and use assign instead of <<-. I suspect that <<- is the best
option because of the uncertainty in the levels of recursion taking place.
That said, I would like to see other solutions.
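For instance, something along these lines is what I have in mind (an untested sketch; extract_bmat2 and env_b are just illustrative names):
extract_bmat2 <- function(form) {
  env_b <- new.env()
  assign("B", NULL, envir = env_b)
  rr <- function(x) {
    if (is.call(x) && grepl("bs", deparse(x[[1]]))) {
      assign("B", x, envir = env_b)
    } else if (is.recursive(x)) {
      as.call(lapply(as.list(x), rr))
    } else {
      x
    }
  }
  lapply(as.list(form), rr)
  get("B", envir = env_b)
}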
Thanks for the help.
You should define your function as
some_workhorse <- function(formula, data) {
  # ... lots of cool stuff ...
  # fit <- lm(formula, data)
  bmat <- eval(extract_bmat(formula), data, environment(formula))
  bmat
}
Note that formulas in R capture the environment in which they were created. As long as xi_in_foo exists in the environment where the formula was defined, this should work. Variables are first looked up in the data list/data.frame, and then the formula's environment is used as the enclosing environment. If you weren't using formulas, you would sometimes use parent.frame() as the enclos= argument so that variables are looked up in the environment from which the function was called, rather than where the function was defined, which is the default under R's lexical scoping.
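To make the lookup order concrete, here is a minimal sketch (demo_lookup, xi_local, and dat are illustrative names only; it assumes library(splines) is loaded as above):
demo_lookup <- function() {
  xi_local <- c(2, 3)                 # exists only in this function's frame
  dat <- data.frame(x = 1:10)
  f <- y ~ bs(x, knots = xi_local)    # the formula captures this frame
  expr <- quote(bs(x, knots = xi_local))
  # x is found in dat; xi_local (and bs) are found via environment(f)
  eval(expr, dat, environment(f))
}
str(demo_lookup())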
Related
I'm trying to understand why
foo = function(d, y, x) {
  fit = with(d, lm(y ~ x))
}
foo(myData, Y, X)
won't work, where for instance
myData = data.frame(Y=rnorm(50), X=runif(50))
The bit that seems tricky to me is passing the arguments x and y to a formula, as in lm(y ~ x).
@DMT's answer explains what's going on nicely.
Here are the hoops to jump through if you want things to work as you expect:
lmwrap <- function(d, y, x) {
  ys <- deparse(substitute(y))
  xs <- deparse(substitute(x))
  f <- reformulate(xs, response = ys)
  return(lm(f, data = d))
}
mydata <- data.frame(X=1:10,Y=rnorm(10))
lmwrap(mydata,Y, X)
Or it can be simplified a bit if you pass the column names as strings rather than symbols.
lmwrap <- function(d, y, x) {
  f <- reformulate(x, response = y)
  return(lm(f, data = d))
}
lmwrap(mydata, "Y", "X")
This approach will be a bit fragile, e.g. if you pass arguments through another function. Also, getting the Call component of the fitted model to read Y ~ X takes a bit more trickery ...
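For the record, one sketch of that trickery (lmwrap2 is just an illustrative name): build the lm call with bquote() so the fitted model's Call shows the formula and the data by name, then evaluate it in the caller's frame.
lmwrap2 <- function(d, y, x) {
  f <- reformulate(deparse(substitute(x)), response = deparse(substitute(y)))
  # interpolate the formula and the data symbol into the call, then evaluate it,
  # so the stored call reads lm(formula = Y ~ X, data = mydata)
  cl <- bquote(lm(.(f), data = .(substitute(d))))
  eval.parent(cl)
}
lmwrap2(mydata, Y, X)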
Y and X are your column names, not variables. They wouldn't, in this case, be arguments to your function unless you passed them in as strings and essentially called
lm(mydata[, "Y"] ~ mydata[, "X"])
If you were to run ls() on your console, Y and X would most likely not be there, so the function won't work. Print both x and y prior to the fit = call, and you'll likely see NULLs, which won't fly in lm.
One way to do this in your form is the following
lmwrap <- function(df, yname, xname){
  fit = lm(df[, yname] ~ df[, xname])
}
lmwrap(mydata,"Y", "X")
But you could also just make the lm call in the usual way.
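For example, with the mydata frame defined above:
fit <- lm(Y ~ X, data = mydata)
summary(fit)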
I'm trying to write a set of functions where the first function fits a cox model (via coxph in the survival package in R), and the second function gets estimated survival for a new dataset, given the fitted model object from the first function. I'm running into some sort of scoping issue that I don't quite know how to solve without substantially re-factoring my code (the only way I could think to do it would be much less general and much harder to read).
I have a very similar set of functions that are based on the glm function that do not run into the same issue and give me the answers I would expect. I've included a short worked example below that demonstrates the issue. The glue.cox and glue.glm are functions that have the basic functionality I am trying to get. glue.glm works as expected (yielding the same values from a calculation in the global environment), but the glue.cox complains that it can't find the data that was used to fit the cox model and ends with an error. I don't understand how to do this with substitute but I suspect that is the way forward. I've hit a wall with experimenting.
library(survival)
data.global = data.frame(time=runif(20), x=runif(20))
newdata.global = data.frame(x=c(0,1))
f1 = Surv(time) ~ x # this is the part that messes it up!!!!! (the Surv() term)
f2 = time ~ x       # the analogous plain formula for glm
myfit.cox.global = coxph(f1, data=data.global)
myfit.glm.global = glm(f2, data=data.global)
myfit.glm.global2 = glm(time ~ x, data=data.global)
myfit.cox <- function(f, dat.local){
  coxph(f, data=dat.local)
}
myfit.glm <- function(f, dat.local){
  glm(f, data=dat.local)
}
mypredict.cox <- function(ft, dat.local){
  newdata = data.frame(x=c(0,1))
  tail(survfit(ft, newdata)$surv, 1)
}
mypredict.glm <- function(ft, dat.local){
  newdata = data.frame(x=c(0,1))
  predict(ft, newdata)
}
glue.cox <- function(f, dat.local){
  fit = myfit.cox(f, dat.local)
  mypredict.cox(fit, dat.local)
}
glue.glm <- function(f, dat.local){
  fit = myfit.glm(f, dat.local)
  mypredict.glm(fit, dat.local)
}
# these numbers are the goal for non-survival data
predict(myfit.glm.global, newdata = newdata.global)
0.5950440 0.4542248
glue.glm(f2, data.global)
0.5950440 0.4542248 # this works
# these numbers are the goal for survival data
tail(survfit(myfit.cox.global, newdata = newdata.global)$surv, 1)
[20,] 0.02300798 0.03106081
glue.cox(f1, data.global)
Error in eval(predvars, data, env) : object 'dat.local' not found
This appears to work, at least in the narrow sense of making glue.cox() work as desired:
myfit.cox <- function(f, dat.local){
  environment(f) <- list2env(list(dat.local=dat.local))
  coxph(f, data=dat.local)
}
The trick here is that most R modeling/model-processing functions look for data in the environment associated with the formula.
Without doing more digging, I don't know why glue.glm works, except for the general statement that [g]lm objects store more of the information needed for downstream processing internally (e.g. in the $qr element) than other model types do.
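A variation on the same idea, in case the formula also references other objects from the environment where it was created: build a small environment that holds dat.local but keeps the formula's original environment as its parent (myfit.cox2 is just an illustrative name, not part of the original code):
myfit.cox2 <- function(f, dat.local){
  e <- new.env(parent = environment(f))  # original formula environment stays reachable
  e$dat.local <- dat.local
  environment(f) <- e
  coxph(f, data = dat.local)
}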
I am trying to find parameters using the nls2 package. As the formula whose parameters I am optimizing is quite complex, I try to call helper functions from within the formula that I pass to the nls2 command:
library(nls2)
set.seed(20160227)
x <- seq(0,50,1)
y <- ((runif(1,10,20)*x)/(runif(1,0,10)+x))+rnorm(51,0,1)
a <- function(){
  d + 1
}
f1 <- function(){
  y <- a()*x/(b+x)
}
st <- data.frame(d = c(-100, 100),
                 b = c(-100, 100))
nls2(f1, start = st, algorithm = "brute-force")
Currently, this throws the error
Error: object of type 'closure' is not subsettable
I found this error described here; however, when I assign values to b and d, this works:
a()*x/(b+x)
I assume the problem is that I try to find b and d using functions that already have them inside?
What is the best way to do this? Is it even possible or do I need to define the entire complex formula within nls2?
Neither f1 nor a has any parameters here, so it's not really surprising that nls2 has some difficulty understanding how you want to optimize f1.
nls2::nls2 (like stats::nls) expects a formula as the first parameter. That formula can be built from any function you want, and does not have to be written down entirely in the call. You could do the following:
a <- function(d){
  d + 1
}
f1 <- function(b, d, x){
  y <- a(d)*x/(b+x)
}
And then fit the model like this:
nls2(y~f1(b,d,x), start = st, algorithm = "brute-force")
Because no start value is provided for x, and because its actual value can be found in the environment, it won't optimize over x, just b and d.
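Putting the pieces together (a sketch that reuses the simulated x and y and the start grid st from the question; output not shown):
library(nls2)
a <- function(d){
  d + 1
}
f1 <- function(b, d, x){
  y <- a(d)*x/(b+x)
}
st <- data.frame(d = c(-100, 100), b = c(-100, 100))
fit <- nls2(y ~ f1(b, d, x), start = st, algorithm = "brute-force")
coef(fit)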
I'm using a function from the leaps library within another function. The last two lines of the leaps function in question are:
rval$call <- sys.call(sys.parent())
rval
This apparently causes the call to the outer function to be stored in rval$call, and it is the actual call to the regsubsets function that is needed as an argument later on.
Below an example to illustrate:
library(leaps)
#Create some sample data to perform a regression on
inda <- rnorm(100)
indb <- rnorm(100)
dep <- 2 + 0.1*inda + 0.2*indb + rnorm(100, sd = 0.3)
dfk <- data.frame(dep=dep, inda = inda, indb = indb)
#Create some arbitrary outer function
test <- function(dependent, data){
  best.fit <- regsubsets(as.formula(paste0(dependent, " ~ .")), data = data, nvmax = 2)
  return(best.fit)
}
#Call outer function
best <- test("dep", dfk)
best$call #Returns "test("dep", dfk)"
So best$call will contain the call to the outer function (test), and not the call to the inner (regsubsets) function. As it's not really an option to change the inner function, is there any way of avoiding this problem?
EDIT:
One way around the problem could be something like this:
test <- function(dependent, data){
  thecall <- 'regsubsets(as.formula(paste0(dependent, " ~ .")), data = data, nvmax = 2)'
  best.fit <- eval(parse(text = thecall))
  #best.fit$call <- [some transformation of thecall]
  return(best.fit)
}
EDIT2:
The reason I need to access what's inside $call is that it's needed in a predict function that I copied from An Introduction to Statistical Learning:
predict.regsubsets <- function(regsubset_model, newdata, id, ...){
  form <- as.formula(regsubset_model$call[[2]])
  mat <- model.matrix(form, newdata)
  coefi <- coef(regsubset_model, id = id)
  xvars <- names(coefi)
  mat[, xvars] %*% coefi
}
In the second line it uses $call.
I’m still not entirely clear on how this is going to be used, but in the case of your test function, you could write the following code:
test = function (dependent, data) {
  regsubsets_call = bquote(regsubsets(.(as.formula(paste0(dependent, " ~ ."))),
                                      data = .(substitute(data)), nvmax = 2))
  best_fit = eval(regsubsets_call)
  best_fit$call = regsubsets_call
  best_fit
}
However, the result may not work with downstream functions the package provides (though, realistically, it probably will; I’m guessing summary.regsubsets only uses it to print the call).
What’s going on here?
bquote constructs an unevaluated R expression; it’s similar to quote but it allows you to interpolate values (similar to substitute). substitute(data) means that, rather than putting the actual data.frame into the call (which would lead to very unwieldy output), it puts in the variable name (or expression) the user passed to test. So if the user called it as test('mpg', mtcars), then the resulting expression would be
regsubsets(mpg ~ ., data = mtcars, nvmax = 2)
The resulting call object is then (a) evaluated via eval, and (b) stored in the resulting $call.
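As a quick check (a sketch; the exact printing may differ slightly), the stored call now reads as the regsubsets call rather than the outer test call:
best <- test("dep", dfk)
best$call
# regsubsets(dep ~ ., data = dfk, nvmax = 2)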
Incidentally, the formula can (and, as far as I’m concerned, should) be constructed in the same way; no need to parse a string:
as.formula(bquote(.(as.name(dependent)) ~ .))
Taken together, the whole expression would then become:
formula = as.formula(bquote(.(as.name(dependent)) ~ .))
regsubsets_call = bquote(regsubsets(.(formula), data = .(substitute(data)), nvmax = 2))
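Assembled into a full function, that would look roughly like this (a sketch combining the pieces above):
test = function (dependent, data) {
  formula = as.formula(bquote(.(as.name(dependent)) ~ .))
  regsubsets_call = bquote(regsubsets(.(formula), data = .(substitute(data)), nvmax = 2))
  best_fit = eval(regsubsets_call)
  best_fit$call = regsubsets_call
  best_fit
}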
I have a weird problem with R that I can't seem to work out.
I've tried to write a function that performs K-fold cross validation for a model chosen by the stepwise procedure in R. (I'm aware of the issues with stepwise procedures; it's purely for comparison purposes.) :)
Now the issue is that if I define the function parameters (linmod, k, direction) and run the contents of the function, it works flawlessly. BUT, if I run it as a function, I get an error saying the datas.train object can't be found.
I've tried stepping through the function with debug() and the object clearly exists, but R says it doesn't when I actually run the function. If I just fit a model using lm() it works fine, so I believe it's a problem with the step function in the loop, while inside a function. (try commenting out the step command, and set the predictions to those from the ordinary linear model.)
#CREATE A LINEAR MODEL TO TEST FUNCTION
lm.cars <- lm(mpg~.,data=mtcars,x=TRUE,y=TRUE)
#THE FUNCTION
cv.step <- function(linmod, k = 10, direction = "both"){
  response <- linmod$y
  dmatrix <- linmod$x
  n <- length(response)
  datas <- linmod$model
  form <- formula(linmod$call)
  # generate indices for cross validation
  rar <- n/k
  xval.idx <- list()
  s <- sample(1:n, n) # permutation of 1:n
  for (i in 1:k) {
    xval.idx[[i]] <- s[(ceiling(rar*(i-1))+1):(ceiling(rar*i))]
  }
  # error calculation
  errors <- R2 <- 0
  for (j in 1:k){
    datas.test <- datas[xval.idx[[j]], ]
    datas.train <- datas[-xval.idx[[j]], ]
    test.idx <- xval.idx[[j]]
    # THE MODELS
    lm.1 <- lm(form, data = datas.train)
    lm.step <- step(lm.1, direction = direction, trace = 0)
    step.pred <- predict(lm.step, newdata = datas.test)
    step.error <- sum((step.pred - response[test.idx])^2)
    errors[j] <- step.error/length(response[test.idx])
    SS.tot <- sum((response[test.idx] - mean(response[test.idx]))^2)
    R2[j] <- 1 - step.error/SS.tot
  }
  CVerror <- sum(errors)/k
  CV.R2 <- sum(R2)/k
  res <- list()
  res$CV.error <- CVerror
  res$CV.R2 <- CV.R2
  return(res)
}
#TESTING OUT THE FUNCTION
cv.step(lm.cars)
Any thoughts?
When you created your model, lm.cars, its formula was assigned its own environment. This environment stays with the formula unless you explicitly change it. So when you extract the formula with the formula function, the original environment of the model is included.
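As a quick illustration of that point (a sketch; the variable names are just for the example):
f_global <- mpg ~ wt
environment(f_global)        # the environment where the formula was created
g <- function() environment(mpg ~ wt)
g()                          # a different environment: g()'s own evaluation frame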
I don't know if I'm using the correct terminology here, but I think you need to explicitly change the environment for the formula inside your function:
cv.step <- function(linmod, k = 10, direction = "both"){
  response <- linmod$y
  dmatrix <- linmod$x
  n <- length(response)
  datas <- linmod$model
  .env <- environment() ## identify the environment of cv.step
  ## extract the formula in the environment of cv.step
  form <- as.formula(linmod$call, env = .env)
  ## The rest of your function follows
Another problem that can cause this is passing a character (string) vector to lm instead of a formula. Character vectors have no environment, so when lm converts the character to a formula, the result apparently also has no environment instead of being automatically assigned the local environment. If one then uses an object as weights that is not in the data argument's data.frame, but is among the local function's arguments, one gets a "not found" error. This behavior is not very easy to understand. It is probably a bug.
Here's a minimal reproducible example. This function takes a data.frame, two variable names and a vector of weights to use.
residualizer = function(data, x, y, wtds) {
  # the formula to use
  f = "x ~ y"
  # residualize
  resid(lm(formula = f, data = data, weights = wtds))
}
residualizer2 = function(data, x, y, wtds) {
  # the formula to use
  f = as.formula("x ~ y")
  # residualize
  resid(lm(formula = f, data = data, weights = wtds))
}
d_example = data.frame(x = rnorm(10), y = rnorm(10))
weightsvar = runif(10)
d_example = data.frame(x = rnorm(10), y = rnorm(10))
weightsvar = runif(10)
And test:
> residualizer(data = d_example, x = "x", y = "y", wtds = weightsvar)
Error in eval(expr, envir, enclos) : object 'wtds' not found
> residualizer2(data = d_example, x = "x", y = "y", wtds = weightsvar)
1 2 3 4 5 6 7 8 9 10
0.8986584 -1.1218003 0.6215950 -0.1106144 0.1042559 0.9997725 -1.1634717 0.4540855 -0.4207622 -0.8774290
It is a very subtle bug. If one goes into the function environment with browser, one can see the weights vector just fine, but it somehow is not found in the lm call!
The bug becomes even harder to debug if one uses the name weights for the weights variable. In this case, since lm can't find the weights object, it falls back to the weights() function from the stats package, thus throwing an even stranger error:
Error in model.frame.default(formula = f, data = data, weights = weights, :
invalid type (closure) for variable '(weights)'
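A sketch of that variant, following the residualizer pattern above (residualizer3 is just an illustrative name; error text abbreviated):
residualizer3 = function(data, x, y, weights) {
  # the formula is a character string, so no environment travels with it
  f = "x ~ y"
  # per the explanation above, lm() ends up picking up the weights() function
  # from the search path instead of this function's 'weights' argument
  resid(lm(formula = f, data = data, weights = weights))
}
residualizer3(data = d_example, x = "x", y = "y", weights = weightsvar)
# Error: invalid type (closure) for variable '(weights)'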
Don't ask me how many hours it took me to figure this out.