I have a function that I am optimizing using the optimx function in R (I'm also open to using optim, since I'm not sure it will make a difference for what I'm trying to do). I have a gradient that I am passing to optimx for (hopefully) faster convergence compared to not using a gradient. Both the function and the gradient use many of the same quantities that are computed from each new parameter set. One of these quantities in particular is very computationally costly, and it's redundant to have to compute this quantity twice for each iteration - once for the function, and again for the gradient. I'm trying to find a way to compute this quantity once, then pass it to the function and the gradient.
So here is what I am doing. So far this works, but it is inefficient:
optfunc <- function(paramvec){
  quant1 <- costlyfunction(paramvec)
  # costlyfunction is a separate function that takes a while to run
  loglikelihood <- sum(quant1)^2
  # not really squared, but the log likelihood uses quant1 in its calculation
  return(loglikelihood)
}
optgr <- function(paramvec){
  quant1 <- costlyfunction(paramvec)
  mygrad <- sum(quant1)  # again not the real formula, just for illustration
  return(mygrad)
}
optimx(par=paramvec,fn=optfunc,gr=optgr,method="BFGS")
I am trying to find a way to calculate quant1 only once with each iteration of optimx. It seems the first step would be to combine fn and gr into a single function. I thought the answer to this question may help me, and so I recoded the optimization as:
optfngr <- function(){
  quant1 <- costlyfunction(paramvec)
  optfunc <- function(paramvec){
    loglikelihood <- sum(quant1)^2
    return(loglikelihood)
  }
  optgr <- function(paramvec){
    mygrad <- sum(quant1)
    return(mygrad)
  }
  return(list(fn = optfunc, gr = optgr))
}
do.call(optimx, c(list(par=paramvec,method="BFGS",optfngr() )))
Here, I receive the error: "Error in optimx.check(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower, : Cannot evaluate function at initial parameters." Of course, there are obvious problems with my code here. So, I'm thinking answering any or all of the following questions may shed some light:
I passed paramvec as the only argument to each of optfunc and optgr so that optimx knows that paramvec is what needs to be iterated over. However, I don't know how to pass quant1 to optfunc and optgr. Is it true that if I try to pass quant1, then optimx will not properly identify the parameter vector?
I wrapped optfunc and optgr into one function, so that the quantity quant1 will exist in the same function space as both functions. Perhaps I can avoid this if I can find a way to return quant1 from optfunc, and then pass it to optgr. Is this possible? I'm thinking it's not, since the documentation for optimx is pretty clear that the function needs to return a scalar.
I'm aware that I might be able to use the dots arguments to optimx as extra parameter arguments, but I understand that these are for fixed parameters, and not arguments that will change with each iteration. Unless there is also a way to manipulate this?
Thanks in advance!
Your approach is close to what you want, but not quite right. You want to call costlyfunction(paramvec) from within optfunc(paramvec) or optgr(paramvec), but only when paramvec has changed. Then you want to save its value in the enclosing frame, as well as the value of paramvec that was used to compute it. That is, something like this:
optfngr <- function(){
  quant1 <- NULL
  prevparam <- NULL
  updatecostly <- function(paramvec) {
    if (!identical(paramvec, prevparam)) {
      quant1 <<- costlyfunction(paramvec)
      prevparam <<- paramvec
    }
  }
  optfunc <- function(paramvec){
    updatecostly(paramvec)
    loglikelihood <- sum(quant1)^2
    return(loglikelihood)
  }
  optgr <- function(paramvec){
    updatecostly(paramvec)
    mygrad <- sum(quant1)
    return(mygrad)
  }
  return(list(fn = optfunc, gr = optgr))
}
do.call(optimx, c(list(par=paramvec,method="BFGS"),optfngr() ))
I used <<- to make assignments to the enclosing frame, and fixed up your do.call second argument.
Doing this is called "memoization" (or "memoisation" in some locales; see http://en.wikipedia.org/wiki/Memoization), and there's a package called memoise that does it. It keeps track of lots of (or all of?) the previous results of calls to costlyfunction, so would be especially good if paramvec only takes on a small number of values. But I think it won't be so good in your situation because you'll likely only make a small number of repeated calls to costlyfunction and then never use the same paramvec again.
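For example, a minimal sketch of the memoise route (assuming costlyfunction and paramvec from your question, and that the memoise and optimx packages are installed):
library(memoise)
library(optimx)
costly_m <- memoise(costlyfunction)   # cached wrapper: results are keyed by the arguments
optfunc <- function(paramvec) sum(costly_m(paramvec))^2
optgr <- function(paramvec) sum(costly_m(paramvec))   # same paramvec, so this call hits the cache
optimx(par = paramvec, fn = optfunc, gr = optgr, method = "BFGS")
Note that the cache grows with every distinct paramvec, which is exactly why the hand-rolled closure above is the better fit here.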
I have two functions, one a function of i and one a function of j, and I want to find the values of i and j simultaneously such that the difference between the two functions is minimized.
However on running the code below I am getting an error.
I have two functions with parameter i and j as below
bu1 <- function(j){
  sum(linkinc_lev1$gdp*(1/(1+ (linkinc_lev1$use_gro*(1+j/100)))))
}
td1 <- function(i){
  sum(linkinc_lev2$gdp*(1/(1+ (linkinc_lev2$use_gro*(1+i/100)))))
}
Now I need to find the values of i and j simultaneously such that the difference of the above functions is minimized.
I was trying something like
f1<- function(j,i) abs(bu1(j)-td1(i))
ans_lev1<-optimize(f1, lower=-100, upper=100)
but getting error Error in td1(i) : argument "i" is missing, with no default
Is there any way in R to minimize functions based on two parameters?
Yes, there is; almost all optimizers work on parameter vectors. You should modify your function to something like
f1<- function(param) abs(bu1(param[2])-td1(param[1]))
i.e. the function takes a single argument "param", and inside the function you fetch the values of interest out of it.
A note: if you use abs() you end up with a non-differentiable objective function. You have to select an optimizer that can handle it (SANN and Nelder-Mead can, for instance). I would rather do
f1 <- function(param) (bu1(param[2])-td1(param[1]))^2
still the same solution but now differentiable, and you can feed it to most optimizers.
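For example (a sketch only: it assumes bu1() and td1() are defined as in the question, and the starting values c(0, 0) are just a guess):
f1 <- function(param) (bu1(param[2]) - td1(param[1]))^2
res <- optim(par = c(0, 0), fn = f1, method = "Nelder-Mead")
res$par   # first element is the estimate of i, second the estimate of j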
I would like to use optim() to optimize a cost function (fn argument), and I will be providing a gradient (gr argument). I can write separate functions for fn and gr. However, they have a lot of code in common and I don't want the optimizer to waste time repeating those calculations. So is it possible to provide one function that computes both the cost and the gradient? If so, what would be the calling syntax to optim()?
As an example, suppose the function I want to minimize is
cost <- function(x) {
x*exp(x)
}
Obviously, this is not the function I'm trying to minimize. That's too complicated to list here, but the example serves to illustrate the issue. Now, the gradient would be
grad <- function(x) {
(x+1)*exp(x)
}
So as you can see, the two functions, if called separately, would repeat some of the work (in this case, the exponential function). However, since optim() takes two separate arguments (fn and gr), it appears there is no way to avoid this inefficiency, unless there is a way to define a function like
costAndGrad <- function(x) {
ex <- exp(x)
list(cost=x*ex, grad=(x+1)*ex)
}
and then pass that function to optim(), which would need to know how to extract the cost and gradient.
Hope that explains the problem. Like I said my function is much more complicated, but the idea is the same: there is considerable code that goes into both calculations (cost and gradient), which I don't want to repeat unnecessarily.
By the way, I am an R novice, so there might be something simple that I'm missing!
Thanks very much
The nlm function does optimization and expects the gradient information to be returned as an attribute ("gradient") attached to the value returned by the objective function. That is similar to what you show above. See the examples in the help for nlm.
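For instance, building on the toy cost function above, a minimal sketch of how nlm() consumes a combined value-plus-gradient (the "gradient" attribute is the one nlm documents):
costAndGrad <- function(x) {
  ex <- exp(x)                        # the shared, expensive part, computed once
  val <- x * ex
  attr(val, "gradient") <- (x + 1) * ex
  val
}
nlm(costAndGrad, p = 0)               # the minimum of x*exp(x) is at x = -1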
When playing with large objects the memory and speed implications of pass-by-value can be substantial.
R has several ways to pass-by-reference:
Reference Classes
R.oo
C/C++/other external languages
Environments
However, many of them require considerable overhead (in terms of code complexity and programmer time).
In particular, I'm envisioning something like what you would use constant references for in C++: pass a large object, compute on it without modifying it, and return the results of that computation.
Since R does not have a concept of constants, I suspect if this happens anywhere, it's in compiled R functions, where the compiler could see that the formal argument was not modified anywhere in the code and pass it by reference.
Does the R compiler pass-by-reference if an argument is not modified? If not, are there any technical barriers to it doing so or has it just not been implemented yet?
Example code:
n <- 10^7
bigdf <- data.frame( x=runif(n), y=rnorm(n), z=rt(n,5) )
myfunc <- function(dat) invisible(with( dat, x^2+mean(y)+sqrt(exp(z)) ))
library(compiler)
mycomp <- cmpfun(myfunc)
tracemem(bigdf)
> myfunc(bigdf)
> # tracemem() printed nothing: no object was copied, so the question is not necessary
This may be way off base for what you need, but what about wrapping the object in a closure? This function makes a function that knows about the object given to its parent; here I use the tiny volcano dataset to do a very simple job.
mkFun <- function(x) {
  function(rownumbers) {
    rowSums(x[rownumbers, , drop = FALSE])
  }
}
fun <- mkFun(volcano)
fun(2)   ## [1] 6493
fun(2:3) ## [1] 6493 6626
Now fun can get passed around by worker functions to do its job as it likes.
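For instance (a sketch; the split into two chunks is arbitrary), a worker only ever receives fun and never volcano itself:
chunks <- split(1:10, rep(1:2, each = 5))   # two batches of row numbers
lapply(chunks, fun)                         # each call uses the volcano captured in the closure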
In recent conversations with fellow students, I have been advocating for avoiding globals except to store constants. This is a sort of typical applied statistics-type program where everyone writes their own code and project sizes are on the small side, so it can be hard for people to see the trouble caused by sloppy habits.
In talking about avoidance of globals, I'm focusing on the following reasons why globals might cause trouble, but I'd like to have some examples in R and/or Stata to go with the principles (and any other principles you might find important), and I'm having a hard time coming up with believable ones.
Non-locality: Globals make debugging harder because they make understanding the flow of code harder
Implicit coupling: Globals break the simplicity of functional programming by allowing complex interactions between distant segments of code
Namespace collisions: Common names (x, i, and so forth) get re-used, causing namespace collisions
A useful answer to this question would be a reproducible and self-contained code snippet in which globals cause a specific type of trouble, ideally with another code snippet in which the problem is corrected. I can generate the corrected solutions if necessary, so the example of the problem is more important.
Relevant links:
Global Variables are Bad
Are global variables bad?
I also have the pleasure of teaching R to undergraduate students who have no experience with programming. The problem I found is that most examples of when globals are bad are rather simplistic and don't really get the point across.
Instead, I try to illustrate the principle of least astonishment. I use examples where it is tricky to figure out what is going on. Here are some examples:
I ask the class to write down what they think the final value of i will be:
i = 10
for(i in 1:5)
  i = i + 1
i
Some of the class guess correctly. Then I ask: should you ever write code like this?
In some sense i is a global variable that is being changed.
What does the following piece of code return:
x = 5:10
x[x=1]
The problem is what exactly we mean by x (see the comparison below).
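For reference, a quick side-by-side of the two readings (the second is the comparison most students have in mind):
x <- 5:10
x[x = 1]    # 5          -- '=' is argument matching here, so this is just x[1]
x[x == 1]   # integer(0) -- no element of 5:10 equals 1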
Does the following function return a global or local variable:
z = 0
f = function() {
  if(runif(1) < 0.5)
    z = 1
  return(z)
}
Answer: both. Again discuss why this is bad.
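One quick way to make the point stick (with z and f defined as above; the exact split is random, of course):
table(replicate(1000, f()))   # roughly 500 zeros (the global z) and 500 ones (the local z)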
Oh, the wonderful smell of globals...
All of the answers in this post gave R examples, and the OP wanted some Stata examples, as well. So let me chime in with these.
Unlike R, Stata does take care of locality of its local macros (the ones that you create with the local command), so the issue of "Is this a global z or a local z that is being returned?" never comes up. (Gosh... how can you R guys write any code at all if locality is not enforced???) Stata has a different quirk, though, namely that a non-existent local or global macro is evaluated as an empty string, which may or may not be desirable.
I have seen globals used for several main reasons:
Globals are often used as shortcuts for variable lists, as in
sysuse auto, clear
regress price $myvars
I suspect that the main usage of such construct is for someone who switches between interactive typing and storing the code in a do-file as they try multiple specifications. Say they try regression with homoskedastic standard errors, heteroskedastic standard errors, and median regression:
regress price mpg foreign
regress price mpg foreign, robust
qreg price mpg foreign
And then they run these regressions with another set of variables, then with yet another one, and finally they give up and set this up as a do-file myreg.do with
regress price $myvars
regress price $myvars, robust
qreg price $myvars
exit
to be accompanied with an appropriate setting of the global macro. So far so good; the snippet
global myvars mpg foreign
do myreg
produces the desirable results. Now let's say they email their famous do-file that claims to produce very good regression results to collaborators, and instruct them to type
do myreg
What will their collaborators see? In the best case, the mean and the median of price, if they started a new instance of Stata (failed coupling: myreg.do did not really know you meant to run this with a non-empty variable list). But if the collaborators had something in the works, and also had a global myvars defined (name collision)... man, would that be a disaster.
Globals are used for directory or file names, as in:
use $mydir\data1, clear
God only knows what will be loaded. In large projects, though, it does come in handy. You would want to define global mydir somewhere in your master do-file, maybe even as
global mydir `c(pwd)'
Globals can be used to store unpredictable crap, like a whole command:
capture $RunThis
God only knows what will be executed; let's just hope it is not ! format c:\. This is the worst case of implicit strong coupling, but since I am not even sure that RunThis will contain anything meaningful, I put a capture in front of it, and will be prepared to treat the non-zero return code _rc. (See, however, my example below.)
Stata's own use of globals is for God settings, like the type I error probability/confidence level: the global $S_level is always defined (and you must be a total idiot to redefine this global, although of course it is technically doable). This is, however, mostly a legacy issue with code of version 5 and below (roughly), as the same information can be obtained from a less fragile system constant:
set level 90
display $S_level
display c(level)
Thankfully, globals are quite explicit in Stata, and hence are easy to debug and remove. In some of the above situations, and certainly in the first one, you'd want to pass parameters to do-files which are seen as the local `0' inside the do-file. Instead of using globals in the myreg.do file, I would probably code it as
unab varlist : `0'
regress price `varlist'
regress price `varlist', robust
qreg price `varlist'
exit
The unab thing will serve as an element of protection: if the input is not a legal varlist, the program will stop with an error message.
In the worst cases I've seen, the global was used only once after having been defined.
There are occasions when you do want to use globals, because otherwise you'd have to pass the bloody thing to every other do-file or program. One example where I found globals pretty much unavoidable was coding a maximum likelihood estimator where I did not know in advance how many equations and parameters I would have. Stata insists that the (user-supplied) likelihood evaluator must have specific equations. So I had to accumulate my equations in globals, and then call my evaluator with the globals in the descriptions of the syntax that Stata would need to parse:
args lf $parameters
where lf was the objective function (the log-likelihood). I encountered this at least twice, in the normal mixture package (denormix) and confirmatory factor analysis package (confa); you can findit both of them, of course.
One R example of a global variable that divides opinion is the stringsAsFactors issue on reading data into R or creating a data frame.
set.seed(1)
str(data.frame(A = sample(LETTERS, 100, replace = TRUE),
DATES = as.character(seq(Sys.Date(), length = 100, by = "days"))))
options("stringsAsFactors" = FALSE)
set.seed(1)
str(data.frame(A = sample(LETTERS, 100, replace = TRUE),
DATES = as.character(seq(Sys.Date(), length = 100, by = "days"))))
options("stringsAsFactors" = TRUE) ## reset
This can't really be corrected because of the way options are implemented in R - anything could change them without you knowing it and thus the same chunk of code is not guaranteed to return exactly the same object. John Chambers bemoans this feature in his recent book.
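The usual defence is to set the argument explicitly in the call, so the global option becomes irrelevant (a sketch reusing the same toy data frame):
set.seed(1)
str(data.frame(A = sample(LETTERS, 100, replace = TRUE),
               DATES = as.character(seq(Sys.Date(), length = 100, by = "days")),
               stringsAsFactors = FALSE))   # both columns stay character, whatever options() says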
A pathological example in R is the use of one of the globals available in R, pi, to compute the area of a circle.
> r <- 3
> pi * r^2
[1] 28.27433
>
> pi <- 2
> pi * r^2
[1] 18
>
> foo <- function(r) {
+ pi * r^2
+ }
> foo(r)
[1] 18
>
> rm(pi)
> foo(r)
[1] 28.27433
> pi * r^2
[1] 28.27433
Of course, one can write the function foo() defensively by forcing the use of base::pi but such recourse may not be available in normal user code unless packaged up and using a NAMESPACE:
> foo <- function(r) {
+ base::pi * r^2
+ }
> foo(r = 3)
[1] 28.27433
> pi <- 2
> foo(r = 3)
[1] 28.27433
> rm(pi)
This highlights the mess you can get into by relying on anything that is not solely in the scope of your function or passed in explicitly as an argument.
Here's an interesting pathological example involving replacement functions, the superassignment operator <<-, and x defined both globally and locally...
x <- c(1,NA,NA,NA,1,NA,1,NA)
local({
  # some other code involving some other x begins
  x <- c(NA,2,3,4)
  # some other code involving some other x ends
  # now you want to replace NAs in the global/parent frame x with 0s
  x[is.na(x)] <<- 0
})
x
[1] 0 NA NA NA 0 NA 1 NA
Instead of returning [1] 1 0 0 0 1 0 1 0, the replacement function uses the index returned by the local value of is.na(x), even though you're assigning to the global value of x. This behavior is documented in the R Language Definition.
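If you really do want to fill the NAs of the outer x from inside the local block, one workaround (a sketch; .GlobalEnv is hard-coded only because that is where the outer x lives in this example) is to compute the index from the right copy first:
x <- c(1, NA, NA, NA, 1, NA, 1, NA)
local({
  x <- c(NA, 2, 3, 4)
  idx <- is.na(get("x", envir = .GlobalEnv))   # index taken from the global x
  x[idx] <<- 0
})
x
# [1] 1 0 0 0 1 0 1 0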
One quick but convincing example in R is to run a line like:
.Random.seed <- 'normal'
I chose 'normal' as something someone might choose, but you could use anything there.
Now run any code that uses generated random numbers, for example:
rnorm(10)
Then you can point out that the same thing could happen for any global variable.
I also use the example of:
x <- 27
z <- somefunctionthatusesglobals(5)
Then ask the students what the value of x is; the answer is that we don't know.
Through trial and error I've learned that I need to be very explicit in naming my function arguments (and to build in enough checks at the start of, and throughout, the function) to make everything as robust as possible. This is especially true if you have variables stored in the global environment but then try to debug a function with custom values - and something doesn't add up! Here is a simple example that combines bad checks with calling a global variable.
glob.arg <- "snake"
customFunction <- function(arg1) {
  if (is.numeric(arg1)) {
    glob.arg <- "elephant"
  }
  return(strsplit(glob.arg, "n"))
}
customFunction(arg1 = 1) #argument correct, expected results
customFunction(arg1 = "rubble") #works, but may have unexpected results
An example sketch that came up while trying to teach this today. Specifically, this focuses on trying to give intuition as to why globals can cause problems, so it abstracts away as much as possible in an attempt to state what can and cannot be concluded just from the code (leaving the function as a black box).
The set up
Here is some code. Decide whether it will return an error or not based on only the criteria given.
The code
stopifnot( all( x!=0 ) )
y <- f(x)
5/x
The criteria
Case 1: f() is a properly-behaved function, which uses only local variables.
Case 2: f() is not necessarily a properly-behaved function, which could potentially use global assignment.
The answer
Case 1: The code will not return an error, since line one checks that there are no x's equal to zero and line three divides by x.
Case 2: The code could potentially return an error or produce nonsense, since f() could modify the x in the parent environment. For example, it could overwrite x with a character vector, making line three fail, or more subtly subtract 1 from it so that any element equal to 1 becomes zero and the division quietly returns Inf; either way, the guarantee established by the check in line one no longer holds.
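A concrete f() of the kind Case 2 worries about might look like this (a sketch):
f <- function(x) {
  x <<- x - 1           # silently rewrites the caller's x
  sum(x)                # the returned value looks perfectly innocent
}
x <- c(1, 2, 3)
stopifnot(all(x != 0))  # passes
y <- f(x)
5 / x                   # Inf in the first position: the global x is now c(0, 1, 2)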
Here's one attempt at an answer that would make sense to statisticsy types.
Namespace collisions: Common names (x, i, and so forth) get re-used, causing namespace collisions
First we define a log likelihood function,
logLik <- function(x) {
y <<- x^2+2
return(sum(sqrt(y+7)))
}
Now we write an unrelated function to return the sum of squares of an input. Because we're lazy, we'll do this by passing it y as a global variable,
sumSq <- function() {
return(sum(y^2))
}
y <<- seq(5)
sumSq()
[1] 55
Our log likelihood function seems to behave exactly as we'd expect, taking an argument and returning a value,
> logLik(seq(12))
[1] 88.40761
But what's up with our other function?
> sumSq()
[1] 63358
Of course, this is a trivial example, as will be any example that doesn't exist in a complex program. But hopefully it'll spark a discussion about how much harder it is to keep track of globals than locals.
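For completeness, the corrected counterparts simply keep y local or pass it explicitly (a sketch):
logLik <- function(x) {
  y <- x^2 + 2                   # local only
  sum(sqrt(y + 7))
}
sumSq <- function(y) sum(y^2)    # y is now an argument
sumSq(seq(5))
# [1] 55
logLik(seq(12))
# [1] 88.40761
sumSq(seq(5))
# [1] 55  -- unchanged, no matter what was computed in between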
In R you may also try to show them that there is often no need to use globals: from within a function you can access a variable defined in the function's enclosing scope simply by referring to that environment explicitly. For example, in the code below
zz="aaa"
x = function(y) {
zz="bbb"
cat("value of zz from within the function: \n")
cat(zz , "\n")
cat("value of zz from the function scope: \n")
with(environment(x),cat(zz,"\n"))
}